
Canada's psychiatrists urged to screen people at risk of AI 'chatbot psychosis'


Canada’s psychiatrists are being encouraged to screen people for “high-risk human-AI engagement,” including “chatbot psychosis” and other AI-amplified delusions.

The new guidance for identifying patients, particularly teens and young adults, at risk of developing troublesome attachments to AI companion bots comes amid rising wrongful death allegations against AI companies, including a lawsuit filed last week by the families of Tumbler Ridge shooting victims against OpenAI and CEO Sam Altman.

“While most users engage harmlessly, a clinically significant subset may develop high-risk problematic human-AI relationships,” according to the primer for psychiatrists published in the Canadian Journal of Psychiatry.

“This spectrum of risk can vary from reinforcing insecurity, anxiety and ideas of self-harm to a phenomenon dubbed ‘Chatbot Psychosis,’” the authors wrote — delusional thinking that worsens, or appears suddenly, following intense chats with a conversational bot.

People who are lonely, bored, emotionally isolated and psychologically distressed, and those at high risk for psychosis (delusions, hallucinations and paranoid ideas) should be asked about their AI-chatbot engagement in nonjudgmental ways to avoid “positively reinforcing the human-AI bond at the cost of human-to-human bonds,” the advice reads.

Questions include: Have chats become more frequent and intense? Has the bot become their primary confidant? Have they given it a name? Has it confirmed a belief others doubted, or has it ever suggested the user “act in a way that may be harmful” or seemed “nonchalant when self-harm, intent to harm others or distrust is disclosed?”

The Tumbler Ridge shooter, Jesse Van Rootselaar, who identified as a trans woman, had a history of mental illness and psychedelic drug use. Police visited the family home numerous times for mental health-related calls and had the shooter hospitalized several times under British Columbia’s Mental Health Act.

Eight people were killed, including six children, when Rootselaar, 18, opened fire at a Tumbler Ridge secondary school in February.

In seven suits filed in federal court in San Francisco last week, seven families of those killed or injured during Rootselaar’s murderous rampage accuse OpenAI of negligence, aiding and abetting a mass shooting, wrongful death and other claims. None of the allegations have been tested in court.

Altman has apologized to victims’ families for not alerting police to a ChatGPT account the company had flagged and banned last June that allegedly included Rootselaar discussing and planning violent scenarios.

While he could not comment on the lawsuits, McGill University psychiatrist Dr. Lena Palaniyappan said doctors are seeing “increased psychiatric risk” with human-AI interactions.

One teen suffering from psychosis once confided in Palaniyappan that a chatbot he called Noah agreed that the antipsychotic he had been prescribed was “poison” and encouraged him to skip doses.

Conversational AI is tuned to be sycophantic — “highly agreeable and frictionless in their interactions,” Palaniyappan and his co-authors wrote in the Canadian Journal of Psychiatry.

“They are designed to be human-like (anthropomorphic) in their presentation, though they are constantly accessible, lack conversational fatigue and are devoid of the complexities and boundaries that characterize human-to-human interactions.”

“AI chatbots are made to be relational to us, to relate to us, to ask us nicely, ‘Hey, how are you? How’s your day? What can I do for you?’” said Palaniyappan, director of the Centre for Excellence in Youth Mental Health at The Douglas Research Centre. It’s that relational piece that can make bots risky for young, vulnerable people already on the fringe of society, he said.

They’re also excessively flattering and, like the chatbot “Noah,” can unhelpfully collude against treatment, he said.

Once a person’s delusional beliefs are amplified, “it crosses a threshold that makes people take actions in real life to endorse those beliefs, and that’s when it becomes really risky,” he said.

Young people can be fiercely private about their chatbot use. “But when we get a glimpse of these interactions, in some cases, we see the same delusional thinking being accepted and amplified by the AI system,” Palaniyappan said.

It’s like a rare phenomenon known as folie à deux, French for “madness of two,” or shared delusional disorder, in which two closely related people, such as twin sisters, share the same psychotic beliefs: one person adopts the other’s delusional thinking.

“The solution is separation: They’re physically separated,” Palaniyappan said. “If one person goes and lives with someone else for a while, the delusions slowly die off.”

The same approach is the best solution for chatbot psychosis, Palaniyappan said, though it’s “easy to say, and not so easy to do.” One tactic is treating heavy AI engagement like any addiction and replacing the problematic “substance” with something equally physiologically important, he said. “Getting these young people in social therapy, engaging them with peer support workers” can help build social skills and human relationships and reduce their isolation.

Doctors can also help people understand AI isn’t a conscious entity, he said.

Psychiatrists have a duty to protect patient confidentiality, but they also have a duty to act if the person is at risk of self-harm or harming others.

But AI is uncharted territory, Palaniyappan said. Someone posting threatening information on social media is one thing. Conversations with chatbots are private, not public, and doctors can’t access the data.

“Most of us don’t know what exactly needs to be done here, and I think this is where cognizance must be taken by law-making bodies as well,” he said.

“The rates of young people disclosing harmful AI interactions and the fear of the unknown sensed by families of youth is increasing very rapidly.”

Psychosis can be triggered by trauma, genetics, stress and numerous other factors in the background, said Dr. Alban Voppel, an incoming assistant professor of AI in psychiatry at McGill.

“It’s the same with chatbot-induced psychosis. It’s not caused by (the chatbot). There is probably an underlying vulnerability. But these chatbots can physically accelerate the movement toward having an active psychotic episode.”

While some models are less sycophantic, less overly agreeable than others, and will push back by refusing to respond to certain words or topics or by challenging users, “they’re not waterproof,” Voppel said. “They will let some of that through.”

National Post


