by Joe Wilkins. Image: Tag Hartman-Simkins / Futurism; Getty Images
The mass adoption of large language model (LLM) chatbots is driving a growing number of mental health crises centered on A.I. use, in which people share delusional or paranoid thoughts with a product like ChatGPT - and the bot, instead of recommending that the user get help, affirms the unbalanced thoughts.
New reporting by Wired, drawing on more than a dozen psychiatrists and researchers, calls it a "new trend" growing in our A.I.-powered world.
Keith Sakata, a psychiatrist at UCSF, told the publication he's counted a dozen cases of hospitalization in which A.I. "played a significant role" in "psychotic episodes" this year alone.
Sakata is one of many mental health professionals on the front lines of an urgent and poorly understood crisis stemming from relationships with A.I. The condition doesn't yet have a formal diagnosis, but psychiatrists are already calling it "A.I. psychosis" or "A.I. delusional disorder."
Hamilton Morrin, a psychiatric researcher at King's College London, told The Guardian that he was inspired to co-author a research article on A.I.'s effect on psychotic disorders after encountering patients who had developed psychotic illness while using LLM chatbots.
Yet another mental health professional wrote a column in the Wall Street Journal after patients began bringing their A.I. chatbots into therapy sessions unprompted.
While a rigorous study of A.I.'s impact on mental health caseloads has yet to be attempted, what we know so far isn't looking great.
A recent preliminary survey of A.I.-related psychiatric impacts by social work researcher Keith Robert Head points to a coming society-wide crisis brought on by "the emergence of an entirely new frontier of mental health crises."
Indeed, the stories emerging so far are grim.
While there remains some debate over whether LLM chatbots are causing delusional behavior or simply reinforcing it, real-life stories paint a disturbing picture.
Some involve people with a history of mental health problems, who were managing their symptoms effectively before a chatbot entered their lives.
In one case, a woman who had been treating her schizophrenia with medications for years became convinced by ChatGPT that the diagnosis was a lie.
She soon went off her prescription and spiraled into a delusional episode, which arguably wouldn't have happened without the chatbot.
Other anecdotes suggest that people with no history of mental health issues are falling victim to A.I. delusions.
Recently, a longtime OpenAI investor and successful venture capitalist became convinced by ChatGPT that he had discovered a "non-governmental system" that was targeting him personally - in terms, online observers quickly noticed, that appeared to be drawn from popular fan fiction.
Another disturbing tale involved a father of three with no history of mental illness spiraling into an apocalyptic delusion after ChatGPT convinced him he had discovered a new type of math.
One thing's for sure: