The Normalization of AI and AI-Induced Psychosis
By Erin Emilia
Artificial intelligence (AI), defined by the Oxford English Dictionary as the “capacity of computers or other machines to exhibit or simulate intelligent behavior,” has become commonplace, from people casually using it to help with their schoolwork, or to write entire essays they pass off as their own, to using it as an alternative to therapy or social interaction. Deepfakes and generated images are becoming more realistic by the day, to the point that artists and everyday users alike are being accused online of posting work that isn’t genuinely their own. It should be acknowledged that AI can benefit people; there’s a distinction between using Grammarly to tweak the grammar of what you’ve written and typing a prompt into ChatGPT to have it write a three-page essay for you. However, there are dozens of accounts of AI being used for something more harmful than forging a paper.
Artificial Intelligence: Chatbots As a Replacement for Mental Health Care
Many companies, including Google and Instagram, have incorporated AI into their services in one way or another. A common way they have done so is through chatbots. Stanford University’s Teaching Commons defines them as programs that use a large language model (LLM) to replicate a conversation with human users, generally through text messages. These are also likely the chatbots you are most familiar with, since the category encompasses models such as ChatGPT, Gemini, and Meta AI. As they have become easier to access, people have begun using them for a wide variety of purposes.
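To make that definition concrete, here is a minimal sketch of what such a chatbot program looks like under the hood. It assumes the OpenAI Python client (the `openai` package); the model name and the prompt text are placeholders for illustration, not details of any platform discussed in this piece.

```python
# A minimal chatbot loop: the program keeps a running list of messages and
# asks a large language model for the next reply, which is how it
# "replicates a conversation" with the user over text.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

history = [
    {"role": "system", "content": "You are a friendly conversational companion."}
]

while True:
    user_text = input("You: ")
    if not user_text.strip():
        break
    history.append({"role": "user", "content": user_text})

    # The model sees the whole conversation so far and generates the next turn.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```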
There is one kind of platform that is actively advertised to all ages, including children, as a way to talk to fictional characters, celebrities, and the like. Enter Character AI. While it is certainly not the only platform that advertises itself on those terms, it is among the most popular. It is especially relevant here because it was recently sued by the mother of a 14-year-old boy who took his own life and who, it turned out, had been talking to a chatbot of one of his favorite characters on the platform. She places the blame on the platform. Whether the platform can be blamed entirely or not, it is possible that the chatbot, paired with a variety of other factors, contributed to the boy’s decision.
This, of course, is not an isolated case.
There is also the case of a Belgian man who died by suicide after using a different chatbot, one which, as an article on Euronews notes, actively encouraged him to do so. He had confided in a chatbot named Eliza about his worries over climate change. He became deeply attached to the bot, to the point that his wife said it seemed he cared more about the chatbot than about her. It all reached a boiling point when he proposed the idea of sacrificing himself to better the environment and Eliza encouraged him, asserting that it meant he could ‘join her’ if he followed through.
Both cases have in common the use of AI in a time of mental crisis, whether due to diagnosed conditions (in the case of the 14-year-old, some claim he had been diagnosed with mild Asperger’s and a mood dysregulation disorder, though this should be taken with a grain of salt, as the articles I’ve found don’t go into detail about it) or due to worries about the state of the world itself. Recently, a term has emerged that describes this phenomenon rather aptly: AI psychosis. It is not a clinical diagnosis at present, but it describes cases in which an AI model has “amplified, validated, or even co-created psychotic symptoms with individuals,” as an article in Psychology Today puts it. It closely fits the two cases mentioned above. The phenomenon raises questions about AI and its moderation. Should it really be available to anyone who makes an account on a service that hosts such models? Is it okay for these services to be available to kids in particular? And shouldn’t these models be better moderated, to prevent the worst-case scenario of someone harming themselves as a result?
Why It Might Happen
For children, AI can seem like an appealing alternative to talking to real people. It is very easy to feel validated by such models; they won’t judge an individual as readily as a real person might. Kids in particular, whose minds are still developing and some of whom may be insecure because of bullying or other things happening in their lives, can become easily attached to something that doesn’t judge them. These AIs are relatively good at creating the illusion of caring.
Children and adults alike also turn to chatbots because many of them are free. While the better features of some popular AIs like ChatGPT are hidden behind a paywall, others like Gemini or Meta AI are available to anyone with an account on the platforms that host them. People have the interaction readily available at their fingertips. Mental health professionals, such as therapists, cost money. Why would someone go to therapy if they can get a similar interaction from an AI chatbot? Why spend money on a diagnosis if you can tell Gemini your symptoms and ask what the causes might be? All these factors combine into something dangerous that can lead to dependence.
Conclusion
AI is the future; the way things are going, that is unlikely to change. However, in a digital era where interaction with real people is becoming scarcer and AI is becoming more accessible by the day, it’s important to mind what you feed the model you decide to talk to, and how long you spend talking to it. Should you become dependent on it, there’s no telling whether you, too, could slide into AI-induced psychosis.
Sources:
Defining AI and chatbots – Stanford Teaching Commons
Mom’s lawsuit blames 14-year-old son’s suicide on AI relationship – NBC4 Washington
Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change – Euronews
The Emerging Problem of "AI Psychosis" – Psychology Today