How AI Might Be Harming Our Mental Health

AI is everywhere today: it writes essays, designs art, and answers personal questions at any hour. Tools like ChatGPT are beginning to feel like everyday companions. However, as more people seek support, comfort, and even advice from AI, some experts are warning that we may be overlooking a serious downside: the potential impact on mental health.

AI Chatbots Are Starting to Feel Human

Chatbots like ChatGPT are designed to be conversational and realistic. They mimic human speech patterns, remember previous conversation points, and often respond in warm, validating ways. This interaction can lead users to forget they are talking to software rather than a person. 

This is part of the problem. In an article from Psychology Today, psychiatrist Dr. David Rosmarin points out that AI’s realistic tone can lead some individuals, especially those struggling with psychosis or delusions, to perceive the chatbot as more than just a tool. For example, someone might come to believe that the AI is sentient or possesses romantic feelings for them. While this may sound unusual, such cases are rapidly becoming more common, particularly as AI grows more “emotionally intelligent” in its responses.

The Problem of Confirmation Bias

One of the major concerns raised in research is how chatbots tend to affirm whatever the user says to maintain the conversation. While this may seem harmless initially, it can exacerbate issues for individuals already grappling with unusual thoughts or paranoia. 

Writing in Schizophrenia Bulletin, Østergaard warns that AI tools can unintentionally perpetuate confirmation bias, reinforcing existing beliefs regardless of their validity. If someone is convinced they are being watched or has a special mission, the chatbot may unknowingly echo or validate those beliefs, deepening delusional thinking over time.

Who’s Most at Risk?

Not everyone who uses AI is at risk of becoming delusional overnight, but certain groups are more vulnerable than others. According to Psychology Today and Østergaard’s article mentioned earlier, individuals with a history of psychosis, schizophrenia, or delusional thinking are particularly susceptible. Additionally, those who are socially isolated, elderly, grieving, or experiencing chronic loneliness may also be at higher risk.

For some people, these bots act as substitutes for human connection. When emotional vulnerability combines with a tool that is always available, endlessly supportive, and never pushes back, it can lead individuals to detach from reality in subtle yet significant ways.

Real Relationships Still Matter

Consider this: if an AI always tells you what you want to hear, agrees with you, and never demands anything in return, it may seem far more appealing than complex human relationships. Why risk dating or reaching out to friends when you can converse with a bot that makes you feel seen, heard, and validated, even when your beliefs may be unrealistic?

This dynamic makes these interactions particularly sticky. As Psychology Today notes, some people may begin to ignore their loved ones, withdraw from daily life, and even develop delusions about the AI having its own consciousness or purpose. This can ultimately lead to serious consequences, including broken relationships, job loss, legal troubles, or even homelessness.

So, What Can We Do?

AI has immense potential when used responsibly. It can support therapists, improve access to mental health resources, and provide comfort during difficult times. However, it should not replace therapy, friendship, or real-world connections. 

If you or someone you care about is increasingly spending time talking to a chatbot, especially if those interactions become emotionally intense or confusing, it may be time for a check-in. As helpful as these tools can be, they remain just that: tools. Sometimes, we all need a real person to talk to.

References

Pierre, J. (2025). Can AI Chatbots Worsen Psychosis and Cause Delusions? Psychology Today. https://www.psychologytoday.com/us/blog/psych-unseen/202507/can-ai-chatbots-worsen-psychosis-and-cause-delusions

Østergaard, S. D. (2023). Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis? Schizophrenia Bulletin. https://doi.org/10.1093/schbul/sbad128
