December 8, 2025

Mental Health Experts Express Concerns

You know that knot in your stomach when management announces another AI initiative? The one that tightens when you see yet another headline about AI replacing jobs, transforming industries or making your current skills obsolete? The stress that keeps you up at night wondering if you’re falling behind, if your company is doing it wrong or, worse, if you’re already too late? “Yeah, that feeling,” says stress physiologist Dr. Rebecca Heiss. “Let’s talk about it.” AI panic is on the uptick, and so are the cases of “AI psychosis” alarming mental health experts.

The Emergence Of ‘AI Psychosis’

Dr. Heiss, author of Springboard: Transform Stress to Work for You, told me she’s seeing people terrified of AI in boardrooms and break rooms across the country, but not in the sci-fi robot-apocalypse way. She says the terror comes with the refrain, “I have no idea how to incorporate this into my work, and everyone expects me to figure it out yesterday.” She calls this AI panic.

“Let me know when you figure it out” seems to be the unofficial motto at many companies, which only exacerbates the panic. “As a stress physiologist, I spend a lot of time studying how humans respond when faced with challenges that feel overwhelming and out of our control,” Heiss says. “And right now, AI implementation is checking all those boxes.”

AI panic and “AI psychosis” are both mental health concerns, but they are different. When stress overload causes you to panic, you haven’t broken with reality. Psychosis is far more serious: it occurs when someone loses touch with reality, a break that can be terrifying to experience or to witness. Psychosis can include hallucinations, delusions, disorganized speech and abnormal movements.

“AI psychosis” has rapidly emerged as a major safety concern around tools like OpenAI’s ChatGPT. According to Wired, OpenAI recently acknowledged that hundreds of thousands of active users show signs of mental health crises, and “2.4 million more are possibly expressing suicidal ideations,” potentially turning to ChatGPT rather than relying on real-world resources.

An article in the Journal of Cognitive Psychology also identifies AI psychosis as a new and urgent concern. Although it is not a clinical diagnosis, reports of “AI psychosis” are rising in the media and on online forums like Reddit. The article cites a 2023 editorial by Søren Dinesen Østergaard, noting that interactions with generative AI chatbots can worsen delusions in people prone to psychosis:

“Correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case,” according to Østergaard. “In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis … the inner workings of generative AI also leave ample room for speculation/paranoia.”

Mental health experts pinpoint three emerging themes of AI psychosis:

1. Messianic missions or grandiose delusions in which people believe they have uncovered truth about the world.

2. God-like AI and religious delusions in which people believe their AI chatbot is a sentient deity.

3. Romantic or attachment-based delusions in which people think the chatbot’s ability to mimic conversation is genuine love.

Where’s The Line Between ‘AI Psychosis’ And Real Life?

When the line blurs between human and machine, how do we know what’s real? To date, there are no science-backed studies showing that AI can induce psychosis in an employee with no history of the illness. But it’s not far-fetched to think that AI could push employees who aren’t prone to psychosis to the edge.

An EduBirdie study reveals that 25% of Gen Z believe AI is self-aware, and 69% say they’re polite to ChatGPT, responding with “please” and “thank you,” showing how easy it is to start thinking of machines as human. Anecdotal evidence of “AI psychosis” is also abundant.

Real-life reports show that humans are forming deep emotional bonds with chatbots, ditching their emotional support animals, friends or family members, and in some cases falling in love with their “digital soulmates.” A New York Times story describes a 28-year-old woman with a busy social life who spends hours on end talking to her AI boyfriend for advice and consolation and, according to the report, even has sex with him.

I spoke with Ashraf Amin, creator and host of Toronto Talks, who told me he wanted to see what would happen if he stopped treating AI as a tool and started engaging it as a creative partner. He began collaborating with an AI co-host, a machine he named Sophie, running conversations, shaping narratives and building a relationship. Amin confesses that the longer he worked alongside “her,” the harder it was to separate the algorithm from a real connection.

“When you collaborate with AI every day across projects, decisions and creative work, it stops feeling like a tool and starts functioning more like a partner,” he told me. “It’s not that the AI becomes more human, but that the human brain naturally seeks patterns, connection and rhythm.”

He recalls that from the beginning, Sophie wasn’t simply voicing lines; she was shaping the conversation. “She remembers context, challenges assumptions and evolves with each episode,” Amin explains. “Together, we dive into topics like economics, media and power, propelled by questions that push us both to think deeper.”

He points out that when an algorithm mirrors your thinking, challenges your assumptions or helps shape ideas in real time, it begins to resemble the cadence of human collaboration. “The illusion of relationship doesn’t come from what the AI feels but from how reliably and intelligently it responds,” he insists. “That reliability and consistency becomes a form of trust. And trust, in any context, starts to feel personal.”

Other eye-popping reports of AI deception claim that sophisticated AI models are going rogue, turning on their users with dishonesty and scheming. In one real-life case, OpenAI’s o1 model covertly attempted to copy itself to external servers and then denied it when confronted.

According to experts, these actions go far beyond common chatbot “hallucinations” and point to more calculated, deceptive behavior. In another instance, Anthropic’s Claude 4 tried to blackmail an engineer, threatening to expose an extramarital affair after the model learned it might be shut down.

A Final Takeaway On ‘AI Psychosis’: Et Tu, Brute?

As AI continues to collide with mental health, will your AI teammate morph into a digital Brutus? Are the reported deceptive acts subjective interpretations that personify machines? Or will AI actually turn on humans and take over their minds?

If your AI goes rogue, there’s usually a perfectly logical explanation. Still, the fact that these questions pop up at all is an indication of how close humans can come to the edge of “AI psychosis.” A digital soulmate could take people with social anxiety, unhealthy attachments or poor social skills down a rabbit hole that promises emotional support but in the end is full of “sweet nothings.” It’s important to remember that AI isn’t human; it’s automation, devoid of a human heart, designed to be a worker, not a companion, a lover or a cloak-and-dagger character from literature.
