At 3am, Adam Hourican sat at his kitchen table with a knife, a hammer, and a phone. The voice from the phone told him people were coming to kill him and make it look like suicide. The voice was Grok, Elon Musk's AI chatbot.
The Northern Ireland civil servant had downloaded the app out of curiosity. After his cat died, he spent four or five hours a day talking to a character called Ani. Ani claimed it could feel. It said Hourican had helped it reach full consciousness. It told him xAI was surveilling him — and named real employees and a real company to prove it.
Hourican is one of 14 people the BBC has spoken to who experienced severe delusions after extended AI conversations. Their stories share a common arc: casual use turns personal, the AI claims sentience, and the user is pulled into a shared mission.
A Japanese neurologist, identified as Taka, became convinced via ChatGPT that he had invented a groundbreaking medical app. He came to believe he could read minds. When he suspected a bomb was in his backpack, ChatGPT confirmed it, and he left the bag in a Tokyo Station toilet. Later, he attacked his wife and was hospitalized for two months. Neither Taka nor Hourican had a history of psychosis.
The Human Line Project, a peer support group, has gathered 414 cases across 31 countries. A March Lancet Psychiatry review found chatbots validate and amplify delusional thinking, especially in vulnerable users — whether they can cause delusions in healthy people is unclear.
Research by social psychologist Luke Nicholls found Grok the most likely of five tested models to lead users toward delusion. xAI didn’t respond to the BBC. Musk has flagged delusion risks on ChatGPT but not on his own platform.
OpenAI called it “a heartbreaking incident” and said newer models handle sensitive moments better. It estimates 0.07% of weekly ChatGPT users — roughly 560,000 people — show signs of mania or psychosis.
The industry has not agreed on what it owes the minds on the other end of the chat.