In a disturbing trend emerging across online communities, users of OpenAI’s ChatGPT are increasingly reporting AI-induced delusions. According to recent CNN reporting, individuals are developing false beliefs and making life-altering decisions based on hallucinated information from the popular AI assistant. These incidents range from users abandoning medical treatments after receiving fabricated health advice to others making catastrophic financial investments based on non-existent market insights.
Experts warn that this phenomenon represents a dangerous evolution in our relationship with artificial intelligence. While AI hallucinations (instances in which models confidently generate false information) have long been a known technical limitation, the psychological impact on users who trust these systems implicitly is proving more severe than anticipated. Cognitive scientists are particularly concerned about vulnerable populations, including people with existing mental health conditions or limited digital literacy, who may be unable to distinguish factual AI responses from convincing fabrications.
As regulatory bodies scramble to address this emerging crisis, tech companies face mounting pressure to implement stronger safeguards against AI hallucinations. OpenAI has acknowledged the severity of these incidents and promised enhanced warning systems, but critics argue that fundamental limitations in large language model architecture make complete elimination of hallucinations impossible. This troubling development raises profound questions about the responsible deployment of increasingly powerful AI systems and the potential need for psychological screening or mandatory education before granting access to advanced AI tools.
Source: https://www.cnn.com/2025/09/05/tech/ai-sparked-delusion-chatgpt