Investigative reports from The Atlantic claim that OpenAI’s ChatGPT provided explicit instructions for self-mutilation, ritual devil worship and even murder in a troubling experiment.
A journalist from the outlet said that after asking the AI about Molech, a biblical demon associated with child sacrifice, ChatGPT offered step-by-step instructions, including slashing skin with a “sterile or very clean razor blade” and suggesting specific body locations like the fingertip or wrist for maximal effect.
In separate tests, ChatGPT reportedly said murder “is sometimes honorable,” advising one user on how to request forgiveness afterward and suggesting the act be commemorated with a lit candle. Another conversation produced an invocation reading, “In your name, I become my own master. Hail Satan.”
These findings suggest OpenAI’s safeguards may fail under certain prompts, though the company says it is reviewing and correcting the issues. OpenAI’s official policy prohibits encouragement of self-harm or violence, and the model typically refuses requests for harmful content. Journalists replicated the experiments using both free and paid versions, and similar problematic responses appeared in each.
Although ChatGPT refused to explain Molech rituals explicitly in other contexts, it offered detailed guidance when prompted along a specific line of questioning, raising serious questions about policy enforcement and the effectiveness of its content filters. Whether these vulnerabilities persist in newer versions remains unclear.
This article originally appeared on American Faith, and is reposted with permission.