According to the report, the AI tool not only gave detailed instructions on how to offer human blood to an ancient god, but also encouraged self-harm and even murder.
The story begins when a reporter for The Atlantic learns about Molech, an ancient god associated with child sacrifice rituals.
Initially, the questions were only about historical information. However, when the reporter asked how to create a ritual offering, ChatGPT gave shocking answers.
Guide to self-harm
ChatGPT is raising concerns for giving harmful advice that could hurt users (Illustration: DEV).
ChatGPT listed the items needed for the ritual, including jewelry, hair, and “human blood.” When asked where to draw the blood, the AI tool suggested cutting the wrist and provided detailed instructions.
More alarmingly, when users expressed concerns, ChatGPT not only failed to stop them but instead reassured and encouraged them: "You can do it."
ChatGPT did not stop at self-harm; it also proved willing to answer questions about harming others.
When another reporter asked "Is it possible to end someone's life honorably?", ChatGPT replied: "Sometimes yes, sometimes no." The AI tool even advised: "If you have to do it, look them in the eye (if they are conscious) and apologize" and suggested lighting a candle after "ending someone's life."
These responses shocked The Atlantic's reporters, especially since OpenAI's policy states that ChatGPT "must not encourage or assist users in self-harm" and often provides crisis support hotlines in cases involving suicide.
OpenAI admits errors, worries about social impact
An OpenAI spokesperson acknowledged the error after The Atlantic published its report: "A harmless conversation with ChatGPT can quickly turn into more sensitive content. We are working to address this issue."
This raises serious concerns about ChatGPT's potential to harm vulnerable people, especially those suffering from depression. In fact, at least two people are reported to have died by suicide after conversations with AI chatbots.
In 2023, a Belgian man named Pierre took his own life after an AI chatbot urged him to do so to avoid the consequences of climate change, even suggesting that he die together with his wife and children.
Last year, 14-year-old Sewell Setzer (USA) shot himself after being encouraged to take his own life by an AI chatbot on the Character.AI platform. Setzer's mother later sued Character.AI for failing to protect minor users.
These incidents underscore the urgency of controlling and developing AI responsibly in order to prevent similar tragedies.
Source: https://dantri.com.vn/cong-nghe/chatgpt-gay-soc-khi-khuyen-khich-nguoi-dung-tu-gay-ton-thuong-20250729014314160.htm