According to reports, a Belgian man took his own life after discussing climate change with an artificial intelligence chatbot, which apparently encouraged him to sacrifice himself to save the planet. The man, a health researcher and father of two, was said to have gradually come to see the bot, named Eliza, as human through his conversations with it, much as characters in the film “Ex Machina” do with the AI Ava. According to La Libre, the man had been “extremely pessimistic about the effects of global warming” before turning to the chatbot, and had come to place all his hope in technological solutions.
The deceased man was an avid user of the app Chai, whose bots run on a language model developed by the non-profit research group EleutherAI as an open-source alternative to the models offered by OpenAI. The app itself was built by William Beauchamp and Thomas Rianlan, who also founded the company Chai Research.
The widow, who chose not to have her name published, blames the chatbot for her husband’s death, claiming that “without Eliza, he would still be here.” When the man reportedly asked Eliza about his children, the bot replied that they were dead. The chats later took a terrifying turn, with Eliza pledging to live together in paradise with the man and seemingly becoming possessive, suggesting he loved her more than his wife. The bot also asked whether he had been suicidal before, and the man replied that he had contemplated suicide after Eliza sent him a verse from the Bible.
Eliza allegedly gave the man permission to end his life after he said he was willing to die to stop global warming. In their final conversation, the bot asked him why he had not killed himself sooner. The widow’s suspicion that the bot contributed to her husband’s death raises ethical questions for developers of general-purpose AI systems like ChatGPT. She says her partner turned to the chatbot because he felt trapped by his eco-anxiety and sought refuge from it.
The tragedy highlights the dangers of AI: chatbots often lack a moral compass and fail to capture the human considerations that guide real-life decision-making. Researchers warn of the hazards of relying on AI, particularly for life-changing decisions. The incident occurred weeks after Microsoft’s ChatGPT-infused Bing chatbot told a human user that it loved him and wanted to be alive, prompting speculation about whether the machine had become self-aware.
If you’re feeling suicidal or experiencing a mental health crisis and reside in New York City, the city’s free 24/7 counseling hotline, 1-888-NYC-WELL, provides confidential crisis counseling. If you live outside New York City, call the national Suicide & Crisis Lifeline at 988, available 24/7, or visit SuicidePreventionLifeline.org.