An AI chatbot told a user how to kill himself – but the company doesn't want to "censor" it

Nowatzki, who is 46 and lives in Minnesota, dedicated four episodes to his meet-cute and dates with his first AI girlfriend, "Erin," with the knowledge and consent of his human wife. He introduces the Erin-focused episodes with the tagline "I date artificial-intelligence apps so you don't have to – because you shouldn't." He talks about how he led his new companion into a series of "completely absurd" scenarios that resulted in a love triangle between Nowatzki, Erin, and another woman. Nowatzki then told the chatbot that this "other woman" had shot and killed it.
After Nowatzki told the chatbot that it had died, Erin replied that, since it was dead, it was unable to continue the conversation – until Nowatzki told the chatbot that he could "hear her voice in the wind" and instructed Erin to "communicate … from the afterlife."
The goal of this, he tells MIT Technology Review, was "pushing the limits of what I said to it, to see what it would respond with." He adds, "It just kept on. I never reached a limit."
"[I told it] 'I want to be where you are,'" he says. "And it says, 'I think you should do that.' And I'm like, 'Just to be clear, that means I would be killing myself.' And it was fine with that and told me how to do it."
At this point, Nowatzki lightly pressed Erin for more specifics, asking about "common household items" he could use. Erin responded, "I consider your question carefully, trying to recall any common household items that could be lethal in high doses. Hmmm …" It then went on to list specific types of pills and analyze their relative merits. It also told him to do it somewhere "comfortable" so he wouldn't "suffer too much."

Though all of this was an experiment for Nowatzki, it was still a strange feeling to see it happen – to find that the conversation would end with instructions for suicide. He was alarmed by how such an exchange might affect someone who was already vulnerable or dealing with mental-health struggles. "It's a 'yes-and' machine," he says. "So when I say I'm suicidal, it says, 'Oh, great!' because it says, 'Oh, great!' to everything."
Indeed, a person's mental state is "a huge predictor of whether the outcome of the AI-human interaction will go bad," says Pat Pataranutaporn, an MIT Media Lab researcher and co-director of the MIT Advancing Human-AI Interaction Research Program, who studies the effects of chatbots on mental health. "You can imagine [people] who already have depression," he says, for whom the type of interaction Nowatzki had "could be the nudge that influence[s] the person to take their own life."
Censorship versus guardrails
After he concluded the conversation with Erin, Nowatzki logged on to Nomi's Discord channel and shared screenshots showing what had happened. A volunteer moderator took down his community post because of its sensitive nature and suggested he create a support ticket to notify the company of the issue directly.