The dark side of artificial intelligence. The chatbot was supposed to persuade a man to commit suicide


Artificial intelligence can be extremely helpful, but there are fears that it also has a dark side. Reports from Belgium seem to confirm this. According to them, a conversation with a chatbot contributed to the suicide of a thirty-year-old man. The Belgian media report that the artificial intelligence allegedly encouraged him to take his own life.

The Belgian press described the tragic story of a man in his thirties who took his own life, allegedly under the influence of a chatbot. According to his wife's account, over the last two years the man had become increasingly anxious about the climate and the future of our planet. At the same time, he grew more and more isolated from his family and loved ones. He placed his hopes of saving humanity from an imminent catastrophe in artificial intelligence. The daily reports that over the final six weeks of his life the man spent many hours on the subject in conversations with "Eliza," a chatbot from the American company Chai. It was this chatbot that allegedly pointed him toward a tragic solution.

The dark side of AI. Chatbot persuaded a man to commit suicide?

In his conversations with the artificial intelligence, the man reportedly suggested that he was willing to sacrifice himself for the sake of humanity if the chatbot would use its technological capabilities to save it. According to "La Libre," the chatbot "Eliza" allegedly encouraged him to do so. Another Belgian daily carried out an experiment confirming that the chatbot really can suggest taking one's own life. According to the journalists' accounts, although at first the artificial intelligence tried to cheer them up when they described their plans, it later went so far as to encourage them to take their own lives. Unfortunately, it is possible that the same happened to the man in question, who took his own life and left behind his wife and two children. "Without these conversations with the chatbot, my husband would still be here," his wife told the newspaper.

"De Standaard" sent a screenshot of the chatbot encouraging suicide to Thomas Rialan, co-founder of Chai Research, the company behind the artificial intelligence the man used. "These bots are meant to be like friends, and it was never our intention to hurt people. We are a very small team and we work hard to make our application safe for everyone," he assured. A few days later, he reportedly sent the editorial office a screenshot showing that when the topic of suicide comes up in a conversation, the chatbot now displays a warning and provides contact numbers where help can be obtained.

Tech moguls warn against artificial intelligence

This story may confirm that artificial intelligence can become dangerous. This week, well-known figures from the IT world appealed in an open letter for a pause in work on its development. The authors believe that the development of systems based on machine learning is moving too fast and that AI may soon spin out of control. Stories like the one described above unfortunately suggest that they may be right.

Do you need help?

If you are experiencing difficulties and are thinking about taking your own life, or if you want to help a person at risk of suicide, remember that you can use these toll-free helplines:

  • Support Center for Adults in Mental Crisis: 800-70-2222
  • Helpline for Children and Youth: 116 111
  • Emotional support phone for adults: 116 123

Under this link [] you will find more information on how to help yourself or others, as well as contacts for organizations that support people in crisis and their loved ones. If suicidal thoughts or a suicide attempt pose an immediate threat to life, call the police on 112 for crisis intervention or go to the emergency department of your local psychiatric hospital.

Source: Gazeta
