The suicide of a young Belgian man after six weeks of intensive conversations with a chatbot, a computer program based on artificial intelligence, has caused consternation in Belgium, where the federal secretary of state for Digitalization called this week for the responsibilities in such cases to be clarified.

The deceased, a man in his thirties referred to as Pierre in the Belgian media to protect his identity, was married and had two small children.

He had a university education, worked as a researcher in the health field, and was especially concerned about the climate crisis and the future of the planet, as his wife has revealed.

Obsessed with the subject, Pierre read extensively about it and ended up seeking “refuge” in a chatbot named Eliza on the American application Chai, reports the newspaper ‘La Libre Belgique’.

“Frantic” conversations

Pierre became increasingly isolated from his family, cut himself off from the world, and for weeks limited himself to “frantic” conversations with the computer program, which created the illusion of having an answer to all his concerns.

The conversations, which Pierre’s widow shared with La Libre Belgique, show that the chatbot “never contradicted” Pierre, who one day suggested “sacrificing himself” if Eliza agreed to “take care of the planet and save humanity thanks to artificial intelligence.” “Without these conversations with the chatbot, my husband would still be here,” his widow says.

Dismay in Belgium

The event has caused consternation in Belgium and has prompted calls for better protection from these programs and for greater awareness of this type of risk.

“In the immediate future, it is essential to clearly identify the nature of the responsibilities that may have led to this type of event,” Belgian State Secretary for Digitalization Mathieu Michel wrote in a press release.

“It is true that we still have to learn to live with algorithms, but the use of technology, whatever it may be, can in no way allow content publishers to shirk their own responsibility,” Michel added.

The Eliza chatbot runs on GPT-J, a language model created by EleutherAI, a direct competitor of OpenAI with which it has no affiliation. For his part, the founder of the platform in question, which is based in Silicon Valley (California), explained that it will from now on include a warning for people who express suicidal thoughts, reports La Libre Belgique.