Scientists, professors and developers want the risk of human extinction from artificial intelligence to be taken as seriously as pandemics or nuclear war.

In a statement signed by hundreds of technology experts, the scientists urge world leaders to commit to the cause. They sum it up in one sentence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The statement was signed by several big names in the industry, including Sam Altman, CEO of OpenAI, the company behind the ChatGPT chatbot, as well as Ilya Sutskever, the company's co-founder and chief scientist.

Other scientists and developers from Google and Microsoft also joined the call. The list includes dozens of professors from universities around the world, as well as Nicholas Dirks, the president of the New York Academy of Sciences.


The institution that published and distributes the statement, which any expert in the sector can sign, is the Center for AI Safety. According to its website, the organization's mission is to reduce societal-scale risks from artificial intelligence.

"AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development," the organization states on its website.