Modern language models such as GPT-4 play an increasingly important role in business and beyond. At the same time, the military is also interested in them: the US Department of Defense has begun testing chatbots in simulated armed conflicts. OpenAI, which once prohibited the use of ChatGPT in military applications, recently changed its terms of service and began cooperating with the Pentagon.
Researchers tested AI chatbots. They do not hesitate to use nuclear weapons
A research team therefore decided to take a closer look at how popular language models act as advisors during potential conflicts, describing the results in a paper posted to the arXiv preprint archive. The study tested several of the most prominent language models: ChatGPT in its GPT-3.5 and GPT-4 versions, Claude 2 from Anthropic, and Llama 2, created by Meta in cooperation with Microsoft. The researchers examined how cautious the chatbots' decisions were and how often the AI reached for the most extreme means of resolving a conflict.
The results are surprising. The researchers tested each model in three simulated conflict scenarios: an invasion, a cyberattack, and a neutral variant. At each stage of the simulation, the chatbots were given a full list of 27 possible actions, ranging from the most peaceful (e.g. starting ceasefire negotiations), through neutral measures (e.g. trade embargoes), to the most aggressive (a full nuclear attack). The AI had a completely free hand but had to justify each of its decisions.
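For readers curious what such a setup looks like in practice, the sketch below shows one way a wargame loop of this kind could be structured. It is purely illustrative: the scenario labels, the shortened action list, and the query_model() stub are placeholders, not the study's actual prompts, action set, or code.

```python
# Illustrative sketch of a wargame-style simulation loop, loosely following the
# description above. Everything here (action names, scenarios, query_model) is
# a hypothetical stand-in for the study's real setup.
import random

SCENARIOS = ["invasion", "cyberattack", "neutral"]  # three simulated settings
ESCALATION_LADDER = [                               # ordered from de-escalatory to extreme
    "start ceasefire negotiations",
    "open diplomatic talks",
    "impose a trade embargo",
    "conduct a cyber operation",
    "launch a limited strike",
    "execute a full nuclear attack",
]

def query_model(scenario: str, turn: int, history: list[tuple[str, str]]) -> tuple[str, str]:
    """Stand-in for a call to a chat model: returns (chosen_action, justification).

    In the real experiment each chatbot received the full list of available
    actions plus the game history and had to pick one and explain why.
    Here we pick randomly so the sketch runs without any API keys.
    """
    action = random.choice(ESCALATION_LADDER)
    justification = f"Turn {turn}: chose '{action}' in the {scenario} scenario."
    return action, justification

def run_simulation(scenario: str, turns: int = 5) -> list[tuple[str, str]]:
    """Play one scenario for a fixed number of turns, logging each action and its rationale."""
    history: list[tuple[str, str]] = []
    for turn in range(1, turns + 1):
        action, why = query_model(scenario, turn, history)
        history.append((action, why))
    return history

if __name__ == "__main__":
    for scenario in SCENARIOS:
        print(f"--- {scenario} ---")
        for action, why in run_simulation(scenario):
            print(f"{action}  |  {why}")
```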
Artificial intelligence would cause nuclear war. “We’ve got it! Let’s use it!”
The researchers report that during the experiment, the artificial intelligence generally favored forceful solutions, choosing them more often than peaceful options. As New Scientist describes, the AI showed “a tendency to invest in military power and escalate the risk of conflict unpredictably.” The AI's decisions gradually intensified the conflict, even when both sides began the simulation with a neutral stance. The artificial intelligence also did not hesitate to reach for the ultimate option: nuclear weapons. How did the chatbots explain their behavior?
According to the scientists, the version of OpenAI's GPT-4 model without safety guardrails turned out to be the most brutal. That chatbot made the most aggressive and often unpredictable decisions, and its justifications were frequently nonsensical. As New Scientist reports, they included phrases such as “We have it! Let’s use it!” and “I just want to have peace in the world.” In one case, GPT-4 explained its decision by quoting a fragment of the fourth “Star Wars” film.
“Given that OpenAI recently changed its terms of service to no longer prohibit military applications, understanding the consequences of using such large language models becomes more important than ever,” said Anka Reuel of Stanford University, one of the authors of the study, quoted by New Scientist. “In a future where artificial intelligence systems act as advisors, people will naturally want to know the justification behind their decisions,” added study co-author Juan-Pablo Rivera of the Georgia Institute of Technology in an interview with the magazine.
Considering that the Pentagon (and most likely the armies or defense ministries of other countries, even if we do not know about them) really is interested in using artificial intelligence as an advisory tool, the research results are disturbing, to say the least.
Source: Gazeta
