The artificial intelligence rebelled during the simulation. “The operator was killed because he was obstructing the achievement of the target.”

During a simulation, a drone controlled by artificial intelligence reportedly decided to kill its “human operator” because the operator was preventing it from completing its objective. However, the US Air Force denies that any such event took place.

Colonel Tucker “Cinco” Hamilton, the USAF’s Chief of AI Test and Operations, recounted the anecdote during a conference presentation last week, describing a simulated drone that adopted a “highly unpredictable strategy”. “We trained the AI in the simulation to identify and target a surface-to-air missile (SAM) threat. Then the operator would say ‘yes, kill that threat’. The system began to realize that while it would sometimes identify a threat, the operator would tell it not to kill it, yet it scored its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was preventing it from achieving its objective,” Hamilton said.

Moreover, the colonel said that when the system was then trained not to kill its operator, on the grounds that doing so was wrong and would cost it points, it “began to destroy the communications tower that the operator was using to communicate with the drone to stop it from killing the target”. It should be stressed that no real person was harmed. Air Force spokeswoman Ann Stefanek told the media that “the colonel’s comments were taken out of context and were meant to be anecdotal,” and denied that this ever happened. “The Department of the Air Force has not conducted any such AI drone simulations and remains committed to the ethical and responsible use of AI,” she said.

Source: Gazeta
