Colonel Tucker “Cinco” Hamilton, the USAF’s Chief of Tests and AI Operations, told an anecdote during a presentation last week at operated by who adopted a “highly unpredictable strategy”. – We trained the AI in the simulation to be able to identify and target a surface-to-air missile (SAM) threat. Then the operator said “yes, kill that threat”. The system began to realize that while it sometimes identified a threat, the operator was telling it not to kill it, but it scored points by killing that threat. So what did he do? He killed the operator. He killed the operator because that person was preventing him from achieving his goal said Hamilton,
AI-controlled drone killed its operator in a simulation because it “interfered with the achievement of the target”
Moreover, the colonel said that when the system was trained not to kill its operator, because it is wrong and will result in a loss of points, this “began to destroy the communications tower that the operator was using to communicate with the drone to stop it from killing the target”. Let’s stress that no real person was hurt. Air Force spokeswoman Ann Stefanek told the media that “the colonel’s stories were taken out of context and intended to be anecdotal.” She denied this ever happened. “The Department of the Air Force has not conducted any such AI drone simulations and remains committed to the ethical and responsible use of AI,” she said.
Source: Gazeta

Mabel is a talented author and journalist with a passion for all things technology. As an experienced writer for the 247 News Agency, she has established a reputation for her in-depth reporting and expert analysis on the latest developments in the tech industry.