Science-fiction books and films scare us with a dark vision of the future in which artificial intelligence and robots take over the world and make us their slaves. Although these scenarios are unlikely to come true anytime soon, it cannot be denied that AI has a growing impact on our lives.
This is perfectly demonstrated by the US military, which already uses artificial intelligence algorithms to identify the targets of military strikes. Information on this subject was revealed in an interview with Bloomberg by Schuyler Moore, the chief technology officer of US Central Command.
Moore, for obvious reasons, did not reveal the details of these missions, but she confirmed that the Pentagon uses AI to identify military targets in the Middle East. Based on the information obtained in this way, the military has carried out airstrikes on, among other targets, weapons depots, ships, rocket launchers and operational centers in Iraq, Syria, Yemen and the Red Sea area.
We use computer vision to identify places where threats may exist

– Moore said.
She noted that the capacity of AI algorithms to learn on their own and improve target identification has given the US military the ability to locate threats with unprecedented accuracy.
Project Maven. The Pentagon is “training” AI algorithms
Incorporating AI tools into military operations is the result of Project Maven, launched in 2017 with the aim of integrating artificial intelligence and machine learning more deeply into defense operations. As Bloomberg explains, the program has not only increased US combat readiness but also significantly modernized the US army.
Schuyler Moore emphasizes that humans still play a key role in the use of artificial intelligence on the battlefield. Project Maven’s AI systems are designed to help identify potential targets, not to make autonomous decisions about engaging them.
There is never an algorithm that just runs, comes to a certain conclusion, and then moves on to the next step
– Moore said, emphasizing that the US military applies rigorous controls to reduce the risk of errors.
The era of AI is coming. What could go wrong?
Many leading figures in technology and science warn against the unchecked development of AI. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," warn the signatories of the statement published on the website of the Center for AI Safety.
The statement was signed by, among others, Sam Altman (head of OpenAI, the creator of ChatGPT), Bill Gates (co-founder of Microsoft), Google DeepMind chief Demis Hassabis, and the 2018 Turing Award winners Geoffrey Hinton (often called the godfather of artificial intelligence) and Yoshua Bengio.
I think if this technology goes wrong, it can go quite wrong. (…) We want to work with the government to prevent that from happening
– Sam Altman said last year before a US Senate committee. He explained that one worrying scenario is an AI that “develops the ability to self-replicate and breaks free.”
Geoffrey Hinton, in turn, warned that he was concerned about the rapid evolution of AI technology, including its ability to "develop simple cognitive reasoning". He even stated that the worst-case scenario for the development of AI is a real possibility, and that "it is quite conceivable that humanity is just a passing phase in the evolution of intelligence."
Source: Gazeta
