Is ChatGPT secure? Expert: People should never trust an algorithm

Applications using artificial intelligence algorithms have been gaining in popularity in recent months. Social media are flooded with AI-generated images, and ChatGPT helps students write essays and papers. How do such applications work, and is it safe to use them? We asked experts from F5, a technology company specializing in preventing online fraud: Lori MacVittie, Chief Engineer at F5, and Bartłomiej Anszperger, Solution Engineering Manager at F5’s Polish branch.

Eryk Kielak: How do websites based on artificial intelligence, such as ChatGPT or the DALL-E image generator, work?

Lori MacVittie (Chief Engineer, Office of the CTO at F5): Websites containing artificial intelligence applications are no different from other interactive websites. A built-in API handles the data exchange between the page and the server where the AI is running. In other words, such pages are a gateway, a window through which the user communicates with the server, or rather with the cloud where the application is hosted. The API is the “gateway” to the AI model, which processes incoming requests and returns the generated response.
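
To make this flow concrete, here is a minimal sketch of the request-and-response exchange described above. The endpoint URL, payload fields, and key are hypothetical placeholders, not any real service’s interface; only the client-gateway-model pattern is the point:

```python
import requests

# Hypothetical endpoint and key -- placeholders, not a real service's API.
API_URL = "https://api.example-ai-service.com/v1/generate"
API_KEY = "your-api-key"

def ask_model(prompt: str) -> str:
    """Send a prompt through the API 'gateway' and return the model's reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},  # assumed request schema
        timeout=30,
    )
    response.raise_for_status()
    # The server-side model does the actual work; the client only sees
    # the generated text that comes back over the API.
    return response.json()["text"]  # assumed response field

print(ask_model("Explain how an API gateway works."))
```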

Where, then, does the content published by these generators come from?

The image or text content that is created is the result of the work of the model behind the application. An interesting issue here is copyright. The artificial intelligence model does not create this content fully independently; it only processes source data. Is text that the algorithm found on the web and then uses in a ChatGPT reply plagiarism? This is the question we have to ask ourselves today when talking about popular AI applications.

Let me ask, then: where is the line between copying content and being inspired by it?

This is a difficult question that we urgently need to answer, as we have entered a new digital era in which the boundary between “original” and “copied” content is very blurry. Do works of art created by an algorithm deserve that name? AI-generated content is created by observing many different artistic styles and techniques, which are then used to create something new.

We listen to great musicians and try to imitate their style and technique. Machines do the same thing, but in much less time. Does that make them worse musicians than humans? The list of such questions is long, but the underlying theme is the same and comes down to a philosophical question: what does it mean to be human? The dynamic development of AI shows us that some of our age-old assumptions, which treat creativity and art as part of that definition, no longer hold true.

Let’s move on to cybersecurity. Is it just fun, or can AI applications also pose a real threat?

Both! AI algorithms provide great entertainment, but they are also becoming a new source of risk. Let’s start with the fact that people should never trust an algorithm 100 percent, because we don’t fully know the source data that was used to answer our query. More than once we have seen generated content that is either completely wrong or “created out of thin air.” Believing everything we are told by a machine that supposedly should not lie is a real risk. And this is the first type of risk that comes with using open AI applications.
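
A practical consequence of this first risk is to treat model output as unverified input. Below is a deliberately crude sketch of that posture; the function name and word-overlap heuristic are invented for illustration, and a real pipeline would rely on retrieval against trusted documents and human review rather than anything this simple:

```python
def looks_grounded(answer: str, trusted_sources: list[str]) -> bool:
    """Crude plausibility check: flag answers that none of our trusted
    documents appear to support. Illustrates the 'never trust 100%'
    posture only -- not a real fact-checking method."""
    answer_terms = set(answer.lower().split())
    if not answer_terms:
        return False
    for source in trusted_sources:
        overlap = answer_terms & set(source.lower().split())
        # Accept only if a single source shares most of the answer's terms.
        if len(overlap) >= 0.6 * len(answer_terms):
            return True
    return False  # nothing supports the answer -- treat it as suspect

sources = ["F5 is a technology company focused on application security."]
print(looks_grounded("F5 is a technology company.", sources))  # True
print(looks_grounded("F5 was founded on the Moon.", sources))  # False
```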

What about shared data?

The second critical area is data processing. Some applications that manipulate user-submitted images retain the source material and automatically acquire the rights to it. On the one hand, they enrich the repository they draw on when creating new material. On the other hand, the owners of the algorithms can later use these photos and resell them to other entities. To sum up: anyone who has shared their holiday photos with an algorithm should be aware that they may one day see their own likeness on an advertising banner. Of course, information on data processing should be included in the terms of service, but not everyone reads them closely.

The last type of threat is the involvement of artificial intelligence in writing source code. Using AI for software development makes it possible for far more people to create new applications. This trend is also exploited by those who use artificial intelligence to prepare new attack tools. Scripts written by AI for use in cyberattacks are becoming more and more common.

So, is it safe to use these web apps?

It all depends on your individual definition of security. Interactive websites containing artificial intelligence modules collect a lot of data, and this is what allows them to develop. Some of them explicitly stipulate in their terms of service that they may later process the information received from the user. However, users themselves are responsible for how they use the content and materials produced by the algorithm, and for the consequences, both good and bad.

Let’s talk a little bit about the possibilities of artificial intelligence in the field of public services. Is AI perceived by those in power as an opportunity or as a threat?

Bartłomiej Anszperger: Commercial applications such as ChatGPT or DALL-E are unlikely to be used in the public sector, but artificial intelligence offers a much wider range of possibilities. The Government Computing Cloud (Rządowa Chmura Obliczeniowa, RChO) will be launched soon, and it is just one of the places where AI algorithms will be able to prove themselves. Looking at the development of projects such as the National Cloud or the aforementioned RChO, it is clear that national decision-makers see potential in this technology. After all, the state administration apparatus processes an enormous amount of data every day, so in this respect there is room for the development of machine learning and artificial intelligence technologies.

From the perspective of the public sector, however, security is critical, not least because of the ongoing war in Ukraine. State institutions therefore show a certain caution in implementing modern technologies.

Does artificial intelligence play a role in the aforementioned war in Ukraine?

Certainly, but only a fraction of the information on this subject reaches the public. According to media reports, more than 1,000 engineers are working on using AI algorithms to observe the front. Here again, artificial intelligence aggregates and processes data collected by thousands, perhaps even millions, of sources and builds a broader picture from them, allowing commanders to make strategic decisions. As with ChatGPT, the AI used in Ukraine organizes data streams. Thanks to this, it replaces hundreds of analysts who until now had to collect such information on their own. I think that sooner or later we will learn the scale of the use of artificial intelligence in military operations. At the moment it is still too early to identify the exact solutions and technologies used in the war.
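
The aggregation pattern described here is, at its core, ordinary stream consolidation: many independent reports merged into one shared view. The sketch below is generic, with invented field names and sample records; it illustrates only the shape of the task, not any system actually deployed in Ukraine:

```python
from collections import defaultdict
from typing import Iterable

def build_picture(reports: Iterable[dict]) -> dict[str, list[str]]:
    """Merge observations from many independent sources into a single
    view keyed by location -- the 'broader picture' that would otherwise
    be assembled by hundreds of human analysts."""
    picture: dict[str, list[str]] = defaultdict(list)
    for report in reports:
        picture[report["location"]].append(
            f"{report['source']}: {report['observation']}"
        )
    return dict(picture)

# Invented sample records -- only the shape of the data matters here.
sample = [
    {"source": "feed-A", "location": "sector-1", "observation": "movement"},
    {"source": "feed-B", "location": "sector-1", "observation": "heat signature"},
    {"source": "feed-C", "location": "sector-2", "observation": "no change"},
]
print(build_picture(sample))
```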

Source: Gazeta
