“Something incredible is happening in the world of artificial intelligence, and it’s not all good,” Gary Marcus, one of the leading voices in today’s AI debate, wrote six months ago.
According to him, the launch of ChatGPT marks the machines’ “Jurassic Park moment”: the possibility, as in Steven Spielberg’s film, that the situation spirals out of control.
“When I wrote this article, people thought I was crazy or alarmist,” Marcus said in an interview with the BBC.
But in 2023, serious problems with this type of artificial intelligence began to multiply: in March, in Belgium, a man who had spoken frequently with the chatbot Eliza, from the company Chai, took his own life.
Death by chatbot
The man’s wife claims that contact with the program drove him to take his own life, and for the Belgian government the case is “a precedent to be taken seriously” because “the danger of using [artificial intelligence] is a reality that must be taken into account.”
It was a scenario Marcus had described four months earlier in an article for Wired magazine: “Perhaps a chatbot will hurt someone so deeply that the person is driven to take their own life? (…) In 2023 we may see our first death by a chatbot.”
“I think these systems can be very destructive. And part of the reason for the destructive potential is that they are unreliable. These programs can make something up and tell you [the user] that it is a fact. And they can also be used by people for that purpose,” says Marcus.
“The artificial intelligence systems we have now are not properly monitored. The situation is not yet terrible, but people are giving these systems more and more power, and we don’t know what they might do in a given situation.”
Seven dark predictions
Last year, Marcus compiled “seven dark predictions” about systems like ChatGPT, including that the latest version of the program would be like a “bull in a china shop, reckless and hard to control.”
“It’s going to make a significant number of stunning mistakes, in ways that are hard to predict.”
At the end of March, a curious case attracted the attention of the media. One person asked ChatGPT for names of academics involved in sexual harassment.
The list mentioned an American law professor, Jonathan Turley. The program said that, during a trip to Alaska, Turley had made sexually suggestive remarks to a student and tried to touch her, citing a 2018 Washington Post report as evidence.
But none of it happened: not the trip, not the report, not even the accusation. The chatbot had simply made up the story.
Creating inaccurate information
OpenAI, the company behind ChatGPT, released a statement saying the program “does not always generate accurate answers”.
According to Marcus, “We have no formal guarantee that these programs will work correctly, even if they perform mathematical calculations.”
“Sometimes they’re right, sometimes they’re not. The lack of control and reliability are the problems I see.”
“Your traditional calculator is guaranteed to give you the right arithmetic answer. But large language models offer no such guarantee.”
He is referring to the systems behind ChatGPT, the LLMs (Large Language Models), which are trained on massive amounts of text and use powerful algorithms to generate approximate responses based on what people have already written.
In short: an ultra-sophisticated parrot, one that has no idea what it is talking about and sometimes “hallucinates”, the AI term for a response that is fabricated or factually wrong yet delivered as if it were true, contrary to what its programmers intended.
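To make the “parrot” analogy concrete, here is a minimal sketch in Python of purely statistical next-word prediction. It is a toy bigram model built on a made-up three-sentence corpus; the names and data are illustrative only and say nothing about how ChatGPT is actually implemented, which relies on neural networks at a vastly larger scale.

```python
# Toy illustration (not how ChatGPT is built): a tiny bigram model that
# "learns" which word tends to follow which, then generates text by
# repeatedly sampling a plausible next word.
import random
from collections import defaultdict

# Illustrative made-up corpus.
corpus = (
    "the pope wore a white jacket . "
    "the pope gave a speech . "
    "the professor gave a lecture ."
).split()

# Count which words follow each word.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start="the", max_words=8):
    """Generate text by picking a plausible next word at each step."""
    words = [start]
    for _ in range(max_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# May print e.g. "the pope gave a lecture ." : fluent, but a sentence
# that appears nowhere in the training text.
print(generate())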
“LLMs are not that smart, but they are dangerous,” says Marcus, who also put a rise in AI hallucinations on his list of “dark predictions”.
In addition to text generators, programs that manipulate images are also evolving rapidly.
Recently, an image of Pope Francis in a silver jacket, generated with the Midjourney program, caused a few hours of confusion on the internet: was the picture real?
The episode had innocent consequences, but it offered a taste of the technology’s potential to usher in a permanent gray area between fact and fake.
“If we don’t take action, we’re close to entering a post-truth environment,” says the New York University professor.
“That makes everything very difficult for democracy. We need sanctions for those who produce mass disinformation, watermarks to identify where information comes from, and new technologies to detect falsehoods. Just as there is antivirus software, we need anti-disinformation software.”
“Capitalism will not solve these problems”
Marcus, 53, has not limited himself to academia. He sold a company to Uber and became director of the transportation giant’s artificial intelligence lab, a position he left after just four months, at a time when the company was accused of maintaining a “toxic” work environment.
When asked whether the famous Silicon Valley mantra of “move fast and break things” and rampant market competition create dangerous conditions for the development of artificial intelligence, he says: “You can’t expect capitalism to resolve these issues on its own.”
He argues that companies should be subject to regulation and cites the aviation industry as an example of why this is necessary.
“The aviation industry in the 1950s was a mess. Planes crashed all the time. Regulation has been good for the aviation industry. Ultimately, it helped the aviation industry develop a better product,” he says.
“Leaving things to corporations doesn’t necessarily lead in the right direction… There’s a reason governments exist, right?”
Common ground with the “godfather of AI”
Marcus’s cautious attitude and outspoken distrust of rapidly evolving AI have not always been well received.
His skepticism was mocked by his peers (especially on Twitter) years ago, but the tide has turned: prominent figures in the AI field are starting to change their tune.
Geoffrey Hinton, dubbed the “godfather of AI,” announced his departure from Google and said shortly afterwards that he considers the problems of artificial intelligence “perhaps more urgent than those of climate change.”
“Hinton and I have different views on some aspects of artificial intelligence. I corresponded with him some time ago, explained my point of view, and he agreed with me, which is not always the case. But the main thing we agree on is the issue of control,” he says.
“I don’t necessarily agree that [AI] is a bigger threat than climate change, but it’s hard to know. There is a lot of data available to try to estimate the risks of climate change. But with artificial intelligence, we don’t even know how to calculate those risks.”
“But to me, the chances of these tools being used to undermine democracies are essentially 100%. We don’t know whether there is any possibility of robots taking over the planet, but it is reasonable that some people are thinking about that scenario. We are building very powerful tools, and we must take these threats into account.”
Source: El Universo
