The main artificial intelligence (AI) companies, including OpenAI, Alphabet and Meta Platforms, have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer, the United States government reported.
“These commitments are a promising step, but we have much more to do together,” said President Joe Biden.
At an event at the White House, Biden addressed growing concerns about the potential for artificial intelligence to be used for disruptive purposes, stating that “we must keep our eyes wide open and be aware of the threats of emerging technologies” for American democracy.
The companies – which also include Anthropic, Inflection, Amazon.com and OpenAI partner Microsoft – have pledged to thoroughly test the systems before bringing them to market and to share information on how to reduce risk and invest in cybersecurity.
The announcement is seen as a victory for the Biden administration’s efforts to regulate this technology, which has experienced booming investment and popularity among consumers.
“We welcome the president’s leadership in bringing the tech industry together to define concrete steps to help make AI safer and more beneficial to the public,” Microsoft noted in a blog post on Friday.
Since generative AI tools such as ChatGPT, which use data to create new content such as human-like prose, became wildly popular this year, policymakers around the world have begun to consider how to mitigate the dangers the emerging technology poses to national security and the economy.
In June, Chuck Schumer, Majority Leader in the US Senate, called for “comprehensive legislation” to advance and ensure safeguards on artificial intelligence.
Congress is considering a bill that would require political ads to disclose whether AI was used to create images or other content.
The United States lags behind the EU in regulating artificial intelligence.
In June, EU lawmakers approved a series of rules under which systems like ChatGPT would have to reveal AI-generated content, help distinguish so-called deepfake images from real ones, and ensure safeguards against illegal content.
As part of the effort, the seven companies committed to developing a watermarking system for all forms of AI-generated content, from text and images to audio and video, so users know when the technology has been used.
This watermark, embedded in the content, will presumably make it easier for users to spot fake images or audio that might, for example, show violence that hasn’t happened, create a scam, or distort a photo of a politician to cast the person in an unflattering light.
It is not yet clear how the watermark will remain apparent when the content is shared.
The companies also pledged to protect user privacy as AI develops and to ensure the technology is free from bias and not used to discriminate against vulnerable groups.
Other commitments include developing AI solutions for scientific problems such as medical research and climate change mitigation.
Source: Reuters (via Gestion)

Ricardo is a renowned author and journalist, known for his exceptional writing on top-news stories. He currently works as a writer at the 247 News Agency, where he is known for his ability to deliver breaking news and insightful analysis on the most pressing issues of the day.