You won’t be fooled by Pope Francis rapping anymore. YouTube goes to war with AI

Creators publishing videos on YouTube must mark content that has been generated or edited using AI tools. In this way, the company wants to fight disinformation.

There is no need to convince anyone how far AI tools have come in recent years. Just look at the realistic images and videos that generative artificial intelligence models can produce today.

Unfortunately, this technological revolution sometimes eats its own children. Social media and video platforms are flooded with AI-generated material. Do you want to see Pope Francis rapping? Nothing easier. Or perhaps you prefer Szymon Hołownia announcing, in quaint language, that the Sejm proceedings will be moved to Twitch. Here you go.

In many cases it is becoming harder and harder to distinguish a “real” video from one created or enhanced by AI. Deepfake content doesn’t just mislead us; it can also be used for scams.

Google is cracking down on AI content. Creators must label it

Last year, Google announced that it intended to tackle AI-generated material on the platform in earnest. This covered both the obligation to label this type of content and the ability to file complaints about the unauthorized AI generation of a person’s image or voice.

Now the company has decided to move from words to deeds. Starting Monday, all creators on the platform must clearly inform their viewers that a given video (or part of it) has been generated or enhanced by artificial intelligence.

Special labels will serve this purpose. For typical lifestyle material, the information will appear in the video description, while for videos touching on more sensitive topics, such as health, politics, finance, or elections, the label will be displayed directly on the video itself.

YouTube (photo: Google)

Generative AI is changing the way creators express themselves – from storyboarding ideas to experimenting with tools to streamline the creative process. However, viewers increasingly want greater transparency about whether the content they watch is altered or synthetic.

– the company explains in its announcement.

The company also gave several examples of what type of content it considers AI-edited or AI-generated. These include:

  • Use of the likeness of a real person: digitally altering content to replace one person’s face with another’s, or synthetically generating a person’s voice to narrate a video;
  • Editing footage of real events or places: for example, making it look like a real building caught fire, or altering a city landscape so it looks different than it actually does;
  • Creating realistic scenes that did not happen: presenting fictional events so that viewers have the impression they are watching real footage, for example a tornado heading towards a real city.

Responsibility for labeling AI material will rest with the creators themselves. For now, it is unclear what consequences those who ignore the requirement will face, though one can guess they risk a temporary or permanent channel suspension. Importantly, YouTube will be able to apply a label itself if the creator has not done so, or if the platform deems the material likely to mislead viewers.

The first videos with such labels should appear on the site in the coming weeks. They will be visible first in the smartphone apps (iOS/), and later in the browser version and the TV app.

Source: Gazeta
