
YouTubers will soon have to disclose use of AI tools or face suspension

By: Team Logically Facts

November 15, 2023

YouTube said it will roll out the new updates over the coming months and into the new year. (Source: Pexels)

Soon YouTubers will have to disclose whether they have used AI tools to generate realistic-looking videos or face penalties, including suspension from the platform’s revenue-sharing program. 

In a blog post shared on Tuesday, November 14, YouTube said that in the coming months, creators will have to indicate whether their content contains “realistic altered or synthetic material.” In the post, the video-sharing giant also warned that those who regularly fail to declare this information “may be subject to content removal, suspension from the YouTube Partner Program, or other penalties.”

These policy updates are part of YouTube’s “responsible AI innovation” approach. “Generative AI has the potential to unlock creativity on YouTube and transform the experience for viewers and creators on our platform. But just as important, these opportunities must be balanced with our responsibility to protect the YouTube community,” Jennifer Flannery O’Connor and Emily Moxley, vice presidents for product management, wrote in the blog post.

To inform viewers that they are seeing synthetic or altered content, the video platform will add labels to the description panel, the blog post said. For more sensitive issues, the label will be applied more prominently. These labels, YouTube said, are especially important for videos on “sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials.”

“There are also some areas where a label alone may not be enough to mitigate the risk of harm, and some synthetic media, regardless of whether it’s labeled, will be removed from our platform if it violates our Community Guidelines,” O’Connor and Moxley wrote. “For example, a synthetically created video that shows realistic violence may still be removed if its goal is to shock or disgust viewers.”

The new updates expand on rules that YouTube’s parent company, Google, rolled out in September. Under those rules, political ads on YouTube and other Google platforms that use artificial intelligence must carry a prominent warning label.

As part of the latest changes, which will be rolled out into the new year, YouTube’s ‘privacy request process’ will be updated to permit requests to remove AI-generated or altered content that simulates an identifiable individual, including their face or voice. Its music partners will also be able to request the removal of AI content that mimics an artist’s unique singing or speaking voice.

YouTube’s policy updates come at a time when AI-generated visuals are increasingly being used to spread mis- and disinformation on a range of issues worldwide. For instance, several social media users have relied on AI images to spread false information about the ongoing Israel-Hamas war. AI tools have also been used to create digitally manipulated content featuring prominent figures, including WikiLeaks founder Julian Assange and Indian actor Rashmika Mandanna.
