YouTube is adding new rules for content generated or manipulated by artificial intelligence, including labeling requirements.
The Google-owned video platform announced Tuesday in a blog post that it will roll out a series of updates over the coming months, including requiring creators to disclose whether their content is AI-generated when uploading it, which will add a label to the video alerting viewers.
YouTube gave the example of AI-generated videos realistically depicting an event that never happened, or showing a person saying or doing something they didn’t, adding, “This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials.”
Creators who repeatedly fail to disclose AI-generated content under the new rules face having their content taken down from YouTube, the company said.
The new labels will appear in the description of videos where AI was used, and content about sensitive topics will have a second, more prominent label added to the video player.
The video platform said it will also begin allowing people to request the removal of AI-generated content that “simulates an identifiable individual,” including their face or voice, through the company’s privacy request process. However, not all requests will be honored.
“Not all content will be removed from YouTube, and we’ll consider a variety of factors when evaluating these requests,” the blog post reads. “This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar.”
YouTube also said that it will be “building responsibility” into its AI tools and features.
“We’re thinking carefully about how we can build on years of investment into the teams and technology capable of moderating content at our scale,” the announcement said. “This includes significant, ongoing work to develop guardrails that will prevent our AI tools from generating the type of content that doesn’t belong on YouTube.”