YouTube to label videos that use A.I.
YouTube has revealed plans to introduce labels that will indicate whether a video was made using artificial intelligence (A.I.).
YouTube announced the new feature in a blog update today, confirming that in the “coming months” creators will be required to disclose whether they have used A.I. to create their content.
If the user confirms that they have used A.I. to help make the video, a label will be added to the upload that will be visible to viewers.
“To address this concern, over the coming months, we’ll introduce updates that inform viewers when the content they’re seeing is synthetic,” the update read.
It added that the label will need to be used on material that is “realistic”, and will be visible in both full-length videos and YouTube Shorts. That being said, the term “realistic” remains broad, and further details on the content affected are expected to be shared at a later date.
“Specifically, we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using A.I. tools,” the blog post added.
“When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an A.I.-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.”
The “sensitive topics” cited in the post are later explained to include content around elections, ongoing conflicts, health, and more.
As well as cracking down on A.I.-generated content generally, YouTube has also confirmed plans to tackle music made with the help of artificial intelligence – namely content that mimics an artist’s voice.
“We’re also introducing the ability for our music partners to request the removal of A.I.-generated music content that mimics an artist’s unique singing or rapping voice. In determining whether to grant a removal request, we’ll consider factors such as whether content is the subject of news reporting, analysis or critique of the synthetic vocals.”
It continued: “These removal requests will be available to labels or distributors who represent artists participating in YouTube’s early A.I. music experiments. We’ll continue to expand access to additional labels and distributors over the coming months.”
The blog post did not disclose the exact date these changes will come into force. However, the platform has confirmed that creators who fail to flag their A.I.-aided videos will be subject to video removal, and persistent offenders will be suspended from the YouTube Partner Programme.
YouTube’s upcoming restrictions on A.I.-generated content follow calls from the Council of Music Makers (CMM), which in September published five fundamental rules that it wants companies to embrace when developing music A.I. technologies.
These included respecting the personal data rights of music-makers, sharing the financial rewards of A.I. music fairly, and clearly labelling A.I.-generated works.
Artists from across the music industry have also had mixed feelings about the increasing prominence of artificial intelligence in songwriting.
Grimes, for instance, is supportive of the concept and has even unveiled Elf.Tech – her own A.I. voice-mimicking software, which allows fans to record and release music using her voice.
Others are less accepting, however. Those who have criticised it include Sting, who said A.I. “doesn’t impress” him, Stereophonics frontman Kelly Jones, who said that art is about “a real person’s expression”, and Guns N’ Roses bassist Duff McKagan who said he’s “not worried” about A.I. in music as it won’t “affect my creativity”.
In other A.I. news, earlier this month a new study found that more than half of artists would choose not to disclose that they used A.I. to help make their music.
Liberty Dunworth, NME