How Are Spotify, YouTube, SoundCloud & More Preparing For AI Music?
This analysis is part of Billboard’s music technology newsletter Machine Learnings.
Have you heard about our lord and savior, Shrimp Jesus?
Last year, a photo of Jesus made out of shrimp went viral on Facebook — and while it might seem obvious to you and me that generative AI was behind this bizarre combination, plenty of boomers still thought it was real.
Bizarre AI images like these have become part of an exponentially growing problem on social media sites, where they are rarely labeled as AI and are so eye-grabbing that they draw the attention of users, and the algorithm along with them. That means less time and space for the posts from friends, family and human creators that you want to see on your feed. Of course, AI makes some valuable creations, too, but let’s be honest, how many images of crustacean-encrusted Jesus are really necessary?
This has fueled the so-called “Dead Internet Theory” — the idea that AI-generated material will eventually flood the internet so thoroughly that nothing human can be found. And guess what? The same “AI Slop” phenomenon is growing fast in the music business, too, as quickly generated AI songs flood DSPs. (Dead Streamer Theory? Ha. Ha.) According to CISAC and PMP, this could put 24% of music creators’ revenues at risk by 2028 — so it seems like the right time for streaming services to create policies around AI material. But exactly how they should take action remains unclear.
In January, French streaming service Deezer took its first step toward a solution by launching an AI detection tool that will flag whatever it deems fully AI generated, tag it as such and remove it from algorithmic recommendations. Surprisingly, the company claims the tool found that about 10% of the tracks uploaded to its service every day are fully AI generated.
I thought Deezer’s announcement sounded like a great solution: AI music can remain on the platform for those who want to listen to it and can still earn royalties, but it won’t be pushed in users’ faces, giving human-made content a little head start. I wondered why other companies hadn’t followed suit. After speaking to multiple AI experts, however, it seems today’s AI detection tools still leave something to be desired. “There’s a lot of false positives,” says one AI expert who has tested a variety of detectors on the market.
The fear for some streamers is that a bad AI detection tool could open up the possibility of human-made songs accidentally getting caught up in a whirlwind of AI issues, becoming a huge headache for the staff who would have to review the inevitable complaints from users. And really, when you get down to it, how can the naked ear definitively tell the difference between human-generated and AI-generated music?
This is not to say that Deezer’s proprietary AI music detector isn’t great — it sounds like a step in the right direction — but the newness of this AI detection technology, and the skepticism that surrounds it, are clearly reasons why other streaming services have been reluctant to try it themselves.
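To picture what the “detect, label, demote” rule Deezer describes looks like in practice, here is a minimal hypothetical sketch; the detector score, the threshold and the field names are all assumptions for illustration, not Deezer’s actual implementation.

```python
# Hypothetical sketch of a "detect, label, demote" policy like the one Deezer
# describes: suspected fully AI tracks stay on the platform and keep earning
# royalties, but are tagged and pulled from algorithmic recommendations.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    ai_score: float              # 0.0-1.0 confidence from an AI-music detector (assumed)
    tagged_ai: bool = False
    in_algorithmic_recs: bool = True
    earns_royalties: bool = True

FULLY_AI_THRESHOLD = 0.95        # hypothetical cutoff for "fully AI generated"

def apply_policy(track: Track) -> Track:
    """Label suspected fully AI tracks and remove them from recommendations only."""
    if track.ai_score >= FULLY_AI_THRESHOLD:
        track.tagged_ai = True
        track.in_algorithmic_recs = False
        # Royalties are untouched and the track is not taken down.
    return track

print(apply_policy(Track(title="example upload", ai_score=0.99)))
```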
Still, protecting against the negative use cases of AI music, like spamming, streaming fraud and deepfaking, is a focus for many streaming services today, even though almost all of the policies in place to date are not specific to AI.
It’s also too soon to tell what the appetite is for AI music. As long as the song is good, will it really matter where it came from? It’s possible this is a moment that we’ll look back on with a laugh. Maybe future generations won’t discriminate between fully AI, partially AI or fully human works. A good song is a good song.
But we aren’t there yet. The US Copyright Office just issued a new directive affirming that fully AI-generated works are ineligible for copyright protection. For streaming services, that technically means such works, like other public domain material, don’t require royalty payments. But so far, most platforms have continued to just pay out on anything that’s up on the site — copyright-protected or not.
Except for SoundCloud, a platform that’s always marched to the beat of its own drum. It has a policy which “prohibit[s] the monetization of songs and content that are exclusively generated through AI, encouraging creators to use AI as a tool rather than a replacement of human creation,” a company spokesperson says.
In general, most streaming services do not have specific policies, but Spotify, YouTube Music and others have implemented procedures for users to report impersonations of likenesses and voices, a major risk posed by (but not unique to) AI. This closely resembles the method for requesting a takedown on the grounds of copyright infringement — but it has limits.
Takedowns for copyright infringement are required by law, but some streamers voluntarily offer rights holders takedowns for impersonations of an artist’s voice or likeness. There is still no federal protection for these so-called “publicity rights,” so platforms are largely doing these takedowns as a show of goodwill.
YouTube Music has focused more than perhaps any other streaming service on curbing deepfake impersonations. According to a company blog post, YouTube has developed “new synthetic-singing identification technology within Content ID that will allow partners to automatically detect and manage AI-generated content on YouTube that simulates their singing voices,” adding another layer of defense for rights holders who are already kept busy policing their own copyrights across the internet.
Another concern with the proliferation of AI music on streaming services is that it can enable streaming fraud. In September, federal prosecutors indicted a North Carolina musician for allegedly using AI to create “hundreds of thousands” of songs and then collecting more than $10 million in fraudulent streaming royalties on those tracks. By spreading out fake streams over a large number of tracks, quickly made by AI, fraudsters can more easily evade detection.
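To see why that volume matters, here is a purely illustrative back-of-the-envelope sketch; the stream counts, the per-track threshold and the function name are hypothetical assumptions, not figures from the indictment or from any platform’s actual fraud detection.

```python
# Purely illustrative: why spreading bot streams across a huge AI-made catalog
# is harder to spot than concentrating them on a few tracks. All numbers are hypothetical.
TOTAL_FAKE_STREAMS = 1_000_000      # bot streams the fraudster wants to place
PER_TRACK_ALERT_THRESHOLD = 50_000  # hypothetical per-track anomaly trigger

def streams_per_track(total_streams: int, num_tracks: int) -> float:
    """Average fake streams landing on each track in the catalog."""
    return total_streams / num_tracks

for num_tracks in (10, 100_000):
    avg = streams_per_track(TOTAL_FAKE_STREAMS, num_tracks)
    flagged = avg > PER_TRACK_ALERT_THRESHOLD
    print(f"{num_tracks} tracks -> {avg:,.0f} streams each, flagged: {flagged}")
```

With only 10 tracks, each one absorbs 100,000 fake streams and trips the hypothetical alert; spread over 100,000 AI-generated tracks, each gets just 10 streams and looks ordinary.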
Spotify is working on that. Whether the songs are AI- or human-made, the streamer now has gates in place to prevent users from spamming the platform with massive numbers of uploads. It’s not AI-specific, but it’s a policy that impacts the bad actors who use AI for this purpose.
SoundCloud also has a solution: The service believes its fan-powered royalties system also reduces fraud. “Fan-powered royalties tie royalties directly to the contributions made by real listeners,” a company blog post reads. “Fan-powered royalties are attributable only to listeners’ subscription revenue and ads consumed, then distributed among only the artists listeners streamed that month. No pooled royalties means bots have little influence, which leads to more money being paid out on legitimate fan activity.” Again, not AI-specific, but it will have an impact on AI uploaders with bad motives.
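To make the contrast concrete, here is a minimal sketch of the two payout models the blog post is comparing: a pooled pro-rata split versus a user-centric (“fan-powered”) split. The numbers, function names and two-listener scenario are illustrative assumptions, not SoundCloud’s actual accounting.

```python
# Illustrative comparison of pooled (pro-rata) vs. user-centric ("fan-powered")
# royalty allocation. All figures are hypothetical.

def pro_rata_payout(total_pool: float, track_streams: dict) -> dict:
    """One big pool, split by each track's share of all streams on the service."""
    all_streams = sum(track_streams.values())
    return {t: round(total_pool * s / all_streams, 2) for t, s in track_streams.items()}

def fan_powered_payout(per_listener_revenue: float, listeners: list) -> dict:
    """Each listener's revenue is split only among the tracks that listener played."""
    payouts: dict = {}
    for listener_streams in listeners:
        listener_total = sum(listener_streams.values())
        for track, streams in listener_streams.items():
            share = per_listener_revenue * streams / listener_total
            payouts[track] = round(payouts.get(track, 0.0) + share, 2)
    return payouts

# One real fan plays a human artist 20 times; one bot hammers an AI track 1,000 times.
fan, bot = {"human_artist": 20}, {"ai_track": 1000}

# Pooled model: the bot's sheer volume captures almost the whole pool.
print(pro_rata_payout(total_pool=20.0, track_streams={"human_artist": 20, "ai_track": 1000}))
# {'human_artist': 0.39, 'ai_track': 19.61}

# Fan-powered model: the bot can only redirect the revenue of its own account.
print(fan_powered_payout(per_listener_revenue=10.0, listeners=[fan, bot]))
# {'human_artist': 10.0, 'ai_track': 10.0}
```

Under the pooled split, the bot’s volume captures nearly all of the hypothetical pool; under the fan-powered split, it can only claim the revenue attached to its own account, which is the point the blog post is making.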
So, what’s next? Continuing to develop better AI detection and attribution tools, anticipating future issues with AI — like AI agents employed for streaming fraud operations — and fighting for better publicity rights protections. It’s a thorny situation, and we haven’t even gotten into the philosophical debate of defining the line between fully AI generated and partially AI generated songs. But one thing is certain — this will continue to pose challenges to the streaming status quo for years to come.
Kristin Robinson
Billboard