Streaming fraudsters are not short of imagination when it comes to extracting money from platforms. According to Deezer, up to 70% of streams of AI-generated tracks on the service are fraudulent. That is a striking figure, but it still concerns only a small segment: AI-generated music accounts for just 0.5% of total streams. That share is growing rapidly, however, and drawing in fraudsters.
20,000 AI tracks uploaded every day
These scammers exploit a well-established scheme: they mass-produce artificial songs and upload them in bulk (nearly 20,000 tracks per day, according to Deezer!), then use bots to stream them on repeat. The goal is to capture a share of royalty revenue, which is distributed based on stream volume. To evade conventional detection, the more sophisticated operators avoid concentrating too many plays on a single track, spreading their bot activity across a large catalog of fake productions instead.
"As long as there's money at stake, some will try to profit from it," Thibault Roucou, director of copyright at Deezer, told the Guardian. "That's why we're investing in the fight against these abuses." The platform claims it can detect 100% of content generated by certain AI models, such as Suno and Udio, and it blocks royalty payments for streams deemed fraudulent. Deezer has also removed this content from its algorithmic recommendations.
The threat extends far beyond Deezer. The global streaming market, valued at $20.4 billion in 2024 by the International Federation of the Phonographic Industry (IFPI), is an attractive target. And for the IFPI, the rise of generative AI is only making the situation worse.
In its latest annual report, the organization denounces AI developers' exploitation of musical works to train their models, without consent or remuneration. The IFPI also highlights the dangers of large-scale streaming manipulation, which has become all the easier now that AI tools can generate tracks, cover art, and even fake artist identities.
"It's theft," the IFPI asserts. These practices harm not only legitimate artists but also user trust, the integrity of streaming platforms, and, more broadly, art itself. This is why the Federation is calling on governments to regulate the use of AI, notably by prohibiting the unauthorized training of models on protected content and by requiring transparency about the origin of generated tracks.
Far from opposing artificial intelligence outright, the music industry is also exploring legitimate, regulated uses of these technologies. The IFPI's 2025 report highlights several initiatives led by major record labels. Warner Music, for example, enabled Randy Travis, who suffered a stroke, to record a new track using an AI vocal model approved by the artist.