Spotify has removed more than 75 million “spammy” tracks over the past year as part of a broader effort to curb AI misuse, impersonation, and mass uploads that mislead listeners and hurt artists.
What Spotify changed
The company stated that the takedowns are part of a multi-pronged approach to protect artist identity and prevent deceptive uploads. Spotify's newsroom post explains that the measures include a new policy on AI voice clones, wider detection of fraudulent uploads, and a spam filter designed to keep tactics like duplicate tracks and artificially short songs out of recommendations.
Unauthorized impersonation of an artist's voice will not be allowed unless it is officially licensed. Spotify will also expand its protections to enable faster removal of tracks uploaded to the wrong artist profile. The company says the spam detection system will begin rolling out this fall.
AI disclosures in credits
Spotify is backing an industry standard for AI credits, so creators and labels can indicate how AI tools were used, from vocals to instrumentation. These disclosures will appear in the app when distributors and partners provide the information, giving listeners more context about how a track was created.
The announcement follows controversy earlier this year, when listeners discovered alleged AI bands and AI-resurrected voices appearing on streaming services. Cases like The Velvet Sundown prompted debate about how platforms should handle synthetic artists and how AI involvement should be credited.
This change affects artists, labels, and listeners who rely on streaming recommendations and playlists, and it targets accounts that flood the service with low-quality or deceptive content. Share thoughts in the comments and follow us on X and Bluesky.