This video discusses how musicians can push back against generative AI by using adversarial noise, a perturbation technique that makes music files unusable for training and can degrade the quality of AI datasets.
- The Problem with Generative AI: The video highlights the issue of tech companies scraping copyrighted data from platforms like Spotify and YouTube without consent, leading to intellectual property infringement. This practice has resulted in opportunists using AI to generate millions of songs, siphoning royalties away from human musicians.
- Adversarial Noise and its Applications: The speaker introduces adversarial noise, a technique that manipulates an AI model's perception of sound. He demonstrates how this noise can be used for malicious purposes, but also as a protective measure for musicians.
- Harmony Cloak and Poisonify: Jordan collaborates with researchers to discuss Harmony Cloak, a project that uses adversarial noise to break an AI’s ability to recognize melody and rhythm. He also introduces his own attack method, “Poisonify,” which specifically targets instrument classification.
- Testing the Technology: The video shows several tests of Harmony Cloak and Poisonify against popular AI music generators, where the encoded music files cause the AI to produce distorted or low-quality extensions.
- Challenges and Future Plans: Jordan acknowledges the challenges of this technology, including its potential to disrupt Spotify’s recommendation algorithms and the high computational cost of encoding. He reveals that he is working with Symphonic Distribution to potentially offer this “AI proofing” as an optional service for musicians.
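To make the underlying idea concrete, here is a minimal, hypothetical sketch of an adversarial perturbation in the spirit of what the video describes. It is not the Harmony Cloak or Poisonify algorithm (those target real neural networks); it uses a toy linear "instrument classifier" and an FGSM-style step (sign-of-gradient perturbation) to show how a change far smaller than the signal itself can flip a model's prediction. All names and parameters here are illustrative assumptions.

```python
import numpy as np

# Toy linear "instrument classifier": score = w . x, label = sign(score).
# A hypothetical stand-in for a real audio model -- the actual projects
# attack deep networks, not a linear scorer.
rng = np.random.default_rng(0)
w = rng.standard_normal(1024)           # fixed classifier weights
x = rng.standard_normal(1024) * 0.1     # "audio" sample (one feature frame)

def classify(signal):
    return 1 if w @ signal > 0 else -1

# FGSM-style perturbation: step each sample in the direction of the input
# gradient's sign. For a linear model that gradient is simply w, so we can
# pick the smallest uniform step that pushes the score across the boundary.
score = w @ x
eps = (abs(score) + 1e-3) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

print("original label:", classify(x))
print("adversarial label:", classify(x_adv))
print("max per-sample change:", np.abs(x_adv - x).max())
```

The per-sample change (`eps`) comes out far smaller than the signal's own amplitude, which is the core property these protection schemes rely on: the perturbation is hard to perceive, yet the model's output changes.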
