Google allegedly used copyrighted music for its AI audio models
Key takeaways
- Image and text-based generative AI models are already sophisticated, and video and audio formats are next.
- Google is allegedly sitting on some very powerful text-to-audio tools.
- Google allegedly trained these tools using copyrighted data and intends to negotiate with rights holders later.
By now, many of us have already realized that generative AI tools like ChatGPT, DALL-E, and Midjourney are pushing the boundaries of creativity, allowing for the synthesis of content that can mimic various styles, formats, and genres.
This has created a new revolution in creativity that few, if any, saw coming a few years ago.
As generative AI evolves, its capability to produce derivative content blurs the lines between originality and copying, challenging notions of copyright and ownership.
It’s a legal black hole – a rift in intellectual property law with no obvious solution.
We’ve already seen the consequences of this in the visual arts and writing communities, where numerous high-profile lawsuits have targeted AI companies, some involving influential figures like Game of Thrones author George R.R. Martin.
Plaintiffs argue that AI companies scraped their data without permission to subvert their jobs and income streams. AI companies argue that this is ‘fair use.’
This isn’t merely academic; we’ve seen numerous layoffs across tech and the arts, including several specifically linked to AI, like the recent redundancies at Duolingo.
Generative AI is coming for the music industry
Is music next? Almost certainly.
Google’s recent advancements in AI audio generation were captured by Lyor Cohen, Google and YouTube’s global head of music, who, after witnessing a demonstration of Google DeepMind’s music-generation capabilities, exclaimed:
“Demis [Hassabis, CEO of Google DeepMind] and his team presented a research project around genAI and music and my head came off of my shoulders. I walked around London for two days excited about the possibilities, thinking about all the issues and recognizing that genAI in music is here — it’s not around the corner.”
Tools like Suno already demonstrate the ability to generate music and lyrics, which is very impressive and useful for sound design.
However, enthusiasm for AI music generators is not universally shared across the industry.
The contention lies in Google’s method of training its AI models, which some speculate involved using copyrighted music tracks without obtaining prior consent from the rights holders.
We don’t know this for sure – it was a claim made by Billboard that hasn’t been verified.
That article says, “Google trained its model on a large set of music — including copyrighted major-label recordings — and then went to show it to rights holders, rather than asking permission first, according to four sources with knowledge of the search giant’s push into generative AI and music.”
This essentially suggests that Google intends to do things backward – train AI on copyrighted material first and negotiate permission later.
Rather than letting artists opt out of having their work used to train AI models, this shows how tech companies might use their clout to train the models first and settle with creators later.
That said, tech companies are also actively securing permission from some artists.
For instance, YouTube’s AI-driven Dream Track feature allows creators to generate music inspired by artists based on text prompts and was created with permission from John Legend, Demi Lovato, T-Pain, and Sia, among others.
Other influential artists are embracing AI, such as Will.i.am, who is developing an AI-powered radio show after unveiling an AI audio experience for Mercedes vehicles a few weeks ago.
Cohen highlighted YouTube’s close ties with the music industry, stating, “Our superpower was our deep collaboration with the music industry.”
To some, this is a slippery slope the music industry should avoid. Financial incentives for big artists to hand over the keys to their work to AI companies might signal that this direction is what the industry as a whole wants.
Labels react
Thus far, labels have been keen to reassert their loyalty to musicians.
As Dennis Kooker, president of global digital business and U.S. sales for Sony Music Entertainment, articulated at a Senate forum on AI, “If a generative AI model is trained on music for the purpose of creating new musical works that compete in the music market, then the training is not a fair use. Training in that case cannot be without consent, credit, and compensation to the artists and rights holders.”
As this situation evolves, AI companies will continually advocate for innovation – seeing this as progress for their stakeholders and humanity.
Meanwhile, creators are finding ways to fight back. In addition to lawsuits, visual artists are now using a tool called “Nightshade,” which subtly alters images so that models trained on them learn corrupted associations – effectively ‘poisoning’ AI models from the inside.
A bitter conflict is developing here, in which music labels and influential artists could set a precedent for the millions of others who depend on their own creative work to survive.