In a landmark ruling with implications for the future of creative economies, US District Court Judge William Alsup recently held that Anthropic’s use of copyrighted books to train an artificial intelligence may qualify as fair use, but only under narrowly defined circumstances. The judge reaffirmed a key principle for AI products: to be legally transformative, they must be “spectacularly” different from the original content.

Moreover, training on pirated or otherwise illegally accessed material does not constitute fair use. While this case involved books, its underlying questions and legal principles are especially relevant to the music industry, where AI-generated music is now ubiquitous. For producers, musicians, and rightsholders, the ruling is both a warning and an opportunity.
On the negative side, the case signals the dangers of unregulated AI training. As music composition software grows increasingly dependent on AI, it is trained on massive datasets that may include copyrighted material scraped from unlicensed sources such as YouTube or streaming portals. The ruling draws a clear legal line in the sand: developers could face enormous legal repercussions if their training data includes pirated material. One ongoing class action against Anthropic, filed by authors, asserts that millions of their works were copied without permission, with potential damages exceeding one trillion dollars.

The ruling is especially significant for the music industry. If AI companies must license the music they rely on, that requirement opens new revenue streams for rights holders, and the decision gives musicians and publishers legal mechanisms to demand payment and transparency. Unlike in the early days of file-sharing and digital streaming, when artists were typically on the defensive, this time the law might be on the creators’ side.
Yet the risks of generative AI cannot be overlooked. One of the biggest worries is musical homogenization: algorithms that generate music by imitating past songs can oversaturate the market with derivative tracks, blurring the line between inspiration and copying. If outputs closely resembling existing works are deemed “transformative,” artists might lose control over their distinctive sounds, which machines could reinterpret and publish without credit or licensing. That erodes artistic imagination and diminishes the cultural richness that comes from human expression.
AI music can also drive economic displacement. Since the technology can already compose, mix, master, and even sing, labels may be tempted to forgo human performers, using AI to create background music, commercial jingles, or ambient tracks without hiring musicians. Over time, this could shrink opportunities for composers, engineers, and session musicians, whose work is the foundation of the industry. And if royalties and credits are not shared fairly, the viability of music as a livelihood is threatened.

But AI offers genuine benefits if implemented ethically. Music production has always been expensive and technical, deterring newcomers from entering the industry. Today, user-friendly AI platforms like Moises.ai, LALAL.AI, and Landr let artists extract stems, master tracks, or build demos without costly studio sessions. By democratizing music-making, these tools open the creative space to more voices. Here, AI is not a replacement but an ally, helping musicians refine ideas or experiment with new styles.
Another encouraging development is the arrival of AI transparency on streaming services. Platforms like Deezer are exploring ways to label AI-generated tracks, helping listeners distinguish machine productions from human ones (RouteNote). Anthropic, for its part, has pledged to include guardrails that prevent its Claude chatbot from generating copyrighted lyrics when prompted (Reuters, Wall Street Journal). These actions reflect a growing recognition that creators must be safeguarded and that listeners are entitled to know.
Furthermore, AI can open new doors to creativity rather than close them. Collaborative ventures point toward futures in which human producers and AI programs work together. Timbaland’s use of AI to develop virtual artists exemplifies this innovation in genre-bending sound (RouteNote). Such projects are not meant to supplant the human touch; they are meant to augment it. Guided by human intent, AI becomes an artistic tool rather than an impostor.
For creatives and industry stakeholders, the path forward is clear but complex. First, enforceable licensing frameworks must be drafted specifically for AI uses, including per-use payment, metadata reporting, and opt-in consent that lets creators control how their music is used. Second, the legal meaning of “transformative” in musical contexts must be refined: the court’s language in the Anthropic case sets a benchmark, but how different a work must be to count as “new” music remains debatable.
Finally, the Anthropic decision marks a turning point, one that shapes not only the future of AI but also the balance of power between creators and corporations. If music creators demand licensing, transparency, and respect for their work, AI can be a force for positive change. But if they sit on their hands, they will be brushed aside by a technology wave that does not ask whom it borrowed from.
As we move ahead, one thing is certain: AI will be part of music’s future. Whether that future inspires artists or sidelines them will depend on how clearly we define the rules today.
