LONDON — Sweeping new laws regulating the use of artificial intelligence (AI) in Europe, including controls around the use of copyrighted music, have been approved by the European Parliament, following fierce lobbying from both the tech and music communities.
Members of the European Parliament (MEPs) voted in favor of the EU’s Artificial Intelligence Act by a clear majority of 523 votes for, 46 against and 49 abstentions. The “world first” legislation, which was first proposed in April 2021 and covers a wide range of AI applications including biometric surveillance and predictive policing, was provisionally approved in December, but Wednesday’s vote formally establishes its passage into law.
The act places a number of legal and transparency obligations on tech companies and AI developers operating in Europe, including those working in the creative sector and music business. Among them is the core requirement that companies using generative AI or foundation AI models like OpenAI’s ChatGPT or Anthropic’s Claude 2 provide detailed summaries of any copyrighted works, including music, that they have used to train their systems.
Significantly, the law’s transparency provisions apply regardless of where in the world a tech company acquired its data. For instance, even if an AI developer scraped copyright-protected digital music from a non-EU country — or bought data sets from outside the 27-member bloc — once that data is used in Europe, the company is required to make publicly available a “sufficiently detailed summary” of all copyright-protected music it has used to create AI works.
The act also requires that any training data sets used in generative AI music or audio-visual works be watermarked, giving rights holders a traceable path to track and block the illegal use of their catalogs.
In addition, content created by AI, as opposed to human works, must be clearly labeled as such, while tech companies have to ensure that their systems cannot be used to generate illegal or infringing content.
Large tech companies that break the rules, which govern all applications of AI inside the 27-member bloc of EU countries, including so-called “high risk” uses, will face fines of up to 35 million euros ($38 million) or up to 7% of global annual turnover. Start-ups and smaller tech operations will face proportionate financial penalties.
Speaking ahead of Wednesday’s vote, which took place in Strasbourg, co-rapporteur Brando Benifei said the legislation means that “unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected.”
Co-rapporteur Dragos Tudorache called the AI Act “a starting point for a new model of governance built around technology.”
European legislators first proposed regulating artificial intelligence in 2021, but it was the subsequent launch of ChatGPT — followed last April by the high-profile release of “Heart on My Sleeve,” a track featuring AI-powered imitations of Drake and The Weeknd’s vocals — that made many music executives sit up and pay closer attention to the technology’s potential impact on the record business.
In response, music industry lobbyists stepped up their efforts to convince lawmakers to add transparency provisions around the use of music in AI, a move fiercely opposed by the technology industry, which argued that tougher regulations would put European AI developers at a competitive disadvantage.
One of the AI Act’s most high-profile critics has been Sam Altman, CEO of ChatGPT developer OpenAI. Last year, Altman accused the EU of “overregulating” the nascent AI industry and said his company, which is backed by Microsoft, might consider leaving Europe if it could not comply with the legislation. (He walked back the statement a few days later.)
Altman was far from alone in his hostility to the act. In the almost three-year run-up to Wednesday’s vote, a small army of lobbyists acting on behalf of tech giants like Alphabet, Meta and Microsoft has been out in force in Brussels, the de facto capital of the European Union, trying to weaken the legislation’s transparency provisions, multiple music executives tell Billboard.
The fact that those efforts failed, and that AI developers are now required to keep detailed records of training data, represents an “important victory” for artists, labels and publishers, John Phelan, director general of international music publishing trade association ICMP, tells Billboard.
“First and foremost, the AI Act clarifies that general purpose AI systems must respect existing copyright law. That means they need to secure prior authorization from rightsholders, as well as the consequent transparency and licensing obligations – all of the things that any digital service must do if they want to engage with music,” says Phelan.
“Currently, a lot of these tech companies are like glorified stream-rippers. They are just scraping digital audio and they do not care about the songwriters or artists. This legislation is an important part of the industry putting a stop to all of that,” he says.
“OpenAI, Microsoft and other AI companies say the genie can’t be put back in the bottle in terms of having data, but this is not the case. They cannot access, train or generate legally without permission and the EU legislation gives them no ground to say we’re doing so legitimately under a [text and data mining or fair use] exception.”
Following its approval by the European Parliament, the 300-page legislative text will undergo a number of procedural rubber-stamping stages before it is published in the EU’s Official Journal – most likely in late April or early May – with its regulations coming into force 20 days after that.
There are, however, tiered deadlines for tech companies to comply with its terms, and some of its provisions will not be fully applicable until up to two years after the act’s enactment. (The rules governing existing generative AI models take effect after 12 months, although any new generative AI companies or models entering the EU market after the act comes into force will have to comply with its regulations from the start.)
The EU’s newly formed AI Office will be responsible for publishing templates for tech companies to use when compiling summaries of AI training data sets.
“As with every EU law, there are many aspects of the text that will have to be further detailed, and the biggest question mark for people in the music industry is over the training data summary,” says Sophie Goossens, a partner at global law firm Reed Smith. “How detailed will it have to be? What will be the balance between disclosing content and protecting trade secrets or business competition?”
In response to Wednesday’s vote, a coalition of European rights holder and creative industry organizations, including ICMP and global recorded-music trade body IFPI, issued a joint statement thanking regulators and MEPs for the “essential role they have played in supporting creators and rightsholders.”
“While these obligations provide a first step for rightsholders to enforce their rights, we call on the European Parliament to continue to support the development of responsible and sustainable AI by ensuring that these important rules are put into practice in a meaningful and effective way,” said the 18 signatories, which also included European independent labels trade association IMPALA, European Authors Society GESAC and CISAC, the international trade organization for copyright collecting societies.
The first-of-its-kind legislation, which applies to any company that operates in the European Union, comes as other countries, including the United States, Canada, Mexico, China, Japan, India, Singapore and the United Kingdom, explore their own paths to policing the rapidly evolving AI sector.
“The EU has taken the lead with other jurisdictions and we expect these standards to be, not a ceiling, but really a springboard for the rest of the world,” says Phelan.
“Lots of countries are already looking at what the EU has done as a source of inspiration,” adds Goossens. “Its impact is already being felt.”