Published 14:37 IST, September 20th 2024
Tech giants push for softer regulations in Europe's AI Act
Tech giants push back: The world's largest tech companies are making a final push to convince the European Union (EU) to adopt a more lenient approach to regulating artificial intelligence (AI), aiming to avoid the threat of heavy fines that could reach billions of dollars.
In May, EU lawmakers passed the AI Act, a set of rules designed to govern AI technologies, following months of intense debate. However, with the law's accompanying codes of practice yet to be finalized, there is uncertainty over how strictly these rules will be applied to “general purpose” AI systems like OpenAI's ChatGPT. The industry fears the risk of copyright lawsuits and significant financial penalties.
The EU has invited companies, academics, and other stakeholders to contribute to drafting the code of practice, receiving close to 1,000 applications, an unusually high number, according to a source who wished to remain anonymous due to the sensitivity of the matter.
Compliance code key to innovation
While the code will not be legally binding when it takes effect late next year, it will serve as a checklist companies can use to demonstrate compliance. A company that claims to follow the law while ignoring the code could leave itself open to legal challenge.
"The code of practice is critical. If we get it right, innovation can continue," said Boniface de Champris, a senior policy manager at CCIA Europe, a trade group that includes members like Amazon, Google , and Meta.
"But if it’s too restrictive or detailed, it could stifle progress," Champris added.
Debate over data scraping
Companies like Stability AI and OpenAI have faced scrutiny for allegedly using copyrighted material, such as bestselling books or images, to train their models without permission from creators.
Under the AI Act, firms will need to provide "detailed summaries" of the data used to train their models. This requirement could open the door for creators to seek compensation if their content was used without consent, although this is still being tested in court.
Some in the business community argue that these summaries should include minimal information to safeguard trade secrets, while others believe copyright holders have the right to know if their work has been used.
OpenAI, which has previously been criticised for its lack of transparency regarding its data sources, has also applied to join the working groups responsible for drafting the code, according to an insider. Google has confirmed its participation, while Amazon stated it aims to "contribute expertise to ensure the code’s success."
Maximilian Gahntz, AI policy lead at the Mozilla Foundation, voiced concerns that companies are "avoiding transparency." He argued that the AI Act offers a crucial opportunity to bring transparency to this area and provide insights into the “black box” of AI systems.
Innovation and regulation
Some business leaders have accused the EU of focusing more on regulation than fostering innovation. As the drafting of the code of practice continues, efforts are being made to strike a balance between regulatory obligations and innovation.
Recently, former European Central Bank chief Mario Draghi stressed the need for more coordinated industrial policies, faster decision-making, and large-scale investments to keep up with China and the United States.
Meanwhile, Thierry Breton, a strong advocate for EU regulations and a critic of non-compliant tech firms, stepped down as European Commissioner for the Internal Market following disagreements with European Commission President Ursula von der Leyen.
As protectionist policies gain momentum in the EU, local tech companies are pushing for exemptions in the AI Act that would benefit emerging European firms. "We’ve argued that these requirements should be reasonable and, if possible, tailored for startups," said Maxime Ricard, policy manager at Allied for Startups, a network of trade groups representing smaller tech businesses.
Once the code is released early next year, tech companies will have until August 2025 to align with its requirements.
Non-profit groups, including Access Now, the Future of Life Institute, and Mozilla, are also involved in shaping the code. Gahntz cautioned that as the AI Act’s provisions are clarified, it's essential to ensure that major AI companies don't weaken important transparency mandates.
(With Reuters Inputs)
Updated 14:37 IST, September 20th 2024