Following three consecutive days of meetings which ended on 8 December 2023, European Parliament and Council negotiators celebrated a political agreement on the Artificial Intelligence Act (AI Act). This time it was Spain’s turn to claim the spotlight, after a line of Member States had settled for ‘high-level’ agreements at the end of their Council presidencies. However, as can be expected from a highly technical file like the AI Act, plenty of technical meetings were still needed to iron out wrinkles in the text.
As negotiations went on, there was speculation that France, Germany and Italy would form a blocking minority in Council over the regulation of foundation models and the use of biometric identification in law enforcement. These three countries took a hardline approach in negotiations and, together with a few other Member States, refused to give their blessing until they had seen the final text.
Six weeks on, and though there are not as many selfies of AI policy wonks doing the rounds on X, celebrations are arguably more justified. It seems that none of the Member States is willing to spend political capital blocking a hard-fought deal on a flagship EU law. There is therefore a general sense of optimism that the AI Act will be finalised by the end of the month.
With these developments, attention is slowly turning to what’s ahead. For one, focus will shift to the AI Liability Directive, the Commission proposal for which was parked by lawmakers for the duration of the AI Act negotiations. This Directive is meant to further clarify accountability along the value chain through rules on the disclosure of evidence and the burden of proof for damages caused by an AI system. Other issues that EU policymakers and regulators will likely want to address are the protection of AI-generated intellectual property, the expected rise in energy usage driven by AI deployment, and sector-specific rules on AI.
Moreover, as with the GDPR, the EU plans to export its new regulatory rulebook by relying on its first-mover advantage and sheer market size. The bloc hopes that its risk-based, tiered approach will become a global model for AI regulation and thereby create a level playing field for its own tech companies. However, this may prove an uphill battle, not least because Europe has very few leading AI companies of its own, and these typically rely on one of the large international providers for scalable infrastructure.
Meanwhile, other initiatives across the globe are already underway. The United Kingdom has adopted its National AI Strategy, and in the United States President Biden has signed an AI Executive Order, demonstrating a growing consensus that this technology should be regulated. Although some elements of the risk-based approach can be found in these initiatives, there are nevertheless stark differences. The US executive order focuses on setting standards and guidelines rather than regulatory prohibitions and obligations. And instead of a horizontal approach, the UK relies on separate sectoral regulators for areas such as healthcare products, data protection and financial services.
The EU may hope to tap into this momentum and drive forward the large variety of international initiatives at play. But on the intergovernmental stage too, the preference so far is for flexibility and promoting innovation rather than implementing a rigid regulatory structure. Most notably, in November 2023, the G7’s Hiroshima Process merely resulted in a set of voluntary rules for generative AI.
As such, the AI Act, whether it is sealed this month or the next, is merely the start of a journey to regulate a technology that transcends borders and will have significant societal impacts that have only begun to unfold.
You can reach out to us for more information: [email protected]
About the authors
Patrick Birken: A Senior Consultant in Portland’s Brussels office, Patrick advises clients on EU public affairs and strategic communications in tech and sustainability.
Enrico Pelosato: An Associate Consultant in Portland’s Brussels office, Enrico advises clients on EU public affairs in tech and mobility.
Beatrice Gori: An Account Executive in Portland’s Brussels office, Beatrice specialises in EU tech policy research and analysis.
Arthur Faure: An Intern in Portland’s Brussels office, Arthur conducts research in EU tech and sustainability policy.
Jonathan Sage: Senior Tech Policy Advisor for Portland in Europe, Jonathan is a leading practitioner in cloud and cybersecurity policy.