The European Union is entering a new phase of its artificial intelligence policy, shifting from regulating technology to ensuring Europe’s competitiveness in the AI sector. In the last mandate, policymakers devoted considerable energy to crafting rules to make AI systems safe, ethical, and trustworthy. This was essential groundwork. However, with most legal safeguards in place, the bloc can focus on fostering innovation, deployment, and scale-up, aiming to cement its place on the global stage.
Much of this renewed effort will focus on establishing the EU as a more appealing hub for AI businesses to thrive and expand. Access to funding and venture capital is fundamental to achieving this. Without a robust investor pipeline, Europe risks ceding talent and its share of global tech revenues (which dropped from 22% to 18% between 2013 and 2023) to competitor markets – such as the United States and China – where funding is more straightforward to secure.
Only four of the world’s top 50 tech companies are European, and just one of the world’s 20 largest internet companies is headquartered on the continent, which speaks volumes about the precariousness of Europe’s tech industry. This hinders the bloc’s growth prospects in AI, limiting its ability to capitalise on the technology’s potential to drive innovation and economic growth across other sectors.
So, how does the EU reverse this trend? Completing the long-discussed Capital Markets Union, which will enable startups and scale-ups to access the resources they need to flourish, is a crucial first step.
But money is just one part of the story. Policymakers are rolling out several initiatives to encourage the uptake of AI across industries and the public sector. One key programme is the Apply AI Strategy, which is designed to integrate AI solutions into manufacturing, healthcare, and education.
The EU is also tackling the hardware side of the equation. Projects like the AI Factories initiative and the Cloud and AI Development Act aim to increase Europe’s computational capacity and build interoperable cloud systems to support AI training and development. These efforts are crucial if policymakers are serious about reducing the EU’s dependence on foreign infrastructure.
Beyond supporting its industry, the EU isn’t easing up on enforcing competition rules in tech against mostly non-European firms. The Digital Markets Act – designed to rein in the monopolistic behaviours of large platforms – is a key tool in this strategy, and there seems to be consensus on using the instrument to ensure competition in AI markets. There is speculation that its scope could extend to generative AI, though the limited resources the Commission has allocated to this area suggest otherwise.
This is all set against a backdrop of significant change, with outgoing competition commissioner Margrethe Vestager making way for Teresa Ribera, an accomplished public law professor who helped negotiate the 2015 Paris Climate Agreement but lacks experience in digital markets and AI policy.
Meanwhile, the EU is also working on setting technical standards for AI, a move intended to bring much-needed clarity for businesses navigating compliance with the AI Act. Standardisation would simplify the legal landscape, reduce administrative burdens, and create a more innovation-friendly environment. Yet progress has been frustratingly slow.
Standardisation organisations, which are consensus-driven, often find it challenging to meet political deadlines. The European Commission has threatened to impose technical specifications if the process drags on. However, this is unlikely to solve the deeper problem of resource shortages within EU institutions and private industry stakeholders.
The question is whether these initiatives will cut it.
Beyond standard-setting, legal certainty can also come from simplifying EU rules. The second von der Leyen Commission intends to reduce unnecessary red tape across the board, but little has been announced in the tech and digital policy space.
Revisiting major legislation like the AI Act or the GDPR seems politically implausible, but simplifying the Data Act might be a more realistic strategy with tangible benefits. Clearing up ambiguous definitions and clarifying data-sharing rights, for example, would give AI developers easier access to high-quality data for training their models.
When it comes to facilitating the deployment and development of new solutions, one of the most intriguing questions for the EU’s AI strategy is whether it should focus on excelling in specific areas of the AI value chain rather than spreading its efforts too thinly.
Europe has unique strengths it could leverage to succeed in AI. For instance, the bloc’s rich linguistic diversity positions it to develop models capable of working across multiple languages, a feature increasingly in global demand. On the hardware side, Europe could focus on creating AI-optimised chips, an area that is still evolving worldwide. Of course, this kind of selective investment risks favouring specific member states over others, which could ruffle feathers. Yet the alternative – falling further behind global leaders – poses a more significant threat.
In short, the European Union’s shift from regulation to innovation in its AI strategy presents both opportunities and challenges for business. While policymakers work to address issues like resource shortages and regulatory complexity, success will depend on practical implementation. For stakeholders, staying informed and engaging proactively with policy developments will be crucial to navigating risks and seizing emerging opportunities in Europe’s rapidly evolving AI policy landscape.