European Central: Examining The European Union’s Artificial Intelligence Policy
Tara Winstead
Artificial Intelligence (AI) is poised to play a central role in shaping the future global economy. As a result, governments worldwide are not only making significant investments in AI technologies but also intensifying their efforts to regulate them. However, governments have taken markedly different approaches to AI regulation. In the United States, for example, ‘AI risk management is highly distributed across federal agencies, many adapting to AI without new legal authorities.’ Additionally, the US ‘has invested in non-regulatory infrastructure, such as a new AI risk management framework, evaluations of facial recognition software, and extensive funding of AI research.’ In comparison, the European Union’s approach to AI regulation ‘is characterized by a more comprehensive range of legislation tailored to specific digital environments.’ Specifically, the EU has rolled out new legislation implementing requirements ‘on high-risk AI in socioeconomic processes, the government use of AI, and regulated consumer products with AI systems.’
The EU’s comprehensive AI policy took several years to develop. In April 2021, the EU Commission presented its first AI package, which included a coordinated plan for fostering a ‘European approach to AI,’ a review of the Coordinated Plan on Artificial Intelligence aimed at aligning member states’ AI policies, and an outline of its first-ever ‘regulatory framework proposal on artificial intelligence’ (i.e., the AI Act).
Over the last several years, the AI Act has worked its way through the various institutions and bureaucratic layers of the European Union. On July 12, 2024, the AI Act was published in the EU Official Journal, before entering into force for all twenty-seven EU member states on August 1, 2024. The AI Act will become fully applicable on August 2, 2026.
According to the European Commission, the AI Act ‘gives AI developers, deployers, and users the clarity they need by intervening only in those cases that existing national and EU legislations do not cover.’ The AI Act’s approach is based on four different levels of risk: ‘minimal risk, high risk, unacceptable risk, and specific transparency risk.’ AI systems that fall under the minimal-risk level, which covers the vast majority, are not subject to specific regulations or rules. Systems that fall under high risk are those that can pose ‘serious risks to health, safety or fundamental rights.’ These systems are therefore subject to ‘strict obligations’ (e.g., risk assessment and mitigation systems or ‘appropriate human oversight measures’). AI systems that are classified as unacceptable risk are considered ‘a clear threat to the safety, livelihoods and rights of people’ and are accordingly banned. The AI Act expressly prohibits eight practices, including ‘harmful AI-based manipulation and deception’ and the use of ‘biometric categorisation to deduce certain protected characteristics.’ Finally, specific transparency risk refers to ‘risks associated with a need for transparency around the use of AI.’ The AI Act establishes specific disclosure requirements to ensure that individuals are informed when they are interacting with AI systems, such as online chatbots. Moreover, creators of generative AI products must make sure that AI-generated content can be easily identified. For example, AI-generated deepfakes should be clearly and prominently labeled.
So how exactly is this legal framework implemented? Once a ‘high-risk’ AI system is developed, it must undergo a conformity assessment and meet a set of predetermined requirements. It is then registered in a separate, EU-wide database for AI systems. After this step, a declaration of conformity is signed and the AI system is labeled and placed on the market. (The system must repeat this regulatory process if significant changes occur while it is on the market.) Once an AI system is placed on the market, the European AI Office and the relevant authorities in the member states are in charge of market surveillance and enforcement. However, the AI system provider is ultimately responsible for implementing a ‘post-market monitoring system’ and reporting any serious incidents to regulators.
The goal of the EU’s AI Act is to establish a unified set of rules that apply across all EU member states, with the aim of building trust in AI systems throughout Europe. Indeed, the AI Act is part of a much larger EU initiative to win the artificial intelligence race. The EU Commission has also launched an AI Innovation Package and announced funding for the creation of state-of-the-art ‘AI Factories’ to develop advanced generative AI models.
Despite these ambitious measures, certain elements of the EU’s AI policy have received pushback from the tech sector. The concerns center on the proposed rules for providers of General-Purpose Artificial Intelligence (GPAI), following the European Commission’s release of the latest draft of the Code of Practice in March 2025. The Code of Practice on GPAI aims to help AI model providers comply with the AI Act, but industry representatives, particularly publishers and rights-holders, have raised concerns over issues such as copyright, transparency, and risk assessments. Critics argue that the new Code of Practice draft fails to provide the legal certainty needed to advance AI innovation. Elias Papadopoulos, director of policy at internet lobby group DOT Europe, said that the draft ‘has been somewhat improved,’ but that certain provisions still exceed the requirements of the AI Act. ‘For example, mandatory third-party risk assessment pre- and post-deployment, although not an obligation in the AI Act itself, unfortunately remains in the new draft,’ he said. Thus, while the European Union has engineered a truly unique and pioneering AI regulatory framework, it remains to be seen how effectively it will balance innovation with regulation.