On February 2, 2024, representatives of all 27 EU member states approved the latest draft of the AI Act. The unanimous vote signals the resolution of concerns raised by France, Germany and Italy that the regulation could stifle European innovation. The approved text will now advance to a vote by key committees of the European Parliament in mid-February, followed by a full plenary vote in April. It will enter into force 20 days after publication in the EU’s official journal.
The approved language reflects a range of new provisions hammered out since EU policymakers reached a provisional agreement on December 8. The outcome is an ambitious framework that will have a global impact and will likely become a template for other AI regulators.
The regulation calls for governance, testing and other guardrails to manage the risks of AI systems throughout their life cycle. New provisions include a framework for general-purpose AI (GPAI) systems, including heightened standards for those that pose systemic risk, as well as new requirements and exemptions for “high-risk AI systems.” A new AI Office will supervise and enforce the GPAI provisions and develop voluntary codes of conduct and guidelines.
The evolution of the AI Act, shaped by sometimes fraught negotiations and compromises since its original proposal in 2021, highlights the balance regulators must strike: protecting stakeholders from the risks of AI while fostering innovation.
With the approved text in hand, organizations now have a detailed view of the requirements to guide their readiness planning. Affected companies should take immediate steps given the regulation’s scope and complexity. Some provisions will become enforceable as soon as six months after the measure becomes law.
As described in the AI Act’s recitals, the overarching policy objective is to improve the EU market’s functioning by establishing a uniform legal framework for developing, marketing and using AI systems; to promote the adoption of trustworthy AI while protecting the health, safety and fundamental rights of individuals; and to support innovation.
The regulation applies to providers, deployers, importers and distributors of AI systems in the EU market, as well as to product manufacturers that place an AI system on the EU market, or put one into service, as part of their product and under their own name.
New provisions at a glance. The approved text provides the technical language and details reflecting the changes negotiated last December. Key additions include:
High-risk systems. The bulk of the AI Act applies to “high-risk AI systems,” which are subject to extensive requirements for safety, accuracy and security. Under Article 6, these include AI systems used as a product, or as a safety component of a product, covered by the EU harmonization legislation listed in Annex II, such as machinery, toys, medical devices, protective equipment, elevators, vehicles, aircraft and watercraft. Also included are AI systems used in the areas listed in Annex III, including biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration and border control, and the administration of justice and democratic processes.
Providers of high-risk AI systems face many risk mitigation requirements. Article 9 obligates them to establish a risk management system that meets detailed criteria for scope and testing. Article 10 imposes data governance standards around training, validation and testing of data sets. Article 11 requires technical documentation showing compliance before the system goes to market. Article 12 mandates automatic event-logging to facilitate risk identification and post-market monitoring. Article 13 requires that systems include instructions for deployers and be designed to enable transparency so deployers can interpret their output. Article 14 calls for systems to enable and support human oversight. Article 15 imposes standards for accuracy, robustness and cybersecurity.
To demonstrate compliance, providers of high-risk AI systems must follow the conformity assessment procedures described in Article 43, which references separate procedures for assessments based on internal control (Annex VI) and those based on quality management systems and technical documentation (Annex VII). Other provisions lay out the requirements for certificates, declarations of conformity, CE markings and registration.
High-risk system compliance overview
Source: Trustworthy AI: European regulation and its implementation (PwC Germany)
General-purpose AI models. The approved text includes a new, separate framework for GPAI models in Articles 52-52e. Key provisions include transparency and technical documentation obligations for model providers, policies to comply with EU copyright law, published summaries of training content and, for models deemed to pose systemic risk, additional duties such as model evaluation, adversarial testing, incident reporting and cybersecurity protections.
Innovation support. Regulatory sandboxes — controlled environments for AI system development, training, testing and validation prior to placing on the market or putting into service — have greater prominence under the approved text. Title V sets out detailed procedures for how these sandboxes should operate in practice in an effort to foster responsible innovation. Each EU member state will have to establish at least one sandbox within 24 months after the regulation takes effect.
Separately, the EC announced an AI innovation package with major funding to support AI startups and small- and medium-size enterprises, as well as the AI Pact, a voluntary industry consortium designed to help participants prepare for compliance.
Enforcement and penalties. Member states will enforce the regulation through their respective market surveillance authorities. Under Article 68f, however, the GPAI provisions will come under exclusive EC supervision and enforcement, in coordination with the AI Office.
Maximum fines will range from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover, depending on the organization’s size and the specific infringement.
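As a rough, purely illustrative sketch (not legal advice), the tiered caps above can be expressed as a small calculation. This assumes the fine ceiling for non-SME organizations is the higher of the fixed amount and the percentage of global annual turnover; the tier amounts come from the text above, while the company turnover figure is hypothetical.

```python
# Illustrative sketch of the AI Act's tiered fine ceilings described above.
# Assumption: for large organizations, the cap is whichever is HIGHER of the
# fixed euro amount and the turnover percentage (SME carve-outs are omitted).

def max_fine(fixed_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the share of global turnover."""
    return max(fixed_eur, turnover_pct * global_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover.
turnover = 2_000_000_000

top_tier = max_fine(35_000_000, 0.07, turnover)     # most serious infringements
bottom_tier = max_fine(7_500_000, 0.015, turnover)  # least serious tier

print(f"Top tier ceiling:    EUR {top_tier:,.0f}")     # EUR 140,000,000
print(f"Bottom tier ceiling: EUR {bottom_tier:,.0f}")  # EUR 30,000,000
```

For this hypothetical turnover, the percentage-based cap exceeds the fixed amount in both tiers; for smaller companies the fixed amounts of €35 million and €7.5 million would govern instead.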
Implementation timeline. Once it’s formally adopted, the regulation will enter into force 20 days after publication in the EU’s official journal. Most provisions will become applicable and enforceable 24 months after that point. Key exceptions include the bans on prohibited AI practices (applicable after six months), the GPAI provisions (after 12 months) and the requirements for high-risk systems covered by the Annex II harmonization legislation (after 36 months).
To prepare for the AI Act’s formal adoption, companies with EU operations should understand the potential direct and secondary impacts, identify the gaps and opportunities, and plan accordingly. Consider taking these steps.