Next Move special edition: EU AI Act

The issue

On February 2, 2024, representatives of all 27 EU member states approved the latest draft of the AI Act. The unanimous vote signals the resolution of concerns raised by France, Germany and Italy that the regulation could stifle European innovation. The approved text will now advance to a vote by key committees of the European Parliament in mid-February, followed by a full plenary vote in April. It will enter into force 20 days after publication in the EU’s official journal.

The approved language reflects a range of new provisions hammered out since EU policymakers reached a provisional agreement on December 8. The outcome is an ambitious framework that will have a global impact and will likely become a template for other AI regulators.

The regulation calls for governance, testing and other guardrails to manage the risks of AI systems throughout their life cycle. New provisions include a framework for general-purpose AI (GPAI) systems, including heightened standards for those that pose systemic risk, as well as new requirements and exemptions for “high-risk AI systems.” A new AI Office will supervise and enforce the GPAI provisions and, to support innovation, develop voluntary codes of conduct and guidelines.

The evolution of the AI Act, a process filled with sometimes fraught negotiations and compromises made since its original proposal in 2021, highlights the ongoing balance that regulators must strike — protecting stakeholders from the risks of AI, while at the same time fostering innovation.

With the approved text in hand, organizations now have a detailed view of the requirements to guide their readiness planning. Affected companies should take immediate steps given the regulation’s scope and complexity. Some provisions will become enforceable as soon as six months after the measure becomes law.

The regulator’s take

As described in the AI Act’s recitals, the overarching policy objective is to improve the EU market’s functioning by establishing a uniform legal framework for developing, marketing and using AI systems; to promote the adoption of trustworthy AI while protecting the health, safety and fundamental rights of individuals; and to support innovation.

The regulation applies to providers, deployers, importers and distributors of AI systems in the EU market, as well as to product manufacturers that place an AI system on the EU market, or put one into service, as part of their product and under their own name.

New provisions at a glance. The approved text provides the technical language and details reflecting the changes negotiated last December. Key additions include:

  • GPAI framework: Articles 52-52e establish a new GPAI framework for developers of foundation models and certain generative AI tools, as discussed in more depth below.
  • AI Office: Article 55b requires the European Commission (EC) to establish an AI Office, as announced recently. The office will have considerable oversight powers across the European Union, including responsibility for supervision and enforcement of GPAI provisions.
  • Bias mitigation duty: Article 10(2)(f) obligates providers of high-risk AI systems to identify, detect, prevent and mitigate harmful biases that may result in discrimination or otherwise curtail the fundamental rights of individuals under EU law.
  • Human oversight: Article 14 requires deployers of high-risk AI systems to assign responsibility for operational oversight of the performance of each system to specific individuals with appropriate training.
  • High-risk system exemption: Article 6(2a) exempts certain AI systems from the high-risk category if they don’t pose a significant risk of harm to the health, safety or fundamental rights of individuals. This is true of AI systems intended to perform a narrow procedural task, to improve the result of a previously completed human activity, to detect decision-making patterns or deviations, or to perform a preparatory task to an assessment relevant to the use cases listed in Annex III.
  • Investigation of incidents: Article 62(1f) requires providers of high-risk systems that experience a serious incident to launch an internal investigation, perform a risk assessment and identify any necessary corrective actions.
  • Scientific research exemption: Article 2(5a) excludes from the regulation’s scope models developed for the sole purpose of scientific research and development. 
  • AI literacy: Article 4b obligates providers and deployers to establish a sufficient level of AI literacy among their employees and other individuals involved in the operation and use of AI systems.
  • Right to file complaint, request explanation: Article 68b empowers individuals and organizations to lodge complaints with market surveillance authorities regarding alleged violations of the act. Article 68c gives individuals subject to certain decisions taken by high-risk AI systems a right to request a clear and meaningful explanation from the system’s deployer.
  • Fundamental rights assessment: Article 29a requires deployers of high-risk systems to perform a fundamental rights impact assessment in certain circumstances. This is essentially an AI risk assessment that considers the potential negative effects of the system’s use on individuals and how these will be addressed through compliance measures.

High-risk systems. The bulk of the AI Act applies to “high-risk AI systems,” which are subject to extensive requirements for safety, accuracy and security. Under Article 6, these include AI systems used as a product, or as a safety component of a product, covered by the EU harmonization legislation listed in Annex II, such as machinery, toys, medical devices, protective equipment, elevators, vehicles, aircraft and watercraft. Also included are the AI systems listed in Annex III, among them:

  • Biometrics: Biometric categorization systems and emotion recognition systems, as well as remote biometric identification systems, that aren’t prohibited under Article 5 and are otherwise lawful.
  • Critical infrastructure: AI systems used as safety components in critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity.
  • Education: AI systems used, for example, to determine access or admission to schools and vocational institutions at all levels, to evaluate learning outcomes or to assess the appropriate level of education for a person.
  • Essential services: AI systems used to determine eligibility for public assistance or healthcare, to determine creditworthiness (other than for fraud detection), to classify emergency calls for dispatching first responders or to assess risk and pricing for life and health insurance.
  • Law enforcement: AI systems used for various law enforcement purposes permitted by law such as assessing a person’s risk of becoming a crime victim, supporting polygraphs and evaluating criminal evidence.

Providers of high-risk AI systems face many risk mitigation requirements. Article 9 obligates them to establish a risk management system that meets detailed criteria for scope and testing. Article 10 imposes data governance standards for training, validation and testing data sets. Article 11 requires technical documentation showing compliance before the system goes to market. Article 12 mandates automatic event logging to facilitate risk identification and post-market monitoring. Article 13 requires that systems include instructions for deployers and be designed for transparency so deployers can interpret their output. Article 14 calls for systems to enable and support human oversight. Article 15 imposes standards for accuracy, robustness and cybersecurity.

To demonstrate compliance, providers of high-risk AI systems must follow the conformity assessment procedures described in Article 43, which references separate procedures for assessments based on internal control (Annex VI) and those based on quality management systems and technical documentation (Annex VII). Other provisions lay out the requirements for certificates, declarations of conformity, CE markings and registration.

High-risk system compliance overview

General-purpose AI models. The approved text includes a new, separate framework for GPAI models in Articles 52-52e. Key provisions include:

  • Definition: GPAI models are defined as “an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.”
  • General obligations: Article 52c introduces horizontal obligations for all GPAI models, such as maintaining and providing technical documentation to the new AI Office and national competent authorities, as well as providing certain documentation to downstream providers to help them understand the model’s capabilities and limitations and to comply with the regulation. GPAI model providers must also adopt a copyright policy and issue a summary of the content used to train the model.
  • Authorized representative: Under Article 52ca, GPAI model providers outside the European Union must appoint an authorized EU-based representative to perform required tasks. These tasks include verifying technical documentation and cooperating with the AI Office and national authorities. The representative must terminate its mandate if the provider is noncompliant and must inform the AI Office. This obligation doesn’t, however, apply to models available under a free and open-source license unless they present systemic risks.
  • Systemic risk: Additional, stricter requirements apply to GPAI models with systemic risk. These are GPAI models that have high-impact capabilities (determined based on appropriate technical tools and methodologies) or are identified as such by the EC, particularly if their training involves significant computational power. Under Article 52d, providers of GPAI models with systemic risk must perform model evaluations, make risk assessments and take risk mitigation steps, provide adequate cybersecurity protection, and report serious incidents to the AI Office and national authorities.
  • Codes of practice: Compliance with these requirements can be achieved through codes of practice developed by industry, with the participation of member states (through the AI Board) and facilitated by the AI Office. Developing the codes of practice should be an open process to which all interested stakeholders are invited, including companies, civil society and academia. The AI Office will evaluate these codes, and the EC can formally approve them. If they’re not finalized by the time the regulation becomes applicable, or if the AI Office deems them inadequate, the EC can establish common rules for implementing the obligations through an implementing act.

Innovation support. Regulatory sandboxes — controlled environments for AI system development, training, testing and validation prior to placing on the market or putting into service — have greater prominence under the approved text. Title V sets out detailed procedures for how these sandboxes should operate in practice in an effort to foster responsible innovation. Each EU member state will have to establish at least one sandbox within 24 months after the regulation takes effect.

Separately, the EC announced an AI innovation package with major funding to support AI startups and small- and medium-size enterprises, as well as the AI Pact, a voluntary industry consortium designed to help participants prepare for compliance.

Enforcement and penalties. Member states will enforce the regulation through their respective market surveillance authorities. Under Article 68f, however, the GPAI provisions will come under exclusive EC supervision and enforcement, in coordination with the AI Office.

Maximum fines will range from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover, depending on the organization’s size and the specific infringement.
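For a rough sense of how these caps scale with company size, the sketch below computes an indicative maximum fine from a fixed euro cap and a turnover percentage. The "higher of the two for large companies, lower for SMEs" rule and the example figures are assumptions for illustration, not quotations of the final text; consult the regulation itself for the authoritative tiers.

```python
# Illustrative only: how a fine cap might combine a fixed euro amount with a
# share of global annual turnover. Tier figures and the large-company/SME
# rule used here are assumptions for illustration.

def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float,
             is_sme: bool = False) -> float:
    """Return an indicative maximum fine for a single infringement tier."""
    turnover_based = turnover_eur * turnover_pct
    # Assumed rule: higher of the two caps for large companies, lower for SMEs.
    return min(fixed_cap_eur, turnover_based) if is_sme else max(fixed_cap_eur, turnover_based)

# Example: a company with €10 billion in global annual turnover.
print(max_fine(10_000_000_000, 35_000_000, 0.07))    # top of the range: 700000000.0
print(max_fine(10_000_000_000, 7_500_000, 0.015))    # bottom of the range: 150000000.0
```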

Implementation timeline. Once it’s formally adopted, the regulation will enter into force 20 days after publication in the EU’s official journal. Most provisions will become applicable and enforceable 24 months after that point. Key exceptions include:

  • Within 6 months, the Title I general provisions and Article 5 prohibitions on certain AI practices will apply.
  • Within 12 months, the penalty provisions and the requirements for GPAI models will apply. Providers of GPAI models that were already on the market before the GPAI model provisions apply will have two additional years (or three years total) to comply.
  • Within 36 months, the obligations relating to high-risk AI systems to be used as a safety component of a product, or that are themselves a product, covered by the specific EU harmonization legislation listed in Annex II will apply.
  • Within 4 years, providers or deployers of high-risk AI systems intended to be used by public authorities must comply.
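For planning purposes, the staggered applicability dates can all be projected from a single anchor date. The Python sketch below is a minimal illustration: the entry-into-force date is a hypothetical placeholder, and the month offsets simply mirror the milestones listed above.

```python
# Sketch of the phase-in schedule, anchored to a hypothetical
# entry-into-force date; offsets mirror the milestones above.
from calendar import monthrange
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day if needed."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    return date(year, month, min(d.day, monthrange(year, month)[1]))

entry_into_force = date(2024, 8, 1)  # hypothetical placeholder date

milestones = {
    "Prohibited AI practices (Title I, Article 5)": 6,
    "Penalties and GPAI model obligations": 12,
    "Most remaining provisions": 24,
    "High-risk systems under Annex II harmonization legislation": 36,
    "High-risk systems used by public authorities": 48,
}

for label, offset in milestones.items():
    print(f"{add_months(entry_into_force, offset)}  {label}")
```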

Your next move

To prepare for the AI Act’s formal adoption, companies with EU operations should understand the potential direct and secondary impacts, identify the gaps and opportunities, and plan accordingly. Consider taking these steps.

  1. Inventory your AI use. Identify all AI projects in your organization and their status (e.g., planning, development or operation). This inventory will form the basis for all further decisions in establishing AI governance. A starting point for this exercise might be your model governance function. Assess how comfortable you are with the inventory’s completeness and whether you’re at risk of having “shadow AI” within your organization. Monitor your AI-based activities now to avoid rushing into this task during the compliance readiness period. Consider adopting a cloud-based model risk management governance solution like Model Edge, a PwC product.
  2. Conduct a regulatory impact assessment. Based on your inventory, determine which AI systems and use cases are within scope of the AI Act. Do they potentially meet the thresholds for prohibited AI practices, high-risk AI systems and/or GPAI models with systemic risk? Do they qualify for any exemptions (e.g., for systems that don't pose a significant risk of harm to health, safety or fundamental rights)? Assess your potential exposure and the consequences for your strategy, product design, operations and compliance program to get a preliminary view of the required mitigation effort. Classify risks and assign roles according to the AI Act requirements, as well as any sector-specific regulations (a simple inventory-and-classification sketch follows this list).
  3. Perform a gap analysis. Compare your existing programs and processes against the AI Act compliance obligations. This will help determine concrete workstreams and overlaps with other regulations. Existing programs and processes, such as risk management, data management or cybersecurity, can sometimes be expanded to include AI-specific measures.
  4. Develop a readiness plan. Based on your gap analysis, create a plan to mitigate your exposure and bolster your program. Follow principles of responsible AI, including governance, testing, training and risk management.
    • Create a risk taxonomy reflecting the guiding risk principles that enable AI risks to be measured, managed and, if necessary, transparently reported to others over time. If you have an existing taxonomy, review it for sufficiency.
    • Develop or enhance your AI governance model and integrate it with your broader enterprise risk management (ERM). A critical and foundational step to developing a governance model is aligning the roles and responsibilities of existing teams, as well as defining new ones, to support oversight.
    • Strengthen capabilities to assess and test your AI systems according to their risk. These should support the many types of assessments mandated by the regulation, including assessments of conformity, fundamental rights, systemic risk and serious incidents.
    • Train your staff to attain AI literacy in the operation and use of AI systems, as required under Article 4b, taking into account their technical knowledge, experience, education and training, and the context in which the AI systems will be used.
  5. Engage with industry and regulators. Consider whether joining the AI Pact can yield benefits. The EC is looking to convene, on a voluntary basis, key EU and non-EU industry players to share leading practices, demonstrate commitment toward the AI Act’s objectives and take concrete steps to prepare for compliance (e.g., building internal processes, preparing staff and self-assessing AI systems). You can apply on your company’s behalf.
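To make steps 1 and 2 more concrete, here is a minimal Python sketch of what an inventory record supporting a first-pass AI Act classification might look like. The field names, risk categories and the example entry are illustrative assumptions, not terms defined by the regulation; a real inventory would align them with your own governance taxonomy.

```python
# Illustrative sketch of an AI inventory record supporting a first-pass
# AI Act classification; field names and categories are assumptions.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class RiskCategory(Enum):
    PROHIBITED = "prohibited practice (Article 5)"
    HIGH_RISK = "high-risk (Article 6 / Annex III)"
    GPAI_SYSTEMIC = "GPAI model with systemic risk"
    GPAI = "GPAI model"
    MINIMAL = "minimal or limited risk"
    UNCLASSIFIED = "not yet assessed"

@dataclass
class AISystemRecord:
    name: str
    owner: str                      # accountable business function
    lifecycle_stage: str            # e.g., "planning", "development", "operation"
    role: str                       # provider, deployer, importer or distributor
    intended_purpose: str
    uses_personal_data: bool = False
    annex_iii_use_case: Optional[str] = None   # e.g., "education", "essential services"
    risk_category: RiskCategory = RiskCategory.UNCLASSIFIED
    exemptions_claimed: list[str] = field(default_factory=list)

# Example entry for a hypothetical internal system.
record = AISystemRecord(
    name="resume-screening-assistant",
    owner="HR",
    lifecycle_stage="development",
    role="deployer",
    intended_purpose="rank job applications for recruiter review",
    uses_personal_data=True,
    annex_iii_use_case="employment",
    risk_category=RiskCategory.HIGH_RISK,
)
print(record.risk_category.value)
```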