The European Union adopted the Artificial Intelligence Act

03/04/24

On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act, the long-awaited European regulation on artificial intelligence (hereinafter referred to as the "Regulation").

The Regulation establishes a unified legal framework for regulating the use of artificial intelligence (AI), regardless of industry or technology. It applies to operators (providers, importers, distributors, and manufacturers) of AI systems in the EU or outside it, where the AI system is used within EU territory.

Non-compliance with the Regulation's requirements may result in substantial fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher.
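As a rough illustration of the higher-of-the-two penalty ceiling, a minimal sketch (the function name `max_fine_eur` is a hypothetical helper, not part of the Regulation):

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million floor, so the higher figure applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies whose 7% figure falls below €35 million, the fixed €35 million ceiling applies instead.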

The Regulation introduces a risk-based approach and categorises AI systems into four risk groups:

  • Unacceptable risk (e.g., AI systems manipulating human behaviour are prohibited from being put into service, placed on the market, or used).
  • High-risk (AI systems that may impact safety or fundamental rights, mostly used in critical infrastructure, education, employment, healthcare, banking services, law enforcement, migration and border control management, judiciary, electoral processes).
  • Limited risk (AI systems posing a limited risk of manipulation, subject mainly to transparency obligations).
  • Minimal risk (AI systems not falling into the above-mentioned categories).
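The four tiers above can be sketched as a simple lookup. The example use cases and names below are illustrative assumptions only, not an authoritative or exhaustive classification under the Regulation:

```python
from enum import Enum

class RiskTier(Enum):
    """The Regulation's four risk categories, from strictest to lightest."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative (non-exhaustive) mapping of use cases to tiers,
# loosely following the examples listed above.
EXAMPLES = {
    "behavioural manipulation": RiskTier.UNACCEPTABLE,
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for employment": RiskTier.HIGH,
    "credit scoring by a bank": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    # Systems not in any higher category default to minimal risk.
    return EXAMPLES.get(use_case, RiskTier.MINIMAL)
```

In practice, classification requires a legal assessment of the system's intended purpose against the Regulation's annexes, not a keyword lookup.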

Generative artificial intelligence systems must label their output as artificially generated. Specifically, any AI-generated audio, video, or image content (including so-called "deepfakes") must carry an appropriate indication.

All remote biometric identification systems are considered high-risk and are subject to strict requirements. The use of remote biometric identification in public places for law enforcement purposes is prohibited, except in specific cases provided by law.

Limited- and minimal-risk AI systems remain subject to the requirements set by the Regulation, while high-risk AI systems face the most extensive regulation.

Before using a high-risk AI system, the operator must conduct a fundamental rights impact assessment, i.e., evaluate how the use of such a system could affect the fundamental rights of the individuals concerned.

Operators of high-risk AI systems are obliged to:

  • Implement risk management systems for the entire life cycle of the AI system.
  • Ensure the relevance, completeness, and representativeness of datasets for training, verification, and testing of the AI system to minimise potential risks.
  • Develop technical documentation to demonstrate compliance of the AI system with the Regulation's requirements and conduct conformity assessment.
  • Incorporate automatic recording (logging) of events relevant for identifying risks and substantial modifications throughout the life cycle of the system.
  • Provide instructions for use so that downstream deployers integrating the AI system can ensure compliance upon further deployment.
  • Ensure appropriate human oversight measures to minimise risk when the AI system is integrated into other products or services.
  • Develop high-risk AI systems to achieve the appropriate level of accuracy, reliability, and cybersecurity.
  • Establish a quality management system to ensure compliance with the Regulation.

The Regulation pays particular attention to general-purpose AI (GPAI) systems and the underlying general-purpose AI models, which serve as the basis or components for generative AI applications such as ChatGPT.

GPAI systems can be used as high-risk AI systems or integrated into them. Operators of GPAI systems must cooperate with operators of high-risk AI systems to ensure compliance with the Regulation's requirements.

Specific requirements are defined for GPAI. In particular, GPAI systems and the models underlying them must meet certain transparency requirements, including compliance with EU copyright law and publication of summaries of the content used for training. More powerful GPAI models that may pose systemic risks must meet additional requirements, including model evaluation, risk assessment and mitigation, and incident reporting.

The Regulation covers a wide range of applications for various AI systems, and changes will be implemented gradually after the Regulation comes into force:

  • Within the first 6 months (February 2025): provisions prohibiting AI systems with unacceptable risk (social scoring, behavioural manipulation).
  • Within 12 months (August 2025): application of obligations for GPAI systems.
  • Within 24 months (August 2026): most remaining provisions of the Regulation come into effect, including requirements for high-risk AI systems.
  • Within 36 months (August 2027): application of specific provisions for certain high-risk AI systems.

Significant regulation applies to the use of high-risk AI systems in the public sector, particularly in the following key areas:

  • Smart platforms for citizens and other individuals interacting with the public sector.
  • Public administrative procedures of the public sector.
  • Public safety sector.

For Ukrainian companies developing AI systems intended for use in the EU, it is crucial during development to determine the AI system's risk category, assess its potential impact on safety and fundamental rights, and implement the safeguards needed to ensure compliance with European legislation.

The public sector must establish the organisational structures of public administration outlined in the Regulation, including the creation of competent authorities such as registration bodies, conformity assessment bodies, and bodies handling objections.

The Regulation's provisions are not yet binding on Ukraine, but given the country's course toward European integration, its requirements should be taken into account when developing Ukrainian legislation on artificial intelligence.

For legal consultation on regulatory issues related to the application of artificial intelligence, please contact our team.

Contact us

Oleksiy Katasonov

Partner, Leader, Tax, Legal & People services, PwC in Ukraine

Tel: +380 44 354 0404

Maksym Dudnyk

Partner, Head of Tax practice, PwC in Ukraine

Tel: +380 44 354 0404

Anastasiia Yushyna

Senior Associate, IP, IT, and Data Protection, Attorneys Association "PwC Legal in Ukraine"

Tel: +380 44 354 0404