03/04/24
On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act, the long-awaited European regulation on artificial intelligence (hereinafter referred to as the "Regulation").
The Regulation establishes a unified legal framework for regulating the use of artificial intelligence (AI), regardless of industry or technology. It applies to operators (providers, importers, distributors, and manufacturers) of AI systems established in the EU or outside it, where the AI system is used within EU territory.
Non-compliance with the Regulation's requirements may result in substantial fines of up to €35 million or 7% of annual turnover, whichever is higher.
The Regulation introduces a risk-based approach and categorises AI systems into four risk groups:
- unacceptable risk — AI practices that are prohibited outright;
- high risk — systems subject to the strictest requirements;
- limited risk — systems subject mainly to transparency obligations;
- minimal risk — systems largely left outside specific regulatory requirements.
Generative artificial intelligence systems must label their output as artificially generated. Specifically, AI-generated audio, video, and image content (including so-called "deepfakes") must carry an appropriate indication of its artificial origin.
All remote biometric identification systems are considered high-risk and are subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is prohibited, except in narrowly defined cases provided by law.
AI systems posing limited or minimal risk are subject to lighter obligations under the Regulation, while high-risk AI systems face the most extensive legal requirements.
Before deploying a high-risk AI system, the operator must conduct a fundamental rights impact assessment, i.e., evaluate how the use of such a system could affect the fundamental rights of the individuals concerned.
Operators of high-risk AI systems are obliged, in particular, to:
- establish a risk management system covering the system's entire lifecycle;
- ensure the quality of the data used for training, validation, and testing;
- prepare and maintain technical documentation and event logs;
- provide transparency and clear instructions for use;
- ensure effective human oversight;
- guarantee an appropriate level of accuracy, robustness, and cybersecurity.
The Regulation specifically addresses general-purpose AI (GPAI) systems and the underlying models that serve as the basis or components for generative AI applications such as ChatGPT.
GPAI systems can be used as high-risk AI systems or integrated into them. GPAI system operators must collaborate with operators of high-risk AI systems to ensure compliance with the Regulation's requirements.
Special requirements are defined for regulating GPAI. Specifically, GPAI systems and the models they are based on must meet certain transparency requirements, including compliance with EU copyright laws and publishing descriptions of the content used for training. More powerful GPAI models that may pose systemic risks must fulfil additional requirements, including model assessment, risk assessment and mitigation, and reporting of incidents.
The Regulation covers a wide range of AI applications, and its requirements will take effect in stages after it enters into force:
- prohibitions on unacceptable-risk AI practices apply after 6 months;
- obligations for GPAI apply after 12 months;
- most other provisions, including the rules for high-risk AI systems, apply after 24 months;
- requirements for high-risk AI systems embedded in products covered by existing EU product legislation apply after 36 months.
Significant regulation applies to the use of high-risk AI systems in the public sector, particularly in the following key areas:
- law enforcement;
- migration, asylum, and border control management;
- administration of justice and democratic processes;
- access to essential public services and benefits;
- education and vocational training.
For Ukrainian companies developing AI systems intended for use within the EU, it is crucial to determine the AI system's risk category during development, assess its potential impact on safety and fundamental rights, and implement the changes necessary to ensure compliance with European legislation.
The public sector must establish the organisational structures of public administration outlined in the Regulation, including the creation of competent authorities such as registration bodies, conformity assessment bodies, and bodies handling complaints and objections.
For Ukraine, the Regulation's provisions are not yet binding, but given the country's course towards European integration, it is important to take the Regulation's requirements into account when developing national legislation on artificial intelligence.
For legal consultation on regulatory issues related to the application of artificial intelligence, please contact our team.
Oleksiy Katasonov
Partner, Leader, Tax, Legal & People services, PwC in Ukraine
Tel: +380 44 354 0404
Anastasiia Yushyna
Senior Associate, IP, IT, and Data Protection, Attorneys Association "PwC Legal in Ukraine"
Tel: +380 44 354 0404