The EU AI Act: Empowering Innovation. Building Trust.

  • June 03, 2024

Artificial Intelligence (AI) is fast becoming a staple technology in our private and professional lives. From general use cases such as text creation and image generation to more specific business functions (such as analysing customer trends for sales teams, or supporting legal counsels in categorising contractual risks), AI technology has sparked a wave of transformation across several industries. 

PwC’s Global Investor Survey 2023 shows that investors see accelerated adoption of AI as critical for value creation: 61% of respondents believe that faster adoption is either very or extremely important. This sentiment is spurred by the sheer breadth of problems the technology can now be applied to.

More and more organisations are adopting Generative AI (GenAI) for their business functions, including generating legal agreements, code, and hyperrealistic portraits, to name a few, from a couple of text prompts. Given its vast capabilities, AI is now impacting organisations and their supply chains on a wide scale.

Opportunities across the business

The technology can provide businesses with a competitive advantage, both on the local and international stage, by driving projects at faster rates. In recent times, we have witnessed the application of GenAI across several functions, such as:

Legal and Compliance

Legal teams are increasingly using AI-powered legal assistants and contract management tools to generate contract templates in one click, to review large volumes of critical documentation and legal claims in order to assess and score business risks, and to negotiate digital contracts with multiple stakeholders simultaneously.

IT

Various organisations have invested in data loss prevention (DLP) tools, which continuously monitor networks to identify and predict breaches and failures, allowing for a proactive response when incidents occur.

Human Resources

From automated CV screening and talent matching to analysing employee feedback and monitoring employee productivity, the use of AI can enhance the way businesses address their workforce challenges.

However, a renowned cliché remains relevant more than ever: with great power comes great responsibility. The power to generate outputs on a whim can also exacerbate the dissemination of misinformation and fake news. AI content can be subject to bias and data sources may be illegitimate. The risks associated with AI adoption are various, but implementing appropriate safeguards and meeting regulatory requirements, such as under the EU AI Act, can help organisations make AI trustworthy.



Overview of the EU AI Act

The EU AI Act has been hailed by the President of the European Commission as ‘the first-ever comprehensive legal framework on Artificial Intelligence worldwide’. This landmark piece of legislation aims to create a legal framework to regulate AI systems and General Purpose AI (GPAI) models across the EU with a particular focus on the protection of fundamental human rights, democracy, the rule of law and environmental sustainability. Non-compliance with the EU AI Act can result in significant fines.


For the purposes of the EU AI Act, a technology is considered to be an AI system when it meets the following requirements:

  • It is a machine-based system with varying levels of autonomy;

  • It may exhibit adaptiveness after deployment;

  • For explicit or implicit objectives, it infers, from the input it receives, how to generate outputs (e.g. predictions, content, recommendations); and

  • Such outputs can influence physical or virtual environments.
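Read as a screen, the four criteria above can be sketched as a simple checklist. The code below is an illustrative simplification only: the field names and pass/fail logic are our own shorthand, not the legal test, and any real assessment should be made against the text of the regulation itself.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical screening record mirroring the four definitional criteria."""
    machine_based_with_autonomy: bool     # machine-based system with varying autonomy
    may_adapt_after_deployment: bool      # may exhibit adaptiveness after deployment
    infers_outputs_from_input: bool       # infers outputs (predictions, content, ...) from input
    outputs_influence_environments: bool  # outputs can influence physical or virtual environments

def meets_ai_system_definition(p: SystemProfile) -> bool:
    # Adaptiveness is phrased as "may exhibit", so this sketch treats it as
    # optional; the remaining three criteria are treated as required.
    return (p.machine_based_with_autonomy
            and p.infers_outputs_from_input
            and p.outputs_influence_environments)

# A static lookup table is machine-based but does not infer its outputs,
# whereas a generative chatbot ticks every box.
lookup_table = SystemProfile(True, False, False, True)
chatbot = SystemProfile(True, True, True, True)
```

In practice the assessment is rarely this binary, but framing the definition as a checklist helps teams triage an inventory of tools quickly.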

Main roles under the regulation

Organisations can be classified under different roles in terms of the EU AI Act, namely as providers, deployers, importers, and distributors of AI systems. Depending on their classification under the law, organisations will have different requirements to meet, with most of the obligations falling on providers and deployers. In some instances, businesses may be classified as both providers and deployers at the same time. It is therefore crucial for organisations dealing with AI systems to review their role on an ongoing basis to fulfil their obligations correctly.

Classification of Risks


The EU AI Act adopts a risk-based approach based on the expected impact AI systems may have on the health, safety or fundamental rights of EU citizens. The four categories are:

  1. prohibited practices;

  2. high-risk AI systems;

  3. limited-risk AI systems; and

  4. general purpose AI (GPAI) models.

Each category under the regulation has different requirements, and determining the classification of an AI system is key to ensuring compliance with the EU AI Act.
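To make the risk-based approach concrete, the sketch below maps each category to a one-line, non-exhaustive summary of the kind of obligations it attracts. The summaries are paraphrased for illustration and are not quotes from the regulation.

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited practices"
    HIGH_RISK = "high-risk AI systems"
    LIMITED_RISK = "limited-risk AI systems"
    GPAI = "general purpose AI (GPAI) models"

# Illustrative, simplified summaries of the obligations attaching to each
# category; the actual requirements are set out in the Act itself.
OBLIGATIONS = {
    RiskCategory.PROHIBITED: "banned outright; may not be placed on the EU market",
    RiskCategory.HIGH_RISK: "ex-ante conformity assessment plus organisational and technical requirements",
    RiskCategory.LIMITED_RISK: "transparency obligations, e.g. disclosing that content is AI-generated",
    RiskCategory.GPAI: "provider obligations such as technical documentation and copyright policies",
}
```

A mapping like this is a useful starting point for an internal AI inventory: each system in scope gets a category, and the category drives the compliance workstream.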


Requirements regarding high-risk AI systems

High-risk AI systems need to comply with an ex-ante conformity assessment, together with several organisational and technical requirements, as set out in the EU AI Act. These requirements include establishing a risk management system, implementing data governance for training, validation and testing data, record-keeping, and information provision.


Regulation Timeline

The AI Act was formally approved by the European Council on 21 May 2024. The final text will enter into force 20 days following its publication in the EU’s Official Journal.

Tentative timeline which may be subject to change depending on when the EU AI Act is published in the Official Journal.

Entry into force

The Act will enter into force 20 days following its publication in the Official Journal of the EU

Prohibitions

Prohibitions of certain AI practices apply 6 months following entry into force of the Act

Rules on GPAI

Obligations on providers of GPAI models go into effect, and member state competent authorities must be appointed, 12 months after entry into force

High-risk AI systems (Annex III)

Obligations for high-risk AI systems listed in Annex III start applying 24 months after entry into force

High-risk AI systems (product safety components)

Obligations for high-risk AI systems intended for use as a safety component of a product, or that are themselves products, go into effect 36 months after entry into force
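Since every milestone is expressed as an offset from entry into force, the tentative timeline can be sketched as a small calculation. The entry-into-force date below is a placeholder assumption (the Act had not yet been published in the Official Journal at the time of writing), and the 24- and 36-month offsets for high-risk systems reflect the generally reported transition periods.

```python
from datetime import date

# Tentative milestones as month offsets from the Act's entry into force.
MILESTONES_MONTHS = {
    "prohibitions apply": 6,
    "GPAI provider obligations apply": 12,
    "Annex III high-risk obligations apply": 24,
    "product safety-component high-risk obligations apply": 36,
}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for simplicity)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

entry_into_force = date(2024, 8, 1)  # placeholder assumption, not the published date
timeline = {name: add_months(entry_into_force, m)
            for name, m in MILESTONES_MONTHS.items()}
```

Once the Official Journal publication date is known, swapping in the actual entry-into-force date yields the firm compliance deadlines.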


Where to start?

With the fairly short transitory periods of the EU AI Act, it is important for organisations adopting AI systems and GPAI models to be adequately prepared for their respective obligations. As a point of departure, businesses are recommended to take the following steps:

  • Enhancing literacy in AI, by educating stakeholders (including C-suite leaders and Board members) on the organisation’s obligations under the EU AI Act. Increased awareness can help organisations better formulate business and risk strategies and make informed decisions;

  • Conducting a scoping exercise with respect to the EU AI Act, in order to adequately understand the organisation’s role/s, the risk-classification level of the AI system/model, and the corresponding obligations under the regulation; and

  • Performing a regulatory gap analysis exercise, including evaluation of current policies and procedures, to easily identify and prioritise key risk areas in terms of the EU AI Act.

How can we help?

At PwC Malta, our Governance and Privacy & Data teams have the expertise to help your organisation navigate compliance requirements in its AI adoption journey. For more information on how we can help, please reach out to our sector leaders below.

Contact us

Chris Mifsud Bonnici


Partner, PwC Malta

Tel: +356 79757005

Lee Ann Agius


Senior Manager, Tax, PwC Malta

Tel: +356 2564 4027

Claire Balzan


Manager, Tax, PwC Malta

Tel: +356 2564 2410