Among its many obligations, the EU AI Act sets stringent rules around the development and adoption of Artificial Intelligence (AI). But from a practical perspective, what actions can businesses take now to prepare for the new AI regulation?
The democratisation of Artificial Intelligence (AI) has made the technology available to an unprecedented number of individuals and businesses, pushing it beyond the exclusive reach of specialised researchers and big tech. In fact, 68% of CEOs in PwC’s 2024 Global CEO Survey agree that generative AI will increase competitive intensity in their respective industry by 2027.
[Chart: share of CEOs who agree or disagree with each statement:
- "Generative AI will significantly change the way my company creates, delivers and captures value."
- "Generative AI will require most of my workforce to develop new skills."
- "Generative AI will increase competitive intensity in my industry."
Note: Disagree is the sum of 'slightly disagree,' 'moderately disagree' and 'strongly disagree' responses; Agree is the sum of 'slightly agree,' 'moderately agree' and 'strongly agree' responses.
Source: PwC's 27th Annual Global CEO Survey]
AI is significantly reinventing the service and product delivery capabilities of organisations, be it for medical diagnosis, financial fraud detection or customised customer service chatbots. As most business leaders aim to scale their AI adoption in the coming months to generate sustainable value, it will be essential for their in-house compliance teams and lawyers to understand the EU AI Act’s impact on their business.
Adopting AI governance across all business functions enables adequate oversight of AI projects. A governance framework also enables the C-suite to make informed decisions on investment and scaling based on tangible metrics such as risk tolerance and complexity.
In many cases, a single AI technology can be applied to different use cases, each posing new challenges for governance. With the recent EU AI Act, executives have an opportunity to innovate safely within regulatory guardrails and, at the same time, establish a framework that can be adapted to different risk profiles.
Here are four key steps to get started.
Under the EU AI Act, AI systems are classified into risk categories (unacceptable, high, limited and minimal risk) depending on the potential risks they may pose to the health, safety and fundamental rights of individuals. Developing an AI inventory and classifying each AI system's risks against the EU AI Act's taxonomy can help in-scope organisations understand the extent of their obligations under the law.
Beyond compliance, a risk classification of AI systems can fast-track AI adoption by putting into focus the priority areas where the business needs to take immediate action. Without a solid governance foundation, compliance teams may be unable to identify and mitigate the risks adequately.
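As a minimal sketch of what such an inventory could look like in practice, the snippet below models AI systems with a risk tier loosely based on the EU AI Act's categories and sorts them so the highest-risk systems surface first. The field names, example systems and triage logic are illustrative assumptions, not prescriptions from the Act.

```python
# Illustrative AI inventory sketch. Risk tiers follow the EU AI Act's broad
# categories; all system names and fields here are hypothetical examples.
from dataclasses import dataclass

RISK_TIERS = ["unacceptable", "high", "limited", "minimal"]  # highest risk first


@dataclass
class AISystem:
    name: str
    use_case: str
    risk_tier: str  # one of RISK_TIERS

    def __post_init__(self) -> None:
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")


def priority_order(inventory: list[AISystem]) -> list[AISystem]:
    """Sort the inventory so the highest-risk systems come first."""
    return sorted(inventory, key=lambda s: RISK_TIERS.index(s.risk_tier))


inventory = [
    AISystem("support-chatbot", "customer service", "limited"),
    AISystem("cv-screener", "recruitment", "high"),
    AISystem("spam-filter", "email triage", "minimal"),
]

for system in priority_order(inventory):
    print(system.risk_tier, system.name)
# → high cv-screener
#   limited support-chatbot
#   minimal spam-filter
```

Even a simple ranking like this makes the priority areas visible: systems in the higher tiers carry the heaviest obligations under the Act and warrant immediate attention from compliance teams.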