Responsible AI (RAI) is essential to mitigating AI risks. Now is the time to evaluate your existing practices, or create new ones, so you can build technology and use data responsibly and ethically, and be prepared for future regulation. The payoff for early adopters is an edge that competitors may struggle to overcome.
When you use AI to support business-critical decisions based on sensitive data, you need to be sure that you understand what AI is doing, and why. Is it making accurate, bias-aware decisions? Is it violating anyone’s privacy? Can you govern and monitor this powerful technology? Globally, organisations recognise the need for Responsible AI but are at different stages of the journey.
AI risks are shaped by a variety of factors and vary across time, stakeholders, sectors, use cases and technologies. Below are the six major risk categories for the application of AI technology.
AI algorithms that ingest real-world data and preferences as inputs risk learning and imitating the biases and prejudices present in that data.
Performance risks include:
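How an algorithm inherits bias from its training data can be illustrated with a minimal sketch. The data, group labels and approval rates below are entirely hypothetical; the "model" simply learns per-group approval rates from historical decisions, and in doing so reproduces whatever disparity the history contains:

```python
from collections import defaultdict

# Hypothetical historical decisions: (group, approved) pairs.
# Group "A" was approved 80% of the time, group "B" only 40%.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

# "Train": learn per-group approval rates from the data
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += approved
    counts[group][1] += 1

rates = {g: a / n for g, (a, n) in counts.items()}
print(rates)  # the learned rates mirror the historical disparity
```

Nothing in the training step questions whether the historical disparity was justified; the model treats it as ground truth. This is why bias-aware evaluation of inputs and outputs, not just predictive accuracy, belongs in any AI performance review.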
For as long as automated systems have existed, humans have tried to circumvent them. This is no different with AI.
Security risks include:
Like any other technology, AI should have organisation-wide oversight with clearly identified risks and controls.
Control risks include:
The widespread adoption of automation across all areas of the economy may impact jobs and shift demand to different skills.
Economic risks include:
The widespread adoption of complex and autonomous AI systems could result in "echo chambers" developing between machines, and could have broader impacts on human-to-human interaction.
Societal risks include:
AI solutions are designed with specific objectives in mind, which may compete with the overarching organisational and societal values within which they operate. Communities have long informally agreed on a core set of values by which society operates. There is a movement to identify sets of values, and thereby the ethics, that should drive AI systems, but disagreement remains about what those ethics mean in practice and how they should be governed. The above risk categories are therefore inherently ethical risks as well.
Enterprise risks include:
As organisations start to adopt AI, they need to be aware of certain barriers that may complicate implementation. Keeping abreast of emerging AI regulation is only one part of the equation in mitigating risks. Organisations will also need to look inwards, challenge any silos in their approach to AI and data governance, and assess whether their workforce has the skills critical to AI adoption.
Here are three steps organisations can take to build greater trust in AI.
To govern the use of AI, ensure that all stakeholders are involved. This means the team tasked with overseeing governance should comprise representatives from various areas of the business, including leadership, procurement, compliance, human resources, technology and data experts, and process owners from different functions.
If there is an existing governance structure in place, you may extend it by adopting a "three lines of defence" risk management model, in which operational teams own and manage AI risks, risk and compliance functions set policy and oversee them, and internal audit provides independent assurance.
Ensure that you have the right AI policies, standards, controls, tests and monitoring for all risk aspects of AI.
A common AI playbook can serve as a "how to" guide for approaching new AI initiatives and help build trust in the technology. It can shape how you collaborate and discuss risks in light of your goals, while identifying the level of rigour required to address each risk based on its severity.
Keep the momentum going as you familiarise yourself with AI and learn how to manage its risks. Good governance and risk management need not slow you down. The right level of explainability, for example, depends on each AI model's level of risk and required accuracy, allowing for quicker progress in some areas than others.
Your stakeholders, including board members, customers, and regulators, will have many questions about your organisation's use of AI and data, from how it’s developed to how it’s governed. You not only need to be ready to provide the answers, you must also demonstrate ongoing governance and regulatory compliance.
Our Responsible AI Toolkit is a suite of customisable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner, from strategy through execution. With the Responsible AI Toolkit, we'll tailor our solutions to address your organisation's unique business requirements and AI maturity.
Clarence Chan
Partner, Digital Trust and Cybersecurity Leader, PwC Malaysia
Tel: +60 (3) 2173 0344