Ethical AI: 10 principles the world (mostly) agrees on — and what to do about them


Maria Luciana Axente

Responsible AI Lead, PwC United Kingdom


Ilana Golbin

Director and Responsible AI Lead, PwC US


If you’re taking a long-term approach to artificial intelligence (AI), you’re likely thinking about how to make your AI systems ethical. Building ethical AI is the right thing to do. Not only do your corporate values demand it; it’s also one of the best ways to help minimize risks that range from compliance failures to brand damage. But building ethical AI is hard.

The difficulty starts with a question: what is ethical AI? The answer depends on defining ethical AI principles — and there are many related initiatives, all around the world. Our team has identified over 90 organizations that have attempted to define ethical AI principles, collectively coming up with more than 200 of them. These organizations include governments, multilateral organizations, non-governmental organizations and companies. Even the Vatican has a plan.

How can you make sense of it all and come up with tangible rules to follow? After reviewing these initiatives, we’ve identified ten core principles. Together, they help define ethical AI. Based on our own work, both internally and with clients, we also have a few ideas for how to put these principles into practice.

Knowledge and behavior: the 10 principles of ethical AI

The ten core principles of ethical AI enjoy broad consensus for a reason: they align with globally recognized definitions of fundamental human rights, as well as with multiple international declarations, conventions and treaties. The first two principles can help you acquire the knowledge you need to make ethical decisions for your AI. The next eight can help guide those decisions.

  1. Interpretability. AI models should be able to explain their overall decision-making process and, in high-risk cases, explain how they made specific predictions or chose certain actions. Organizations should also be transparent about which algorithms make which decisions about individuals, using those individuals’ own data.

  2. Reliability and robustness. AI systems should operate within design parameters and make consistent, repeatable predictions and decisions.

  3. Security. AI systems and the data they contain should be protected from cyber threats — including AI tools that operate through third parties or are cloud-based.

  4. Accountability. Someone (or some group) should be clearly assigned responsibility for the ethical implications of AI models’ use — or misuse.

  5. Beneficiality. Consider the common good as you develop AI, with particular attention to sustainability, cooperation and openness.

  6. Privacy. When you use people’s data to design and operate AI solutions, inform individuals about what data is being collected and how that data is being used, take precautions to protect data privacy, provide opportunities for redress and give the choice to manage how it’s used.

  7. Human agency. For higher levels of ethical risk, enable more human oversight over and intervention in your AI models’ operations.

  8. Lawfulness. All stakeholders, at every stage of an AI system’s life cycle, must obey the law and comply with all relevant regulations.

  9. Fairness. Design and operate your AI so that it will not show bias against groups or individuals.

  10. Safety. Build AI that is not a threat to people’s physical safety or mental integrity.

These principles are general enough to be widely accepted — and hard to put into practice without more specificity. Every company will have to navigate its own path, but we’ve identified two other guidelines that may help.
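
As one illustration of the specificity involved, the sketch below shows how the fairness principle might be turned into a concrete, testable check. It is a minimal example under stated assumptions, not a complete fairness methodology: the data, column names and 0.1 tolerance are hypothetical, and real thresholds would be set with stakeholders for each use case and jurisdiction.

import pandas as pd

def demographic_parity_gap(decisions: pd.DataFrame,
                           group_col: str = "gender",
                           outcome_col: str = "approved") -> float:
    """Return the largest difference in positive-outcome rates across groups."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy data standing in for a lending model's approval decisions
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M"],
    "approved": [1,   0,   0,   1,   1,   1],
})

gap = demographic_parity_gap(decisions)
if gap > 0.1:  # illustrative tolerance only
    print(f"Fairness review needed: approval-rate gap of {gap:.2f} across groups")

A check like this covers only one slice of fairness, but it shows how a broad principle can become a measurable, repeatable test in a development pipeline.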

To turn ethical AI principles into action: context and traceability

A top challenge in navigating these ten principles is that they often mean different things in different places — and to different people. The laws a company has to follow in the US, for example, likely differ from those in China, and within the US they may differ from one state to another. How your employees, customers and local communities define the common good (or privacy, safety, reliability, or most of the other ethical AI principles) may also differ.

To put these ten principles into practice, then, you may want to start by contextualizing them: Identify your AI systems’ various stakeholders, then find out their values and discover any tensions and conflicts that your AI may provoke. You may then need discussions to reconcile conflicting ideas and needs.

To help resolve these possible conflicts, consider explicitly linking the ten principles to fundamental human rights and to your own organizational values. The idea is to create traceability in the AI design process: every decision with ethical implications can be traced back to specific, widely accepted human rights and to your declared corporate principles. That may sound tricky, but there are toolkits (such as this practical guide to Responsible AI) that can help.

When all your decisions are underpinned by human rights and your values, regulators, employees, consumers, investors and communities may be more likely to support you — and give you the benefit of the doubt if something goes wrong.
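
To make traceability concrete, here is a minimal sketch of what a design-decision record could look like. It assumes a simple in-house format invented for illustration rather than any particular toolkit, and the decision, rights and values shown are hypothetical.

from dataclasses import dataclass, field
from datetime import date

# Illustrative record type: each ethically significant design decision is logged
# with the principles, human rights and corporate values it rests on, plus an
# accountable owner (principle 4). The field names are assumptions for this sketch.
@dataclass
class DesignDecision:
    decision: str                 # what was decided
    rationale: str                # why it was decided
    principles: list[str]         # which of the ten principles apply
    human_rights: list[str]       # e.g. UDHR articles relied on
    corporate_values: list[str]   # your declared organizational values
    owner: str                    # accountable person or group
    decided_on: date = field(default_factory=date.today)

# Hypothetical entry in an audit trail for a lending model
audit_trail = [
    DesignDecision(
        decision="Require human sign-off before any loan application is declined",
        rationale="High ethical risk calls for more human oversight of model outputs",
        principles=["Human agency", "Accountability", "Fairness"],
        human_rights=["UDHR Article 7 (equality before the law)"],
        corporate_values=["Act with integrity"],
        owner="Credit model governance board",
    ),
]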

None of this is easy, because AI isn’t easy. But given the speed at which AI is spreading, making your AI responsible and ethical could be a big step toward giving your company — and the world — a sustainable future.
