PwC's Responsible AI

AI you can trust

AI is here to stay—bringing limitless potential to push us forward as a society. Used wisely, it can create huge benefits for businesses, governments, and individuals worldwide.

How big is the opportunity? Our research estimates that AI could contribute $15.7 trillion to the global economy by 2030, as a result of productivity gains and increased consumer demand driven by AI-enhanced products and services. AI solutions are diffusing across industries and impacting everything from customer service and sales to back-office automation. AI’s transformative potential continues to be top of mind for business leaders: Our CEO survey finds that 72% of CEOs believe that AI will significantly change the way they do business in the next five years.

With great potential comes great risk. Are your algorithms making decisions that align with your values? Do customers trust you with their data? How is your brand affected if you can’t explain how AI systems work? It’s critical to anticipate problems and future-proof your systems so that you can fully realise AI’s potential. It’s a responsibility that falls to all of us — board members, CEOs, business unit heads, and AI specialists alike.

AI Risks

Performance

AI algorithms that ingest real-world data and preferences as inputs run the risk of learning and imitating our biases and prejudices.

Performance risks include:

  • Risk of errors
  • Risk of bias
  • Risk of opaqueness
  • Risk of instability of performance
  • Lack of feedback process

Security

For as long as automated systems have existed, humans have tried to circumvent them. This is no different with AI.

Security risks include:

  • Cyber intrusion risks
  • Privacy risks
  • Open source software risks
  • Adversarial attacks

Control

Similar to any other technology, AI should have organisation-wide oversight with clearly identified risks and controls.

Control risks include:

  • Risk of AI going “rogue”
  • Inability to control malevolent AI

Economic

The widespread adoption of automation across all areas of the economy may impact jobs and shift demand to different skills.

Economic risks include:

  • Risk of job displacement
  • Risk of concentration of power within one or a few companies
  • Liability risk

Societal

The widespread adoption of complex and autonomous AI systems could result in “echo chambers” developing between machines and have broader impacts on human-to-human interaction.

Societal risks include:

  • Risk of autonomous weapons proliferation
  • Risk of an intelligence divide

Ethical

AI solutions are designed with specific objectives in mind, which may compete with the overarching organisational and societal values within which they operate.

Ethical risks include:

  • Values misalignment risk

PwC’s Responsible AI Toolkit

Your stakeholders, including board members, customers, and regulators, will have many questions about your organisation’s use of AI and data, from how it’s developed to how it’s governed. You not only need to be ready to provide the answers; you must also demonstrate ongoing governance and regulatory compliance.

Our Responsible AI Toolkit is a suite of customisable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner, from strategy through execution. With the Responsible AI Toolkit, we’ll tailor our solutions to address your organisation’s unique business requirements and AI maturity.

Our Responsible AI Toolkit addresses the five dimensions of responsible AI

Governance

Who is accountable for your AI system?

The foundation for responsible AI is an end-to-end enterprise governance framework. This focuses on the risks and controls along your organisation’s AI journey, from top to bottom.

Interpretability & Explainability

How was that decision made?

An AI system that human users are unable to understand can lead to a “black box” effect, where organisations are limited in their ability to explain and defend business-critical decisions. Our Responsible AI approach can help. We provide services to help you explain both overall decision-making and individual choices and predictions, tailored to the perspectives of different stakeholders.
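
To make this concrete, here is a minimal sketch of one simple form of local explanation, a sensitivity check on a single prediction. The public dataset and model are chosen purely for illustration; this is an assumption-laden example, not the methodology behind PwC’s Responsible AI services.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Illustrative only: a public dataset and a simple sensitivity check,
# not the explainability techniques in PwC's Responsible AI Toolkit.
data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names
model = RandomForestClassifier(random_state=0).fit(X, y)

case = X[0:1].copy()                         # one individual decision to explain
baseline = model.predict_proba(case)[0, 1]   # model's score for this case

# Local sensitivity: replace each feature with its dataset mean and
# record how much the predicted probability moves.
impacts = {}
for j, name in enumerate(feature_names):
    perturbed = case.copy()
    perturbed[0, j] = X[:, j].mean()
    impacts[name] = baseline - model.predict_proba(perturbed)[0, 1]

# The features whose substitution shifts the score most are the ones
# driving this particular decision.
for name, delta in sorted(impacts.items(), key=lambda kv: -abs(kv[1]))[:5]:
    print(f"{name:<25} {delta:+.3f}")
```

More sophisticated attribution techniques exist, but even a simple check like this shows how an individual decision can be opened up and discussed with non-technical stakeholders.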

Bias & Fairness

Is your AI unbiased? Is it fair?

An AI system that is exposed to inherent biases of a particular data source is at risk of making decisions that could lead to unfair outcomes for a particular individual or group. Fairness is a social construct with many different and—at times—conflicting definitions. Responsible AI helps your organisation to become more aware of bias, and take corrective action to help systems improve in their decision-making.
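
As an illustration of how those competing definitions can be made measurable, the sketch below uses synthetic data and a hypothetical protected attribute (everything in it is an assumption for the example, not part of PwC’s toolkit) to compute two common group fairness metrics: a demographic parity gap and an equal opportunity gap.

```python
import numpy as np

# Hypothetical example: model decisions and true outcomes for two groups.
# The group labels and data are illustrative only.
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)    # protected attribute
y_true = rng.integers(0, 2, size=1000)       # actual outcomes
y_pred = rng.integers(0, 2, size=1000)       # model decisions

def selection_rate(pred, mask):
    """Share of positive decisions the model gives to one group."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Share of deserving cases in one group that the model approves."""
    positives = mask & (true == 1)
    return pred[positives].mean()

# Demographic parity gap: difference in positive-decision rates.
dp_gap = abs(selection_rate(y_pred, group == "A")
             - selection_rate(y_pred, group == "B"))

# Equal opportunity gap: difference in true positive rates.
eo_gap = abs(true_positive_rate(y_true, y_pred, group == "A")
             - true_positive_rate(y_true, y_pred, group == "B"))

print(f"Demographic parity gap: {dp_gap:.3f}")
print(f"Equal opportunity gap:  {eo_gap:.3f}")
```

On real data these two gaps can move in different directions, which is exactly why selecting, monitoring and trading off fairness definitions is a business decision as much as a technical one.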

Robustness & Security

Will your AI behave as intended?

An AI system that does not demonstrate stability and consistently meet performance requirements is at increased risk of producing errors and making the wrong decisions. To help make your systems more robust, Responsible AI includes services to help you identify weaknesses in models, assess system safety and monitor long-term performance.
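
As a simple illustration of what long-term performance monitoring can look like, the sketch below compares rolling accuracy on a simulated decision stream against a baseline agreed at sign-off and raises an alert when performance drifts beyond a tolerance. The thresholds, window size and data are assumptions made for the example, not recommendations or part of PwC’s toolkit.

```python
import numpy as np

# Illustrative sketch of post-deployment performance monitoring:
# compare rolling accuracy in production against the accuracy
# measured at validation time, and flag material degradation.
BASELINE_ACCURACY = 0.92   # assumed accuracy at sign-off
TOLERANCE = 0.05           # assumed acceptable drop before escalation
WINDOW = 200               # decisions per monitoring window

def monitor(y_true, y_pred):
    """Yield (window_index, accuracy, alert_flag) for each full window."""
    for start in range(0, len(y_true) - WINDOW + 1, WINDOW):
        window_true = y_true[start:start + WINDOW]
        window_pred = y_pred[start:start + WINDOW]
        accuracy = float(np.mean(window_true == window_pred))
        alert = accuracy < BASELINE_ACCURACY - TOLERANCE
        yield start // WINDOW, accuracy, alert

# Simulated production stream whose quality decays over time.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
noise = rng.random(1000) < np.linspace(0.05, 0.25, 1000)   # growing error rate
y_pred = np.where(noise, 1 - y_true, y_true)

for idx, acc, alert in monitor(y_true, y_pred):
    status = "ALERT: review model" if alert else "ok"
    print(f"window {idx}: accuracy={acc:.2f} {status}")
```

In practice the same pattern extends to fairness metrics, input data drift and other indicators, feeding alerts back into the governance framework described above.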

Ethics & Regulation

Is your AI legal and ethical?

Our Ethical AI Framework provides guidance and a practical approach to help your organisation with the development and governance of AI solutions that are ethical and moral. 

As part of this dimension, our framework includes a unique approach to contextualising ethical considerations for each bespoke AI solution, identifying and addressing ethical risks and applying ethical principles.

Innovate responsibly

Whether you're just getting started or are getting ready to scale, Responsible AI can help. Drawing on our proven capability in AI innovation and deep global business expertise, we'll assess your end-to-end needs, and design a solution to help you address your unique risks and challenges.

Contact us

Contact us today to learn more about how to become an industry leader in the responsible use of AI.

Charles Loh

Singapore Consulting Leader, PwC Singapore

Tel: +65 9735 2389

Winston Nesfield

Insurance and Wealth Leader, South East Asia Consulting, PwC Singapore

Tel: +65 9159 1425

Mark Jansen

Data Trust Services Leader, PwC Singapore

Tel: +65 8100 7123

Ronald Chung

Partner, Digital Solutions, PwC Singapore

Tel: +65 9621 0634
