PwC’s 2024 US Responsible AI Survey

AI is becoming intrinsic to business strategy and operations. Here’s how to speed up initiatives, manage risk and generate value.

As we move well into the second year of generative AI’s (GenAI) broad emergence, which propelled AI to the top of corporate agendas, there’s no question that AI is becoming integral to business. Companies slow to make AI an intrinsic part of their business may find themselves so far behind that catching up is difficult. While it’s still early days, leaders now have a better understanding of how AI affects business strategy. They are also beginning to realize what it takes to build and deploy AI solutions that not only drive productivity and business transformation but also manage risk and preserve the incremental value these solutions create. The key lesson of these early efforts? As AI continues to advance and redefine the nature of work and innovation, success requires sustained focus and a holistic view of both the risks and the opportunities in building AI solutions responsibly.

Our extensive work with clients, along with recent surveys, tells us that AI adoption still varies widely: some companies are just starting to experiment, while others are all-in. The same goes for adopting Responsible AI, a set of practices that can help organizations unlock AI’s transformative potential while holistically addressing its inherent risks. Some companies are progressing well; others have yet to lay the foundation. Many overlook the fact that Responsible AI isn’t a one-time exercise; it’s an ongoing commitment that needs to be woven into every step of developing, deploying, using and monitoring AI-based technologies.

Yet even a company’s best efforts at building AI responsibly aren’t enough to guarantee trust. That’s up to stakeholders. It’s their perceptions and experiences that determine whether trust is earned, and whether AI initiatives are ultimately successful.

To get better insight into how companies are doing with Responsible AI (RAI), we surveyed 1,001 US business and technology executives whose organizations use or intend to use AI. (You can see how you stack up against the respondents with PwC’s Responsible AI survey and benchmark report.) Most respondents (73%) tell us they use or plan to use both traditional forms of AI and GenAI. Of those, slightly more are focused on using the technologies solely for operational systems used by employees (AI: 40%; GenAI: 43%). A slightly smaller share is targeting both employee and customer systems in their AI efforts (AI: 38%; GenAI: 35%).

Bar chart showing how far along businesses are with AI and GenAI, comparing AI use and GenAI use across four stages: Exploring; Limited use; Deployed in employee systems for operational tasks only; Deployed in both employee and customer systems.
*Note: Responses to ‘Other,’ ‘Unsure’ and ‘NA’ not shown.
Qs. Which of the following best describes your organization's current status regarding the use of artificial intelligence? Which of the following best describes your organization's current status regarding the use of generative artificial intelligence?
Source: PwC’s 2024 US Responsible AI Survey, August 15, 2024; base of 865 currently use or intend to use AI, base of 870 currently use or intend to use GenAI

Competitive differentiation and value creation

When it comes to assessing the risks of their AI and GenAI efforts, only 58% of respondents have completed a preliminary assessment of AI risks in their organization. RAI, however, can enable business objectives far beyond risk management, and many executives report that their organizations are targeting this value. Competitive differentiation is the most cited objective for RAI practices: 46% rank it among their top three, with risk management close behind at 44%. Other top objectives include building trust with external stakeholders (such as customers) and driving value generation or preservation (39% each).

Bar chart showing what drives Responsible AI investment, ranking nine objectives: Differentiate your organization, products and services; Holistically manage risk of AI-based technologies; Meet regulatory and compliance requirements; Build trust with external stakeholders; Drive value creation or preservation; Protect your brand; Right thing to do; Directive from board; Directive from internal stakeholders.
*Note: Responses to ‘Other,’ ‘Unsure’ and ‘NA’ not shown.
Q. What are your organization's main objectives for investing in or planning to invest in responsible AI (RAI) practices? (Ranked top 3)
Source: PwC’s 2024 US Responsible AI Survey, August 15, 2024; base of 1,001


In the work we do with clients, we see RAI already yielding this value across many business areas. Our survey corroborates this: the top five reported benefits of RAI relate to customer experience, cybersecurity and risk management program enhancements, innovation, transparency and coordinated AI management. This value creation is possible because Responsible AI helps AI initiatives succeed more quickly, often with fewer issues, pauses and mistakes. It can also build trust with stakeholders by helping you verifiably demonstrate success.

Bar chart showing benefits achieved from investing in RAI practices. Top five benefits: #1 Enhanced customer experience; #2 Enhanced cybersecurity and risk management; #3 Facilitated innovation; #4 Improved transparency; #5 Facilitated coordinated management of AI in our organization.
*Note: Showing top 5 choices out of 12 options.
Q. Which, if any, of the following benefits has your organization achieved or expects to achieve from investing in responsible AI (RAI) practices? (Response to ‘Already achieved measurable value’.)
Source: PwC’s Responsible AI Survey, August 8, 2024; base of 1,001

How does it work? Responsible AI enables AI-specific governance, risk-managed intake for use cases, AI-powered cyberdefense and more. Through oversight and reporting, it builds toward transparency, whether internal or external, to satisfy current or anticipated requirements. Tools and frameworks for privacy, data governance, bias identification and mitigation, and reliable AI outputs enhance the customer experience. By assigning clear roles and responsibilities, RAI practices help coordinate AI management. And by providing a secure environment along with suitable skills, policies and guardrails, they enable people to innovate more freely, secure in the knowledge that risks are being appropriately addressed and managed.
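To make risk-managed intake concrete, here is a minimal sketch of how a use-case triage step might work in code. The risk dimensions, the 1-to-5 scales and the thresholds are illustrative assumptions, not a PwC framework or a survey finding.

    from dataclasses import dataclass

    # Illustrative risk dimensions for intake review; a real program would
    # define these (and the scales below) in its governance standards.
    RISK_DIMENSIONS = ("privacy", "bias", "security", "transparency")

    @dataclass
    class UseCase:
        name: str
        expected_value: int           # 1 (low) to 5 (high) business value
        risk_scores: dict[str, int]   # dimension -> 1 (low) to 5 (high)

    def triage(use_case: UseCase, risk_appetite: int = 3) -> str:
        """Route a proposed AI use case based on value and risk."""
        worst = max(use_case.risk_scores.get(d, 0) for d in RISK_DIMENSIONS)
        if worst > risk_appetite:
            return "escalate: needs a mitigation plan and governance review"
        if use_case.expected_value >= 4:
            return "prioritize: high value within risk appetite"
        return "approve: proceed with standard monitoring"

    # Example: a high-value, moderate-risk use case.
    case = UseCase("invoice summarization", expected_value=4,
                   risk_scores={"privacy": 2, "bias": 1,
                                "security": 3, "transparency": 2})
    print(triage(case))  # prioritize: high value within risk appetite

In practice, the scores would come from a structured review rather than self-reporting, and escalations would route to the governance team for the mitigation and sign-off steps described above.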

Responsible AI implementation: Progress lacking on key capabilities

Responsible AI requires a broad range of capabilities. Our survey asked about a subset that organizations appear to be most commonly prioritizing today: upskilling, embedded AI risk specialists, periodic training, data privacy, data governance, cybersecurity, model testing, model management, third-party risk management, specialized AI risk-management software, and monitoring and auditing. Most survey respondents (80% or more) report some progress on each of these 11 capabilities.

Only 11% of executives report having fully implemented these fundamental responsible AI capabilities — and we suspect many are overestimating progress.

Even those reporting full implementation of capabilities will need to stay vigilant to changes in the landscape and evolve accordingly; practices that are sufficient now may not be in the future. They will also need to focus on the “provability” of these capabilities, that is, how they would stand up to rigorous scrutiny. Finally, as with all things AI, external expectations are evolving, with standards and regulations only now starting to take a form concrete enough to act on.

Privacy and consent frameworks, for example, often don’t have specific provisions for customer data entering AI systems. Data governance may not cover GenAI models’ access to swaths of internal data, some of which may be sensitive. Legacy cybersecurity rarely considers risks from GenAI’s many new users, or from “model poisoning” and other threats that may be greater with GenAI. With AI advancing so quickly, RAI should always be advancing too.
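As a hypothetical illustration of closing one such gap, a data-governance guardrail might screen text before it reaches a GenAI prompt. The two patterns below are stand-ins; a production control would draw on a maintained data-classification catalog rather than a hard-coded list.

    import re

    # Hypothetical sensitive-data patterns; illustrative examples only.
    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def screen_prompt(text: str) -> list[str]:
        """Return the names of sensitive-data types found in the text."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(text)]

    findings = screen_prompt("Customer SSN is 123-45-6789; please summarize.")
    if findings:
        print(f"Blocked from GenAI pipeline: {findings}")  # ['ssn']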

Bar chart showing how few companies report having fully implemented key RAI capabilities, comparing full-implementation rates for AI and for GenAI.
Qs. Which best describes the progress your organization has made in updating specific capabilities to enable the responsible use of artificial intelligence (AI)? Which best describes the progress your organization has made in updating specific capabilities to enable the responsible use of generative AI (GenAI)? (Response to ‘Fully implemented’.)
Source: PwC’s 2024 US Responsible AI Survey, August 15, 2024; base of 865 currently use or intend to use AI, base of 870 currently use or intend to use GenAI

The top challenge facing Responsible AI is the same as in most risk programs: It’s hard to quantify the value of having dodged a bullet, such as avoiding a major scandal from a poor AI interaction.

It’s even harder to put a number to bullets that you may dodge in the future. For example, RAI today will make it easier to meet future regulations on privacy, bias, reporting and so on, but how do you quantify the value of compliance savings before the regulations even exist?

Bar chart showing what’s keeping companies from investing in Responsible AI.
*Note: Responses to ‘Other,’ ‘Unsure’ and ‘NA’ not shown.
Q. What are the biggest challenges your organization is facing or has faced in investing in responsible AI (RAI) practices? (Ranked 1st.)
Source: PwC’s 2024 US Responsible AI Survey, August 15, 2024; base of 1,001

A standardized framework for documenting risk assessments, responses and ongoing monitoring can help address this challenge. It should consider both AI’s inherent risks and those that remain after you have made informed choices matching your risk appetite. It should also document that mitigations have not only been designed and assessed but have demonstrated their ongoing effectiveness.
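To picture what such a framework might capture, here is a minimal sketch of a single risk-register record. The field names, severity scale and example values are assumptions for illustration, not a formal standard.

    from dataclasses import dataclass
    from datetime import date

    # Illustrative schema for a standardized risk-assessment record.
    @dataclass
    class RiskRecord:
        risk: str                    # e.g., "biased outputs in loan scoring"
        inherent_severity: int       # 1 (low) to 5 (high), before mitigation
        mitigations: list[str]
        residual_severity: int       # severity remaining after mitigation
        within_appetite: bool        # informed acceptance decision
        effectiveness_evidence: str  # how ongoing effectiveness is shown
        last_reviewed: date

    record = RiskRecord(
        risk="biased outputs in loan scoring",
        inherent_severity=5,
        mitigations=["pre-release bias testing", "quarterly fairness audit"],
        residual_severity=2,
        within_appetite=True,
        effectiveness_evidence="audit reports archived in the GRC system",
        last_reviewed=date(2024, 8, 1),
    )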

Getting the business involved will help integrate RAI into development and avoid pushback. With the business’s help, you can also document and quantify faster rollouts of AI initiatives, a stronger brand around privacy and other RAI benefits.

Actions to help manage AI risk and enable an AI-intrinsic business

Based on our survey and our experience with Responsible AI — both with clients and in-house at PwC — here’s how you can help advance RAI efforts in your organization.

  • Create ownership. Today, ownership of RAI is varied and often fragmented. It needs to be owned by a single individual who can then assemble a multi-disciplinary team to support the business. This single executive owner will coordinate the business, risk, IT and other groups with different roles across the AI life cycle.
  • Think beyond AI. Consider the bigger picture and understand how AI is becoming integrated into all aspects of your organization. That means having your RAI leader work closely with your company’s chief AI officer (or equivalent) to understand changes in your operating model, business processes, products and services.
  • Act end-to-end. Responsible AI needs to start at the start — assessing and prioritizing potential use cases based on both value and risk — and go through the entire AI life cycle, including output validation and performance monitoring. The building blocks are already there for most companies, from existing risk management functions to initial RAI investments.
  • Move beyond the theoretical. While many companies have done the paper exercise of setting up policies, governance structures and committees, this is just the start. RAI should become operational, scaling across the business. This requires addressing a variety of risk domains and looking at AI risk management and value creation holistically.
  • Focus on ROI. While it has been challenging to date to quantify RAI’s value, that’s changing quickly. Forthcoming regulations, the need for AI to be audited and rising societal expectations will all contribute to the ROI equation. Companies that are already advancing their RAI efforts will be better prepared to respond and will be least burdened by changing expectations and requirements.
  • Assess impact on trust. Develop a plan for transparency and ongoing reporting to stakeholders to monitor whether your RAI programs have in fact earned trust.

About the survey

In April 2024, PwC Research surveyed 1,001 US executives (500 in business roles, 501 in technology roles) to understand current or intended business use of AI and GenAI and responsible AI practices. Respondents are from public and private companies in six major industries: financial services (24%); health (21%); technology, media and telecommunications (17%); consumer markets (14%); industrial products (13%); energy, utilities and mining (12%).

Wondering how you stack up on Responsible AI practices?

Take our Responsible AI survey and receive an immediate benchmark report with actionable insights.

Learn more


What can Responsible AI do for you?

Find out how your business can implement practices that can preserve value and engender safety and trust.

Get in touch
