AI is becoming intrinsic to business strategy and operations. Here’s how to speed up initiatives, manage risk and generate value.
As we move well into the second year of generative AI's (GenAI) broad emergence, which propelled AI to the top of corporate agendas, there's no question that AI is becoming integral to business. Companies slow to make AI an intrinsic part of their businesses may find themselves so far behind that catching up becomes difficult. While it's still early days, leaders now have a better understanding of how AI affects business strategy. They are also beginning to realize what it takes to build and deploy AI solutions that not only drive productivity and business transformation but also manage risk and preserve the incremental value these solutions create. The key lesson of these early efforts? As AI continues to advance and redefine the nature of work and innovation, success requires sustained focus and a holistic view of both the risks and the opportunities of building AI solutions responsibly.
Our extensive work with clients, along with recent surveys, tells us that AI adoption varies widely: some companies are just starting to experiment, while others are all-in. The same goes for adopting Responsible AI, a set of practices that can help organizations unlock AI's transformative potential while holistically addressing its inherent risks. Some companies are progressing well; others have yet to lay the foundation. Many overlook the fact that Responsible AI isn't a one-time exercise. It's an ongoing commitment that needs to be woven into every step of developing, deploying, using and monitoring AI-based technologies.
And even with companies' best efforts, building AI responsibly isn't enough to guarantee trust. That's up to stakeholders: their perceptions and experiences determine whether trust is earned, and whether AI initiatives ultimately succeed.
To get better insight into how companies are doing with Responsible AI (RAI), we surveyed 1,001 US business and technology executives whose organizations use or intend to use AI. (You can see how you stack up against the respondents with PwC's Responsible AI survey and benchmark report.) Most respondents (73%) tell us they use or plan to use both traditional forms of AI and GenAI. Of those, slightly more are focused on using the technologies solely for operational systems used by employees (AI: 40%; GenAI: 43%). A slightly smaller number are targeting both employee and customer systems in their AI efforts (AI: 38%; GenAI: 35%).
When it comes to assessing the risks of their AI and GenAI efforts, only 58% of respondents have completed a preliminary assessment of AI risks in their organization. RAI, however, can enable business objectives far beyond risk management, and many executives report that their organizations are targeting this value. Competitive differentiation is the most cited objective for RAI practices: 46% name it as a top-three objective, with risk management close behind at 44%. Other top objectives include building trust with external stakeholders (such as customers) and driving value generation or preservation (39% each).
In our work with clients, we see RAI already yielding this value across many business areas. Our survey corroborates this: respondents' top five reported benefits of RAI relate to risk management and cybersecurity program enhancements, improved transparency, customer experience, coordinated AI management and innovation. This value creation is possible because Responsible AI helps AI initiatives succeed more quickly, often with fewer issues, pauses and mistakes. It can also build trust with stakeholders by enabling you to demonstrate success in a verifiable way.
How does it work? Responsible AI enables AI-specific governance, risk-managed intake for use cases, AI-powered cyberdefense and more. Through oversight and reporting, it builds toward transparency, whether internal or external, to satisfy current or anticipated requirements. Tools and frameworks for privacy, data governance, bias identification and mitigation, and reliable AI outputs enhance the customer experience. By assigning clear roles and responsibilities, RAI practices help coordinate AI management. And by providing a secure environment along with suitable skills, policies and guardrails, RAI lets people innovate more freely, secure in the knowledge that risks are being appropriately addressed and managed.
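To make "risk-managed intake" concrete, here is a minimal sketch in Python of what a use-case intake record and approval gate might look like. The AIUseCaseIntake fields, RiskTier levels and review_intake rule are illustrative assumptions, not a prescribed framework; real intake processes involve far richer assessments.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseCaseIntake:
    """One intake record for a proposed AI use case (illustrative fields only)."""
    name: str
    business_owner: str  # clear accountability, per RAI role assignment
    risk_tier: RiskTier
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    approved: bool = False


def review_intake(intake: AIUseCaseIntake) -> bool:
    """Gate a use case: high-risk cases need a documented mitigation per risk."""
    if intake.risk_tier is RiskTier.HIGH and len(intake.mitigations) < len(intake.identified_risks):
        return False  # send back for further risk treatment
    intake.approved = True
    return True


if __name__ == "__main__":
    intake = AIUseCaseIntake(
        name="GenAI customer-email drafting",
        business_owner="Head of Customer Service",
        risk_tier=RiskTier.HIGH,
        identified_risks=["hallucinated commitments", "PII leakage"],
        mitigations=["human review before send", "PII redaction in prompts"],
    )
    print(review_intake(intake))  # True: each identified risk has a mitigation
```

Even a lightweight gate like this creates the audit trail that later oversight and reporting depend on.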
Responsible AI requires a broad range of capabilities. Our survey asked about a subset that organizations appear to be most commonly prioritizing today: upskilling, embedded AI risk specialists, periodic training, data privacy, data governance, cybersecurity, model testing, model management, third-party risk management, specialized AI risk-management software, and monitoring and auditing. Most survey respondents (80% or more) report some progress on each of these 11 capabilities.
Only 11% of executives report having fully implemented these fundamental responsible AI capabilities — and we suspect many are overestimating progress.
Even those reporting full implementation of capabilities will need to stay alert to changes in the landscape and evolve accordingly; practices that are sufficient now may not be in the future. They will also need to focus on the "provability" of these capabilities, that is, how well they would stand up to rigorous scrutiny. Finally, as with all things AI, external expectations are evolving, with standards and regulations only just starting to take a form concrete enough to act on.
Privacy and consent frameworks, for example, often don't have specific provisions for customer data entering AI systems. Data governance may not cover GenAI models' access to swaths of internal data, some of which may be sensitive. Legacy cybersecurity rarely considers risks from GenAI's many new users, or from "model poisoning" and other threats that GenAI can amplify. With AI advancing so quickly, RAI should always be advancing too.
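To illustrate what extending data governance to GenAI can look like in practice, the sketch below shows one simple guardrail: redacting obvious sensitive identifiers from a prompt before it reaches a model. The redact_prompt helper and its patterns are hypothetical; production systems would rely on dedicated PII-detection and data-loss-prevention tooling rather than a few regular expressions.

```python
import re

# Illustrative patterns only; real deployments would use dedicated
# PII-detection tooling rather than a handful of regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact_prompt(prompt: str) -> str:
    """Strip obvious sensitive identifiers before the prompt reaches a GenAI model."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt


if __name__ == "__main__":
    raw = "Summarize the complaint from jane.doe@example.com (SSN 123-45-6789)."
    print(redact_prompt(raw))
    # Summarize the complaint from [EMAIL REDACTED] (SSN [SSN REDACTED]).
```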
The top challenge facing Responsible AI is the same as in most risk programs: It’s hard to quantify the value of having dodged a bullet, such as avoiding a major scandal from a poor AI interaction.
It's even harder to put a number on bullets you may dodge in the future. RAI today, for example, will make it easier to meet future regulations on privacy, bias, reporting and so on. But how do you quantify the value of compliance savings before those regulations even exist?
A standardized framework to document the assessment of risks, responses and ongoing monitoring can help address this challenge. It should consider both AI's inherent risks and the residual risks that remain after you have made informed choices matching your risk appetite. It should also document that mitigations have not only been designed and assessed but also proven effective on an ongoing basis.
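As a sketch of what such a documentation framework might capture, the hypothetical RiskRegisterEntry below records an inherent risk rating, the mitigation, evidence that the mitigation's effectiveness was recently tested, and the residual rating that remains. The fields and the is_provable check are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class RiskRegisterEntry:
    """One documented risk, tracked from inherent rating through residual rating."""
    risk: str
    inherent_rating: str                     # e.g. "high", before any mitigation
    mitigation: str
    mitigation_designed: bool                # control designed and assessed
    effectiveness_last_tested: date | None   # evidence of ongoing effectiveness
    residual_rating: str                     # what remains after informed choices

    def is_provable(self, as_of: date, max_age_days: int = 90) -> bool:
        """Would this entry stand up to scrutiny? Designed and recently tested."""
        return (
            self.mitigation_designed
            and self.effectiveness_last_tested is not None
            and (as_of - self.effectiveness_last_tested).days <= max_age_days
        )


if __name__ == "__main__":
    entry = RiskRegisterEntry(
        risk="Biased outputs in loan pre-screening model",
        inherent_rating="high",
        mitigation="Quarterly fairness testing across protected groups",
        mitigation_designed=True,
        effectiveness_last_tested=date(2024, 3, 15),
        residual_rating="medium",
    )
    print(entry.is_provable(as_of=date(2024, 5, 1)))  # True: tested 47 days ago
```

Capturing this evidence continuously, rather than reconstructing it under pressure, is what makes the "provability" discussed above achievable.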
Getting the business involved will help integrate RAI into development and avoid pushback. With the business's help, you can also document and quantify faster rollouts of AI initiatives, a stronger brand around privacy and other RAI benefits.
Based on our survey and our experience with Responsible AI — both with clients and in-house at PwC — here’s how you can help advance RAI efforts in your organization.
In April 2024, PwC Research surveyed 1,001 US executives (500 in business roles, 501 in technology roles) to understand current or intended business use of AI and GenAI and responsible AI practices. Respondents are from public and private companies in six major industries: financial services (24%); health (21%); technology, media and telecommunications (17%); consumer markets (14%); industrial products (13%); energy, utilities and mining (12%).