Unlocking value with AI agents: A responsible approach

April 10, 2025

Summary

  • AI agents are becoming essential to enterprise workflows, offering autonomous decision-making and efficiency gains.
  • To manage new challenges in oversight and risk, organizations should adapt responsible AI practices and evolve governance frameworks.
  • A centralized and transparent oversight approach helps maintain consistency, compliance and alignment with broader digital strategies.

Interest in AI agents has grown significantly, given their promise to become a sophisticated digital workforce capable of addressing intricate problems autonomously. This is coming into focus as companies begin to take advantage of AI agents across applications and platforms, changing how work gets done and reshaping the workforce overall.

Capabilities such as PwC’s agent OS are emerging to ease the orchestration and management of agents across platforms. A well-governed fleet of AI agents will be essential to delivering value at speed. But as their autonomy grows, so does their potential for risk, making a strong human-at-the-helm approach critical. AI agents are designed to adapt dynamically to changing environments, so static, hardcoded logic alone can no longer govern their behavior.

With interest turning into adoption, an important question emerges: Can organizations scale AI agents responsibly without compromising privacy, governance and trust?

At PwC, we believe that the responsible use of AI is the key to unlocking the biggest long-term value and impact of this technology, and that applies to AI agents as well. Most current Responsible AI and AI governance programs were not designed for an agentic world. Business leaders urgently need to adapt their Responsible AI programs and risk management to address the near- and long-term impact of AI agents while enabling innovation and the overarching business strategy.


AI agent challenges

An AI agent is an autonomous entity, usually powered by generative AI, designed to pursue specific goals by making decisions and taking actions in dynamic environments. PwC is already seeing demonstrated applications for AI agents to streamline and enhance many functions, including tax and finance. Like all new technologies, the use of AI agents may also introduce risks.

Exposing sensitive data

AI agents operate with a level of autonomy that makes direct oversight difficult, increasing the risk of accidental data leaks.

Consider an agentic system designed to handle customer questions about an upcoming flight. This system will need access to sensitive information such as booking details, payment information and personal identification. If not properly managed, an AI agent might expose that information by, for instance, entering personal details into an external search engine or browser while completing the task at hand.

Mitigation actions

Implement data anonymization techniques and restrict access rights within any multi-agent system. An AI agent performing web searches or external queries, for example, should not have access to customer information.

Regular monitoring is essential. You can use data loss prevention (DLP) tools to monitor and flag potential data leaks, escalating to a human in the loop when needed. To add another layer of protection, consider setting up a dedicated AI agent as a “security specialist” to review and approve any external data searches.

Finally, you should conduct user testing, red teaming and regular assessments to monitor compliance with data privacy guidelines.
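
To make these mitigations concrete, here is a minimal sketch in Python of an outbound-query guard that redacts personal information before an agent touches an external search tool and escalates to a human when sensitive data is found. The redact helper, the PII_PATTERNS and the escalate callback are hypothetical illustrations, not a specific DLP product; a production system would use a vetted PII-detection library and your organization's real escalation workflow.

```python
import re

# Hypothetical patterns for illustration; production systems should use
# a vetted PII-detection library rather than hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "booking_ref": re.compile(r"\b[A-Z]{2}\d{4,6}\b"),  # assumed booking-reference format
}

def redact(text: str) -> tuple[str, list[str]]:
    """Mask PII in an outbound query and report which kinds were found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, findings

def guarded_web_search(query: str, search_tool, escalate):
    """Redact before any external call; escalate to a human (or a
    dedicated reviewer agent) if the query contained sensitive data."""
    clean_query, findings = redact(query)
    if findings and not escalate(original=query, redacted=clean_query, findings=findings):
        return None  # blocked pending human review
    return search_tool(clean_query)
```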

Overreliance on automation

As AI agents take on more tasks, employees may become overly dependent on them — or they may be forced to become more dependent because of changing incentive structures — thereby reducing human oversight.

In the example of a ticketing assistance system, AI agents processing refunds for plane tickets could lead to unchecked errors or fraud. In a high-pressure environment prioritizing speed and efficiency, humans might naturally start to skip even low-effort review steps, eroding the effectiveness of the review cycle over time.

Mitigation actions

Design AI agents to flag certain decisions for human review. You might set a threshold, like “plane ticket refunds above $200 must be approved by a human.”
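
As a minimal sketch of such a threshold, the function below auto-approves small refunds and routes larger ones to a human; approve_refund and request_human_review are hypothetical callbacks standing in for your actual refund workflow.

```python
REFUND_REVIEW_THRESHOLD = 200.00  # dollars; the example policy from the text

def process_refund(ticket_id: str, amount: float, approve_refund, request_human_review):
    """Auto-approve small refunds; flag larger ones for human review."""
    if amount > REFUND_REVIEW_THRESHOLD:
        # Above the threshold, the agent proposes and a human decides.
        return request_human_review(ticket_id, amount)
    return approve_refund(ticket_id, amount)
```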

Comparing the decisions made by AI and humans can help identify gaps in the agentic system. Use periodic training sessions and user testing to enable effective human-AI collaboration.

As AI agents reshape workflows, consider the long-term workforce impacts. While most agents currently augment rather than replace jobs, evaluate whether agent deployment could fundamentally change your company’s job roles in unintended ways. Leadership may also evaluate how to train and incentivize employees so they can step into new roles, such as AI agent manager, more effectively.

Indefinite temporary solutions

While AI agents can bridge gaps between modern and legacy systems, they don’t eliminate the need for infrastructure upgrades.

For instance, the fictional agentic ticketing assistance system might easily integrate modern help desk software with legacy IT systems, like an out-of-date ticket booking system. Without a structured transition plan, this workaround could become a permanent crutch, delaying necessary system upgrades to the underlying architecture.

Mitigation actions

Agentic solutions should align with a broader digital transformation strategy, not just serve as stopgaps. The design and development of AI agents should incorporate “retirement plans,” outlining clear milestones to phase them out.
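
As one illustration, such a retirement plan could be recorded as structured metadata alongside the agent's configuration, so the workaround carries its own expiry. The class, field names and dates below are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentRetirementPlan:
    """Illustrative 'retirement plan' stored with an agent's configuration."""
    agent_name: str
    bridges_legacy_system: str   # the legacy system the agent papers over
    replacement_initiative: str  # the upgrade that will make the agent obsolete
    review_date: date            # checkpoint to reassess whether the workaround persists
    sunset_date: date            # hard milestone to phase the agent out

plan = AgentRetirementPlan(
    agent_name="ticketing-bridge-agent",
    bridges_legacy_system="legacy ticket booking system",
    replacement_initiative="booking platform modernization",
    review_date=date(2025, 10, 1),
    sunset_date=date(2026, 4, 1),
)
```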

Aligning AI agent strategy with Responsible AI

Some may assume that the use of AI agents conflicts with Responsible AI principles. In reality, Responsible AI is what makes the rapid development and scalable deployment of AI agents sustainable by defining clear approval paths and criteria for testing and monitoring.

46% of executives cited Responsible AI investment as a competitive differentiator, according to PwC's 2024 US Responsible AI Survey.

Responsible agentic AI involves setting clear operational guidelines for AI agents and clear roles and responsibilities for the people interacting with the AI systems — from design through use and monitoring. Stakeholder engagement, feedback loops and human oversight are essential to adapt and refine the AI and agentic systems as they evolve, to make sure they remain aligned with organizational values and objectives.

To achieve this alignment, we suggest five key tactics, complemented by essential technical controls and monitoring practices.

1. Adapt AI governance to reflect agent oversight

  • AI agents should not be governed separately but addressed as an integral part of the broader AI governance framework.
  • Establish a function within the governance program oriented toward “horizon scanning” to identify new technologies — such as AI agents — that may put pressure on the operations and scaling of the existing AI governance program.
  • Consider where to streamline and accelerate governance practices so risk mitigation becomes a key driver of the business strategy rather than an inhibitor.
  • Observe where practices need to adjust given friction caused by new technology needs.

2. Build risk management for AI agents

  • Incorporate agents’ degree of autonomy and the potential impact of their decisions into risk tiering and prioritization structures. This will enable greater levels of governance for AI agents that have more autonomy or are used in more critical spaces while fast-tracking lower-risk agents (see the tiering sketch after this list).
  • Determine the attributes of an AI agent that would trigger inclusion in a centralized AI inventory. Consider whether these agents are locally deployed or shared among teams, as well as the degree of autonomy under which they perform.
  • Track usage, access and performance metrics of critical and high-risk agents.
  • Align on clear criteria for evaluation of agent performance.
  • Define processes for iteratively building and testing agents before scaling their use.
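
A minimal sketch of such tiering follows. The autonomy and impact scales, the scoring rule and the cutoffs are illustrative assumptions, not a prescribed standard.

```python
def risk_tier(autonomy: int, impact: int) -> str:
    """Map an agent's autonomy (0-3) and decision impact (0-3) to a
    governance tier; scales and cutoffs are illustrative."""
    score = autonomy * impact
    if score >= 6:
        return "high"    # full review, continuous monitoring
    if score >= 2:
        return "medium"  # standard review, periodic monitoring
    return "low"         # fast-track approval

# A fully autonomous refund-issuing agent lands in the high tier;
# a suggestion-only drafting agent is fast-tracked.
print(risk_tier(autonomy=3, impact=2))  # high
print(risk_tier(autonomy=1, impact=1))  # low
```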

3. Establish infrastructure to support responsible work from AI agents

  • Use data anonymization techniques, such as masking and tokenization, so that AI agents don’t expose sensitive information.
  • Deploy data loss prevention (DLP) tools to monitor and prevent data exfiltration. Configure DLPs to alert a human if an AI agent attempts to access or transmit sensitive information outside approved channels.
  • Require multifactor authentication (MFA) for AI agents executing critical actions or accessing critical systems, making sure they gain access only when specifically granted by a human.
  • Implement role-based access control to restrict AI agent access to sensitive data and systems.
  • Provide the AI agents with the minimum level of access necessary for their goals (a minimal sketch of this least-privilege approach follows the list).
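
As a minimal sketch of least-privilege tool access, the mapping below denies by default and lets each agent role call only the tools it has been granted; the role and tool names are hypothetical.

```python
# Deny-by-default tool permissions per agent role (names are hypothetical).
ALLOWED_TOOLS = {
    "web_search_agent": {"web_search"},
    "booking_agent": {"lookup_booking", "issue_refund"},
    "security_reviewer": {"review_outbound_query"},
}

def invoke_tool(agent_role: str, tool_name: str, tool_registry: dict, **kwargs):
    """Allow an agent to invoke only the tools granted to its role."""
    if tool_name not in ALLOWED_TOOLS.get(agent_role, set()):
        raise PermissionError(f"{agent_role} is not authorized to call {tool_name}")
    return tool_registry[tool_name](**kwargs)
```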

4. Implement testing and monitoring practices

  • Use real-time anomaly detection to identify deviations in AI behavior.
  • Implement continuous monitoring to observe AI agent performance over time and identify long-term considerations, including performance drift.
  • Conduct regular security audits to assess agent compliance with data and security policies.
  • Perform AI red teaming exercises and user testing to test agent responses to simulated attacks and potential data breaches.
  • Integrate automated testing processes in the development life cycle to catch issues early.
  • Maintain detailed logs of AI agent access to sensitive data and systems and regularly review them to detect unauthorized access or anomalies (see the logging sketch after this list).
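
A minimal sketch of such structured audit logging follows, using Python's standard logging module; the record fields are illustrative assumptions.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("agent_audit")

def log_tool_call(agent_id: str, tool: str, target: str, allowed: bool) -> None:
    """Append a structured audit record for every agent tool call so access
    to sensitive data and systems can be reviewed later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tool": tool,
        "target": target,
        "allowed": allowed,
    }
    logger.info(json.dumps(record))
    if not allowed:
        # Denied calls are a natural first place to look for anomalies.
        logger.warning("Denied tool call by %s: %s -> %s", agent_id, tool, target)
```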

5. Employ AI agents in contexts with human oversight

  • Deploy AI agents in environments where they can work alongside humans, confirming there’s always a human at the helm for critical decision-making processes.
  • Define clear escalation paths for AI-driven decisions that require human intervention.
  • Regularly compare AI and human decisions to identify gaps, refine performance and escalation thresholds over time, and enable effective human-AI collaboration (a sketch of this comparison follows the list).
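
As a minimal sketch of such a comparison, the function below computes the agent-human agreement rate over a sampled set of decisions; the 95% tolerance is an assumed target, not a standard.

```python
def agreement_rate(agent_decisions: list[str], human_decisions: list[str]) -> float:
    """Fraction of sampled cases where the agent matched the human reviewer;
    a falling rate signals that escalation thresholds should tighten."""
    assert len(agent_decisions) == len(human_decisions) and agent_decisions
    matches = sum(a == h for a, h in zip(agent_decisions, human_decisions))
    return matches / len(agent_decisions)

# Example: a weekly sample of refund decisions
rate = agreement_rate(["approve", "deny", "approve"], ["approve", "deny", "deny"])
if rate < 0.95:  # assumed tolerance
    print(f"Agreement {rate:.0%} is below target; review escalation thresholds.")
```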

By implementing these strategies and technical controls, you can enable your AI agents to operate securely, efficiently and in alignment with Responsible AI principles. This approach will help mitigate risks, enhance performance and build trust in the deployment of AI agents.

Eirik Sverd also contributed to this article.

Jennifer Kosar

AI Assurance Leader, PwC United States

Rohan Sen

Principal, Data Risk and Responsible AI, PwC United States

Ilana Golbin

Director and Responsible AI Lead, PwC United States
