
Interest in AI agents has grown significantly, given their promise to become a sophisticated digital workforce capable of addressing intricate problems autonomously. This is coming into focus as companies begin to take advantage of AI agents across applications and platforms, changing both how work gets done and the shape of the workforce overall.
Capabilities such as PwC’s agent OS are emerging to ease the orchestration and management of agents across platforms. A well-governed fleet of AI agents will be essential to delivering value at speed. But as their autonomy grows, so does their potential for risk, making a strong human-at-the-helm approach critical. AI agents are designed to adapt dynamically to changing environments, which makes rigid, hardcoded logic insufficient on its own.
With interest turning into adoption, an important question emerges: Can organizations scale AI agents responsibly without compromising privacy, governance and trust?
At PwC, we believe that the responsible use of AI is the key to unlocking this technology’s biggest long-term value and impact, and that applies to AI agents as well. Yet most current Responsible AI and AI governance programs were not designed for an agentic world. Business leaders urgently need to adapt their Responsible AI programs and risk management to address the near- and long-term impact of AI agents while still enabling innovation and the overarching business strategy.
An AI agent is an autonomous entity, usually powered by generative AI, designed to pursue specific goals by making decisions and taking actions in dynamic environments. PwC is already seeing demonstrated applications for AI agents to streamline and enhance many functions, including tax and finance. As with any new technology, however, AI agents can also introduce risks.
AI agents operate with a level of autonomy that makes direct oversight difficult, increasing the risk of accidental data leaks.
Consider an agentic system designed to handle customer questions about an upcoming flight. This system will need access to sensitive information such as booking details, payment information and personal identification. If not properly managed, an AI agent might expose that information, for instance by entering personal details into external search engines or browser sessions it opens to complete the task at hand.
Mitigation actions
Implement data anonymization techniques and restrict access rights within any multi-agent system. An AI agent performing web searches or external queries, for example, should not have access to customer information.
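As an illustration, here is a minimal Python sketch of the kind of boundary this implies. The patterns, agent names and permission table are hypothetical assumptions for this example, not a production PII filter:

```python
import re

# Hypothetical sketch: scrub obvious identifiers before an agent passes
# text to an external tool such as a web search. The patterns, agent
# names and permission table are illustrative, not a production filter.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "booking_ref": re.compile(r"\b[A-Z]{2}\d{4,6}\b"),  # assumed ticket format
}

def anonymize(text: str) -> str:
    """Replace matched identifiers with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

# Per-agent data access: the agent that performs web searches gets no
# access to customer data stores, and any agent holding such access is
# blocked from issuing external queries at all.
AGENT_DATA_ACCESS = {
    "booking_agent": {"crm", "payments"},
    "search_agent": set(),
}

def external_query(agent: str, query: str) -> str:
    if AGENT_DATA_ACCESS.get(agent):
        raise PermissionError(f"{agent} holds customer data and may not search externally")
    return anonymize(query)  # the only text that leaves the trust boundary

print(external_query("search_agent", "Rebook jane.doe@example.com on AB1234"))
# -> Rebook [EMAIL REDACTED] on [BOOKING_REF REDACTED]
```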
Regular monitoring is essential. You can use data loss prevention (DLP) tools to monitor and flag potential data leaks, escalating to a human in the loop when needed. To add another layer of protection, consider setting up a dedicated AI agent as a “security specialist” to review and approve any external data searches.
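A simple sketch of that escalation path might look like the following; `looks_sensitive` stands in for a real DLP tool’s verdict, and the queue stands in for your actual human review workflow:

```python
from dataclasses import dataclass

# Sketch of the escalation path: a DLP-style check gates outbound agent
# traffic, and anything flagged is held for a human reviewer instead of
# being sent. The heuristic below is a placeholder, not a real DLP tool.
@dataclass
class OutboundMessage:
    agent: str
    destination: str
    body: str

def looks_sensitive(body: str) -> bool:
    # Placeholder heuristic; in practice, call your DLP classifier here.
    return "@" in body or any(ch.isdigit() for ch in body)

review_queue: list[OutboundMessage] = []  # human-in-the-loop backlog

def send_with_dlp(msg: OutboundMessage) -> str:
    if looks_sensitive(msg.body):
        review_queue.append(msg)  # escalate rather than transmit
        return "held_for_review"
    return "sent"

print(send_with_dlp(OutboundMessage("search_agent", "web", "flight change fees Oslo")))
# -> sent
print(send_with_dlp(OutboundMessage("search_agent", "web", "refund card 4111111111111111")))
# -> held_for_review
```

The same gate is a natural place to route messages to a dedicated “security specialist” reviewer agent before any human is involved.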
Finally, you should conduct user testing, red teaming and regular assessments to monitor compliance with data privacy guidelines.
As AI agents take on more tasks, employees may become overly dependent on them — or they may be forced to become more dependent because of changing incentive structures — thereby reducing human oversight.
In the example of a ticketing assistance system, AI agents processing refunds for plane tickets could lead to unchecked errors or fraud. In a high-pressure environment prioritizing speed and efficiency, humans might naturally start to skip even low-effort review steps, eroding the effectiveness of the review cycle over time.
Mitigation actions
Design AI agents to flag certain decisions for human review. You might set a threshold, like “plane ticket refunds above $200 must be approved by a human.”
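In code, such a gate can be as simple as the following sketch, using the $200 threshold from the example above; the limit and labels are illustrative:

```python
# Illustrative policy gate using the $200 refund threshold from the text.
REFUND_AUTO_APPROVE_LIMIT = 200.00  # USD; tune to your risk appetite

def route_refund(amount: float) -> str:
    """Decide whether the agent may process a refund on its own."""
    if amount > REFUND_AUTO_APPROVE_LIMIT:
        return "human_approval_required"
    return "agent_may_process"

for amount in (75.00, 450.00):
    print(f"${amount:.2f} -> {route_refund(amount)}")
# $75.00 -> agent_may_process
# $450.00 -> human_approval_required
```

Keeping the threshold in configuration rather than in the agent’s prompt makes it auditable and easy to tighten if review data shows errors slipping through.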
Comparing the decisions made by AI and humans can help identify gaps in the agentic system. Use periodic training sessions and user testing to enable effective human-AI collaboration.
As AI agents reshape workflows, consider the long-term workforce impacts. While most agents currently augment rather than replace jobs, evaluate whether agent deployment could fundamentally change your company’s job roles in unintended ways. Leadership may also evaluate how to train and incentivize employees so they can play new roles, such as AI agent manager, more effectively.
While AI agents can bridge gaps between modern and legacy systems, they don’t eliminate the need for infrastructure upgrades.
For instance, the fictional agentic ticketing assistance system might easily integrate modern help desk software with legacy IT systems, like an out-of-date ticket booking system. Without a structured transition plan, this workaround could become a permanent crutch, delaying necessary system upgrades to the underlying architecture.
Mitigation actions
Agentic solutions should align with a broader digital transformation strategy, not just serve as stopgaps. The design and development of AI agents should incorporate “retirement plans,” outlining clear milestones to phase them out.
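One way to make such a retirement plan concrete is to record it as lifecycle metadata alongside the agent itself. The fields, system names and dates below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: store the "retirement plan" as lifecycle metadata
# next to the agent, so a stopgap integration has an explicit sunset.
@dataclass
class RetirementPlan:
    replaces_system: str   # the legacy system being bridged
    sunset_date: date      # when the agent should be phased out
    milestones: list[str] = field(default_factory=list)

plan = RetirementPlan(
    replaces_system="legacy-ticket-booking",
    sunset_date=date(2026, 12, 31),
    milestones=[
        "Migrate booking records to the upgraded platform",
        "Route all traffic through the new booking APIs",
        "Decommission the bridging agent",
    ],
)

def is_overdue(plan: RetirementPlan, today: date) -> bool:
    """Flag agents that have outlived their planned sunset."""
    return today > plan.sunset_date

print(is_overdue(plan, date(2027, 1, 15)))  # True: time to phase it out
```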
Some may assume that the use of AI agents conflicts with Responsible AI principles. In reality, Responsible AI is what makes the rapid development and scalable deployment of AI agents sustainable by defining clear approval paths and criteria for testing and monitoring.
[Survey callout: the percentage of executives who cited Responsible AI investment as a competitive differentiator.]
Responsible agentic AI involves setting clear operational guidelines for AI agents and clear roles and responsibilities for the people interacting with the AI systems — from design through use and monitoring. Stakeholder engagement, feedback loops and human oversight are essential to adapt and refine the AI and agentic systems as they evolve, to make sure they remain aligned with organizational values and objectives.
To achieve this alignment, we suggest five key tactics, complemented by essential technical controls and monitoring practices.
By implementing these strategies and technical controls, you can enable your AI agents to operate securely, efficiently and in alignment with Responsible AI principles. This approach will help mitigate risks, enhance performance and build trust in the deployment of AI agents.
Eirik Sverd also contributed to this article.