If you’re a chief risk officer (CRO), chief compliance officer (CCO), chief information security officer (CISO), chief legal officer (CLO) or other professional in a risk-focused area, then governing AI is at least on your radar – and probably on your priority list. The reason is the speed with which generative AI (GenAI) is advancing. It’s driving productivity gains today. It’s laying the groundwork for new business models tomorrow. Yet for GenAI (or any AI) to deliver value, it should be well governed.
What does good AI governance look like? How do you achieve it, as part of a Responsible AI approach, and keep it improving as AI keeps evolving? And how can AI governance not only manage risks, but also help AI deliver value more quickly?
We’re risk professionals and AI practitioners ourselves. These are the kinds of questions we answer every day, both for our own firm’s GenAI implementation and as we help clients on their AI journeys. Based on this experience, we offer five insights that can help you achieve good AI governance.
Many proposed use cases for GenAI, we’ve found, have something in common: they’re not, in fact, use cases for GenAI. A different AI tool or technology may be better suited to get the job done. Mapping the right tools and data to the right use case is exactly what good AI governance can do.
Effective "governance at intake” requires methods and tools to assess use cases for feasibility, complexity, suitability and risk. These methods and tools should be aligned across business functions and be applied by cross-functional teams that have technology, business and risk experience.
Unlike traditional AI, where a model is typically built for one specific purpose, GenAI-based solutions are more likely to serve multiple use cases, with different risk profiles, across different functions. And it’s not just a small group of tech specialists who use GenAI. Increasingly, GenAI is also embedded in third-party services and everyday enterprise applications.
These differences often mean governance must expand its speed, scale and reach. This enhanced governance should cover procurement, third-party risk management, security, privacy, data, compliance and more. Enterprise-wide governance also benefits from a common, enterprise-wide view of risks that a risk taxonomy can provide.
A comprehensive, standardized, AI-focused risk taxonomy can help make governance decisions consistent and repeatable. It can help your people prioritize risks, escalate incidents, remediate issues, communicate with stakeholders and more. The AI risk taxonomy we use covers six areas.
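As an illustration of how a standardized taxonomy supports consistent, repeatable decisions, the sketch below encodes risk areas and a shared escalation map in Python. The category names and routing owners are hypothetical placeholders, not the six areas of the taxonomy referenced above.

```python
from enum import Enum

class RiskArea(Enum):
    """Placeholder risk areas; not the article's six taxonomy areas."""
    DATA = "data"
    MODEL = "model"
    SECURITY = "security"
    LEGAL = "legal and compliance"
    OPERATIONS = "operations"
    REPUTATION = "reputation"

# A single, shared escalation map keeps incident routing consistent
# across functions; the owners named here are hypothetical.
ESCALATION = {
    RiskArea.SECURITY: "CISO office",
    RiskArea.LEGAL: "CLO / compliance",
    RiskArea.DATA: "data governance board",
}

def route_incident(area: RiskArea) -> str:
    # Anything without a named owner defaults to the central committee.
    return ESCALATION.get(area, "AI governance committee")

print(route_incident(RiskArea.SECURITY))  # CISO office
print(route_incident(RiskArea.MODEL))     # AI governance committee
```

The design point is less the code than the discipline: one enterprise-wide vocabulary for risk, with every incident routed through the same map, so prioritization and escalation don’t vary by team.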
There’s a sweet spot for AI governance — where it neither holds back the business nor leaves you too vulnerable to risks. In this sweet spot, governance helps prevent delays in AI initiatives, since it addresses problems before they start. With good governance, you won’t have to halt and reverse-engineer projects later — including when new AI regulations emerge. You’ll already be prepared. And by identifying areas where AI risks are most manageable, governance can help guide strategy.
To achieve AI governance that advances AI strategy, give governance a seat at the table from the very start. Together, AI specialists, business leads and risk professionals can align business goals and risk management needs. They can also work together to build trust into AI initiatives from Day One. A trusted foundation of good AI governance will help you keep innovating in line with your chosen risk profile, even as technology evolves and new opportunities emerge.
There are powerful technology tools that can help with AI governance. But these tools, like AI itself, need well-trained, engaged people to manage them. Your entire AI governance team — which will include risk, AI and business specialists — may need coaching to understand AI and AI governance tools, and to collaborate effectively. Clear roles and responsibilities can help speed up prioritization, approvals and remediation where necessary. As AI spreads, the broader workforce may need change management and upskilling.
Also consider updating codes of conduct and acceptable use policies, and creating channels that help people report new risks. And always remember that people should be in the lead on the big decisions about governing and building AI, so that it can both deliver business value and grow stakeholder trust.
Ana Mohapatra contributed to this article.