How do executives feel about generative AI? Conflicted.
Take a look at the findings on cyberdefence and AI from PwC’s latest Digital Trust Insights survey. A solid majority of the nearly 4,000 business leaders who participated in the survey are optimistic about the technology’s potential impact on their business.
And yet, 52% of those same survey participants—71% if you exclude IT and cybersecurity executives—say they expect generative AI to lead to a catastrophic cyber attack in the next year. What’s going on?
The fact is, we’ve often seen this tension between fearleading and cheerleading—to borrow a phrase from our colleagues at strategy+business—when working with clients who are grappling with the security implications of generative AI. And we get it. Senior leaders are eager to leverage generative AI before their competitors do, but they’re also apprehensive about the risks and overwhelmed by the flood of news about the technology.
Are those 52 percenters being alarmists? The answer is, yeah, a little. We don’t think most companies will face a catastrophic gen-AI-powered attack in the coming year (the technology is as new to attackers as it is to defenders), but we do think businesses could face long-term consequences if they don’t balance their enthusiasm for generative AI with a clear-eyed understanding of the threats they’re up against.
How can businesses respond? For starters, they need to go back to the fundamentals of their cybersecurity programmes: consider how the risks introduced by generative AI differ from the ones they already manage, and identify which controls can mitigate those distinct risks. It’s not necessarily about creating anything new, but about stepping back and re-examining the cyber risk management programme from a fresh perspective in light of these new risks.
Additionally, they should put in place the governance policies and guardrails that too many executives—including a sobering 64% of the DTI survey respondents—say they’re willing to initially forgo in favour of fast adoption. That means establishing training and guidelines for responsible use of generative AI, and creating a sandbox for workers to experiment safely. Many companies are creating proprietary, fully walled-off generative AI solutions that prevent the leaking of data, and they’re deploying generative AI in a manner that leverages organisational data to reduce the risks arising from biases and misinformation.
But those are table stakes. When it comes to defending against gen-AI-powered attacks, the technology itself is proving to be a game-changer, and CISOs and other cybersecurity leaders should get busy putting it to work.
The good news is that adoption of these tools is accelerating: 69% of survey respondents are planning to use generative AI for cyberdefence in the next 12 months, and nearly half (47%) are already using it for cyber-risk detection and mitigation. Those are big steps toward a future in which business leaders can tap into generative AI’s immense potential without constant fear of a catastrophic cyber attack.
Partner, Global Cybersecurity and Privacy Leader, Risk Services Leader, PwC United States