Generative AI presents both opportunity and risk

  • Blog
  • 4 minute read
  • September 05, 2023

This powerful and unprecedented technology comes laden with new cybersecurity concerns

This article was first published on AGBI.

You can’t read the news at the moment without seeing a story about AI, particularly Generative AI (GenAI), which generates text, images and other media.

Management consultancy Strategy& estimates that the overall economic impact of GenAI (much of it built on large language models) in the GCC region could reach $23.5 billion per year by 2030.

The pace at which people and organisations are adopting GenAI is remarkable. That said, I can’t help but feel that adoption is outstripping our ability to think through the risks being introduced into our lives.


Cybersecurity risks

I want to share some thoughts on the emerging cybersecurity risks posed to us and the role that regulation and risk management play in mitigating these threats.

Phishing and deepfakes

One of the most significant risks I’ve seen is the potential for GenAI to be exploited by threat actors to produce harmful or misleading content.

The technology can be used to create realistic phishing lures, including deepfake video or audio that impersonates someone familiar or in a position of authority.

It can also, of course, be used to create amazing content, like this World Cup advertisement from Orange.

But, in my view, we are likely to see more and more examples of unauthorised access to sensitive information, financial losses and reputational damage to organisations and individuals.

Misinformation and disinformation

Following on from this, I’ve also been watching how GenAI can be used to spread misinformation, for example, in the context of elections or conflicts around the world.

I find myself questioning everything I see at the moment. 

Ultimately, we could see GenAI undermining democratic processes, fuelling social unrest and damaging an organisation’s reputation.

Malware factories

It’s never been easier for inexperienced threat actors to create targeted malware, including ransomware. GenAI tools allow people to build and execute attacks without any prior technical knowledge.

Barriers to entry have more or less disappeared. In my opinion, this ease of access will lead to an increase in the number and sophistication of cyberattacks, posing a major challenge for cybersecurity professionals.


Legal risks

Prolific use of GenAI technologies arguably gives rise to a whole host of legal issues:

Leakage of confidential information

Lax data security measures can publicly expose a company’s trade secrets and other proprietary information, as well as customer data.

Failing to thoroughly review GenAI outputs can result in inaccuracies, compliance violations, breach of contract, copyright infringement, erroneous fraud alerts, faulty internal investigations, harmful communications with customers and reputational damage.

Data privacy

GenAI applications use a massive amount of data and create even more new data, which is vulnerable to bias, poor quality, unauthorised access and loss.

This could lead to breaches of privacy regulations and potential legal repercussions. We are already seeing this across several EU states.

Intellectual property

GenAI can generate content that closely resembles the works of content creators.

This could lead to intellectual property disputes, with the attendant legal costs and reputational damage.

Manipulation of AI systems

Finally, there is the potential for threat actors to manipulate GenAI and other AI systems, causing them to make incorrect predictions or deny service to customers.

This could disrupt business operations, lead to financial losses and damage an organisation’s reputation.

Regulation and risk management

Understanding and managing the associated cybersecurity risks is crucial as we continue to adopt and integrate GenAI into our operations.

In this regard, implementing robust regulatory frameworks and risk management strategies is also prudent.

Regulation should focus on ensuring transparency and accountability in the use of GenAI. This includes establishing best practices for auditing algorithmic decision-making aids designed for use in government services and policy domains.

However, I’m not without concerns about regulation when considering the global context and geopolitical trends.

A common question is what happens when multiple sovereign regulatory strategies are in play: some focused on developing the most sophisticated AI at all costs, others on caution and protection.

In such a scenario, we could end up with unequal playing fields affecting economic development and safety.

Risk management strategies should identify the critical services and subsystems that require “human-in-the-loop” decision-making. Selection criteria may include high-risk systems or systems that require special accountability.

I would recommend limiting the role of artificial agents in these systems to a strictly advisory capacity.
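
To make that advisory-only recommendation concrete, here is a minimal sketch, in Python and with hypothetical names, of a human-in-the-loop gate: the model may propose an action, but nothing executes until a named human reviewer explicitly approves it.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Recommendation:
    action: str
    rationale: str

def model_recommendation(case_id: str) -> Recommendation:
    # Hypothetical stand-in for a call to whatever GenAI model is deployed;
    # the real system would return the model's suggested action and reasoning.
    return Recommendation(action="flag_transaction", rationale="anomalous pattern")

def decide(case_id: str, human_approve: Callable[[Recommendation], bool]) -> Optional[str]:
    # The model's output is advisory only: no action executes until a
    # human reviewer explicitly approves it.
    rec = model_recommendation(case_id)
    if human_approve(rec):
        return rec.action  # approved: the action can proceed
    return None            # rejected: nothing happens automatically

# Example: a reviewer policy that never auto-approves.
print(decide("case-42", human_approve=lambda rec: False))  # -> None
```

The design choice is the point: the approval step, not the model, holds the authority to act, which keeps the artificial agent’s role strictly advisory.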

I remain excited and optimistic about AI as a whole. It will present us with many advantages as individuals and organisations. But as we continue to navigate this new landscape, we must remain vigilant and proactive in managing the associated cybersecurity risks.

Only then can we fully harness the potential of the technology while ensuring the safety and security of our digital world.


Author

Samer Omar

Cybersecurity & Digital Trust Leader, PwC Middle East
