The emerging threat of AI-powered fraud

Artificial Intelligence (AI) has extraordinary potential to drive positive change across all areas of business and society. PwC’s research reinforces this notion, with both employers and employees expressing high expectations for AI to enhance productivity and efficiency in the workplace, as revealed in our 27th Annual Global CEO Survey - West Africa report and PwC's Global Workforce Hopes and Fears Survey.

However, as AI adoption grows, it's important to acknowledge the dual-edged nature of this technology. While AI can bring numerous benefits, it also poses risks, as evidenced by the growing rate of AI-enabled fraud and scams.


How fraudsters are using AI

The key impact of AI will be to enable fraudsters to create content at greater speed and in greater volume, and to make scams more believable. Some of the ways fraudsters are exploiting GenAI for fraud include:

  • Generating text and image content. GenAI can be used to create tailored emails, instant messages and image content as the bait to hook potential scam victims, for example, in phishing and smishing attempts, or through fraudulent adverts. GenAI can also make these scams harder to detect by eliminating the traditional ‘tells’ such as poor spelling and grammar. There have been instances where AI-generated images (e.g. of damaged property) were used to support insurance claims.
  • AI-enabled chatbots. Fraudsters are leveraging elements of AI in chatbots that converse with victims to manipulate them in a scam. Chatbots have the potential to amplify fraudsters’ ability to reach victims, delivering volumes of scams that would previously have required a large team of individuals operating in a scam centre.
  • Deep fake videos. Deep fakes are now used as ‘click bait’ to direct users onto malicious websites (where their credit card information may then be harvested) or onto sites that use a trusted persona to encourage investment in a scam.
  • Voice cloning. Deep fake technology can copy voices to an increasingly high degree of accuracy. Currently, voice clones potentially require as much as an hour of training data to perfect, but that requirement is reducing all the time. Voice clones can then be used to trick individuals into making payments and can be used to break through systems where voice biometrics are used for ID verification.


The threat of AI fraud is real; there are already reports of fraudsters using AI to target businesses with sophisticated attacks. Business executives must update their fraud frameworks, perform risk assessments on the areas of their businesses that are vulnerable to GenAI scams, and ensure their anti-fraud teams have the right skills and tools to detect and respond to threats from AI.

PwC can help you:
  • Understand how rapidly evolving technologies are changing fraud threats faced by your business

  • Build resilient fraud defences that can adapt quickly to changing methods of attack

  • Develop fraud risk management strategies, controls and processes that are compliant with changing laws and regulation

  • Implement technology systems to prevent, detect and investigate suspected fraud

Contact us

Habeeb Jaiyeola

Partner & Leader, Forensics Services, PwC Nigeria

Tel: +234 1 271 1700

Adeola Adekunle

Associate Director, PwC Nigeria

Tel: +234 1 271 1700