How cybercriminals are evolving their use of AI in payment fraud
By Debra R. Richardson, MBA, CFE, CAPP
According to IBM, Generative Artificial Intelligence (GenAI) tools are deep-learning models (such as ChatGPT) that can generate high-quality text, images, and other content based on the data they were trained on. AI tools appear to have unlimited capabilities – making most of what you do easier and more efficient, both at home and at work. You and many of your colleagues may be waiting for your companies to approve access to AI tools. Fraudsters are not waiting. AI tools are helping fraudsters' requests appear legitimate, and it's proving successful in perpetrating payment fraud.
In this three-part series, we’re looking at real cases of payment fraud using AI and identifying how you can mitigate that fraud. This final article looks at how GenAI is used maliciously by fraudsters to perpetrate payment fraud.
Fraudsters' Struggle to Make Their Scam Emails Appear Legitimate
Fraudsters target the Accounts Payable, Procurement and Vendor teams because they know these teams have access to change vendor remittance information and divert vendor payments. Business Email Compromise scams start with an email, then use social engineering tactics to make the emails appear legitimate. Historically, fraudulent emails had a telltale flaw: where the perpetrators were not native English speakers, the emails contained grammatical errors.
As a result, cybersecurity awareness training to spot phishing emails always includes grammatical errors as a red flag for fraud. Always evolving, fraudsters tried to solve for these errors. In September of 2021, Threatpost published the article 'BEC Scammers Seek Native English Speakers on Underground', highlighting that a fraudster was looking for native English speakers. According to the article, native English speakers are in high demand because fraudsters primarily target North American and European markets. They want their phishing emails to appear legitimate – which means no grammatical errors.
Fraudsters now have more than native English speakers: they have malicious large language models (LLMs). LLMs can learn patterns and relationships from large volumes of textual data to understand the structure of a language, then use that understanding to generate new text – in that language – with no grammatical errors. ChatGPT, FraudGPT and WormGPT are examples of LLMs.
From ChatGPT to FraudGPT and WormGPT
According to the Cyber Security News article 'ChatGPT, FraudGPT, and WormGPT Plays A Vital Role in Social Engineering Attacks', GenAI models are playing a key role in social engineering attacks. FraudGPT and WormGPT are illicit adaptations of OpenAI's ChatGPT, manipulated for malicious purposes. FraudGPT, emerging in July 2023, is specifically designed for cybercrime. It facilitates large-scale phishing, malware distribution, and hacking by automating these processes with a high degree of personalization and sophistication. It is sold on the dark web with subscription plans, highlighting its accessibility to cybercriminals.
WormGPT, built on the GPT-J model released in 2021, focuses on crafting and disseminating malware. It excels in creating convincing phishing emails and exploiting software vulnerabilities, making it a potent tool for spreading malicious code and facilitating network breaches.
How Can You Mitigate AI-Generated Fraudulent Emails to Avoid Payment Fraud?
Get the vendor process out of email. If you currently process vendor requests via email, including collecting sensitive data (such as vendor remittance details), use a more secure method. Solutions can be as simple as moving sensitive steps out of risky email by automating the vendor setup and maintenance process.
- Secure Email – Requires vendors to sign in before they can view your email requesting sensitive data, whether the data is collected via an attached form or fields embedded in the email.
- eInvoicing Portal – If you already use an eInvoicing portal to process invoices, and it has messaging and upload functions that vendors must log in to access, use it to collect sensitive data.
- Vendor Self-registration Portal – These portals can handle vendor inquiries and vendor onboarding, and allow vendors to maintain their own information.
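As an illustration of the kind of vendor validation these controls automate, here is a minimal sketch of one possible check: flagging an emailed remittance-change request when the sender's domain does not match any vendor on file. The vendor list, keywords, and function names are hypothetical examples, not part of any specific product.

```python
# Hypothetical sketch: flag remittance-change emails whose sender domain
# is not on file in the vendor master. All data below is illustrative.

VENDOR_MASTER = {
    "Acme Supplies": "acmesupplies.com",
    "Globex Corp": "globex.com",
}

# Phrases that suggest the email is asking to change payment details.
REMITTANCE_KEYWORDS = ("remittance", "bank account", "routing number", "payment details")


def sender_domain(address: str) -> str:
    """Return the domain portion of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()


def flag_remittance_request(sender: str, subject: str, body: str) -> bool:
    """Return True if the email asks to change payment details but the
    sender's domain does not match any vendor on file."""
    text = f"{subject} {body}".lower()
    if not any(kw in text for kw in REMITTANCE_KEYWORDS):
        return False  # not a payment-detail request; nothing to flag
    return sender_domain(sender) not in VENDOR_MASTER.values()
```

A flagged request should never be resolved using contact details from the email itself; instead, route it to a callback verification using the phone number already on file for that vendor. Note that a matching domain alone proves little (domains can be spoofed), so a check like this supplements, rather than replaces, the authentication controls above.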
Fraudsters are improving their tools to make their fraudulent requests appear legitimate in order to perpetrate payment fraud. And it's working. The best way to mitigate GenAI fraudulent emails is to remove some or all of the vendor process from risky email. Implement authentication techniques, internal controls, best practices, and vendor validations in the accounts payable, vendor setup and maintenance, and payment processes as needed to prevent payment fraud.