Scammers Using Generative AI in Identity Attacks and Fraud


Threat actors are using generative AI to fuel identity attacks and fraud, according to a new report released last week by Transmit Security.

The report, The GenAI-Fueled Threat Landscape: A Dark Web Research Report by Transmit Security, draws on continuous investigation by a team of fraud analysts in the Transmit Security Research Lab. It reveals that the powerful capabilities of blackhat generative AI platforms are helping fraudsters create fraud campaigns with unprecedented sophistication, speed and scale.

There has been a significant rise in sophisticated scams and fraud cases in Australia and New Zealand. The Australian Payment Fraud Report indicated a 35.6% increase in payment card fraud in the 12 months to June 2023, amounting to AUD 677.5 million. Additionally, the New Zealand Banking Ombudsman has highlighted an increase in sophisticated unauthorised payment scam cases, costing New Zealanders over NZD 200 million annually. Fraudsters are using new tools to create more realistic attacks that would previously have been very difficult to execute, given the effort required to get the language, look and feel right. The report covers:

Proliferation of GenAI Tools
  • Ease of Access and Use: Blackhat generative AI tools like FraudGPT and WormGPT are easily accessible on the dark web and require minimal skills to use. This lowers the barrier for novice fraudsters to launch sophisticated attacks.
  • Advanced Fraud Capabilities: These tools automate the creation of malicious code, data harvesting, and the execution of highly deceptive fraud campaigns, increasing the volume, velocity, and variety of attacks.
Enhanced Fraud Techniques
  • Automated Pentesting: Generative AI tools can identify enterprise vulnerabilities quickly and efficiently, enabling fraudsters to exploit security gaps.
  • Creation of Synthetic Identities: Fraudsters use generative AI to generate synthetic identity data and high-quality fake IDs that bypass security checks, including AI-driven identity verification.
Dark Web Marketplaces
  • Robust Ecosystem: These marketplaces offer services like remote desktop protocol (RDP) access and credit card checkers, along with seller ratings and escrow services to assure buyers of product efficacy. This ecosystem supports a wide range of fraudulent activities.
Deepfakes and Voice Cloning
  • Realistic Deceptions: Video and voice deepfakes are used to lure victims into scams and to evade voice authentication systems, making fraudulent activity harder for organisations to detect.

“Fraudsters are doing a much better job working together as a community, collaborating and sharing information on generative AI tools and techniques,” said Transmit Security’s Chief Identity Officer David Mahdi. “This collaborative approach among fraudsters makes it imperative for IT leaders to arm themselves with information and leverage advanced technologies to stay ahead.”

Transmit Security says that to fortify security, organisations should implement converged fraud prevention, identity verification, and customer identity management services powered by generative AI and machine learning. A unified, intelligent defence is crucial to removing data silos, closing security gaps, and detecting and stopping today’s advanced fraud with accuracy and speed.
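To illustrate the “unified defence” idea in the recommendation above, the minimal sketch below combines signals from otherwise siloed services (device reputation, document verification, behavioural analysis) into a single risk decision. All names, weights and thresholds here are hypothetical illustrations, not part of any Transmit Security product or API.

```python
# Hypothetical sketch of a converged risk decision: signals from separate
# fraud, identity-verification, and behavioural services feed one policy,
# rather than each system deciding in its own silo.
from dataclasses import dataclass


@dataclass
class Signals:
    device_reputation: float   # 0.0 (trusted)  .. 1.0 (suspicious)
    doc_verification: float    # 0.0 (verified) .. 1.0 (failed/synthetic)
    behaviour_anomaly: float   # 0.0 (typical)  .. 1.0 (bot-like)


def risk_score(s: Signals) -> float:
    """Weighted blend of the three signals; illustrative weights only."""
    return (0.3 * s.device_reputation
            + 0.4 * s.doc_verification
            + 0.3 * s.behaviour_anomaly)


def decide(s: Signals, deny_at: float = 0.7, challenge_at: float = 0.4) -> str:
    """Map the combined score to a single allow / step-up / deny policy."""
    score = risk_score(s)
    if score >= deny_at:
        return "deny"
    if score >= challenge_at:
        return "step-up"   # e.g. require additional identity verification
    return "allow"
```

The design point is that a synthetic identity which narrowly passes each individual check can still accumulate enough combined risk to trigger a step-up challenge or denial.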

You can read the full report here.
