Threat actors are using generative AI to fuel identity attacks and fraud, according to a new report released last week by Transmit Security.
The report, The GenAI-Fueled Threat Landscape: A Dark Web Research Report by Transmit Security, is the result of ongoing investigation by a team of fraud analysts in the Transmit Security Research Lab. It reveals that the capabilities of blackhat generative AI platforms are helping fraudsters create fraud campaigns at new levels of sophistication, speed and scale.
There has been a significant rise in sophisticated scams and fraud cases in Australia and New Zealand. The Australian Payment Fraud Report indicated a 35.6% increase in payment card fraud in the 12 months to June 2023, amounting to AUD677.5 million. The New Zealand Banking Ombudsman has also highlighted a rise in sophisticated unauthorised payment scams, which cost New Zealanders more than NZD200 million annually. Fraudsters are using new tools to create more realistic attacks that would previously have been difficult to execute because of the effort required to get the language, look and feel right. The report covers:
- Ease of Access and Use: Blackhat generative AI tools like FraudGPT and WormGPT are easily accessible on the dark web and require minimal skills to use. This lowers the barrier for novice fraudsters to launch sophisticated attacks.
- Advanced Fraud Capabilities: These tools automate the creation of malicious code, data harvesting, and the execution of highly deceptive fraud campaigns, increasing the volume, velocity, and variety of attacks.
- Automated Pentesting: Generative AI tools can identify enterprise vulnerabilities quickly and efficiently, enabling fraudsters to exploit security gaps.
- Creation of Synthetic Identities: Fraudsters use generative AI to generate synthetic identity data and high-quality fake IDs that bypass security checks, including AI-driven identity verification.
- Robust Ecosystem: Dark web marketplaces offer services such as remote desktop protocol (RDP) access and credit card checkers, along with high seller ratings and escrow services to ensure product efficacy. This ecosystem supports a wide range of fraudulent activities.
- Realistic Deceptions: Video and voice deepfakes are used to lure victims into scams and evade voice authentication systems, making it harder for organisations to detect fraudulent activity.
Transmit Security says that to fortify security, organisations should implement converged fraud prevention, identity verification, and customer identity management services powered by artificial intelligence, machine learning and generative AI. A unified, smart defence is crucial to removing data silos, closing security gaps, and detecting and stopping today’s advanced fraud with accuracy and speed.
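The report describes this converged approach at the product level rather than in code, but the underlying idea of scoring a session against many identity and behavioural signals in one model, instead of checking each signal in its own silo, can be illustrated with a small, hypothetical sketch. The feature names, thresholds and model choice below are assumptions for illustration only and are not taken from the report or from Transmit Security's platform.

```python
# Illustrative sketch only: an anomaly-based risk score that combines several
# identity and behavioural signals in one model. All features and thresholds
# are hypothetical.
from dataclasses import dataclass
import numpy as np
from sklearn.ensemble import IsolationForest

@dataclass
class SessionSignals:
    device_age_days: float         # how long this device has been seen on the account
    geo_velocity_kmh: float        # implied travel speed since the last login
    failed_logins_24h: int         # recent failed authentication attempts
    doc_verification_score: float  # 0..1 score from an ID-verification step
    voice_match_score: float       # 0..1 score from voice authentication

    def as_vector(self) -> list[float]:
        return [self.device_age_days, self.geo_velocity_kmh,
                float(self.failed_logins_24h),
                self.doc_verification_score, self.voice_match_score]

# Train on sessions previously judged legitimate; the model then flags
# sessions whose combination of signals looks unlike that baseline.
rng = np.random.default_rng(42)
legit_sessions = np.column_stack([
    rng.uniform(30, 720, 5000),    # established devices
    rng.uniform(0, 80, 5000),      # plausible travel speeds
    rng.poisson(0.2, 5000),        # few failed logins
    rng.uniform(0.85, 1.0, 5000),  # strong document scores
    rng.uniform(0.85, 1.0, 5000),  # strong voice scores
])
model = IsolationForest(contamination=0.01, random_state=0).fit(legit_sessions)

def risk_decision(signals: SessionSignals) -> str:
    """Return a coarse decision based on the combined anomaly score."""
    score = model.decision_function([signals.as_vector()])[0]
    if score < -0.05:
        return "block"
    if score < 0.0:
        return "step-up"  # e.g. require additional verification
    return "allow"

# Example: a brand-new device, an implausible travel speed and a weak voice
# match look anomalous together, even though no single check fails outright.
suspect = SessionSignals(device_age_days=0.1, geo_velocity_kmh=900,
                         failed_logins_24h=4,
                         doc_verification_score=0.9, voice_match_score=0.55)
print(risk_decision(suspect))
```

The point of the sketch is only that a single model over combined signals can catch cases where every individual check passes on its own; a production system of the kind the report recommends would draw on far more signals, real labelled outcomes and continuous retraining.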
You can read the full report here.