
The report shows how deepfakes have moved beyond hype into real-world exploitation, undermining digital trust, exposing companies to new risks, and accelerating the business models of cybercriminals.
Andrew Philp, ANZ Field CISO at Trend Micro, said: “AI-generated media is not just a future risk, it’s a real business threat. We’re seeing executives impersonated, hiring processes compromised, and financial safeguards bypassed with alarming ease. This research is a wake-up call—if businesses are not proactively preparing for the deepfake era, they’re already behind. In a world where seeing is no longer believing, digital trust must be rebuilt from the ground up.”
The research found that threat actors no longer need underground expertise to launch convincing attacks. Instead, they are using off-the-shelf video, audio, and image generation platforms, many of which are marketed to content creators, to generate realistic deepfakes that deceive both individuals and organisations. These tools are inexpensive, easy to use, and increasingly capable of bypassing identity verification systems and security controls.
The report outlines a growing cybercriminal ecosystem where these platforms are used to execute convincing scams, including:
- CEO fraud is becoming harder to detect as attackers use deepfake audio or video to impersonate senior leaders in real-time meetings.
- Recruitment processes are being compromised by fake candidates who use AI to pass interviews and gain unauthorised access to internal systems.
- Financial services firms are seeing a surge in deepfake attempts to bypass KYC (Know Your Customer) checks, enabling money laundering through falsified credentials.
The criminal underground is actively trading tutorials, toolkits, and services to streamline these operations. From step-by-step playbooks for bypassing onboarding procedures to plug-and-play face-swapping tools, the barrier to entry is now minimal.
As deepfake-enabled scams grow in frequency and complexity, businesses are urged to take proactive steps to minimise their risk exposure and protect their people and processes. This includes educating staff on social engineering risks, reviewing authentication workflows, and exploring detection solutions for synthetic media.