
An average of 1.2% of Australian workers clicked on a phishing link each month over the last 12 months, a 140% increase since the last count, according to Netskope Threat Labs.
Nearly one in five clicks (19%) were driven by phishing messages impersonating Microsoft or Google, with attackers aiming to steal workers' corporate credentials and access company systems and sensitive data. Phishing efforts impersonating gaming platforms, personal cloud services or government services were also particularly effective at generating clicks, with threat actors also targeting personal accounts holding valuable data.
Ray Canzanese, Director of Netskope Threat Labs explains: “The general availability of AI tools continues to enable threat actors to refine their social engineering techniques, and sophisticated phishing campaigns and convincing voice or video deepfakes are now regularly reported as the source of high profile data breaches. However, deliberate data theft is only part of the picture. Our data shows that the use of AI in the workplace is also a major risk vector for accidental data loss.”
Today, 87% of organisations in Australia have employees using genAI applications on a monthly basis, up from 75% nine months ago. ChatGPT (73%), Google Gemini (52%), and Microsoft Copilot (44%) are the most popular applications, but ChatGPT usage in Australia declined between May and June, its first decline since the application's launch in 2022, as Gemini and Copilot continue to gain ground. DeepSeek is the application local organisations block the most (69%), while almost a third (30%) are also banning Grok.
GenAI's inherent risk materialises through regular and largely unintentional attempts by employees to leak sensitive data in prompts or documents sent to genAI apps for work purposes, with intellectual property (42%), source code (31%) and regulated data (20%) most often exposed in such instances. This risk is compounded by the fact that over half of local workers (55%) use personal genAI accounts for work purposes, hindering security teams' ability to monitor whether sensitive data is leaking via genAI apps.
Australian organisations are starting to get control over the risk, deploying company-approved genAI apps to their workforce to centralise and monitor usage, and apply data security guardrails. Authorising safe channels, however, cannot eradicate all of the risks of shadow AI (which refers to deployments or usage unknown to corporate IT). As AI adoption in Australia matures, employees increasingly test AI tools such as genAI platforms and LLM interfaces, introducing new data security risks.
GenAI models, platforms and AI agents can directly connect to, and feed from, enterprise data sources to train or complete their tasks, so their permission levels need to be restricted to ensure sensitive data is not exposed. Some LLM interfaces also ship with weak security defaults that can compromise data if they are not hardened by security teams before use. In Australia, almost one in three organisations (29%) are using genAI platforms and nearly one in four (23%) are using LLM interfaces, and with adoption growing quickly, security teams must prioritise detecting this usage and eliminating shadow AI to avoid data security incidents.
Canzanese adds: “We expect more individuals within organisations to experiment with generative or agentic AI deployments, which presents significant shadow AI and data security risks. We are seeing positive signs from Australian organisations, who have been proactive in deploying data loss prevention to avoid data leaks via genAI applications specifically, but they should now turn their attention to detecting and securing emerging and future AI systems so that teams can enjoy the benefits of AI innovation without leaving the front door wide open.”
The company also concluded that:
- Workers based in Australia continue to use personal cloud applications at work, with regulated data (54%), intellectual property (28%) and passwords and keys (9%) being the types of data most often involved in data leaks to personal cloud apps.
- 0.2% of workers based in Australia encounter malicious content such as infected files and malware each month.