
By Katherine Boiciuc, EY Regional Chief Technology and Innovation Officer, Oceania
In boardrooms across Australia, executives are asking "Is it secure?" whenever artificial intelligence (AI) implementation comes up. While the rest of the world races ahead with AI, Australians are tapping the brakes – and hard.
Our recently released EY Global AI Sentiment Index shows that cybersecurity fears and a lack of trust are two major reasons why we're lagging. The consequences for our economic future could be severe if we don't tackle this head-on.
There is a trust breakdown. Australians are deeply sceptical about AI security, with 74% ranking security failures as their top concern – well above the global figure of 63%. This security anxiety has dragged our overall AI sentiment score down to a mere 54 out of 100, while the global average sits at a much healthier 70.
Even more striking, 80% of us worry about AI-generated fakery and misinformation – more than almost any other country surveyed. That helps explain why only 37% of Australians believe the benefits of using AI outweigh its risks, compared with 51% globally. Together, these figures point to a specific, security-focused breakdown in trust.
The fear of being fooled runs deep. With sophisticated AI-generated content now hard to distinguish from material created by a human, Australians are right to question what this means for information integrity.
For businesses, this is even more concerning. Imagine AI-cloned voices of executives authorising fraudulent transfers, manipulated video calls giving false directives, and fabricated communications redirecting payments. These aren’t far-fetched scenarios but real threats that keep cybersecurity professionals awake at night.
Unlike security threats that target systems, AI-powered attacks target trust itself – and rebuilding that trust is far harder than restoring compromised data.
Our research also suggests a generational divide: 63% of Australian Gen Z and 60% of Millennials say they're comfortable with AI, compared with 48% of Baby Boomers and 52% of Gen X.
I put this down to hard-earned wisdom: those who've witnessed decades of technological change often ask the most penetrating questions about security – questions that deserve proper answers, not dismissal.
Tackling these concerns requires rigorous governance and deliberate, well-executed security strategies.
But there is an economic price for hesitation. When just 16% of Australians have a good understanding of AI (a modest improvement from 13% last October), we have a knowledge gap compounding our security fears. Fewer than half of us (48%) feel comfortable using AI day-to-day, well below the global figure of 63%.
Make no mistake – this gap threatens Australia's economy. While we cautiously deliberate, global competitors are moving forward with AI, and our laggard status could leave us struggling to catch up. Security concerns that prevent adoption today could ironically leave us more vulnerable tomorrow, as we rush to implement unfamiliar systems under competitive pressure.
Here are five practical security approaches that work:
- Show exactly how you’re protecting data, limiting access, and preserving integrity. Document this in plain language that non-specialists can grasp.
- Run regular security exercises and third-party audits. They aren't just good practice – they're powerful trust builders.
- Because misinformation is such a major concern for Australians, invest in both detection technology and verification protocols, particularly for high-stakes situations.
- Deliver security education. It significantly boosts AI confidence across generations, so focus on practical, role-specific training that people will use day-to-day.
- Develop and share your plan for communicating during a security crisis. Nothing builds trust like showing you're prepared for when things go wrong.
Australia’s heightened security awareness doesn’t have to be our Achilles’ heel. It could become our strategic advantage – if we channel it productively. By developing security-first AI systems, Australian organisations can build technology that is powerful and trustworthy.
The link between security confidence and AI uptake is unmistakable in our data. The organisations that close this gap will be those that treat cybersecurity not as a compliance checkbox but as the foundation of their strategy, particularly when it comes to AI.
The views expressed in this article are the views of the author, not Ernst & Young. This article provides general information, does not constitute advice and should not be relied on as such. Professional advice should be sought prior to any action being taken in reliance on any of the information. Liability limited by a scheme approved under Professional Standards Legislation.