Research Reveals Need for Greater Cybersecurity Team Involvement in AI Solutions


Almost half of companies exclude cybersecurity teams from the development, onboarding, and implementation of AI solutions, according to new ISACA research.

Only around a quarter (26%) of cybersecurity professionals or teams in Oceania are involved in developing policy governing the use of AI technology in their enterprise, and nearly half (45%) report no involvement in the development, onboarding, or implementation of AI solutions, according to the recently released 2024 State of Cybersecurity survey report from global IT professional association ISACA.

In response to new questions in the Adobe-sponsored annual study, security teams in Oceania said they are primarily using AI for:

  • Automating threat detection/response (36% vs 28% globally);
  • Endpoint security (33% vs 27% globally);
  • Automating routine security tasks (22% vs 24% globally); and
  • Fraud detection (6% vs 13% globally).

Jamie Norton, an Australia-based cybersecurity expert and member of ISACA’s Board of Directors, emphasised the critical role of cybersecurity professionals in AI policy development.

“ISACA’s findings reveal a significant gap. Only around a quarter of cybersecurity professionals in Oceania are involved in AI policy development, a concerning statistic given the increasing presence of AI technologies across industries,” he said. “The integration of AI into cybersecurity and broader enterprise solutions must be guided by responsible policies. Cyber professionals are essential in this process to ensure that AI is implemented securely, ethically and in compliance with regulatory standards. Without their expertise, organisations are exposed to unnecessary vulnerabilities.”

To support cybersecurity professionals in engaging with AI policy creation and integration, ISACA has developed a comprehensive paper, Considerations for Implementing a Generative Artificial Intelligence Policy, alongside other resources and certifications.

“Cybersecurity teams are uniquely positioned to develop and safeguard AI systems, but it’s important that we equip them with the tools to navigate this transformative technology,” added Norton. “ISACA’s AI policy paper offers a valuable roadmap, addressing critical questions such as how to secure AI systems, adhere to ethical principles and set acceptable terms of use.”

Exploring the Latest AI Developments

In addition to the 2024 State of Cybersecurity survey report findings on AI, ISACA has been developing AI resources to help cybersecurity and other digital trust professionals navigate this new technology.

This includes an EU AI Act white paper. Enterprises need to be aware of the timeline and action items involved with the EU AI Act, which sets requirements for certain AI systems used in the European Union and bans certain AI uses; most of its provisions will apply from August 2, 2026. ISACA’s new white paper, Understanding the EU AI Act: Requirements and Next Steps, recommends some key steps, including instituting audits and traceability, adapting existing cybersecurity and privacy policies and programs, and designating an AI lead who can be tasked with tracking the AI tools in use and the enterprise’s broader approach to AI.
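As a rough illustration of what tracking AI tools in use might look like in practice, the short Python sketch below models a minimal AI tool inventory that an AI lead could maintain. The record fields, risk tiers, and review rule are assumptions made for illustration; they are not drawn from the ISACA white paper or the EU AI Act itself.

```python
# Minimal sketch of an AI tool inventory an AI lead might keep.
# Field names, risk tiers and the review rule are illustrative
# assumptions, not taken from the ISACA white paper or the EU AI Act.
from dataclasses import dataclass


@dataclass
class AIToolRecord:
    name: str                 # e.g. an internal chatbot or a vendor model
    owner: str                # business owner accountable for the tool
    purpose: str              # what the tool is used for
    personal_data: bool       # does it process personal data?
    risk_tier: str            # e.g. "minimal", "limited", "high" (assumed tiers)
    audit_log_enabled: bool   # supports traceability of inputs and outputs


def needs_review(tool: AIToolRecord) -> bool:
    """Flag tools that warrant closer governance attention (illustrative rule)."""
    return tool.risk_tier == "high" or (tool.personal_data and not tool.audit_log_enabled)


inventory = [
    AIToolRecord("hr-screening-assistant", "HR", "CV triage", True, "high", True),
    AIToolRecord("marketing-copy-helper", "Marketing", "draft copy", False, "minimal", False),
]

for tool in inventory:
    if needs_review(tool):
        print(f"Review required: {tool.name}")
```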

The second resource deals with authentication in the deepfake era. Cybersecurity professionals should understand the advantages and risks of AI-driven adaptive authentication, says a new ISACA resource, Examining Authentication in the Deepfake Era. AI can improve security when used in adaptive authentication systems that adjust to each user’s behaviour, making it harder for attackers to gain access; however, AI systems can also be manipulated through adversarial attacks, are susceptible to algorithmic bias, and raise ethical and privacy concerns. Other developments, including research into integrating AI with quantum computing that could have implications for cybersecurity authentication, should also be monitored, according to the paper.
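To make the idea of adaptive authentication more concrete, the sketch below shows one way behavioural signals could feed a risk score that decides whether to step up authentication. The signals, weights, and thresholds are illustrative assumptions, not details from the ISACA paper or any particular product.

```python
# Minimal sketch of an adaptive (risk-based) authentication decision.
# All signals, weights and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class LoginContext:
    """Signals collected at login time for one user."""
    new_device: bool        # device not previously seen for this account
    geo_distance_km: float  # distance from the user's usual login location
    login_hour: int         # 0-23, local time of the attempt
    failed_attempts: int    # recent failed attempts for this account


def risk_score(ctx: LoginContext, usual_hours: range) -> float:
    """Combine behavioural signals into a 0..1 risk score (illustrative weights)."""
    score = 0.0
    if ctx.new_device:
        score += 0.4
    if ctx.geo_distance_km > 500:
        score += 0.3
    if ctx.login_hour not in usual_hours:
        score += 0.1
    score += min(ctx.failed_attempts, 5) * 0.05
    return min(score, 1.0)


def authentication_decision(ctx: LoginContext) -> str:
    """Step up authentication as risk grows instead of applying one fixed check."""
    score = risk_score(ctx, usual_hours=range(7, 20))  # assumed typical hours
    if score < 0.3:
        return "allow"            # low risk: password alone
    if score < 0.7:
        return "require_mfa"      # medium risk: ask for a second factor
    return "block_and_review"     # high risk: deny and flag for review


if __name__ == "__main__":
    attempt = LoginContext(new_device=True, geo_distance_km=1200.0,
                           login_hour=3, failed_attempts=2)
    print(authentication_decision(attempt))  # -> "block_and_review"
```

The design point the paper makes is the same one this sketch illustrates: because the check adapts to each user's behaviour rather than applying a single static rule, an attacker who steals a password alone still faces additional hurdles when the surrounding signals look unusual.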
