Eight protocols for building an effective AI security program

Written by Professor Jason Lau, Board Director, ISACA.

The future is exciting for organisations of all sizes – from large enterprises to SMEs – thanks to the vast potential of generative AI. But opportunity carries the burden of risk, which weighs heavily on the shoulders of many CISOs across the globe.

Indeed, the rapid adoption of AI across organisations has outpaced the development of comprehensive frameworks to manage the associated risks effectively. The allure of AI’s transformative potential has led to its widespread integration across industries, from healthcare and finance to manufacturing and retail. However, this swift embrace has, in some instances, overshadowed the critical need for robust risk management strategies.

To alleviate this enormous weight and navigate the potential perils of this new technology, there are eight foundational protocols for building an AI security program, adaptable to organisations of all sizes, as outlined in ISACA's white paper, The Promise and Peril of the AI Revolution: Managing Risk.

1. Trust but Verify

Misplaced trust in a tool that is perpetually evolving is risky. Despite this, many organisations blindly accept the first AI-generated outputs they receive. For example, an organisation may have implemented earlier versions of OpenAI's models within its operational environment; simply moving from GPT-3 to GPT-4 can yield results of vastly different depth, and earlier versions also have documented security vulnerabilities. It is essential to conduct a thorough audit, and to schedule routine re-audits, of existing AI systems to identify vulnerabilities, assess compliance with industry standards and regulations, and evaluate the ethical implications of AI applications.

While burdensome, the task of perpetually validating AI-generated output is essential, with systems and mechanisms in place to approve artificially developed work.
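
As a concrete illustration, here is a minimal sketch of such an approval mechanism in Python. All names are hypothetical; the point is simply that nothing AI-generated is released without a recorded human sign-off.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIOutput:
    """An AI-generated artifact awaiting human review (hypothetical structure)."""
    content: str
    model: str                      # which model produced it, e.g. "gpt-4"
    approved: bool = False
    reviewer: str | None = None
    reviewed_at: datetime | None = None

def approve(output: AIOutput, reviewer: str) -> AIOutput:
    """Record an explicit human sign-off on the output."""
    output.approved = True
    output.reviewer = reviewer
    output.reviewed_at = datetime.now(timezone.utc)
    return output

def publish(output: AIOutput) -> None:
    """Refuse to release AI-generated work that lacks a recorded approval."""
    if not output.approved:
        raise PermissionError("AI-generated output has not been human-approved")
    print(f"Released: approved by {output.reviewer} at {output.reviewed_at}")

draft = AIOutput(content="Draft incident summary ...", model="gpt-4")
publish(approve(draft, reviewer="j.smith"))   # succeeds only after sign-off
```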

2. Design Acceptable Use Policies

Procedures and rules must be developed to enforce safe and ethical AI use. As employees increasingly incorporate AI into their everyday work, risk increases around the sharing of proprietary company data.

Policies should:

  • Be modified and adapted to meet evolving local regulatory requirements.
  • Be pressure tested against unintended bias and discrimination.
  • Incorporate approval chains and review processes to circumvent insider threats.
  • Be clearly communicated to employees via training on the appropriate and inappropriate use of these tools (a policy-as-code sketch follows this list).
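
One way to make parts of such a policy enforceable, rather than purely documentary, is to express them as configuration that an internal AI gateway checks before a prompt leaves the organisation. The sketch below is hypothetical; every field name and value is invented for illustration.

```python
# Hypothetical acceptable-use policy expressed as enforceable configuration.
# Every field name and value is illustrative, not a standard schema.
AI_ACCEPTABLE_USE_POLICY = {
    "version": "2025-01",
    "jurisdiction_overrides": {           # adapt to evolving local regulation
        "EU": {"require_dpia": True},     # e.g. a GDPR data-protection impact assessment
        "SG": {"require_dpia": False},
    },
    "prohibited_prompt_data": [           # proprietary data that must never be shared
        "customer_pii",
        "internal_source_code",
        "unreleased_financials",
    ],
    "approval_chain": ["team_lead", "security_review"],   # insider-threat control
    "bias_review_required_for": ["hiring", "credit_scoring"],
}

def prompt_allowed(data_tags: set[str], approvals: set[str]) -> bool:
    """Check a prompt's data-classification tags and approvals against the policy."""
    if data_tags & set(AI_ACCEPTABLE_USE_POLICY["prohibited_prompt_data"]):
        return False                      # block proprietary-data leakage outright
    return set(AI_ACCEPTABLE_USE_POLICY["approval_chain"]) <= approvals

print(prompt_allowed({"marketing_copy"}, {"team_lead", "security_review"}))  # True
print(prompt_allowed({"customer_pii"}, {"team_lead", "security_review"}))    # False
```
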
3. Designate an AI Lead

Until the day comes when enterprise C-suites include a Chief of Artificial Intelligence, organisations should appoint a project manager to track and document the company's AI evolution, with input from divisions including cybersecurity, data privacy, confidentiality, legal, procurement, risk and audit. Accountability is key, and this role should be the subject matter expert who helps govern and guide the AI lifecycle from design through deployment.

Historical company records are essential for optimal utilisation of AI technology. These records help the AI system to analyse and glean insights from past mistakes and resource misallocations. Further, AI-driven decisions can be more easily explained, as the steps for implementation become replicable.

4. Perform a Cost Analysis

It is important to conduct a thorough cost-benefit analysis for AI. Is it more cost-effective to build or to buy AI tools?

The organisational factors influencing this decision extend beyond the cost-effectiveness of the tool itself: the analysis should also weigh the expenses associated with security measures against potential productivity enhancements and workforce optimisation.
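
As a simple illustration, a first-pass comparison might net expected annual benefits against the total cost of ownership for each option. All figures below are invented.

```python
# Illustrative build-vs-buy comparison with invented annual figures (USD).
# Net value = productivity benefit - (tool/build cost + security + maintenance).
def net_value(benefit: float, tool_cost: float, security: float, upkeep: float) -> float:
    return benefit - (tool_cost + security + upkeep)

buy = net_value(benefit=500_000, tool_cost=120_000, security=40_000, upkeep=20_000)
build = net_value(benefit=500_000, tool_cost=350_000, security=60_000, upkeep=90_000)

print(f"Buy:   {buy:>9,.0f}")    # Buy:     320,000
print(f"Build: {build:>9,.0f}")  # Build:         0
```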

5. Adapt and Create Cybersecurity Programs

An AI security program’s effectiveness is intrinsically linked to the robustness of an organisation’s overarching cybersecurity and privacy framework. While AI security programs address specific challenges posed by AI technologies, they should not operate in isolation. Instead, they should be integrated components of a comprehensive cybersecurity strategy.

For instance, an organisation might employ advanced AI tools for threat detection and response, but if there are gaps in the broader cybersecurity framework, such as weak access controls or inadequate data encryption, the organisation remains vulnerable. A real-world example can be drawn from healthcare, where AI is used to predict patient outcomes, but a lack of robust cybersecurity can lead to data breaches, compromising privacy and trust.

Ensuring AI-related risk considerations and security solutions are integrated from the outset will go a long way to minimising potentially costly technology overhauls.

6. Mandate Audits and Traceability

Enterprises will need better auditing and traceability capabilities around their AI models to understand where an AI tool pulls its data from and how it arrives at its decisions. Where did the source data come from? Has that data been manipulated by the AI, or by the human interacting with it? Is systemic bias a factor?

These questions are integral in evaluating the trustworthiness of AI tools.
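
A minimal sketch of what such traceability might look like in practice follows, assuming a hypothetical logging wrapper around model calls (all names invented): each call is recorded with the model version, hashes of the prompt and response, and the data sources consulted, so an auditor can later reconstruct how an output was produced.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model: str, prompt: str, sources: list[str], response: str) -> dict:
    """Build an append-only audit entry linking an AI output to its inputs."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,                                   # which model/version answered
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "data_sources": sources,                          # where the input data came from
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }

def log_call(path: str, record: dict) -> None:
    """Append the record as one JSON line; an immutable store would be stronger."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a (hypothetical) model call for later audit.
log_call("ai_audit.jsonl",
         audit_record("gpt-4", "Summarise Q3 incidents", ["incident_db"], "Three incidents ..."))
```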

7. Develop a Set of AI Ethics

The use of AI tools presents a number of potential ethical challenges.

Across many sectors, AI can deliver services or products significantly faster than traditional methods. For professionals who charge hourly rates, this makes a re-evaluation of pricing structures and client agreements important.

There are instances where cutting-edge software can identify AI-generated work; however, it may not capture every scenario. Organisations must address the ethics surrounding the use of AI and factor the resulting policies into standard business operations.

Ethical guidelines must also include bias detection and mitigation. Transparency and accountability in AI development processes are equally paramount, enabling scrutiny and fostering trust in AI systems.
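
As one concrete example of a bias screen, the sketch below computes a disparate-impact ratio between two groups' favourable-outcome rates. The data is invented, and the 0.8 threshold follows the conventional "four-fifths rule" used as a rough screening heuristic.

```python
# Disparate-impact ratio: a common first-pass bias screen.
# Invented data; the 0.8 threshold follows the conventional "four-fifths rule".
def favourable_rate(outcomes: list[int]) -> float:
    """Share of favourable outcomes (1 = favourable, 0 = unfavourable)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower favourable-outcome rate to the higher one."""
    rate_a, rate_b = favourable_rate(group_a), favourable_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

ratio = disparate_impact([1, 1, 0, 1, 1, 0, 1, 1], [1, 0, 0, 1, 0, 0, 1, 0])
print(f"Disparate-impact ratio: {ratio:.2f}")   # 0.50 here
if ratio < 0.8:
    print("Potential bias: route the model for human review and mitigation")
```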

8. Societal Adaptation

AI is infiltrating most elements of society including workplaces, education facilities and professional services such as recruitment.

To reduce societal risks, organisational procedures must undergo continual reassessment. For example, academics might need to modify their evaluation and marking methods, employers may need to reconsider their performance appraisal benchmarks, and the public must be educated about AI and what it entails in order to avoid being drawn into disinformation campaigns.

It is clear that AI offers unprecedented opportunities for innovation and efficiency, but it is imperative for organisations to approach its adoption with caution and diligence, ensuring that the potential risks are adequately managed so that AI's full potential can be harnessed responsibly and sustainably.

By adopting these eight core protocols, organisations can effectively manage the risks associated with AI, ensure compliance with ethical standards, and leverage AI technologies to their full capacity while safeguarding against potential pitfalls.
