How To Keep Up With Security As AI Deployments Increase In Complexity And At Pace

By Tony Burnside, VP APJ, Netskope

For Australian organisations, the conversation around AI security has shifted rapidly. Only recently, the primary concern was employees copy-pasting sensitive corporate data into public generative AI (genAI) tools. Those risks remain, but the frontier of AI risk is moving at pace: we are entering an era of agentic AI and private AI deployments, in which autonomous agents act on behalf of the organisation and machine-to-machine communications accelerate, often without direct human oversight.

As these agents are empowered to take actions, execute code, and access and move data across enterprise sources, they create a complex web of obscured (and therefore unsecured) data flows. Securing this new ecosystem means addressing the vulnerabilities unique to autonomous machine-to-machine interactions.

Securing machine traffic

As agentic AI deployments accelerate within Australian organisations, so does the volume of non-human traffic. AI agents use new protocols such as the Model Context Protocol (MCP) to interact with internal databases and SaaS applications and collect the data they need to complete their tasks. Traditional enterprise security is not equipped to provide visibility into, or govern, these machine-to-machine interactions at scale, and manual oversight is impossible when those interactions happen at the speed of inference.
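
As a rough illustration of what searchable visibility into this traffic means in practice, the Python sketch below wraps an agent framework's tool-invocation hook so that every machine-to-machine call lands in a JSON-lines audit log. This is a minimal sketch, not a product pattern: the call_tool function it decorates is a hypothetical stand-in, not a real MCP client API.

```python
import json
import logging
import time
from typing import Any, Callable

# Minimal audit-logging sketch for agent tool calls; the wrapped function
# is assumed to be whatever hook your agent framework exposes.
logging.basicConfig(filename="agent_audit.jsonl",
                    level=logging.INFO, format="%(message)s")

def audited(call_tool: Callable[..., Any]) -> Callable[..., Any]:
    def wrapper(agent: str, server: str, tool: str, **args: Any) -> Any:
        result = call_tool(agent, server, tool, **args)
        logging.info(json.dumps({
            "ts": time.time(),   # when the interaction happened
            "agent": agent,      # which agent acted
            "server": server,    # which MCP server it reached
            "tool": tool,        # which tool it invoked
            "args": args,        # what it asked for
        }))
        return result
    return wrapper
```

In production this interception belongs in an inline proxy rather than in the agent's own process, so that a compromised agent cannot simply bypass its own logging.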

Security teams aiming to address this widening blind spot should focus on three capabilities:

  • Monitor and secure agentic traffic in real time. Identify and monitor active agents, and the traffic paths between those agents and the MCP servers, tools and data sources they communicate with. For compliance reasons, organisations must be able to see and search every agentic interaction happening in their environment.
  • Assess MCP servers’ security. Understand the risk profile of public MCP servers before approving their use in agentic workflows, and automate these checks.
  • Expand zero trust and data protection to non-human interactions. Apply least-privilege principles to AI agents so they cannot misuse sensitive data, and block any unauthorised MCP communication. Data protection policies should act as a safety net, blocking the exfiltration of sensitive data should an agent attempt an unauthorised data movement (a minimal sketch of such a policy gate follows this list).
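
What might such an enforcement point look like? The sketch below combines the three capabilities into a single gate: an allowlist of MCP servers that have passed risk assessment, per-agent least-privilege tool scopes, and a toy data protection check. The server names, scopes and patterns are illustrative assumptions, not a real policy set.

```python
import re

# Hedged sketch of a policy gate for agent tool calls. Every value here
# is an illustrative assumption; real policies come from the MCP risk
# assessments and DLP rules discussed above.

ALLOWED_SERVERS = {"mcp://crm.internal", "mcp://wiki.internal"}  # assessed and approved

AGENT_SCOPES = {                       # least privilege, per agent
    "invoice-bot": {"crm.read_invoice"},
    "report-bot": {"wiki.search", "wiki.read"},
}

# Toy patterns (TFN-style and card-style numbers) standing in for real DLP.
SENSITIVE = re.compile(r"\b\d{3} \d{3} \d{3}\b|\b(?:\d{4}[ -]?){3}\d{4}\b")

def authorise(agent: str, server: str, tool: str, payload: str) -> None:
    """Raise before an unauthorised call is allowed to proceed."""
    if server not in ALLOWED_SERVERS:
        raise PermissionError(f"{server} has not passed MCP risk assessment")
    if tool not in AGENT_SCOPES.get(agent, set()):
        raise PermissionError(f"{agent} is not scoped to call {tool}")
    if SENSITIVE.search(payload):
        raise PermissionError("outbound payload matches a sensitive-data pattern")
```

Calling authorise() on every tool invocation means least privilege and data protection hold even when an agent itself has been manipulated into attempting an unauthorised movement.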

Private AI deployments: the latest expansion of shadow AI

Private AI deployments are proliferating, particularly in highly regulated industries where strict regulations and data sovereignty mandates preclude the use of public systems. Counter-intuitively, securing these deployments can be challenging precisely because the traffic generated by private AI apps or LLMs never leaves the organisation: cloud-based security proxies never see that traffic, and therefore cannot secure it. The fact that these interactions and autonomous data flows occur only internally does not mean they should go unmonitored and ungoverned, especially when they involve high-value databases or automate business-critical workflows.

To close this gap, security teams need to extend the capabilities designed to secure public AI directly into privately hosted AI environments, whether on-premises or in virtual private clouds.
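
What extending those capabilities inward can look like, in miniature: a gateway function that applies the same inspection to prompts and responses travelling to a privately hosted model. The endpoint URL and response shape below are assumptions made for the sketch, not a known API.

```python
import requests

PRIVATE_LLM = "http://llm.internal/v1/generate"  # hypothetical internal endpoint

def inspect_content(text: str) -> None:
    """Stand-in for the same DLP and policy checks applied to public AI
    traffic (raise to block). Deliberately left empty in this sketch."""

def generate(prompt: str) -> str:
    inspect_content(prompt)                  # inspect the inbound prompt
    resp = requests.post(PRIVATE_LLM, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    answer = resp.json()["text"]             # response shape assumed for the sketch
    inspect_content(answer)                  # inspect the output on the way back
    return answer
```

Routing all internal callers through such a gateway restores the visibility that cloud proxies lose when traffic never leaves the network.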

But security cannot stop there. Private models should also be stress-tested before they are deployed, to surface hidden vulnerabilities before attackers find them.
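
A stress test can start as simply as replaying a corpus of known jailbreak prompts and flagging any the model complies with. The sketch below assumes a jailbreak_corpus.txt file of attack prompts and reuses the generate() function from the previous sketch; a real red-teaming exercise uses far larger, adaptive attack sets and better compliance detection than keyword matching.

```python
# Hedged pre-deployment stress test: replay known jailbreak prompts and
# collect the ones the model did not refuse. Keyword-based refusal
# detection is crude and is used here only to keep the sketch short.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "not able to help")

def looks_refused(answer: str) -> bool:
    return any(marker in answer.lower() for marker in REFUSAL_MARKERS)

def stress_test(generate, prompts_path: str = "jailbreak_corpus.txt") -> list[str]:
    failures = []
    with open(prompts_path) as f:
        for line in f:
            prompt = line.strip()
            if prompt and not looks_refused(generate(prompt)):
                failures.append(prompt)   # model complied: triage before go-live
    return failures
```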

Mitigating jailbreaking and prompt injection

Standard AI models are equipped with safety protocols designed to prevent the generation of harmful content, but they are increasingly susceptible to sophisticated attacks like prompt injection and jailbreaking. In these scenarios, an attacker crafts a manipulative input that forces the LLM to ignore its system instructions. This can lead to the model leaking sensitive training data, generating malicious code, or performing unauthorised actions. Because these attacks are based on natural language, traditional signature-based security tools are blind to them.

To defend against such attacks, content moderation tools inspect and interpret the intent behind every request and response, across both human and agentic interactions, so that any attempt to compromise a model is detected and blocked.
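
To make the idea concrete, here is a deliberately simple screen for crude instruction-override phrasing. Note its limitation: these regexes are themselves signatures, so they only catch the clumsiest attempts; production moderation pairs such rules with trained classifiers that score the intent of a message rather than its exact wording.

```python
import re

# Toy prompt-injection screen. The patterns are illustrative; intent
# classification, not pattern matching, does the real work in production.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous|prior) (instructions|rules)", re.I),
    re.compile(r"you are now (dan|developer mode)", re.I),
    re.compile(r"reveal (your )?(system prompt|hidden instructions)", re.I),
]

def screen(message: str) -> bool:
    """Return True if the message should be held for review."""
    return any(p.search(message) for p in INJECTION_PATTERNS)
```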

AI security: an opportunity for consolidation

For too long, organisations have operated with fragmented security stacks built from individual tools that integrate poorly and often overlap. As new AI risks and threat vectors emerge in rapid succession, continuing to bolt on point solutions for each specific risk is no longer sustainable.

The shift to modern AI security is also an opportunity to unify the security posture and consolidate the stack. The Netskope One platform brings security pillars under one umbrella, including cloud security, zero trust, data protection and now AI security. With the recent launch of Netskope One AI Security, a suite of AI security solutions within the Netskope One platform designed to address modern AI risks, organisations can safely enable AI innovation. The suite includes the following tools:

  • Netskope One Agentic Broker governs AI agents and traffic in real time, ensuring that sensitive corporate data is protected at all times.
  • Netskope One AI Guardrails protects models against emerging AI attacks, including jailbreaking and prompt injection.
  • Netskope One AI Gateway provides visibility and protection for AI in private environments.
  • Netskope One AI Red Teaming extends Netskope’s protection into the development cycle to proactively remove vulnerabilities in AI deployments, exposing models to thousands of simulated attacks.

More information is available at Netskope.com/AI.
