DigiCert has announced a new “AI Trust” architecture aimed at helping organisations secure AI systems and verify AI-generated outputs, as enterprises deploy autonomous agents and third-party models amid growing concerns about content authenticity.
The company said the architecture is designed to provide cryptographic verification across AI agents, AI models and digital content, with new capabilities delivered as enhancements to its DigiCert ONE platform.
DigiCert argued that the rapid adoption of AI is creating gaps in enterprise trust controls, citing autonomous agents that operate across systems, supply-chain and intellectual-property risks associated with AI models, and the difficulty of verifying whether digital content has been altered or generated by AI.
“AI has created a new trust challenge,” said Amit Sinha, CEO of DigiCert. “Organisations are relying on agents, models, and content they can’t always verify.”
According to DigiCert, the new architecture is intended to establish what an AI system is, what it is authorised to do, and what it produces, using cryptographic methods to support identity-based governance, model integrity validation and content provenance within a single framework.
The company detailed three components:
AI Agent Trust is positioned as a capability for discovery, identity, governance and lifecycle management of AI agents, including authentication, authorisation and audit controls. DigiCert said it issues cryptographic identities to agents and enforces policy-based controls to support attribution and compliance oversight.
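To make the identity-plus-policy idea concrete, the sketch below shows the general pattern DigiCert describes: an issuer signs a claims document binding an agent to the actions it is authorised to perform, and a policy check verifies both the signature and the authorisation before the agent acts. This is a minimal illustration using Python's `cryptography` library, not DigiCert ONE's actual API; the agent identifier, claim fields and action scopes are invented for the example.

```python
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical issuer: in practice this would be an enterprise CA or trust service.
issuer_key = ed25519.Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

# Identity claims binding the agent to what it may do, with an expiry.
claims = {
    "agent_id": "ticket-triage-agent-01",  # invented example identifier
    "allowed_actions": ["tickets:read", "tickets:comment"],
    "expires_at": time.time() + 3600,      # one-hour credential lifetime
}
payload = json.dumps(claims, sort_keys=True).encode()
credential = issuer_key.sign(payload)      # issuer attests to the claims

def authorise(payload: bytes, credential: bytes, action: str) -> bool:
    """Policy check: valid signature, unexpired, and action is in scope."""
    try:
        issuer_pub.verify(credential, payload)
    except InvalidSignature:
        return False                        # tampered or forged credential
    c = json.loads(payload)
    return time.time() < c["expires_at"] and action in c["allowed_actions"]

print(authorise(payload, credential, "tickets:read"))    # True
print(authorise(payload, credential, "tickets:delete"))  # False: out of scope
```

Because every action carries a verifiable, expiring credential, tampering with the claims or replaying them after expiry fails the check, which is what makes attribution and audit possible.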
AI Model Trust focuses on securing AI models through packaging, signing and runtime validation. DigiCert said the goal is to create a verifiable chain of custody for models from development to deployment, including checks that models have not been tampered with and are running in trusted environments, even on distributed or third-party infrastructure.
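At its core, the chain-of-custody idea reduces to signing a digest of the packaged model at build time and refusing to load any artifact whose digest or signature no longer verifies. Here is a minimal sketch of that check, again using generic Ed25519 signing rather than DigiCert's tooling; the file name and key handling are placeholders.

```python
import hashlib
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

signing_key = ed25519.Ed25519PrivateKey.generate()  # placeholder for a release key

def sign_model(path: Path) -> bytes:
    """At packaging time: hash the model artifact and sign the digest."""
    digest = hashlib.sha256(path.read_bytes()).digest()
    return signing_key.sign(digest)

def verify_model(path: Path, signature: bytes) -> bool:
    """At deployment time: recompute the digest and check the signature
    before the model is ever loaded into the serving environment."""
    digest = hashlib.sha256(path.read_bytes()).digest()
    try:
        signing_key.public_key().verify(signature, digest)
        return True
    except InvalidSignature:
        return False  # artifact was modified after signing

model = Path("model.safetensors")  # hypothetical artifact name
model.write_bytes(b"example weights")
sig = sign_model(model)
assert verify_model(model, sig)

model.write_bytes(b"tampered weights")  # any change breaks verification
assert not verify_model(model, sig)
```

Any single flipped byte between signing and deployment invalidates the signature, which is what gives the custody chain its integrity guarantee.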
Content Trust is designed to cryptographically sign and verify digital content using the C2PA standard, with the aim of providing tamper-evident provenance and transparency. DigiCert said the capability can be used to help address misinformation, impersonation and AI-generated fraud by providing a way to verify content origin and whether it has been altered.
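C2PA works by attaching a cryptographically signed manifest to an asset, recording its origin and history so that any later alteration is detectable. The sketch below imitates that pattern with a hand-rolled manifest (a signed record of the content hash plus provenance metadata); real C2PA manifests use a standardised format and certificate chains, so treat this purely as an illustration of tamper-evident provenance, with all metadata values invented.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

publisher_key = ed25519.Ed25519PrivateKey.generate()  # stand-in for a publisher cert

def make_manifest(content: bytes, source: str) -> tuple[bytes, bytes]:
    """Build and sign a provenance record for a piece of content.
    Simplified stand-in for a C2PA manifest, not the real format."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "source": source,                 # e.g. the capturing device or tool
        "generator": "example-newsroom",  # invented provenance metadata
    }
    manifest = json.dumps(record, sort_keys=True).encode()
    return manifest, publisher_key.sign(manifest)

def verify_provenance(content: bytes, manifest: bytes, sig: bytes) -> bool:
    """Check the manifest signature, then that the content still matches it."""
    try:
        publisher_key.public_key().verify(sig, manifest)
    except InvalidSignature:
        return False
    record = json.loads(manifest)
    return record["sha256"] == hashlib.sha256(content).hexdigest()

image = b"...image bytes..."
manifest, sig = make_manifest(image, source="studio-camera-7")
assert verify_provenance(image, manifest, sig)             # untouched content
assert not verify_provenance(image + b"!", manifest, sig)  # altered content fails
```

The point of the pattern is that a consumer can verify both who vouched for the content and that it has not changed since, without trusting the channel it arrived through.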
“AI is forcing organisations to rethink trust from the ground up,” said Jennifer Glenn, Research Director for IDC Security and Trust Group. “Bringing cryptographic assurance to AI systems gives enterprises the ability to independently verify identity, integrity, and provenance of content, enabling these organisations to build trustworthy AI at scale.”
DigiCert said the unified approach is intended to reduce reputational and regulatory risk and make security and compliance more measurable and audit-ready as AI adoption increases.