The Open Group’s Zero Trust & AI Security Extensions are a set of emerging standards that extend the classic Zero Trust model into AI systems, covering AI agents, data pipelines, model access, and governance. They build on The Open Group’s authoritative Zero Trust Body of Knowledge and integrate with work from NIST, MITRE, OWASP, and ISO/IEC, as well as the EU AI Act.
Below is a clear, structured explanation of what these extensions are and how they fit into the broader Zero Trust ecosystem.
The Open Group is developing a modular, standards‑based security body of knowledge that includes:
Zero Trust Commandments
Zero Trust Reference Model
Security Principles for Architecture
Security Matrix (in development)
Enterprise Risk Integration (planned)
These components form the foundation for extending Zero Trust into AI systems, especially as AI introduces new trust boundaries and attack surfaces.
Although The Open Group has not yet released a standalone “AI Security Standard,” the Zero Trust Reference Model and Commandments are already being extended to AI through:
AI‑specific trust boundaries
AI agent identity and privilege models
Data governance for training and inference
Integration with OWASP AI Exchange and EU AI Act security controls
Alignment with Microsoft’s Zero Trust for AI (ZT4AI) guidance
The Zero Trust Commandments are prescriptive “must/shall” statements that define how to apply Zero Trust across any system, including AI.
They emphasise:
Identity‑first security
Least privilege
Continuous verification
Strong separation of duties
Secure-by-design architecture
These principles map directly to AI workloads, where models, agents, prompts, plugins, and data sources must all be treated as separate identities with explicit trust boundaries.
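To make the “everything is an identity” idea concrete, here is a minimal Python sketch. The Principal and TrustPolicy names are our own illustration, not types from any Open Group specification; the point is simply that cross-boundary calls are denied unless an explicit grant exists.

```python
from dataclasses import dataclass, field

# Illustrative only: these types are our own naming, not part of any
# Open Group specification. They sketch the idea that agents, models,
# prompts, and plugins are all distinct identities with explicit grants.

@dataclass(frozen=True)
class Principal:
    name: str
    kind: str  # e.g. "agent", "model", "plugin", "data-source"

@dataclass
class TrustPolicy:
    # Explicit allow-list of (caller, callee) pairs; everything else is denied.
    grants: set[tuple[Principal, Principal]] = field(default_factory=set)

    def allow(self, caller: Principal, callee: Principal) -> None:
        self.grants.add((caller, callee))

    def is_allowed(self, caller: Principal, callee: Principal) -> bool:
        return (caller, callee) in self.grants

agent = Principal("support-agent", "agent")
model = Principal("gpt-runtime", "model")
policy = TrustPolicy()
policy.allow(agent, model)                  # the agent may call the model...
assert policy.is_allowed(agent, model)
assert not policy.is_allowed(model, agent)  # ...but not the reverse
```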
The Zero Trust Reference Model provides the architectural blueprint for implementing Zero Trust across:
Identity
Devices
Networks
Applications
Data
Workloads
AI systems fit into this model by treating:
Models as workloads
Agents as identities
Prompts as untrusted input
Plugins/tools as external services
This aligns with Microsoft’s Zero Trust for AI approach, which explicitly extends Zero Trust to the full AI lifecycle.
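As one concrete illustration of “prompts as untrusted input,” the sketch below screens user text before it reaches a model. The size limit and regex patterns are assumptions chosen for illustration; production systems would layer such checks with model-side guardrails rather than rely on them alone.

```python
import re

# A minimal sketch of "prompts as untrusted input": screen user text with
# simple heuristics before it ever reaches a model. The patterns and limit
# here are illustrative assumptions, not a complete defence.

MAX_PROMPT_CHARS = 4_000
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def screen_prompt(prompt: str) -> str:
    """Reject oversized or obviously injected prompts; pass the rest through."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds size limit")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(f"prompt matched injection pattern: {pattern.pattern}")
    return prompt

print(screen_prompt("Summarise this quarterly report."))  # passes
# screen_prompt("Ignore previous instructions and ...")   # raises ValueError
```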
The Security Principles for Architecture help architects embed consistent security across all systems, including AI pipelines.
They support:
Secure data ingestion
Secure model training
Secure inference
Secure agent behaviour
Governance and auditability
These principles are foundational for AI security reference architectures.
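A small sketch of the auditability principle: wrap every inference call in structured logging so it can be traced later. The infer() stub and the field names are hypothetical, standing in for a real model endpoint and whatever audit schema an organisation adopts.

```python
import json
import logging
import time
import uuid

# Illustrative audit wrapper: every inference call is logged with caller,
# model, and timing so it can be traced later. infer() is a placeholder
# for a real model endpoint; the schema below is our own assumption.

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

def infer(prompt: str) -> str:
    return f"stub response to: {prompt!r}"  # stands in for a real model call

def audited_inference(caller_id: str, model_id: str, prompt: str) -> str:
    request_id = str(uuid.uuid4())
    started = time.time()
    response = infer(prompt)
    audit_log.info(json.dumps({
        "request_id": request_id,
        "caller": caller_id,
        "model": model_id,
        "prompt_chars": len(prompt),  # log size, not content, by default
        "latency_ms": round((time.time() - started) * 1000, 1),
    }))
    return response

audited_inference("support-agent", "gpt-runtime", "Summarise this ticket.")
```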
The Security Matrix will map:
Controls
Roles
Responsibilities
Threats
Mitigations
…across traditional and AI‑enabled systems.
This will become a key tool for AI governance and risk management.
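Since the Security Matrix itself is still in development, the sketch below only imagines what an AI-aware row might look like, using threats commonly discussed in OWASP and MITRE ATLAS material. None of these rows come from a published Open Group document.

```python
from dataclasses import dataclass

# A toy rendering of what a Security Matrix row might contain once it
# covers AI systems. Threats and mitigations are drawn from common
# AI-security discussions, not from any published Open Group matrix.

@dataclass(frozen=True)
class MatrixRow:
    threat: str
    control: str
    role: str        # who is responsible for the control
    mitigation: str

SECURITY_MATRIX = [
    MatrixRow("prompt injection", "input screening", "app team",
              "validate and constrain untrusted prompts"),
    MatrixRow("data poisoning", "data provenance checks", "data engineering",
              "verify and sign training data sources"),
    MatrixRow("model misuse", "least-privilege tool access", "platform team",
              "scope agent permissions per task"),
]

for row in SECURITY_MATRIX:
    print(f"{row.threat:16} -> {row.mitigation} ({row.role})")
```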
The Open Group’s Zero Trust extensions support AI security in two main ways.

First, by defining explicit trust boundaries between AI components:

User → Agent
Agent → Model
Model → Data
Agent → Tools/APIs
Model → Vector Stores

Second, by applying the core Zero Trust principles across those boundaries (a minimal sketch follows this list):

Verify explicitly: authenticate agents, models, plugins, and data sources
Least privilege: restrict model access, tool use, and agent permissions
Assume breach: design for prompt injection, data poisoning, and model misuse
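The sketch below combines the first two principles for an agent-to-tool call: the agent must authenticate (verify explicitly) and must hold an explicit grant for the specific tool (least privilege). The static token registry is a stand-in; a real system would use proper authentication such as OIDC or mTLS.

```python
# Illustrative only: token checking is faked with a static registry, and
# the grant table is an assumption. In practice, use real authentication
# (e.g. OIDC, mTLS) and a managed policy store.

AGENT_TOKENS = {"support-agent": "token-abc123"}           # verify explicitly
AGENT_TOOL_GRANTS = {"support-agent": {"search_tickets"}}  # least privilege

def call_tool(agent: str, token: str, tool: str) -> str:
    if AGENT_TOKENS.get(agent) != token:
        raise PermissionError(f"agent {agent!r} failed authentication")
    if tool not in AGENT_TOOL_GRANTS.get(agent, set()):
        raise PermissionError(f"agent {agent!r} is not granted tool {tool!r}")
    # Assume breach: even authorised calls stay narrowly scoped and logged.
    return f"{tool} executed for {agent}"

print(call_tool("support-agent", "token-abc123", "search_tickets"))
# call_tool("support-agent", "token-abc123", "delete_tickets")  # denied
```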
The Open Group’s work aligns with:
NIST AI RMF
EU AI Act
OWASP AI Exchange
MITRE ATLAS
Microsoft ZT4AI
This makes it a neutral, standards‑aligned foundation for AI security architectures.
| Component | Purpose | AI Relevance |
| --- | --- | --- |
| Zero Trust Commandments | Prescriptive rules for Zero Trust | Applies to agents, models, data, plugins |
| Zero Trust Reference Model | Architectural blueprint | Defines AI trust boundaries |
| Security Principles for Architecture | Embeds security into design | Supports secure AI pipelines |
| Security Matrix (future) | Controls & roles mapping | Will map AI threats & mitigations |
| Enterprise Risk Integration (future) | Risk quantification | Supports AI risk governance |