Below is a structured list of the most widely recognised and actively updated AI security reference architectures and frameworks.
Cisco provides one of the most detailed, pattern‑based AI security architecture libraries, covering:
Secure chatbot design patterns
RAG (Retrieval‑Augmented Generation) patterns
Agentic AI patterns
Threat models (prompt injection, misalignment, untrusted input, unvalidated output)
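The "untrusted input / unvalidated output" threat pair above can be sketched as a pair of guard functions around the model call. This is an illustrative assumption, not Cisco's actual pattern implementation; the marker regex and function names are invented for the example.

```python
import re

# Hypothetical guards around a chatbot's model call. The injection markers
# below are illustrative; real detectors are far more sophisticated.
INJECTION_MARKERS = re.compile(
    r"ignore (all )?previous instructions|reveal the system prompt",
    re.IGNORECASE,
)

def screen_user_input(text: str) -> str:
    """Reject obvious prompt-injection phrasing before it reaches the model."""
    if INJECTION_MARKERS.search(text):
        raise ValueError("possible prompt injection detected")
    return text

def validate_model_output(text: str, max_len: int = 2000) -> str:
    """Treat model output as untrusted: strip markup and cap its length."""
    return re.sub(r"<[^>]+>", "", text)[:max_len]
```

The point of the pattern is symmetry: input is screened before inference and output is validated after it, so neither side of the model boundary is implicitly trusted.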
The NIST AI Risk Management Framework (AI RMF) is a foundational U.S. federal framework for AI risk, governance, and security.
Addresses model behaviour, uncertainty, and ML‑specific risks
Complements traditional cybersecurity frameworks
A comprehensive security model for the AI supply chain and model lifecycle.
Focuses on secure AI supply chain
Covers data, model weights, inference endpoints, and human‑AI interaction risks
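One concrete supply-chain control implied by the model-weights item above is pinning weight artifacts to known digests before loading them. This is a minimal sketch under assumed names; the registry contents are invented for illustration.

```python
import hashlib

# Hypothetical allow-list mapping artifact names to pinned SHA-256 digests.
TRUSTED_DIGESTS = {
    "sentiment-v1.bin": hashlib.sha256(b"example-weights").hexdigest(),
}

def verify_artifact(name: str, blob: bytes) -> bool:
    """Accept a model artifact only if it matches its pinned digest."""
    expected = TRUSTED_DIGESTS.get(name)
    return expected is not None and hashlib.sha256(blob).hexdigest() == expected
```

Loading would proceed only on a `True` result, so a tampered or substituted weights file is rejected before it ever reaches an inference endpoint.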
Microsoft’s Cybersecurity Reference Architectures (MCRA) provide an end‑to‑end security architecture with AI‑specific components and Zero Trust alignment.
Covers hybrid/multicloud, IoT, OT, and AI
Includes updated AI security guidance and standards mapping
AWS provides a dedicated AI security reference architecture for generative AI workloads.
Generative AI Security Scoping Matrix
Agentic AI Security Scoping Matrix
Secure integration with Amazon Bedrock and cloud workloads
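The five scopes of AWS's Generative AI Security Scoping Matrix can be sketched as a simple lookup; the scope definitions follow AWS's published matrix, but the helper function itself is an illustrative sketch, not an AWS API.

```python
# Scope definitions from AWS's Generative AI Security Scoping Matrix;
# the describe_scope helper is a hypothetical convenience function.
SCOPES = {
    1: "Consumer app: using a public generative AI service",
    2: "Enterprise app: SaaS with generative AI features under an agreement",
    3: "Pre-trained models: building on a third-party foundation model",
    4: "Fine-tuned models: customising a foundation model with your data",
    5: "Self-trained models: training a model from scratch on your data",
}

def describe_scope(level: int) -> str:
    """Return the responsibility description for a scoping-matrix level."""
    if level not in SCOPES:
        raise ValueError("scope must be between 1 and 5")
    return SCOPES[level]
```

The matrix's value is that security obligations grow with the scope number: at Scope 1 you mostly govern usage, while at Scope 5 you own the full training pipeline.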
F5’s AI reference architecture focuses on secure, scalable AI deployments across hybrid multicloud.
AI runtime security
AI data delivery and pipeline security
OWASP LLM Top 10 alignment
Distributed inference and RAG security
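One core RAG security control implied above is tenant isolation at retrieval time: chunks are filtered by the caller's tenant before they can reach the prompt. The data model and in-memory store below are illustrative assumptions, not part of any vendor's architecture.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    tenant: str

# Hypothetical document store shared by two tenants.
STORE = [
    Chunk("Q3 revenue summary", tenant="acme"),
    Chunk("Internal incident report", tenant="globex"),
]

def retrieve(query: str, tenant: str) -> list[str]:
    """Naive keyword retrieval that enforces tenant isolation."""
    return [
        c.text for c in STORE
        if c.tenant == tenant and query.lower() in c.text.lower()
    ]
```

Enforcing the tenant check inside the retriever, rather than trusting the prompt assembly layer, means a cross-tenant document can never be injected into another tenant's context window.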
The OWASP Top 10 for LLM Applications is not a full architecture, but it is a critical component of AI security design.
Covers prompt injection, data leakage, insecure output handling, supply chain risks
Used by Cisco, F5, AWS, and others as a baseline
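Insecure output handling, one of the OWASP risks listed above, arises when model text is rendered without encoding. A minimal mitigation sketch, assuming an HTML rendering context:

```python
import html

def render_llm_output(model_text: str) -> str:
    """Encode model output so injected markup cannot execute in a browser."""
    return html.escape(model_text)
```

The same principle applies to any downstream sink (SQL, shell, templates): model output is attacker-influenced data and must be encoded or parameterised for the context that consumes it.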
The EU AI Act is not a reference architecture, but it is increasingly used as a design baseline.
High‑risk AI system requirements
Data governance, robustness, cybersecurity, and monitoring
Referenced in Microsoft’s MCRA updates.
Zero Trust Reference Model
Security Matrix for AI‑enabled environments
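The Zero Trust model above can be illustrated as a per-request authorisation check on an AI endpoint, with no implicit trust between calls. The tokens, roles, and policy table are invented for the example.

```python
# Hypothetical identity and policy tables for an AI-enabled service.
VALID_TOKENS = {"tok-123": "analyst"}
POLICY = {
    "analyst": {"query_model"},
    "admin": {"query_model", "update_weights"},
}

def authorize(token: str, action: str) -> bool:
    """Authenticate and authorise every request independently."""
    role = VALID_TOKENS.get(token)
    return role is not None and action in POLICY.get(role, set())
```

Each call re-verifies identity and policy, so a stolen session or an over-privileged role cannot silently accumulate access between requests.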