Cisco publishes a suite of AI Security Reference Architectures that provide secure design patterns, threat models, and mitigation strategies for teams building LLM‑powered applications. These architectures are organised by AI application type and map common risks to recommended controls.
They are not a single diagram; rather, they form a library of secure patterns, each tailored to a specific AI use case.
Chatbot applications. Focuses on securing basic LLM chatbots used for customer service, helpdesks, and education.

Key threats:
- Untrusted user input
- Misaligned or poorly fine-tuned models
- Prompt injection
- Unvalidated model output

Recommended controls:
- Input validation and sanitisation
- Guardrails and system prompt hardening
- Output filtering and verification
- Abuse detection and rate-limiting
Retrieval-augmented generation (RAG). Covers systems that combine LLMs with enterprise knowledge bases.

Key threats:
- Poisoned or manipulated knowledge sources
- Data exfiltration via the retrieval layer
- Over-permissive vector store access

Recommended controls:
- Secure embedding pipelines
- Access-controlled vector databases
- Content provenance and integrity checks
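Access-controlled retrieval usually means attaching an ACL to each document at ingestion time and enforcing it when chunks come back from the vector store, so that the LLM never sees text the caller is not entitled to. A minimal sketch, assuming a per-document group ACL (the field names are illustrative, not any particular vector database's schema):

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: frozenset  # ACL attached when the document is embedded

def authorised_results(query_results, user_groups):
    """Filter raw vector-store hits down to documents the caller may read.
    Enforcing the ACL at retrieval time closes off data exfiltration
    through the retrieval layer."""
    groups = set(user_groups)
    return [doc for doc in query_results if doc.allowed_groups & groups]
```

The same hook is a natural place for provenance checks, e.g. verifying a content hash recorded at ingestion before a chunk is passed to the model.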
Agentic systems. For AI agents that plan, reason, call tools, and take autonomous actions.

Key threats:
- Over-permissioned tools
- Unsafe code execution
- Agent-to-agent collusion
- Unmonitored API calls

Recommended controls:
- Fine-grained tool permissions
- Sandboxed execution environments
- Audit logging of agent actions
- Identity and access controls for agents
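Fine-grained tool permissions and audit logging fit naturally into a single chokepoint through which every tool call passes. The sketch below is a simplified illustration under assumed names (`TOOL_GRANTS`, `call_tool`, the dispatcher) rather than any Cisco-specified interface:

```python
import datetime

# Hypothetical allow-list: each agent identity is granted only the
# tools it needs (least privilege).
TOOL_GRANTS = {
    "support-agent": {"search_kb", "create_ticket"},
    "billing-agent": {"lookup_invoice"},
}

AUDIT_LOG = []  # in production: an append-only, tamper-evident store

def call_tool(agent_id: str, tool: str, args: dict):
    """Check the grant, record the attempt (allowed or not), then dispatch."""
    allowed = tool in TOOL_GRANTS.get(agent_id, set())
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "args": args,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{agent_id} is not granted tool {tool!r}")
    return dispatch(tool, args)

def dispatch(tool: str, args: dict):
    # Stand-in for the real tool runtime, which would execute inside a sandbox.
    return {"tool": tool, "status": "ok"}
```

Because denied attempts are logged as well as successful ones, the same audit trail supports abuse detection and post-incident review.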
Cisco’s newer DefenseClaw framework extends this with:
- Skill scanning
- AI Bill of Materials (AI-BoM)
- Agent-to-agent communication auditing
Cisco also publishes a broader Integrated AI Security and Safety Framework, which complements the reference architectures by providing:
- A unified taxonomy of AI risks
- Lifecycle-aware threat modelling
- Guidance for red-teaming and risk prioritisation
This framework underpins the secure design patterns in the reference architectures.
Cisco’s AI Security Reference Architectures are designed to be:
- Vendor-agnostic: usable across Azure, AWS, GCP, and on-premises deployments
- Threat-driven: aligned with the OWASP Top 10 for LLM Applications, MITRE ATLAS, and NIST adversarial machine learning guidance
- Practical: step-by-step patterns for real deployments
- Modular: separate patterns for chatbots, RAG, agents, pipelines, and more
They are widely used by enterprises adopting LLMs and by security architects designing AI-enabled systems and assessing ecosystem-level risks.