The OWASP LLM Top 10 is the world’s leading security standard for identifying and mitigating the most critical vulnerabilities in Large Language Model (LLM) applications. It is not an architecture—it is a vulnerability taxonomy and risk framework used by engineers, architects, and security teams to secure LLMs, RAG systems, and agentic AI.
Below is a clear, structured explanation based on authoritative OWASP sources.
The OWASP Top 10 for Large Language Model Applications is a globally adopted, open‑source security standard that identifies the 10 most critical risks in LLM‑powered systems.
It was first released in 2023, updated in 2024, and expanded again in 2025 to reflect real‑world attacks and the rise of agentic AI.
It is now part of the broader OWASP GenAI Security Project, which covers LLMs, RAG, agents, and generative AI systems.
Traditional OWASP standards (like the classic Web App Top 10) do not cover AI‑specific risks such as:
Prompt injection
Model manipulation
Data poisoning
Unsafe agent actions
Vector store exploitation
Model hallucination leading to harmful outputs
The LLM Top 10 fills this gap by giving organisations a shared vocabulary and prioritised risk list for securing AI systems.
The exact ordering evolves with each update, but the core categories include:
The highest‑priority risk for the second year running.
Attackers craft inputs that override system instructions or trigger harmful behaviour.
LLM output is trusted without validation, enabling downstream attacks.
Attackers manipulate training or fine‑tuning data to bias or compromise the model.
Models leak secrets, personal data, or internal prompts.
Resource exhaustion via oversized prompts or adversarial queries.
Risks in model weights, datasets, embeddings, or third‑party components.
LLMs generate incorrect or harmful outputs that systems trust.
Particularly relevant for agentic AI and tool‑calling systems.
Agents call external APIs or tools without proper guardrails.
Attackers steal model weights or replicate model behaviour.
(Note: OWASP updates categories annually; the above reflects the 2024–2025 consensus across sources.)
Security teams use the LLM Top 10 to:
Perform AI security assessments
Build threat models for LLM and RAG systems
Guide secure design patterns
Prioritise mitigations
Train developers and architects
It is widely referenced by Cisco, AWS, Microsoft, Google, and F5 in their AI security architectures.