OWASP Top 10 For LLM Applications (v1.1)

OWASP Top 10 For LLM Applications: AI Security Risks

A practical guide to OWASP Top 10 for LLM applications, covering prompt injection, output handling, excessive agency, and other AI-specific risks.

Last updated: 2026-03-10

Why This Matters

This guide maps the OWASP Top 10 for LLM applications into actionable controls teams can apply in production AI systems.

It is aimed at developers building prompt pipelines, RAG systems, and tool-calling agents where model output can influence real actions.

Top 10 Categories

  1. LLM01 Prompt Injection
  2. LLM02 Insecure Output Handling
  3. LLM03 Training Data Poisoning
  4. LLM04 Model Denial Of Service
  5. LLM05 Supply Chain Vulnerabilities
  6. LLM06 Sensitive Information Disclosure
  7. LLM07 Insecure Plugin Design
  8. LLM08 Excessive Agency
  9. LLM09 Overreliance
  10. LLM10 Model Theft

LLM01

Prompt Injection

Untrusted content manipulates model instructions and control flow.

Injected prompts can bypass guardrails and trigger unauthorized model behavior.

Prevention Checklist

  • Separate trusted system instructions from untrusted user/context data.
  • Apply policy checks before and after model calls.
  • Minimize tool permissions and require explicit user confirmation for high-risk actions.

LLM02

Insecure Output Handling

Model output is executed or rendered without validation.

Unsafe output handling can become command injection, XSS, or workflow abuse.

Prevention Checklist

  • Treat model output as untrusted input at every boundary.
  • Validate structured outputs against strict schemas.
  • Avoid direct execution of generated code and commands.

LLM03

Training Data Poisoning

Compromised datasets influence model behavior and outputs.

Poisoned data can bias outputs, inject hidden triggers, and degrade reliability.

Prevention Checklist

  • Track dataset lineage and require review for data ingestion.
  • Use integrity checks and anomaly scanning on training sources.
  • Apply staged evaluation gates before model promotion.

LLM04

Model Denial Of Service

Adversarial input causes excessive token or compute consumption.

Cost spikes and degraded latency can take user-facing systems offline.

Prevention Checklist

  • Enforce token limits, timeout ceilings, and per-user quotas.
  • Detect and block abusive prompt patterns.
  • Use caching and fast-fail controls for repeated attacks.

LLM05

Supply Chain Vulnerabilities

Third-party models, plugins, or tooling introduce compromise risk.

Compromised dependencies can exfiltrate data and bypass platform controls.

Prevention Checklist

  • Vet model providers, plugins, and registries before adoption.
  • Pin versions and verify signatures where supported.
  • Isolate untrusted integrations with least privilege.

LLM06

Sensitive Information Disclosure

Prompts, context, or outputs leak confidential data.

LLM systems often aggregate sensitive context that can be exposed unintentionally.

Prevention Checklist

  • Classify and redact sensitive data before prompt construction.
  • Enforce data minimization and scoped retrieval.
  • Audit logs for potential data leakage pathways.

LLM07

Insecure Plugin Design

Tool/plugin integrations expose unsafe capabilities.

Weak plugin design lets prompt-level attacks pivot into real-world side effects.

Prevention Checklist

  • Require explicit action policies and parameter validation for tools.
  • Implement user intent confirmation for destructive actions.
  • Apply strict authentication and authorization to plugin endpoints.

LLM08

Excessive Agency

Agents can take broad autonomous actions without enough constraints.

Over-privileged agents can produce cascading failures across systems and data.

Prevention Checklist

  • Define bounded capabilities and execution scopes per agent.
  • Use approval gates for actions with financial or data impact.
  • Add rollback and kill-switch controls for autonomous flows.

LLM09

Overreliance

Users or systems trust model output beyond its confidence and reliability.

Unverified model output can propagate incorrect or unsafe decisions.

Prevention Checklist

  • Provide uncertainty cues and human-review checkpoints.
  • Require source attribution for high-impact outputs.
  • Design workflows that tolerate and detect model error.

LLM10

Model Theft

Attackers extract model weights, prompts, or proprietary behavior.

Model theft undermines IP, safety controls, and competitive advantage.

Prevention Checklist

  • Protect inference endpoints with strong auth and abuse detection.
  • Limit output verbosity and repeated probing patterns.
  • Use watermarking, telemetry, and legal controls for misuse.

FAQ

Do classic OWASP controls still matter for AI systems?

Yes. AI introduces new attack surfaces, but classic controls around auth, access, logging, and input validation remain foundational.

What should teams fix first in LLM apps?

Prioritize prompt injection defenses, output validation, and strict tool authorization because they directly reduce high-impact abuse paths.

How do we reduce AI security regressions over time?

Use automated policy tests, eval suites for adversarial prompts, and release gates that block deployments when safety checks fail.

References