🧠 OWASP LLM Top 10 · LLM09 · Rule: OWASP-LLM-009 · Severity: medium

Misinformation

Description

Reliance on LLM-generated misinformation, hallucinations, or fabricated citations. The risk is particularly impactful in legal, medical, financial, and journalistic contexts.

⚠️ Risk Impact

LLMs are confidence-without-correctness machines. An LLM output can be fluent, persuasive, and entirely wrong. Downstream users treat fluent output as authoritative — that's the misinformation risk.

🔍 How EchelonGraph Detects This

OWASP-LLM-009 · Automated scanner rule

EchelonGraph's Tier 1 Cloud Scanner automatically checks for this condition across all connected cloud accounts. Violations are flagged as medium-severity findings with remediation guidance.

🔧 Remediation

Cite source documents in RAG. Disclose AI-generated content. Validate factual outputs against ground truth where possible. Apply decline-to-answer thresholds on high-stakes use cases.
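
As a concrete illustration of the last two steps, here is a minimal sketch of a decline-to-answer gate combined with source citations and an AI-generated disclosure flag. The RetrievedChunk type, answer_query function, and the 0.7 confidence threshold are illustrative assumptions, not part of any particular framework or of EchelonGraph.

```python
# Hypothetical sketch: gate a RAG answer behind a retrieval-confidence threshold
# and attach source citations. All names and the threshold value are illustrative.
from dataclasses import dataclass

@dataclass
class RetrievedChunk:
    doc_id: str    # identifier of the source document
    text: str      # passage used as grounding context
    score: float   # retriever similarity score, 0.0-1.0

HIGH_STAKES_THRESHOLD = 0.7  # below this, decline rather than guess

def answer_query(question: str, chunks: list[RetrievedChunk]) -> dict:
    """Return an answer with citations, or decline when retrieval support is weak."""
    if not chunks or max(c.score for c in chunks) < HIGH_STAKES_THRESHOLD:
        return {
            "answer": None,
            "declined": True,
            "reason": "Insufficient source support for a high-stakes answer.",
        }
    # In a real system the LLM call goes here, constrained to the retrieved text.
    draft = f"Answer to '{question}' grounded in {len(chunks)} source passage(s)."
    return {
        "answer": draft,
        "declined": False,
        "citations": [c.doc_id for c in chunks],  # surface sources so users can verify
        "ai_generated": True,                     # disclose AI-generated content
    }

if __name__ == "__main__":
    weak = [RetrievedChunk("case-law/example-doc", "passage text", 0.42)]
    print(answer_query("Cite precedent for X", weak))  # declines: support too weak
```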

💀 Real-World Attack Scenario

A New York lawyer (June 2023) submitted a court brief containing six fabricated case citations generated by ChatGPT. The judge sanctioned the lawyer; the case became a precedent for AI hallucination liability. The lawyer's defence — 'ChatGPT told me they were real' — produced no relief.

💰 Cost of Non-Compliance

Mata v Avianca (Jun 2023): $5,000 fine + professional sanction. Avg AI-misinformation incident cost: $2.4M (Edelman 2024 Trust Barometer).

📋 Audit Questions

  1. How are AI-generated outputs labelled in your product?
  2. What decline-to-answer thresholds apply to high-stakes queries?
  3. Are sources cited in RAG outputs?
  4. Has any AI-misinformation incident occurred? How was it handled?

🎯 MITRE ATT&CK Mapping

T1565.001 — Stored Data Manipulation

⚡ Common Pitfalls

  • No source attribution in RAG — users can't verify
  • Confident output style on uncertain queries — false authority
  • No decline-to-answer thresholds — the model answers everything (see the guard sketch below)
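
A pre-release guard can catch all three pitfalls before a response reaches users. The sketch below is hypothetical: the payload field names (citations, ai_generated, confidence, declined) stand in for whatever your actual response schema exposes.

```python
# Hypothetical guard: flag RAG responses that would reach users without source
# attribution, an AI-generated label, or a decline on low-confidence answers.
def validate_response(payload: dict) -> list[str]:
    """Return a list of policy violations; an empty list means the response may ship."""
    violations = []
    if not payload.get("citations"):
        violations.append("missing source attribution: users cannot verify the answer")
    if not payload.get("ai_generated", False):
        violations.append("missing AI-generated disclosure label")
    if payload.get("confidence", 1.0) < 0.5 and not payload.get("declined", False):
        violations.append("low-confidence answer was not declined or hedged")
    return violations

if __name__ == "__main__":
    risky = {"answer": "The statute requires X.", "confidence": 0.3}
    for v in validate_response(risky):
        print(v)  # prints all three violations for this payload
```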

📈 Business Value

Misinformation defence preserves user trust and reduces civil-liability exposure. Material for legal, medical, financial AI applications.

⏱️ Effort Estimate

Manual: 2-3 weeks for citation, decline-to-answer thresholds, and transparency UX

With EchelonGraph: continuous monitoring of LLM hallucination rate per workload, with alerts on threshold breach
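
For illustration only (this is not EchelonGraph's implementation or API), threshold-based alerting over a per-workload hallucination rate can be sketched as follows; the event shape and the 5% threshold are assumptions.

```python
# Illustrative only: generic per-workload hallucination-rate alerting.
# The event format and ALERT_THRESHOLD are assumptions, not a real product API.
from collections import defaultdict

ALERT_THRESHOLD = 0.05  # assumed: alert when >5% of graded responses are hallucinated

def hallucination_rates(events: list[dict]) -> dict[str, float]:
    """Compute hallucination rate per workload from graded response events.

    Each event is assumed to look like {"workload": "support-bot", "hallucinated": True}.
    """
    totals: dict[str, int] = defaultdict(int)
    bad: dict[str, int] = defaultdict(int)
    for e in events:
        totals[e["workload"]] += 1
        if e["hallucinated"]:
            bad[e["workload"]] += 1
    return {w: bad[w] / totals[w] for w in totals}

def breaches(events: list[dict], threshold: float = ALERT_THRESHOLD) -> list[str]:
    """Return workloads whose hallucination rate exceeds the alert threshold."""
    return [w for w, rate in hallucination_rates(events).items() if rate > threshold]

if __name__ == "__main__":
    sample = (
        [{"workload": "legal-assistant", "hallucinated": True}] * 3
        + [{"workload": "legal-assistant", "hallucinated": False}] * 17
        + [{"workload": "faq-bot", "hallucinated": False}] * 50
    )
    print(breaches(sample))  # ['legal-assistant']: 3/20 = 15% exceeds the 5% threshold
```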

🔗 Cross-Framework References

EUAIA-ART50-TRANSPARENCY · AIRMF-MAP-3.1

Automate OWASP LLM Top 10 LLM09 compliance

EchelonGraph continuously monitors this control across all your cloud accounts.

Start Free →