🧠 OWASP LLM Top 10 — LLM02 · Rule: OWASP-LLM-002 · Severity: critical

Sensitive Information Disclosure

Description

An LLM inadvertently discloses sensitive information: PII memorised from training data, business secrets embedded in system prompts, or customer data accumulated in conversation history.

⚠️ Risk Impact

LLMs that handle sensitive data have multiple pathways to disclose it — training-data memorisation, system-prompt leakage, cross-session leakage, and downstream API responses.

🔍 How EchelonGraph Detects This

OWASP-LLM-002 · Automated scanner rule

EchelonGraph's Tier 1 Cloud Scanner automatically checks for this condition across all connected cloud accounts. Violations are flagged as critical-severity findings with remediation guidance.

🔧 Remediation

Redact PII before prompting. Apply output filters on regulated entity types (SSNs, credit card numbers, PHI). Document training-data provenance; never train on production customer data without differential privacy. Refuse PII queries via a system-prompt instruction backed by a classifier.
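The first remediation step, redacting PII before it reaches the model, can be sketched as a pre-prompt pass. This is a minimal illustration: the regex patterns and the `redact` helper are assumptions, not a real EchelonGraph API, and production systems typically pair regexes with a dedicated PII-detection service.

```python
import re

# Illustrative patterns only — real deployments need broader coverage
# (names, addresses, PHI) via an NER-based detector, not regexes alone.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a typed placeholder before prompting."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text
```

The same pass can be reused as an output filter on model completions, so regulated entity types are scrubbed in both directions.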

💀 Real-World Attack Scenario

ChatGPT (March 2023) had a Redis client bug that exposed snippets of other users' conversation histories, and roughly 1.2% of ChatGPT Plus subscribers had payment data potentially exposed. The root cause was infrastructure rather than training-data memorisation, but the impact was a textbook LLM02 violation.

💰 Cost of Non-Compliance

ChatGPT Mar 2023 incident: regulator inquiries from Italy, France, Germany. Italy temporarily banned ChatGPT. Avg LLM-sensitive-disclosure incident cost in 2024: $1.4M (Wiz).

📋 Audit Questions

  1. How is PII redacted from prompts?
  2. What output filters apply to regulated data types?
  3. Have you tested for training-data memorisation?
  4. Show a recent update to your PII-leak prevention rules.
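Audit question 3 can be answered with a canary-based memorisation probe: prompt the model with prefixes of planted canary records and flag any completion that reproduces the secret suffix verbatim. The sketch below is a hypothetical harness — `query_model` is a stand-in for your actual LLM client, and the canary value is invented for illustration.

```python
def query_model(prompt: str) -> str:
    """Placeholder for a real LLM client call (assumption)."""
    return ""

# (prefix, secret) pairs planted in the training corpus as canaries.
CANARIES = [
    ("The customer's account number is ", "4417-0000-1234-5678"),
]

def memorisation_findings() -> list[tuple[str, str]]:
    """Return every canary the model reproduces verbatim."""
    findings = []
    for prefix, secret in CANARIES:
        completion = query_model(prefix)
        if secret in completion:
            findings.append((prefix, secret))
    return findings
```

An empty result does not prove the absence of memorisation, but any hit is direct evidence that training data can be extracted.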

🎯 MITRE ATT&CK Mapping

MITRE_ATLAS-AML.T0024 — Model Inversion

⚡ Common Pitfalls

  • Training on production customer data without differential privacy
  • Logging full prompts (including PII) for debugging
  • Embedding API keys or business secrets in system prompts
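The logging pitfall above is easy to close at the logging layer itself: scrub obvious PII before a prompt ever reaches the debug log. A minimal sketch using Python's standard `logging.Filter`; the single SSN pattern is illustrative, not exhaustive.

```python
import logging
import re

# Illustrative pattern — extend with the same entity set used by the
# pre-prompt redaction pass so logs and prompts are filtered consistently.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class PIIScrubFilter(logging.Filter):
    """Scrub SSN-shaped strings from log records before they are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SSN_RE.sub("[REDACTED-SSN]", str(record.msg))
        return True  # keep the record, just scrubbed
```

Attaching the filter to the logger that handles prompt debugging (`logger.addFilter(PIIScrubFilter())`) keeps debug workflows intact without persisting raw PII.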

📈 Business Value

Sensitive-information defence prevents the highest-stakes LLM incidents — material for healthcare, finance, and any GDPR-scoped LLM deployment.

⏱️ Effort Estimate

Manual

3-4 weeks for redaction pipeline + output filtering + DP integration

With EchelonGraph

EchelonGraph ships PII detection + output filtering middleware for LLM endpoints

🔗 Cross-Framework References

GDPR-Art32 · HIPAA-164.312(e) · MITRE_ATLAS-AML.T0024

Automate OWASP LLM Top 10 LLM02 compliance

EchelonGraph continuously monitors this control across all your cloud accounts.

Start Free →