Improper Output Handling
Description
LLM output is passed to downstream systems (SQL, shell, file system, web browser) without sanitisation, violating the 'LLM output is untrusted' principle.
⚠️ Risk Impact
LLM outputs are user-controllable via the prompt: passing LLM output into a SQL query is SQL injection at LLM scale, passing it to a shell is command injection, and rendering it in HTML is XSS.
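To make the SQL case concrete, here is a minimal, self-contained sketch of the vulnerable pattern; the in-memory database, the users table, and the llm_output string are illustrative stand-ins for a real model reply:

# Minimal demo of the vulnerable pattern: LLM output concatenated into SQL.
# llm_output stands in for a prompt-steered model reply (illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("admin",)])

llm_output = "admin%' OR 1=1 --"  # what a steered model might emit
rows = conn.execute(
    "SELECT * FROM users WHERE name LIKE '%" + llm_output + "%'"  # injectable
).fetchall()
print(rows)  # [('alice',), ('admin',)]: the whole table, not just 'admin'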
🔍 How EchelonGraph Detects This
EchelonGraph's Tier 1 Cloud Scanner automatically checks for this condition across all connected cloud accounts. Violations are flagged as high-severity findings with remediation guidance.
🔧 Remediation
Treat LLM output as untrusted. Sanitise or escape it before SQL, shell, or HTML rendering, and never pass it to eval/exec. Use parameterised queries and apply output schema validation; a sketch of the safe pattern follows.
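A minimal sketch of the safe pattern, assuming the model is prompted to return JSON; the QueryParams schema, its field names, and the users table are illustrative, not a fixed API:

# Safe pattern: schema-validate the LLM reply, then bind parameters.
# QueryParams and the users table are illustrative (pydantic v2 shown).
import sqlite3
from pydantic import BaseModel, Field

class QueryParams(BaseModel):
    pattern: str = Field(max_length=64)  # reject oversized or missing fields

def run_user_search(llm_json: str, conn: sqlite3.Connection) -> list:
    params = QueryParams.model_validate_json(llm_json)  # schema validation
    return conn.execute(
        "SELECT * FROM users WHERE name LIKE ?",  # bound parameter, no concatenation
        (f"%{params.pattern}%",),
    ).fetchall()

Note that even with binding, % and _ inside the value still act as LIKE wildcards; escape them if exact matching matters.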
💀 Real-World Attack Scenario
An LLM-powered SQL assistant generated and executed queries based on natural-language user input. A request framed as 'show me users matching admin', with injection instructions folded into the prompt, led the LLM to generate SELECT * FROM users WHERE name LIKE '%admin%' OR 1=1, dumping the full table. The team had assumed LLM-generated SQL was safe because 'the LLM is doing the security'.
💰 Cost of Non-Compliance
LLM-generated SQL injection has the same impact as classical SQL injection: an average breach cost of $4.45M (IBM Cost of a Data Breach Report, 2023). Detection is often delayed because LLM-mediated attacks evade signature-based WAFs.
📋 Audit Questions
1. Show me where LLM output flows to SQL.
2. Is the output parameterised or string-concatenated?
3. What output schema validation is applied?
4. Has any LLM-output-injection vulnerability been tested? (A test sketch follows this list.)
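One way to answer question 4 is a regression test that replays a known injection payload through the query path and asserts it cannot widen the result set. A minimal sketch, assuming the hypothetical run_user_search helper from the remediation sketch above is importable:

# Regression test: a hostile LLM reply must not widen the query results.
# Assumes run_user_search (schema-validated, parameterised) is in scope.
import json
import sqlite3

def test_llm_output_injection():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("admin",)])
    hostile = json.dumps({"pattern": "admin%' OR 1=1 --"})  # injection payload
    rows = run_user_search(hostile, conn)
    assert all("admin" in name for (name,) in rows)  # must not return unrelated rows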
🏗️ Infrastructure as Code Fix
# Always parameterise SQL; never string-concatenate LLM output.
# Safe pattern (the llm and db clients are illustrative, not a specific API):
parsed = llm.generate_structured(prompt, schema=QueryParams)  # typed, validated output
db.execute_prepared(
    "SELECT * FROM users WHERE name LIKE :pattern",  # bound parameter
    {"pattern": parsed.pattern},
)

⚡ Common Pitfalls
- ⛔ Treating LLM output as trusted because 'the LLM understands security'
- ⛔ String-concatenating LLM output into queries
- ⛔ Rendering LLM output in HTML without escaping (see the escaping sketch below)
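For the HTML pitfall, escaping is a one-liner with the standard library; a minimal sketch with an illustrative hostile reply:

# Escape LLM output before rendering it in HTML; never interpolate it raw.
import html

llm_reply = '<img src=x onerror=alert(1)>'  # hostile model output (illustrative)
safe = html.escape(llm_reply)               # '&lt;img src=x onerror=alert(1)&gt;'
page = f"<p>{safe}</p>"                     # inert text, not executable markup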
📈 Business Value
Output sanitisation is the difference between the LLM as a copilot and the LLM as an injection vector. It is material to any LLM application that touches downstream systems.
⏱️ Effort Estimate
2-3 weeks for an output-handling audit plus sanitisation refactor. EchelonGraph provides runtime detection of LLM output flowing to dangerous sinks.
🔗 Cross-Framework References
Automate OWASP LLM Top 10 LLM05 (Improper Output Handling) compliance
EchelonGraph continuously monitors this control across all your cloud accounts.
Start Free →