require-llm-output-encoding
Require encoding of LLM outputs based on usage context.
OWASP LLM Top 10 2025: LLM05 - Improper Output Handling
CWE: CWE-116
Severity: 🔴 Critical
Rule Details
Enforces proper encoding of LLM outputs before use in HTML, SQL, or other contexts.
❌ Incorrect
element.innerHTML = llmOutput;
db.query(`SELECT * FROM users WHERE name = '${llmOutput}'`);
✅ Correct
const safe = escapeHTML(llmOutput);
element.innerHTML = safe;
db.query('SELECT * FROM users WHERE name = ?', [llmOutput]);
Options
{
"secure-coding/require-llm-output-encoding": ["error"]
}
Best Practices
- HTML: Use escapeHTML() or set textContent instead of innerHTML
- SQL: Use parameterized queries
- Shell: Avoid if possible, or use proper escaping
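The escapeHTML() helper referenced above is not provided by the rule itself. A minimal sketch of one possible implementation (for full sanitization of rich HTML, prefer a vetted library such as DOMPurify):

```javascript
// Minimal HTML-entity escaper (sketch, not a full sanitizer).
// Replaces the five characters that can break out of HTML text
// and attribute contexts.
function escapeHTML(str) {
  return String(str).replace(/[&<>"']/g, (ch) => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;',
  }[ch]));
}
```

Usage: element.innerHTML = escapeHTML(llmOutput); — or simply assign to element.textContent and skip escaping entirely.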
Version
Introduced in v2.3.0
Known False Negatives
The following patterns are not detected due to static analysis limitations:
Prompt from Variable
Why: Prompt content from variables not traced.
// ❌ NOT DETECTED - Prompt from variable
const prompt = buildPrompt(userInput);
await generateText({ prompt });
Mitigation: Validate all prompt components.
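One way to apply that mitigation is to validate each component before it reaches the prompt builder. A sketch, where the length limit and the pattern list are illustrative assumptions, not part of the rule:

```javascript
// Hypothetical guard: reject prompt components that are not plain,
// bounded strings or that contain common injection markers.
const SUSPICIOUS_PATTERNS = [
  /ignore (all |previous )?instructions/i,
  /system prompt/i,
];

function validatePromptComponent(text) {
  if (typeof text !== 'string' || text.length > 4000) {
    throw new Error('Invalid prompt component');
  }
  if (SUSPICIOUS_PATTERNS.some((re) => re.test(text))) {
    throw new Error('Suspicious prompt component rejected');
  }
  return text;
}
```

Call validatePromptComponent(userInput) before passing it to buildPrompt(), so the untraced variable is vetted at its origin.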
Nested Context
Why: Deep nesting obscures injection.
// ❌ NOT DETECTED - Nested
const messages = [{ role: 'user', content: userInput }];
await chat({ messages });
Mitigation: Validate at all levels.
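Validating at all levels can be done by walking the messages structure before the call. A sketch, assuming the common role/content message shape and a caller-supplied policy check (isSafe is a placeholder, not part of the rule):

```javascript
// Walk a messages array and check every content field, whether it is
// a plain string or an array of content parts. `isSafe` is a
// hypothetical policy predicate supplied by the caller.
function validateMessages(messages, isSafe) {
  for (const msg of messages) {
    const parts = Array.isArray(msg.content) ? msg.content : [msg.content];
    for (const part of parts) {
      const text = typeof part === 'string' ? part : (part && part.text) || '';
      if (!isSafe(text)) {
        throw new Error(`Unsafe content in role "${msg.role}"`);
      }
    }
  }
  return messages;
}
```

Run validateMessages(messages, isSafe) immediately before await chat({ messages }) so nested content cannot bypass the check.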
Custom AI Wrappers
Why: Custom AI clients not recognized.
// ❌ NOT DETECTED - Custom wrapper
myAI.complete(userPrompt);
Mitigation: Apply rule to wrapper implementations.
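Since the rule cannot see inside custom clients, the wrapper itself can centralize output encoding so every call site receives safe output by construction. A sketch, where the client shape and the encoder are assumptions:

```javascript
// Hypothetical wrapper that encodes completions before returning them.
// `client` is any object with an async complete(prompt) method (e.g. a
// custom myAI client); `encode` is an output encoder such as escapeHTML.
class SafeAIClient {
  constructor(client, encode) {
    this.client = client;
    this.encode = encode;
  }

  async complete(prompt) {
    const raw = await this.client.complete(prompt);
    return this.encode(raw);
  }
}
```

Call sites then use new SafeAIClient(myAI, escapeHTML).complete(userPrompt) and can no longer forget the encoding step.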