Interlace ESLint

no-unsafe-output-handling

Prevents using AI-generated content in dangerous operations like eval, SQL, or innerHTML.

📊 Rule Details

| Property | Value |
| --- | --- |
| Type | problem |
| Severity | 🔴 CRITICAL |
| OWASP LLM | LLM05: Improper Output Handling |
| OWASP Agentic | ASI05: Unexpected Code Execution |
| CWE | CWE-94: Improper Control of Generation of Code |
| CVSS | 9.8 |
| Config Default | error (recommended, strict) |

🔍 What This Rule Detects

This rule identifies code patterns where AI-generated output is passed directly to dangerous functions that can execute code, manipulate the DOM, or run database queries.

❌ Incorrect Code

```js
// Code execution
const result = await generateText({ prompt: 'Generate code' });
eval(result.text);

// Function constructor
new Function(result.text)();

// XSS via innerHTML
element.innerHTML = result.text;

// SQL injection
db.query(result.text);

// Shell execution
exec(result.text);
```

✅ Correct Code

```js
// Sandboxed execution
const result = await generateText({ prompt: 'Generate code' });
runInSandbox(result.text);

// Safe text content
element.textContent = result.text;

// Parameterized query
db.query('SELECT * FROM users WHERE id = ?', [parsedId]);

// Validated command
if (allowedCommands.includes(result.text)) {
  exec(result.text);
}
```
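The safe patterns above share one idea: AI output is data, never code. A minimal sketch of that discipline in plain JavaScript — the helper names `escapeHtml` and `ALLOWED_COMMANDS` are illustrative, not part of the rule:

```javascript
// Hypothetical sketch: treat AI output as untrusted data.
// escapeHtml and ALLOWED_COMMANDS are illustrative names, not part of the rule.

const ALLOWED_COMMANDS = new Set(['ls', 'pwd', 'whoami']);

// Escape HTML special characters before inserting AI text into markup.
function escapeHtml(text) {
  const map = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return text.replace(/[&<>"']/g, (ch) => map[ch]);
}

// Run a command only if it appears verbatim on an explicit allowlist.
function safeCommand(aiText) {
  const candidate = aiText.trim();
  return ALLOWED_COMMANDS.has(candidate) ? candidate : null;
}
```

With this approach `safeCommand('rm -rf /')` returns `null`, so arbitrary AI-suggested commands can never reach `exec`.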

⚙️ Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `dangerousFunctions` | string[] | `['eval', 'Function', 'setTimeout', 'setInterval', 'exec', 'execSync', 'spawn']` | Functions to flag when receiving AI output |
| `dangerousProperties` | string[] | `['innerHTML', 'outerHTML']` | Properties to flag when assigned AI output |
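A minimal configuration sketch showing both options; the `interlace` plugin prefix and the extra `executeCode` entry are assumptions — substitute the names used in your project:

```js
// eslint.config.js (flat config) — "interlace" plugin prefix is hypothetical
export default [
  {
    rules: {
      'interlace/no-unsafe-output-handling': ['error', {
        // Defaults plus a project-specific execution wrapper
        dangerousFunctions: [
          'eval', 'Function', 'setTimeout', 'setInterval',
          'exec', 'execSync', 'spawn', 'executeCode',
        ],
        dangerousProperties: ['innerHTML', 'outerHTML'],
      }],
    },
  },
];
```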

🛡️ Why This Matters

Passing AI output to dangerous functions enables:

  • Remote Code Execution (RCE) - Attackers can inject code via prompt manipulation
  • Cross-Site Scripting (XSS) - Malicious scripts in generated HTML
  • SQL Injection - Database manipulation via generated queries
  • Command Injection - System command execution

Known False Negatives

The following patterns are not detected due to static analysis limitations:

AI Output Stored in Variable

Why: Assignment to dangerous functions from variables is not traced.

```js
// ❌ NOT DETECTED - Output stored first
const aiOutput = (await generateText({ prompt })).text;
// Later in code...
eval(aiOutput); // Not linked to AI output
```

Mitigation: Never use eval or similar with any external data.
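One way to honor that mitigation is to parse AI output as structured data rather than executing it. A hedged sketch — `parseAiJson` is an illustrative helper, not part of the rule:

```javascript
// Hypothetical sketch: parse AI output as JSON data instead of executing it.
// parseAiJson is an illustrative helper name, not part of the rule.
function parseAiJson(text) {
  let value;
  try {
    value = JSON.parse(text); // data-only parsing; never executes code
  } catch {
    return null; // not valid JSON — reject rather than eval
  }
  // Minimal shape check before the value is used downstream.
  if (typeof value !== 'object' || value === null) return null;
  return value;
}
```

Unlike `eval`, `JSON.parse` cannot run attacker-controlled code: `parseAiJson('alert(1)')` simply returns `null`.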

Custom Dangerous Functions

Why: Non-standard execution functions may not be detected.

```js
// ❌ NOT DETECTED - Custom exec wrapper
executeCode(result.text); // Custom function, not in dangerousFunctions
```

Mitigation: Configure dangerousFunctions with custom names.

Dynamic Function Invocation

Why: Dynamic property access is not analyzed.

```js
// ❌ NOT DETECTED - Dynamic invocation
const method = 'eval';
window[method](result.text); // Dynamic access
```

Mitigation: Avoid dynamic function invocation.

Framework Rendering

Why: Framework-specific unsafe patterns may not be recognized.

```jsx
// ❌ NOT DETECTED - React dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{ __html: result.text }} />
```

Mitigation: Use framework-specific security rules.

Error Message Format

The rule emits LLM-optimized error messages in a compact two-line format with actionable security guidance:

```text
🔒 CWE-94 OWASP:A05 CVSS:9.8 | Code Injection detected | CRITICAL [SOC2,PCI-DSS,ISO27001]
   Fix: Review and apply the recommended fix | https://owasp.org/Top10/A05_2021/
```

Message Components

| Component | Purpose | Example |
| --- | --- | --- |
| Risk Standards | Security benchmarks | CWE-94 OWASP:A05 CVSS:9.8 |
| Issue Description | Specific vulnerability | Code Injection detected |
| Severity & Compliance | Impact assessment | CRITICAL [SOC2,PCI-DSS,ISO27001] |
| Fix Instruction | Actionable remediation | Review and apply the recommended fix |
| Technical Truth | Official reference | OWASP Top 10 |
