Interlace ESLint
ESLint Interlace
Vercel AIRules

require-validated-prompt

This rule identifies code patterns where user-controlled input is passed directly to AI prompts without validation or sanitization. Such patterns expose your ap

Prevents prompt injection by detecting unvalidated user input in AI prompts.

📊 Rule Details

PropertyValue
Typeproblem
Severity🔴 CRITICAL
OWASP LLMLLM01: Prompt Injection
CWECWE-74: Improper Neutralization
CVSS9.0
Config Defaulterror (recommended, strict)

🔍 What This Rule Detects

This rule identifies code patterns where user-controlled input is passed directly to AI prompts without validation or sanitization. Such patterns expose your application to prompt injection attacks.

❌ Incorrect Code

// Direct user input in prompt
await generateText({
  prompt: userInput,
});

// User input from request
await generateText({
  prompt: req.body.question,
});

// Concatenated user input
await generateText({
  prompt: 'Answer this: ' + userQuestion,
});

// Template literal with user input
await streamText({
  prompt: `User asked: ${message}`,
});

✅ Correct Code

// Validated input
await generateText({
  prompt: validateInput(userInput),
});

// Sanitized prompt
await generateText({
  prompt: sanitizePrompt(req.body.question),
});

// Safe static prompt
await generateText({
  prompt: 'What is the capital of France?',
});

// Validated template
await streamText({
  prompt: `User asked: ${validateQuestion(message)}`,
});

⚙️ Options

OptionTypeDefaultDescription
validatorFunctionsstring[]['validate', 'sanitize', 'escape', 'filter', 'clean', 'verify', 'check']Function names considered as safe validators
userInputPatternsstring[]['userInput', 'input', 'query', 'question', 'message', 'prompt', 'request', 'body', 'params']Variable patterns suggesting user input
allowInTestsbooleantrueSkip validation in test files

Example Configuration

{
  rules: {
    'vercel-ai-security/require-validated-prompt': ['error', {
      validatorFunctions: ['validateInput', 'sanitizePrompt', 'cleanUserInput'],
      userInputPatterns: ['userQuery', 'chatMessage'],
      allowInTests: true
    }]
  }
}

🛡️ Why This Matters

Prompt injection is the #1 security risk for LLM applications according to OWASP. Attackers can:

  • Override system instructions
  • Extract sensitive information
  • Manipulate AI behavior
  • Bypass content filters

Known False Negatives

The following patterns are not detected due to static analysis limitations:

Custom Variable Names

Why: Only configured pattern names trigger detection.

// ❌ NOT DETECTED - Custom variable name
const clientQuestion = getClientQuestion(); // Not in userInputPatterns
await generateText({ prompt: clientQuestion });

Mitigation: Configure userInputPatterns with custom names.

Validated but Incorrectly

Why: Validation function quality is not assessed.

// ❌ NOT DETECTED - Weak validation
function validate(input) {
  return input;
} // Just returns input!
await generateText({ prompt: validate(userInput) });

Mitigation: Review validation functions. Use proper sanitization.

Input from External Module

Why: Imported values are not traced.

// ❌ NOT DETECTED - Input from module
import { getUserPrompt } from './user-input';
await generateText({ prompt: getUserPrompt() }); // May be unvalidated

Mitigation: Apply rule to input modules.

Nested Object Properties

Why: Deep property access may not match patterns.

// ❌ NOT DETECTED - Nested user input
await generateText({ prompt: req.body.chat.message.text });

Mitigation: Configure patterns for nested structures.

Dynamic Prompt Construction

Why: Runtime-built prompts are not analyzed.

// ❌ NOT DETECTED - Dynamic construction
const parts = [userInput, context];
await generateText({ prompt: parts.join(' ') });

Mitigation: Validate all parts before joining.

📚 References

On this page