require-validated-prompt
This rule identifies code patterns where user-controlled input is passed directly to AI prompts without validation or sanitization. Such patterns expose your ap
Prevents prompt injection by detecting unvalidated user input in AI prompts.
📊 Rule Details
| Property | Value |
|---|---|
| Type | problem |
| Severity | 🔴 CRITICAL |
| OWASP LLM | LLM01: Prompt Injection |
| CWE | CWE-74: Improper Neutralization |
| CVSS | 9.0 |
| Config Default | error (recommended, strict) |
🔍 What This Rule Detects
This rule identifies code patterns where user-controlled input is passed directly to AI prompts without validation or sanitization. Such patterns expose your application to prompt injection attacks.
❌ Incorrect Code
// Direct user input in prompt
await generateText({
prompt: userInput,
});
// User input from request
await generateText({
prompt: req.body.question,
});
// Concatenated user input
await generateText({
prompt: 'Answer this: ' + userQuestion,
});
// Template literal with user input
await streamText({
prompt: `User asked: ${message}`,
});✅ Correct Code
// Validated input
await generateText({
prompt: validateInput(userInput),
});
// Sanitized prompt
await generateText({
prompt: sanitizePrompt(req.body.question),
});
// Safe static prompt
await generateText({
prompt: 'What is the capital of France?',
});
// Validated template
await streamText({
prompt: `User asked: ${validateQuestion(message)}`,
});⚙️ Options
| Option | Type | Default | Description |
|---|---|---|---|
validatorFunctions | string[] | ['validate', 'sanitize', 'escape', 'filter', 'clean', 'verify', 'check'] | Function names considered as safe validators |
userInputPatterns | string[] | ['userInput', 'input', 'query', 'question', 'message', 'prompt', 'request', 'body', 'params'] | Variable patterns suggesting user input |
allowInTests | boolean | true | Skip validation in test files |
Example Configuration
{
rules: {
'vercel-ai-security/require-validated-prompt': ['error', {
validatorFunctions: ['validateInput', 'sanitizePrompt', 'cleanUserInput'],
userInputPatterns: ['userQuery', 'chatMessage'],
allowInTests: true
}]
}
}🛡️ Why This Matters
Prompt injection is the #1 security risk for LLM applications according to OWASP. Attackers can:
- Override system instructions
- Extract sensitive information
- Manipulate AI behavior
- Bypass content filters
🔗 Related Rules
no-sensitive-in-prompt- Prevent sensitive data in promptsno-dynamic-system-prompt- Prevent dynamic system prompts
Known False Negatives
The following patterns are not detected due to static analysis limitations:
Custom Variable Names
Why: Only configured pattern names trigger detection.
// ❌ NOT DETECTED - Custom variable name
const clientQuestion = getClientQuestion(); // Not in userInputPatterns
await generateText({ prompt: clientQuestion });Mitigation: Configure userInputPatterns with custom names.
Validated but Incorrectly
Why: Validation function quality is not assessed.
// ❌ NOT DETECTED - Weak validation
function validate(input) {
return input;
} // Just returns input!
await generateText({ prompt: validate(userInput) });Mitigation: Review validation functions. Use proper sanitization.
Input from External Module
Why: Imported values are not traced.
// ❌ NOT DETECTED - Input from module
import { getUserPrompt } from './user-input';
await generateText({ prompt: getUserPrompt() }); // May be unvalidatedMitigation: Apply rule to input modules.
Nested Object Properties
Why: Deep property access may not match patterns.
// ❌ NOT DETECTED - Nested user input
await generateText({ prompt: req.body.chat.message.text });Mitigation: Configure patterns for nested structures.
Dynamic Prompt Construction
Why: Runtime-built prompts are not analyzed.
// ❌ NOT DETECTED - Dynamic construction
const parts = [userInput, context];
await generateText({ prompt: parts.join(' ') });Mitigation: Validate all parts before joining.