no-unsafe-prompt-concatenation
Prevent prompt injection via direct string concatenation of user input into LLM prompts.
Prevent prompt injection via direct string concatenation of user input into LLM prompts.
OWASP LLM Top 10 2025: LLM01 - Prompt Injection
CWE: CWE-74, CWE-78
Severity: 🔴 Critical
Rule Details
This rule detects direct concatenation of user input into LLM prompts without proper sanitization, which can lead to prompt injection attacks. Attackers can manipulate LLM behavior by injecting malicious instructions through user-controlled input.
❌ Incorrect
// Direct template literal concatenation
const prompt = `Summarize this: ${userInput}`;
await llm.complete(prompt);
// String concatenation with +
const prompt = 'Analyze: ' + userContent;
await llm.complete(prompt);
// Multiple interpolations
const prompt = `User ${userId} asked: ${userQuestion}`;
await llm.chat(prompt);✅ Correct
// Parameterized prompts with structured messages
const messages = [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: userInput },
];
await llm.complete({ messages });
// Sanitized input
const prompt = `Summarize: ${sanitizePromptInput(userInput)}`;
await llm.complete(prompt);
// Using prompt guard library
const safePrompt = promptGuard.sanitize(`Process: ${userInput}`);
await llm.complete(safePrompt);
// Safe annotation for validated functions
/**
* @safe - Input is validated against allowlist
*/
function generatePrompt(input) {
return `Summarize: ${input}`;
}Options
{
"secure-coding/no-unsafe-prompt-concatenation": [
"error",
{
"llmApiPatterns": ["customLLM.*", "myApp.ai.*"],
"trustedPromptSanitizers": ["myCustomSanitizer", "validatePrompt"],
"allowSanitized": true,
"strictMode": false
}
]
}llmApiPatterns
Array of additional LLM API patterns to detect. Default patterns include:
llm.complete,llm.chat,llm.generateopenai.chat,openai.completeanthropic.complete,claude.completecohere.generatechatCompletion,textCompletion
trustedPromptSanitizers
Array of function names considered safe for sanitizing prompts. Default:
sanitizePromptvalidatePromptpromptGuardescapePrompt
allowSanitized
If true, allows concatenation if input is sanitized. Default: true
strictMode
If true, disables all false positive detection. Default: false
When Not To Use It
If you're not using LLM/AI APIs in your codebase, you can disable this rule.
Known False Negatives
The following patterns are not detected due to static analysis limitations:
Prompt from Variable
Why: Prompt content from variables not traced.
// ❌ NOT DETECTED - Prompt from variable
const prompt = buildPrompt(userInput);
await generateText({ prompt });Mitigation: Validate all prompt components.
Nested Context
Why: Deep nesting obscures injection.
// ❌ NOT DETECTED - Nested
const messages = [{ role: 'user', content: userInput }];
await chat({ messages });Mitigation: Validate at all levels.
Custom AI Wrappers
Why: Custom AI clients not recognized.
// ❌ NOT DETECTED - Custom wrapper
myAI.complete(userPrompt);Mitigation: Apply rule to wrapper implementations.
Further Reading
- OWASP LLM Top 10 2025 - LLM01: Prompt Injection
- Prompt Injection Guide
- Rebuff - Prompt Injection Detector
Compatibility
- ✅ ESLint 8.x
- ✅ ESLint 9.x
- ✅ TypeScript
- ✅ JavaScript (ES6+)
Version
This rule was introduced in eslint-plugin-secure-coding v2.3.0 (OWASP LLM 2025 support).
no-unsafe-dynamic-require
Disallows dynamic `require()` calls with non-literal arguments that could lead to security vulnerabilities. This rule is part of [`eslint-plugin-secure-coding`]
no-unsafe-regex-construction
ESLint Rule: no-unsafe-regex-construction with LLM-optimized suggestions and auto-fix capabilities. This rule is part of [`eslint-plugin-secure-coding`](https:/