# no-sensitive-in-prompt
Prevents sensitive data (passwords, tokens, PII) from being sent to LLMs.
## 📊 Rule Details
| Property | Value |
|---|---|
| Type | problem |
| Severity | 🔴 CRITICAL |
| OWASP LLM | LLM02: Sensitive Information Disclosure |
| CWE | CWE-200: Information Exposure |
| CVSS | 8.5 |
| Config Default | `error` (in the `recommended` and `strict` configs) |
## 🔍 What This Rule Detects
This rule identifies code patterns where sensitive data like passwords, API keys, tokens, or personally identifiable information (PII) is passed to AI prompts. LLM providers may log, store, or use this data for training.
## ❌ Incorrect Code
```js
// Password in prompt
await generateText({
  prompt: `Reset password for user. Current password: ${userPassword}`,
});

// API key in prompt
await generateText({
  prompt: `Configure service with key: ${apiKey}`,
});

// SSN in prompt
await streamText({
  prompt: `Process application for SSN: ${socialSecurityNumber}`,
});

// Credit card in prompt
await generateText({
  prompt: `Validate credit card: ${creditCardNumber}`,
});
```

## ✅ Correct Code
```js
// Redacted data
await generateText({
  prompt: `Reset password for user. Current password: [REDACTED]`,
});

// No sensitive data
await generateText({
  prompt: `Configure service with the API key stored in environment variables.`,
});

// Use redaction helper
await streamText({
  prompt: `Process application for SSN: ${redact(socialSecurityNumber)}`,
});

// Reference instead of value
await generateText({
  prompt: `Validate the credit card on file for user ${userId}`,
});
```
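The `redact()` helper used above is not provided by this rule or by the LLM SDK; it only stands for "strip the value before it reaches the prompt." A minimal sketch of such a helper, under that assumption:

```js
// Hypothetical redact() helper - not part of this plugin or any SDK.
// It discards the raw value entirely, so the prompt only conveys that a
// value exists, never what it is.
function redact(_value) {
  return '[REDACTED]';
}
```

In practice you might preserve a non-identifying hint (such as a label or the value's length), but the raw value should never reach the prompt string.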
## ⚙️ Options
| Option | Type | Default | Description |
|---|---|---|---|
| `sensitivePatterns` | `string[]` | `['password', 'secret', 'token', 'apiKey', 'api_key', 'credential', 'ssn', 'socialSecurity', 'creditCard', 'cvv', 'privateKey']` | Variable name patterns suggesting sensitive data |
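As a configuration sketch (the `ai-security/` prefix below is a placeholder for whatever namespace the plugin registers under, and whether custom values replace or merge with the defaults depends on the rule's implementation):

```js
// eslint.config.js - illustrative flat-config entry; "ai-security" is a
// placeholder namespace, not necessarily the plugin's real name.
export default [
  {
    rules: {
      'ai-security/no-sensitive-in-prompt': [
        'error',
        {
          // If custom values replace rather than extend the defaults,
          // repeat any built-in patterns you still want checked.
          sensitivePatterns: ['password', 'secret', 'token', 'apiKey', 'mySecretField'],
        },
      ],
    },
  },
];
```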
## 🛡️ Why This Matters
Sending sensitive data to LLMs can result in:
- Data breach - LLM providers may store prompts
- Training data poisoning - Your data may be used to train models
- Compliance violations - GDPR, HIPAA, PCI-DSS violations
- Third-party exposure - Data shared with third-party AI providers
## 🔗 Related Rules
- `no-hardcoded-api-keys` - Prevent hardcoded credentials
- `require-output-filtering` - Filter sensitive tool output
## Known False Negatives
The following patterns are not detected due to static analysis limitations:
### Variable Names Not Matching Patterns
Why: Only variable names that match the configured sensitive patterns are checked.
```js
// ❌ NOT DETECTED - Custom field name
await generateText({
  prompt: `Process data: ${mySecretField}`, // Not in sensitivePatterns
});
```

Mitigation: Configure `sensitivePatterns` with custom field names (see the configuration sketch in the Options section above).
### Dynamic Prompt Construction
Why: Prompts built at runtime are not analyzed.
```js
// ❌ NOT DETECTED - Dynamic prompt
const fields = [userPassword, apiKey];
const prompt = fields.join(', ');
await generateText({ prompt });
```

Mitigation: Never concatenate sensitive data into prompts.
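If a prompt must be assembled at runtime, one approach (a sketch using hypothetical non-sensitive variables `serviceName` and `environmentLabel`) is to build it only from references and keep the secret values out of the pieces entirely:

```js
// Illustrative only: build dynamic prompts from non-sensitive references
// (IDs, labels) and leave the secret values out of the array entirely.
const fields = [serviceName, environmentLabel]; // non-sensitive metadata
await generateText({
  prompt: `Configure ${fields.join(' / ')} using the credentials stored in the secret manager.`,
});
```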
### Nested Object Properties
Why: Deep property access may not be recognized.
```js
// ❌ NOT DETECTED - Nested property
await generateText({
  prompt: `Reset with: ${user.credentials.password}`,
});
```

Mitigation: Configure patterns to match nested sensitive fields.
### Encrypted/Transformed Data
Why: Transformed data appears safe but may be sensitive.
```js
// ❌ NOT DETECTED - Encrypted but still sensitive
await generateText({
  prompt: `Decrypt this: ${encryptedPassword}`,
});
```

Mitigation: Never send any form of credentials to LLMs.