Interlace ESLint
ESLint Interlace
Secure CodingRules

require-prompt-template-parameterization

Enforce structured prompt templates instead of string interpolation for LLM APIs.

Enforce structured prompt templates instead of string interpolation for LLM APIs.

OWASP LLM Top 10 2025: LLM01 - Prompt Injection
CWE: CWE-20
Severity: 🔴 Critical

Error Message Format

The rule provides LLM-optimized error messages (Compact 2-line format) with actionable security guidance:

🔒 CWE-20 OWASP:A06 CVSS:7.5 | Improper Input Validation detected | HIGH [SOC2,PCI-DSS,HIPAA,GDPR,ISO27001]
   Fix: Review and apply the recommended fix | https://owasp.org/Top10/A06_2021/

Message Components

ComponentPurposeExample
Risk StandardsSecurity benchmarksCWE-20 OWASP:A06 CVSS:7.5
Issue DescriptionSpecific vulnerabilityImproper Input Validation detected
Severity & ComplianceImpact assessmentHIGH [SOC2,PCI-DSS,HIPAA,GDPR,ISO27001]
Fix InstructionActionable remediationFollow the remediation steps below
Technical TruthOfficial referenceOWASP Top 10

Rule Details

This rule enforces the use of structured message arrays or template engines instead of string interpolation when calling LLM APIs. Structured formats provide better separation between instructions and user data, reducing prompt injection risks.

❌ Incorrect

// Template literal passed directly
await llm.complete(`Summarize: ${userInput}`);

// String concatenation
await llm.chat('Analyze: ' + userContent);

// Template in OpenAI call
await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: `Question: ${q}` }],
});

✅ Correct

// Structured messages array
await llm.complete({
  messages: [
    { role: 'system', content: 'You are a helpful assistant' },
    { role: 'user', content: userInput },
  ],
});

// Messages as first argument
const messages = [{ role: 'user', content: userQuery }];
await llm.chat(messages);

// LangChain PromptTemplate
const template = new PromptTemplate({
  template: 'Summarize: {input}',
  inputVariables: ['input'],
});
await llm.complete(template.format({ input: userInput }));

// ChatPromptTemplate
const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are helpful'],
  ['user', '{input}'],
]);
await llm.chat(await prompt.format({ input: userInput }));

// Static string (no variables)
await llm.complete('What is 2+2?');

Options

{
  "secure-coding/require-prompt-template-parameterization": [
    "error",
    {
      "llmApiPatterns": ["customLLM.*"],
      "allowedTemplateEngines": ["MyPromptTemplate"]
    }
  ]
}

llmApiPatterns

Array of additional LLM API patterns to check. Default patterns:

  • llm.complete, llm.chat
  • openai.chat, openai.complete
  • anthropic.complete
  • chatCompletion, textCompletion

allowedTemplateEngines

Array of template engine names to allow. Default:

  • PromptTemplate
  • ChatPromptTemplate
  • promptTemplate

Why This Matters

Structured formats provide:

  1. Clear role separation - System vs user messages
  2. Type safety - Structured data over strings
  3. Better parsing - Easier to validate and sanitize
  4. LLM optimization - Better instruction following

Examples

OpenAI SDK (Correct Usage)

const completion = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: userQuestion },
  ],
});

Anthropic SDK (Correct Usage)

const message = await anthropic.messages.create({
  model: 'claude-3-opus-20240229',
  messages: [{ role: 'user', content: userPrompt }],
});

When Not To Use It

  • If your codebase doesn't use LLM APIs
  • If you have custom prompt safety mechanisms

Known False Negatives

The following patterns are not detected due to static analysis limitations:

Prompt from Variable

Why: Prompt content from variables not traced.

// ❌ NOT DETECTED - Prompt from variable
const prompt = buildPrompt(userInput);
await generateText({ prompt });

Mitigation: Validate all prompt components.

Nested Context

Why: Deep nesting obscures injection.

// ❌ NOT DETECTED - Nested
const messages = [{ role: 'user', content: userInput }];
await chat({ messages });

Mitigation: Validate at all levels.

Custom AI Wrappers

Why: Custom AI clients not recognized.

// ❌ NOT DETECTED - Custom wrapper
myAI.complete(userPrompt);

Mitigation: Apply rule to wrapper implementations.

Further Reading

Compatibility

  • ✅ ESLint 8.x
  • ✅ ESLint 9.x
  • ✅ TypeScript
  • ✅ JavaScript (ES6+)

Version

This rule was introduced in eslint-plugin-secure-coding v2.3.0 (OWASP LLM 2025 support).

On this page