
require-llm-token-budget

Require token usage caps per request/user.

OWASP LLM Top 10 2025: LLM10 - Unbounded Consumption
CWE: CWE-770
Severity: 🔴 High

Rule Details

Enforces token budget limits on LLM API calls to prevent runaway costs and resource exhaustion. A call is reported when it neither sets an explicit token cap (such as maxTokens) nor follows a budget check.

❌ Incorrect

await llm.complete(prompt);
await openai.chat.completions.create({ messages });

✅ Correct

await checkTokenBudget(userId, estimatedTokens);
await llm.complete(prompt);

await llm.complete(prompt, { maxTokens: 1000 });

const tokenLimit = getTokenLimit(user);
if (estimated < tokenLimit) {
  await llm.chat(messages);
}
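The checkTokenBudget helper shown in the correct examples is not part of the plugin. A minimal sketch of what such a helper might look like, assuming an in-memory usage map and a rough chars/4 token heuristic (all names and caps are illustrative):

```javascript
// Hypothetical sketch of a checkTokenBudget helper like the one used in
// the examples above; names, caps, and the heuristic are illustrative.
const PER_REQUEST_CAP = 4000;   // assumed per-request token cap
const usage = new Map();        // userId -> tokens reserved so far

// Rough estimate: ~4 characters per token for English text. Use a real
// tokenizer (e.g. tiktoken) in production for accurate counts.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function checkTokenBudget(userId, estimatedTokens, dailyCap = 100000) {
  if (estimatedTokens > PER_REQUEST_CAP) {
    throw new Error(`request exceeds per-request cap of ${PER_REQUEST_CAP} tokens`);
  }
  const used = usage.get(userId) ?? 0;
  if (used + estimatedTokens > dailyCap) {
    throw new Error(`user ${userId} would exceed daily cap of ${dailyCap} tokens`);
  }
  usage.set(userId, used + estimatedTokens);
}
```

A real implementation would persist usage in a shared store (Redis, a database) rather than process memory, so budgets survive restarts and apply across instances.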

Options

{
  "secure-coding/require-llm-token-budget": ["error"]
}

Best Practices

Implement per-user daily token budgets and set an explicit per-request cap on every call. Track actual token usage against your estimates so budget accounting stays accurate.
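The per-user daily budget could be sketched as a small tracker that reserves the estimate up front and reconciles it once the provider reports actual usage (all names here are illustrative, not plugin APIs):

```javascript
// Hedged sketch: a per-user daily budget tracker that reconciles actual
// usage (e.g. from a provider's usage.total_tokens field) against the
// pre-call estimate.
class DailyTokenBudget {
  constructor(dailyCap) {
    this.dailyCap = dailyCap;
    this.used = new Map(); // userId -> { day, tokens }
  }

  #today() {
    return new Date().toISOString().slice(0, 10);
  }

  remaining(userId) {
    const entry = this.used.get(userId);
    if (!entry || entry.day !== this.#today()) return this.dailyCap;
    return this.dailyCap - entry.tokens;
  }

  // Reserve the estimate before the call; throws if it would bust the cap.
  reserve(userId, estimatedTokens) {
    if (estimatedTokens > this.remaining(userId)) {
      throw new Error('daily token budget exceeded');
    }
    this.#add(userId, estimatedTokens);
  }

  // After the response arrives, replace the estimate with the actual count.
  reconcile(userId, estimatedTokens, actualTokens) {
    this.#add(userId, actualTokens - estimatedTokens);
  }

  #add(userId, tokens) {
    const day = this.#today();
    const entry = this.used.get(userId);
    const current = entry && entry.day === day ? entry.tokens : 0;
    this.used.set(userId, { day, tokens: Math.max(0, current + tokens) });
  }
}
```

Reserving before the call means concurrent requests cannot collectively overshoot the cap; reconciling afterwards keeps the day's total honest when estimates run high or low.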

Version

Introduced in v2.3.0

Known False Negatives

The following patterns are not detected due to static analysis limitations:

Prompt from Variable

Why: Prompt content assembled in variables is not traced across assignments, so the unbudgeted call is missed.

// ❌ NOT DETECTED - Prompt from variable
const prompt = buildPrompt(userInput);
await generateText({ prompt });

Mitigation: Estimate tokens for dynamically built prompts and run a budget check before the call.

Nested Context

Why: Deeply nested message structures obscure the call arguments from static analysis.

// ❌ NOT DETECTED - Nested
const messages = [{ role: 'user', content: userInput }];
await chat({ messages });

Mitigation: Apply budget checks regardless of how the message payload is structured.

Custom AI Wrappers

Why: Custom AI client wrappers are not recognized as LLM calls.

// ❌ NOT DETECTED - Custom wrapper
myAI.complete(userPrompt);

Mitigation: Apply the rule to wrapper implementations.
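One way to carry the rule's intent into wrapper implementations is to enforce the budget inside the wrapper itself, so every call site is covered even though the rule cannot see through custom clients. A hedged sketch, with createBudgetedClient and the injected helpers as assumed names:

```javascript
// Illustrative sketch: push the budget check into the wrapper so callers
// cannot bypass it. checkTokenBudget and estimateTokens are assumed
// helpers, not plugin APIs.
function createBudgetedClient(client, checkTokenBudget, estimateTokens) {
  return {
    async complete(userId, prompt, options = {}) {
      // Enforce the budget before the request leaves the wrapper.
      checkTokenBudget(userId, estimateTokens(prompt));
      // Default to an explicit cap; callers can still override it.
      return client.complete(prompt, { maxTokens: 1000, ...options });
    },
  };
}
```

With this shape, call sites like myAI.complete(userPrompt) stay unchanged while budgets are enforced in one place.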
