Interlace ESLint

require-max-tokens

Ensures all AI calls have token limits to prevent resource exhaustion.

📊 Rule Details

| Property | Value |
| --- | --- |
| Type | `suggestion` |
| Severity | 🟡 HIGH |
| OWASP LLM | LLM10: Unbounded Consumption |
| CWE | CWE-770: Allocation of Resources Without Limits |
| CVSS | 6.5 |
| Config Default | `warn` (recommended), `error` (strict) |

🔍 What This Rule Detects

This rule identifies AI SDK calls that don't specify a maxTokens limit. Without limits, AI responses can consume excessive tokens, leading to high costs and potential denial of service.

❌ Incorrect Code

// No token limit
await generateText({
  model: openai('gpt-4'),
  prompt: 'Write a story',
});

// Missing maxTokens in stream
await streamText({
  model: anthropic('claude-3'),
  prompt: 'Explain quantum physics',
});

✅ Correct Code

// With token limit
await generateText({
  model: openai('gpt-4'),
  prompt: 'Write a story',
  maxTokens: 4096,
});

// Streaming with limit
await streamText({
  model: anthropic('claude-3'),
  prompt: 'Explain quantum physics',
  maxTokens: 2048,
});

⚙️ Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `allowedFunctions` | `string[]` | `[]` | Functions that don't require `maxTokens` |
| `maxRecommended` | `number` | `undefined` | Warn if `maxTokens` exceeds this value |
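These options are set where the rule is enabled. A sketch of a flat-config entry — the plugin import name and the `interlace` prefix are assumptions; match them to your installed package:

```javascript
// eslint.config.js — sketch only; adjust the plugin name to your setup.
import interlace from 'eslint-plugin-interlace';

export default [
  {
    plugins: { interlace },
    rules: {
      'interlace/require-max-tokens': ['error', {
        allowedFunctions: ['embed'], // example: exempt calls with no text output
        maxRecommended: 8192,        // warn when a limit is set above this value
      }],
    },
  },
];
```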

🛡️ Why This Matters

Unbounded token consumption can cause:

  • Cost explosion - Each token costs money
  • Denial of service - API rate limits exhausted
  • Slow responses - Long generations impact UX
  • Resource starvation - Other requests may be blocked
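To make the cost point concrete, here is a rough worst-case estimate. The price and model ceiling below are illustrative assumptions, not real rates:

```javascript
// Rough worst-case daily output cost. All numbers are illustrative.
const OUTPUT_PRICE_PER_1K = 0.06; // assumed $ per 1K output tokens
const MODEL_CEILING = 8192;       // assumed max output when no limit is set

function worstCaseDailyCost(maxTokens, requestsPerDay) {
  // With no maxTokens, every response can run to the model's ceiling.
  const tokens = maxTokens ?? MODEL_CEILING;
  return (tokens / 1000) * OUTPUT_PRICE_PER_1K * requestsPerDay;
}

console.log(worstCaseDailyCost(undefined, 10_000)); // unbounded responses
console.log(worstCaseDailyCost(512, 10_000));       // capped at 512 tokens
```

Under these assumptions, the unbounded case costs sixteen times more per day than the 512-token cap.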

Known False Negatives

The following patterns are not detected due to static analysis limitations:

Options from Variable

Why: Options stored in variables are not analyzed.

// ❌ NOT DETECTED - Options from variable
const options = { model: openai('gpt-4'), prompt: 'Hello' }; // Missing maxTokens
await generateText(options);

Mitigation: Use inline options. Always specify maxTokens explicitly.

Spread Configuration

Why: Spread may hide that maxTokens is missing.

// ❌ NOT DETECTED - maxTokens may not be in base
const base = getModelConfig();
await generateText({ ...base, prompt: 'Hello' }); // maxTokens?

Mitigation: Always set maxTokens explicitly. Don't rely on spread configs.
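One way to follow this mitigation: write `maxTokens` after the spread, so the base config can never drop or override it. The `base` object here is illustrative:

```javascript
// Sketch: an explicit maxTokens placed after the spread always wins,
// whether or not the base config sets one. `base` is illustrative.
const base = { temperature: 0.7 }; // may or may not include maxTokens

const options = { ...base, prompt: 'Hello', maxTokens: 1024 };
console.log(options.maxTokens); // 1024, regardless of what base contained
```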

Wrapper Functions

Why: Custom wrapper functions are not recognized.

// ❌ NOT DETECTED - Wrapper hides missing maxTokens
const result = await myGenerateText('Hello'); // Wrapper may not set limit

Mitigation: Apply rule to wrapper implementations.
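Until the rule can see inside wrappers, the wrapper itself can guarantee a limit. A minimal sketch — `fakeGenerate` is a stub standing in for the AI SDK's `generateText`, and `withTokenLimit` is a hypothetical helper:

```javascript
// Sketch: bake a default maxTokens into the wrapper so every call site
// is bounded even when the caller forgets to pass one.
function withTokenLimit(generate, defaultMax = 1024) {
  // Spread after the default so callers can still choose their own limit.
  return (options) => generate({ maxTokens: defaultMax, ...options });
}

// Stub for demonstration; records what the wrapped function receives.
const calls = [];
const fakeGenerate = (options) => { calls.push(options); };

const safeGenerate = withTokenLimit(fakeGenerate);
safeGenerate({ prompt: 'Hello' });              // falls back to the default: 1024
safeGenerate({ prompt: 'Hi', maxTokens: 256 }); // caller's own limit: 256
```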

Model Default Limits

Why: Model-specific defaults are not considered.

// ⚠️ MAY FLAG - Model has reasonable default
await generateText({
  model: openai('gpt-4-turbo'), // Has 4096 default
  prompt: 'Hello',
});

Mitigation: Explicitly set maxTokens for clarity.
