# require-max-tokens
Ensures all AI calls have token limits to prevent resource exhaustion.
## 📊 Rule Details
| Property | Value |
|---|---|
| Type | suggestion |
| Severity | 🟡 HIGH |
| OWASP LLM | LLM10: Unbounded Consumption |
| CWE | CWE-770: Allocation of Resources Without Limits |
| CVSS | 6.5 |
| Config Default | warn (recommended), error (strict) |
## 🔍 What This Rule Detects

This rule identifies AI SDK calls that don't specify a `maxTokens` limit. Without limits, AI responses can consume excessive tokens, leading to high costs and potential denial of service.
## ❌ Incorrect Code

```typescript
// No token limit
await generateText({
  model: openai('gpt-4'),
  prompt: 'Write a story',
});

// Missing maxTokens in stream
await streamText({
  model: anthropic('claude-3'),
  prompt: 'Explain quantum physics',
});
```

## ✅ Correct Code
```typescript
// With token limit
await generateText({
  model: openai('gpt-4'),
  prompt: 'Write a story',
  maxTokens: 4096,
});

// Streaming with limit
await streamText({
  model: anthropic('claude-3'),
  prompt: 'Explain quantum physics',
  maxTokens: 2048,
});
```

## ⚙️ Options
| Option | Type | Default | Description |
|---|---|---|---|
| `allowedFunctions` | `string[]` | `[]` | Functions that don't require `maxTokens` |
| `maxRecommended` | `number` | `undefined` | Warn if `maxTokens` exceeds this value |
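As a sketch, the options above could be set in an ESLint config like the following. The plugin prefix `ai-security/` is an assumption for illustration; use the actual plugin name from your installation.

```javascript
// eslint.config.js fragment - plugin prefix 'ai-security/' is hypothetical
module.exports = {
  rules: {
    'ai-security/require-max-tokens': ['warn', {
      allowedFunctions: ['embed'], // calls exempt from the maxTokens check
      maxRecommended: 4096,        // also warn when maxTokens exceeds this value
    }],
  },
};
```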
## 🛡️ Why This Matters
Unbounded token consumption can cause:
- **Cost explosion** - Each generated token costs money
- **Denial of service** - API rate limits can be exhausted
- **Slow responses** - Long generations degrade UX
- **Resource starvation** - Other requests may be blocked
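To make the cost risk concrete, here is a rough worst-case estimate. The function and the prices in the example are illustrative, not part of the SDK:

```typescript
// Worst-case spend when output length is unbounded: assume every request
// generates up to the model's full output limit.
function worstCaseCostUSD(
  requests: number,
  maxOutputTokens: number,
  pricePerMillionTokens: number,
): number {
  return (requests * maxOutputTokens * pricePerMillionTokens) / 1_000_000;
}

// 10,000 requests that each run to a 4,096-token output at $30 per million tokens:
console.log(worstCaseCostUSD(10_000, 4_096, 30)); // 1228.8 (USD)
```

Setting `maxTokens` caps the `maxOutputTokens` factor, turning an open-ended bill into a bounded one.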
## 🔗 Related Rules

- `require-max-steps` - Limit multi-step tool calling
- `require-abort-signal` - Enable cancellation
## Known False Negatives
The following patterns are not detected due to static analysis limitations:
### Options from Variable

**Why:** Options stored in variables are not analyzed.
```typescript
// ❌ NOT DETECTED - Options from variable
const options = { model: openai('gpt-4'), prompt: 'Hello' }; // Missing maxTokens
await generateText(options);
```

**Mitigation:** Use inline options and always specify `maxTokens` explicitly.
### Spread Configuration

**Why:** A spread may hide that `maxTokens` is missing.
```typescript
// ❌ NOT DETECTED - maxTokens may not be in base
const base = getModelConfig();
await generateText({ ...base, prompt: 'Hello' }); // maxTokens?
```

**Mitigation:** Always set `maxTokens` explicitly. Don't rely on spread configs.
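One way to follow that mitigation is to place `maxTokens` after the spread, since later properties win in object spreads. A minimal sketch, where the contents of `base` are assumed to be unknown at the call site:

```typescript
// Later properties override earlier ones in an object spread, so an explicit
// maxTokens after the spread wins over anything (or nothing) base provides.
const base: { maxTokens?: number } = { maxTokens: 999_999 }; // untrusted config
const options = { ...base, prompt: 'Hello', maxTokens: 2048 };
console.log(options.maxTokens); // 2048
```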
### Wrapper Functions

**Why:** Custom wrapper functions are not recognized.
```typescript
// ❌ NOT DETECTED - Wrapper hides missing maxTokens
const result = await myGenerateText('Hello'); // Wrapper may not set limit
```

**Mitigation:** Apply this rule to the wrapper implementations themselves.
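If you do use wrappers, one defensive pattern is to have the wrapper enforce a default limit. A sketch, where `GenerateOptions` and `withTokenLimit` are hypothetical helpers, not SDK exports:

```typescript
// Hypothetical option shape mirroring the AI SDK call sites above.
type GenerateOptions = { model: unknown; prompt: string; maxTokens?: number };

// Guarantees a token limit: the default is applied first, so an explicit
// caller-provided maxTokens still takes precedence via the spread.
function withTokenLimit(options: GenerateOptions, defaultMax = 1024): GenerateOptions {
  return { maxTokens: defaultMax, ...options };
}
```

A wrapper would then pass `withTokenLimit(options)` to `generateText`, so every call is bounded even when the caller forgets the limit.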
### Model Default Limits

**Why:** Model-specific defaults are not considered.
```typescript
// ⚠️ MAY FLAG - Model has reasonable default
await generateText({
  model: openai('gpt-4-turbo'), // Has 4096 default
  prompt: 'Hello',
});
```

**Mitigation:** Explicitly set `maxTokens` for clarity.