
[Feature Request] Make file read token limit dynamic based on model capabilities

Open salviz opened this issue 6 days ago • 2 comments

Summary

The current hardcoded 25,000 token limit for file reading via the Read tool doesn't scale with different Claude models and subscription tiers. Users with 1M context models are artificially limited to reading small file chunks.

Current Behavior

  • Read tool limit: 25,000 tokens (hardcoded)
  • Sonnet 4.5: Supports 64,000+ output tokens
  • 1M context models: Support 200,000+ tokens
  • User workaround: Must use MAX_MCP_OUTPUT_TOKENS environment variable

Requested Change

Make the file read token limit dynamic based on:

  1. The active Claude model being used
  2. The user's subscription tier (API limits)
  3. Available context window

Suggested Limits

Model/Tier           Suggested Read Limit
Standard models      25,000 tokens (current)
Sonnet 4.5           64,000 tokens
1M context models    250,000 tokens (25% of context)

Rationale

1. Underutilized Context

Users with 1M context windows should be able to read larger files without artificial constraints. Currently, having to split a 50k token file into chunks defeats the purpose of a massive context window.

2. Model Capabilities

Different Claude models have vastly different token limits. The Read tool should automatically detect which model is active and adjust accordingly.

3. User Experience

The current workaround (export MAX_MCP_OUTPUT_TOKENS=250000) works, but:

  • It requires manual configuration
  • It is not discoverable for most users
  • It addresses a setting that should be automatic, based on model detection

Use Cases

  • Log analysis: Large application logs (50k-200k tokens)
  • Code review: Big generated files or concatenated code
  • Document processing: Long transcripts, reports, documentation
  • Data analysis: Large JSON/CSV files

Current Workaround

Users can manually set:

export MAX_MCP_OUTPUT_TOKENS=250000

But this should be automatic and dynamic.

Proposed Implementation

// Sketch: choose a Read-tool token limit from the model ID.
// subscriptionTier is accepted for future tier-based caps but unused here.
function getMaxReadTokens(model, subscriptionTier) {
  if (model.includes('1m')) return 250000;         // 1M-context models: 25% of context
  if (model.includes('sonnet-4.5')) return 64000;
  if (model.includes('opus')) return 32000;
  return 25000; // default fallback (current hardcoded limit)
}
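A complementary approach is to derive the limit from the model's context window rather than hardcoding per-model numbers, matching the "25% of context" heuristic above. A minimal sketch; the model names, the CONTEXT_WINDOW_TOKENS table, and the 100k fallback are illustrative assumptions, not Claude Code internals:

```javascript
// Hypothetical context-window sizes per model family (illustrative values).
// Order matters: more specific keys must come before their prefixes.
const CONTEXT_WINDOW_TOKENS = {
  'sonnet-4.5-1m': 1000000,
  'sonnet-4.5': 200000,
  'opus': 200000,
};

// Derive the Read-tool limit as a fraction of the context window,
// never going below the current 25,000-token default.
function getMaxReadTokensByContext(model, fraction = 0.25) {
  const contextWindow = Object.entries(CONTEXT_WINDOW_TOKENS)
    .find(([key]) => model.includes(key))?.[1] ?? 100000; // assumed fallback window
  return Math.max(25000, Math.floor(contextWindow * fraction));
}
```

With this shape, a 1M-context model gets a 250,000-token read limit and unknown models fall back to today's 25,000, so the change is backward compatible.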

Related Issues

  • #4002 (closed) - Original request, 63+ upvotes
  • #7679 (closed) - Request for 50k limit
  • #6910 - Read tool doesn't respect line limits

My Configuration

  • Model: Sonnet 4.5 (1M context)
  • Set limit to: 250,000 tokens (1/4 of context window)
  • Reason: Can now read substantial files without splitting

Please consider making this automatic so all users can benefit from their model's full capabilities without manual environment variable configuration.

— salviz, Dec 20 '25 15:12

Found 1 possible duplicate issue:

  1. https://github.com/anthropics/claude-code/issues/4002

This issue will be automatically closed as a duplicate in 3 days.

  • If your issue is a duplicate, please close it and 👍 the existing issue instead
  • To prevent auto-closure, add a comment or 👎 this comment

🤖 Generated with Claude Code

— github-actions[bot], Dec 20 '25 15:12

This is not a duplicate of #4002. Here's why:

#4002 (closed & locked)

  • Requested: Fixed higher limit (e.g., 50,000 tokens)
  • Solution provided: Manual MAX_MCP_OUTPUT_TOKENS environment variable
  • Status: Closed as "resolved" but still requires manual configuration

This Issue (#14888)

  • Requesting: Dynamic limits based on model capabilities
  • Solution requested: Automatic detection - no manual configuration needed
  • Key difference: Model-aware scaling (25k for standard, 250k for 1M context models)

Why This Matters

#4002's "resolution" requires users to manually discover and set an environment variable. Most users don't know this workaround exists.

This issue requests that Claude Code automatically detect which model the user is running and set appropriate limits without any manual configuration.

Feature Comparison

Aspect                 #4002 Solution    This Request
Configuration          Manual env var    Automatic detection
User action required   Yes               No
Model-aware            No                Yes
Discoverability        Poor              Built-in

This is an enhancement on top of the existing workaround, not a duplicate request for the same functionality.

— salviz, Dec 20 '25 16:12