
IMPROVEMENT: Truncate MCP Call Logging for Large Input Payloads

Open guidedways opened this issue 6 months ago • 4 comments

Bug Description: Could MCP calls not show the entire input text? Sometimes this is quite a lot of text (e.g. the content of a file being passed to the server). It should clip the content the way it does for its own tools and show an ellipsis after a line or two.

Environment Info

  • Platform: macos
  • Terminal: iTerm.app
  • Version: 1.0.17
  • Feedback ID: 766197b1-fda7-4bf8-adc8-d5467b632a12

guidedways avatar Jun 09 '25 03:06 guidedways

Same feedback. My use case is having Claude Code write and read Linear tickets.

teren-papercutlabs avatar Jun 09 '25 11:06 teren-papercutlabs

If the text is JSON we should be clipping it after a few lines. Is this not what you're seeing? Or is the text not JSON?

ashwin-ant avatar Jun 09 '25 14:06 ashwin-ant

It's JSON. It's truncated a lot of the time but not every time. I'll paste you the next example I see.

teren-papercutlabs avatar Jun 10 '25 01:06 teren-papercutlabs

● gemini:gemini_review (MCP)(code: "# Intelligent FAQ Indexing ReAG Implementation\n\nAdvanced FAQ system using intelligent indexing and LLM-guided retrieval for scalable FAQ collections (1000+ pages)\n\n## Context (explain what the feature is meant to achieve for the user)\n\nWhen FAQ collections grow beyond ~100 pages, the simple ReAG approach (feeding all FAQs to LLM) becomes inefficient due to context window limits and costs. This feature implements "gleaning everything at ingestion" - extracting rich structural intelligence from FAQ documents upfront, then using this intelligence to guide LLM retrieval decisions.\n\nUsers benefit from:\n- Fast FAQ responses even with large FAQ collections (1000+ pages)\n- Intelligent navigation through complex FAQ hierarchies\n- Transparent reasoning showing which FAQ sections were considered\n- Ability to handle diverse FAQ document formats (Word, PDF, structured text)\n- Reduced LLM costs through smart context filtering\n\n## Description (an explanation of how the feature works)\n\nThe system works in two phases:\n\nPhase 1: Intelligence Extraction (at FAQ document upload)\n1. LLM analyzes entire FAQ document to extract structural intelligence\n2. Creates searchable index of categories, topics, relationships, and user journey patterns\n3. Stores rich metadata alongside raw FAQ content\n4. Generates question variations and semantic clusters\n\nPhase 2: Intelligent Retrieval (at query time)\n1. LLM consults the intelligent index to understand FAQ structure\n2. Uses index to decide which specific FAQ sections to fetch\n3. Retrieves only relevant FAQs based on reasoning\n4. Provides final answer with transparent reasoning path\n\nThis approach scales to thousands of FAQs while maintaining sub-second response times and preserving reasoning transparency.\n\n[... includes detailed TypeScript type definitions, implementation checklist, acceptance criteria, technical notes, and cost considerations ...]", description: "This is a technical specification for an intelligent FAQ indexing system that uses LLM-guided retrieval for large FAQ collections. It's designed to replace simple ReAG approaches when FAQ collections exceed 100+ pages.", focus: "architecture")

The text isn't JSON. Maybe we should truncate non-JSON too.

teren-papercutlabs avatar Jun 11 '25 02:06 teren-papercutlabs
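
For reference, a minimal sketch of the clipping behavior being requested here. The helper name, line limit, and character limit are all illustrative assumptions, not Claude Code's actual implementation; it pretty-prints JSON so it splits into lines before clipping, and falls back to clipping raw text (like the gemini_review payload above) as-is:

```typescript
// Hypothetical sketch: clip a tool input to a few lines and append an
// ellipsis, whether or not the input is JSON. Limits are arbitrary.
function truncateToolInput(raw: string, maxLines = 3, maxChars = 300): string {
  let text = raw;
  try {
    // Pretty-print JSON so it spans multiple lines before clipping.
    text = JSON.stringify(JSON.parse(raw), null, 2);
  } catch {
    // Not JSON; truncate the raw text directly.
  }
  const lines = text.split("\n");
  const firstLines = lines.slice(0, maxLines).join("\n");
  const clipped = firstLines.length > maxChars ? firstLines.slice(0, maxChars) : firstLines;
  const wasTruncated = lines.length > maxLines || firstLines.length > maxChars;
  return wasTruncated ? `${clipped}…` : clipped;
}

// A large single-line payload would render as its first ~300 characters plus "…".
console.log(truncateToolInput("# Intelligent FAQ Indexing ReAG Implementation\n".repeat(50)));
```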