# DeepCode
Bug Fixes and Workflow Improvements
## [1.0.6-jm] - 2025-10-19
### Added
- Dynamic Model Limit Detection: New `utils/model_limits.py` module that automatically detects and adapts to any LLM model's token limits and pricing
- Loop Detection System: `utils/loop_detector.py` prevents infinite loops by detecting repeated tool calls, timeouts, and progress stalls
- Progress Tracking: 8-phase progress tracking (5% → 100%) with file-level progress indicators in both UI and terminal
- Abort Mechanism: "Stop Processing" button in UI with global abort flag for clean process termination
- Enhanced Error Display: Real-time error messages in both UI and terminal with timestamps
- File Progress Tracking: Shows files completed/total with estimated time remaining
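The dynamic model limit detection above can be pictured as a lookup table plus a clamp. This is a hypothetical sketch, not the actual contents of `utils/model_limits.py`; the 16,384 completion-token figure for gpt-4o-mini matches the limit cited in the Fixed section, while the function and table names are illustrative.

```python
# Hypothetical sketch of a per-model limits table and a clamping helper.
# Structure and names are assumptions; only the gpt-4o-mini figure is
# taken from this changelog.
MODEL_LIMITS = {
    # model name: (max completion tokens, context window)
    "gpt-4o-mini": (16_384, 128_000),
}

def clamp_max_tokens(model: str, requested: int, default: int = 4_096) -> int:
    """Clamp a requested max_tokens value to the model's completion limit,
    falling back to a conservative default for unknown models."""
    limit, _context = MODEL_LIMITS.get(model, (default, default))
    return min(requested, limit)
```

A lookup with a conservative fallback keeps requests valid even when a newly released model is not yet in the table.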
### Fixed
- Critical: False Error Detection: Fixed overly aggressive error detection that was marking successful operations as failures, causing premature abort and empty file generation
- Critical: Empty File Generation: Files now contain actual code instead of being empty (2-byte files)
- Unique Folder Naming: Each project run now creates `paper_{timestamp}` folders instead of reusing `pdf_output`
- PDF Save Location: PDFs now save to `deepcode_lab/papers/` instead of the system temp directory
- Duplicate Folder Prevention: Added session state caching to prevent duplicate folder creation on UI reruns
- Token Limit Compliance: Fixed `max_tokens` to respect model limits dynamically (e.g., gpt-4o-mini's 16,384-token limit)
- Empty Plan Detection: System now fails early with clear error messages when the initial plan is empty or invalid
- Process Hanging: Fixed infinite loops and hanging on errors - process now exits cleanly
- PDF to Markdown Conversion: Fixed automatic conversion and file location handling
- Document Segmentation: Properly uses the configured character threshold from `mcp_agent.config.yaml`
- Error Propagation: Abort mechanism now properly stops the process after 10 consecutive real errors
### Changed
- Model-Aware Token Management: Token limits now adapt automatically based on configured model instead of hardcoded values
- Cost Calculation: Dynamic pricing based on actual model rates (OpenAI, Anthropic)
- Retry Logic: Token limits for retries now respect model maximum (87.5% → 95% → 98% of max)
- Segmentation Workflow: Better integration with code implementation phase
- Error Handling: Enhanced error propagation - errors no longer reported as "success"
- UI Display: Shows project folder name after PDF conversion for better visibility
- Terminal Logging: Added timestamps to all progress messages
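The retry escalation described above (87.5% → 95% → 98% of the model maximum) can be expressed as a small helper. The function name and the 0-based attempt index are assumptions for illustration; the fractions are the ones stated in this changelog.

```python
# Escalating fractions of the model's max token limit, per the changelog.
RETRY_FRACTIONS = (0.875, 0.95, 0.98)

def retry_token_limit(model_max: int, attempt: int) -> int:
    """Token budget for retry `attempt` (0-based), never exceeding model_max.
    Attempts beyond the last fraction stay pinned at the highest fraction."""
    frac = RETRY_FRACTIONS[min(attempt, len(RETRY_FRACTIONS) - 1)]
    return int(model_max * frac)
```

Keeping every retry strictly below the model maximum leaves headroom so a retry cannot itself trigger another token-limit error.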
### Technical Improvements
- Added document-segmentation server to code implementation workflow for better token management
- Improved error handling in agent orchestration engine with proper cleanup
- Enhanced subprocess handling on Windows (hide console windows, prevent hanging)
- Better LibreOffice detection on Windows using direct path checking
- Fixed input data format consistency (JSON with `paper_path` key)
- Added comprehensive logging throughout the pipeline
- Improved resource cleanup on errors and process termination
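One standard way to get the Windows subprocess behavior described above (no console window, no hang) is the `subprocess.CREATE_NO_WINDOW` creation flag combined with a timeout. `run_hidden` is a hypothetical helper, not the project's actual implementation.

```python
import subprocess
import sys

def run_hidden(cmd: list[str], timeout: float = 60.0) -> subprocess.CompletedProcess:
    """Run a subprocess without popping up a console window on Windows,
    with a timeout so a hung child cannot stall the pipeline."""
    # CREATE_NO_WINDOW exists only on Windows; pass 0 elsewhere.
    flags = subprocess.CREATE_NO_WINDOW if sys.platform == "win32" else 0
    return subprocess.run(cmd, capture_output=True, text=True,
                          timeout=timeout, creationflags=flags)
```

The timeout turns a hung external tool (such as a converter process) into a catchable `subprocess.TimeoutExpired` instead of an indefinite stall.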
### Documentation
- Translated Chinese comments to English in core workflow files
- Added inline documentation for new utility modules
### Breaking Changes
- None - all changes are backward compatible
### Known Issues
- Terminal may show trailing "Calling Tool..." line after completion (cosmetic display artifact - process completes successfully)
- Optional-dependency warning from the `tiktoken` package (doesn't affect functionality)
### Success Metrics
- ✅ Complete end-to-end workflow: DOCX upload → PDF conversion → Markdown → Segmentation → Planning → Code generation
- ✅ Files generated with actual code content (15+ files with proper implementation)
- ✅ Single folder per project run (no duplicates)
- ✅ Dynamic token management working across different models
- ✅ Accurate cost tracking per model
- ✅ Clean process termination with proper error handling
### Checklist
- [x] Changes tested locally
- [x] Code reviewed
- [x] Documentation updated