dosubot[bot]

14,136 comments by dosubot[bot]

Agenta doesn't support Gemini- or OpenAI-style custom tool formats like `{ "google_search": {} }`. The platform enforces strict JSON Schema validation for tool and parameter definitions, so any non-OpenAPI-compliant or...

Agenta doesn't support Gemini/LiteLLM tool formats like `{"googleSearch": {}}` or `{"google_search": {}}`. It enforces strict JSON Schema/OpenAPI validation for all tool definitions, so any non-standard format will trigger an invalid...
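For reference, a minimal sketch of the kind of OpenAI-style function tool definition that strict JSON Schema validation generally accepts; the tool name, description, and parameters here are illustrative, not anything defined by Agenta:

```python
# Illustrative only: an OpenAI-style function tool with a full JSON Schema for its
# parameters, unlike the bare Gemini-style {"google_search": {}} entry.
web_search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",  # hypothetical tool name
        "description": "Search the web and return the top results.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query text."},
                "max_results": {"type": "integer", "minimum": 1, "default": 5},
            },
            "required": ["query"],
        },
    },
}
```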

Frequent parsing failures during batch uploads in ragflow—especially when individual document parsing works—are often caused by how temporary files are handled in concurrent batch operations. When multiple files are parsed...
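As a hedged illustration of that failure mode, here is a sketch of giving each concurrent parse task its own uniquely named temporary file so workers cannot overwrite or delete each other's files mid-parse; `parse_document` is a placeholder, not ragflow's actual API:

```python
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def parse_one(doc_bytes: bytes) -> str:
    # Each task writes to its own NamedTemporaryFile, so concurrent workers
    # never collide on a shared temp path.
    with tempfile.NamedTemporaryFile(suffix=".pdf") as tmp:
        tmp.write(doc_bytes)
        tmp.flush()
        return parse_document(Path(tmp.name))  # placeholder for the real parser call

def parse_batch(docs: list[bytes]) -> list[str]:
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(parse_one, docs))
```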

There aren't any official recommendations for MAX_CONCURRENT_TASKS or MAX_CONCURRENT_CHUNK_BUILDERS for OSS and ES in the codebase or docs. Based on best practices and the defaults, try lowering MAX_CONCURRENT_TASKS from 5...
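A minimal sketch of conservative settings, assuming these values are read from environment variables as the names above suggest; the fallback numbers are suggestions, not documented defaults:

```python
import os

# Lower concurrency to reduce pressure on OSS/ES; values here are suggestions only.
MAX_CONCURRENT_TASKS = int(os.getenv("MAX_CONCURRENT_TASKS", "2"))  # lowered from the default of 5
MAX_CONCURRENT_CHUNK_BUILDERS = int(os.getenv("MAX_CONCURRENT_CHUNK_BUILDERS", "1"))  # assumed fallback
```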

Based on the code and config, the most likely cause of DNS resource exhaustion is at the server level, not OSS or ES themselves. Ragflow’s OSS and ES connections use...
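As a hedged sketch of the connection-reuse pattern that avoids a DNS lookup per request, a single long-lived client can be shared instead of creating one per task; the hostname is a placeholder:

```python
from functools import lru_cache
from elasticsearch import Elasticsearch

@lru_cache(maxsize=1)
def get_es_client() -> Elasticsearch:
    # One shared client with persistent connections; constructing a new
    # Elasticsearch() per task would trigger fresh DNS lookups and TCP handshakes.
    return Elasticsearch("http://es01:9200", request_timeout=30)
```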

The document resolution service in ragflow is the part that locates, loads, and prepares documents for parsing and chunking—this is handled by connector interfaces like `LoadConnector`, `PollConnector`, and `CheckpointedConnector` in...
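A rough, illustrative sketch of what such connector interfaces typically look like; the method names, signatures, and document type here are assumptions for illustration, not the actual definitions:

```python
from abc import ABC, abstractmethod
from typing import Iterator

class Document(dict):
    """Placeholder document type for illustration."""

class LoadConnector(ABC):
    @abstractmethod
    def load_documents(self) -> Iterator[list[Document]]:
        """Yield batches of documents from the source's current state."""

class PollConnector(ABC):
    @abstractmethod
    def poll_source(self, start: float, end: float) -> Iterator[list[Document]]:
        """Yield batches of documents updated within the [start, end] window."""
```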

Yes, high Elasticsearch (ES) CPU usage during batch parsing can absolutely cause parsing failures in ragflow. When ES hits 100% CPU, it often leads to slow responses, timeouts, or connection...

This 503 error with `no_shard_available_action_exception` and `all shards failed` is not caused by a bug in your ragflow code—it's an Elasticsearch cluster health issue. It means ES can't find any...
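A small sketch of checking cluster and shard state with the Python Elasticsearch client, which is usually the first diagnostic step for this error; the host is a placeholder:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://es01:9200")  # placeholder host

# "red" status means at least one primary shard is unassigned, which is exactly
# when no_shard_available_action_exception / "all shards failed" shows up.
health = es.cluster.health()
print(health["status"], health["unassigned_shards"])

# Per-shard view: look for shards stuck in the UNASSIGNED state.
print(es.cat.shards(v=True))
```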

Lowering MAX_CONCURRENT_TASKS to 2 helps reduce resource contention, but persistent parsing failures in batch uploads usually point to deeper issues—most often backend resource exhaustion, unhealthy Elasticsearch (ES) or MinIO, or...

Deepdoc's high memory usage comes from loading all PDF pages as images (at high resolution) and keeping them, plus all intermediate data (like OCR results and layout info), in memory...
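As a hedged sketch of the usual mitigation, pages can be rendered and handed to OCR one at a time at a moderate DPI instead of materializing every page image up front; `fitz`/PyMuPDF here is illustrative, not deepdoc's actual code path:

```python
import fitz  # PyMuPDF

def iter_page_images(pdf_path: str, dpi: int = 150):
    # Render one page at a time so only a single page image is resident in memory;
    # lowering dpi from e.g. 300 to 150 cuts each image's footprint by roughly 4x.
    doc = fitz.open(pdf_path)
    try:
        for page in doc:
            pix = page.get_pixmap(dpi=dpi)
            yield pix.tobytes("png")  # hand off to OCR, then let it be garbage-collected
    finally:
        doc.close()
```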