Emerson Gomes
Try setting or increasing the `SYSTEM_RECURSION_LIMIT` environment variable.
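For example, something like the following before starting the service — the env var name comes from the comment above, but the value `5000` is just an illustrative guess, not a documented default:

```shell
# Raise the recursion limit via the env var mentioned above.
# The value 5000 is an assumption; tune it to your workload.
export SYSTEM_RECURSION_LIMIT=5000
echo "$SYSTEM_RECURSION_LIMIT"
```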
@regismesquita It's probably better to add a new entry for the new model matching Mistral's API naming: `mistral-medium-2505` and `mistral-medium-latest`, to avoid removing information about the previous model....
128k is the limit for input tokens, not output tokens.
Note that a new feature was recently introduced that lets you increase the number of indexing workers for parallel connector processing via the env var `NUM_INDEXING_WORKER`. It will not...
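As a sketch, you might set it like this before launching the indexing service — only the env var name is from the comment above; the value `4` and the echo are illustrative:

```shell
# Run connector indexing with more parallel workers.
# The value 4 is an assumption; pick one that suits your hardware.
export NUM_INDEXING_WORKER=4
echo "Indexing workers: $NUM_INDEXING_WORKER"
```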
Fixed by #2460
We've also been looking forward to using cloud-based services such as https://azure.microsoft.com/en-us/products/ai-services/ai-document-intelligence or https://aws.amazon.com/textract/
Closing as deprecated
AWS has started a preview of prompt caching for Claude: https://pages.awscloud.com/promptcaching-Preview.html Hoping Vertex comes next.
Vertex support is now live: https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/claude-prompt-caching#use_prompt_caching
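For reference, a minimal sketch of what a prompt-caching request body for Claude looks like: the `cache_control` marker on a content block follows Anthropic's prompt-caching format, while the model ID and context text here are illustrative assumptions, not values from the docs above.

```python
# Sketch of an Anthropic Messages request body with prompt caching.
# Only the `cache_control` marker is the documented caching mechanism;
# the model ID and context string below are illustrative placeholders.
LONG_SHARED_CONTEXT = "...many thousands of tokens of shared instructions..."

request = {
    "model": "claude-3-5-sonnet@20240620",  # illustrative Vertex model ID
    "max_tokens": 256,
    "system": [
        {
            "type": "text",
            "text": LONG_SHARED_CONTEXT,
            # Marks this large, stable prefix as cacheable across calls.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "Summarize the context above."}],
}
```

The win is that the large system prefix is cached server-side, so repeated calls that share it only pay full input-token cost once.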