杨睿
Use panic recovery inside the eval goroutines; fixes #79.
Related to https://github.com/encode/uvicorn/issues/111
Related to #111; #1636 may be a solution.

## Reproducing

### 1. Server code

```python
import asyncio

from fastapi import FastAPI, Request

app = FastAPI()

# !!!middleware is required to...
```
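For context, a minimal sketch of such a reproduction, assuming a plain pass-through `BaseHTTPMiddleware` and a trivial slow endpoint (the middleware body and the endpoint below are illustrative assumptions, not the original code):

```python
import asyncio

from fastapi import FastAPI, Request
from starlette.middleware.base import BaseHTTPMiddleware

app = FastAPI()


class PassThroughMiddleware(BaseHTTPMiddleware):
    """Hypothetical middleware: the snippet only notes that *some* middleware
    must be installed to trigger the behaviour."""

    async def dispatch(self, request: Request, call_next):
        return await call_next(request)


app.add_middleware(PassThroughMiddleware)


@app.get("/")
async def root():
    # Assumed endpoint: simulate a slow handler.
    await asyncio.sleep(1)
    return {"ok": True}
```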
## Problem

An automation session is only created when its status changes from active to completed, via the `store.UpsertAutomationSession(...)` API; it is not created when the session starts. So we **cannot** get this session until...
## Description

After the redis server reconnects, the redis Lua script should be loaded again.

## Log

```log
File "/usr/local/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 543, in solve_dependencies
    solved = await call(**sub_values)
File "/usr/local/lib/python3.10/site-packages/fastapi_limiter/depends.py", line...
```
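The limiter loads its Lua script once and then invokes it by SHA, so a server that comes back with an empty script cache raises `NoScriptError`. A minimal sketch of the reload-on-reconnect idea, assuming `redis.asyncio` and an illustrative Lua counter rather than the limiter's real script:

```python
import redis.asyncio as redis
from redis.exceptions import NoScriptError

# Hypothetical stand-in for the limiter's Lua script: counts hits on a key
# within a fixed window.
LUA = """
local current = redis.call('INCR', KEYS[1])
if current == 1 then
    redis.call('EXPIRE', KEYS[1], ARGV[1])
end
return current
"""


async def call_script(r: redis.Redis, sha: str, key: str, ttl: int) -> tuple[str, int]:
    """Run the script by SHA, reloading it if the server lost its script cache
    (e.g. after a restart/reconnect). Returns the (possibly new) SHA and the result."""
    try:
        result = await r.evalsha(sha, 1, key, ttl)
    except NoScriptError:
        # The reconnected server no longer has the cached script: load it again.
        sha = await r.script_load(LUA)
        result = await r.evalsha(sha, 1, key, ttl)
    return sha, int(result)
```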
### Is there an existing issue / discussion for this?

- [X] I have searched the existing issues / discussions

### Is this issue answered in the FAQ? | Is there an...
[vllm] fix: ensure AsyncLLM response_length is less than or equal to max_new_tokens in generation_config.json
### Checklist Before Starting

- [x] Search for similar PR(s).

### What does this PR do?

- `max_tokens` is deprecated for the vLLM OpenAI endpoint; use `max_completion_tokens` instead (see the sketch after this list).
- vLLM will...
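A sketch of the corresponding client-side request, assuming a locally served vLLM OpenAI-compatible endpoint and a recent `openai` client (the base URL, model name, and token budgets are placeholders):

```python
from openai import OpenAI

# Placeholder endpoint and model for a locally served vLLM instance.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Hypothetical budgets: cap the requested length at the model's configured
# max_new_tokens (e.g. read from generation_config.json).
max_new_tokens = 512
requested_length = 1024

response = client.chat.completions.create(
    model="my-model",
    messages=[{"role": "user", "content": "Hello!"}],
    # max_tokens is deprecated on this endpoint; use max_completion_tokens
    # and keep it within the configured maximum.
    max_completion_tokens=min(requested_length, max_new_tokens),
)
print(response.choices[0].message.content)
```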