Zhongsheng Ji
Perhaps we need various caching strategies; the most common is per session, e.g. each `agent.run` corresponds to a single session, in which there may be several tool call/return pairs. In...
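As a rough illustration of the per-session idea (everything here is hypothetical and not an existing API: `SessionCache`, `session_id`, and the key layout are just illustrative), a cache scoped to one `agent.run` could look like this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: cache tool call/return pairs per session, where one
# session corresponds to a single `agent.run` invocation.
@dataclass
class SessionCache:
    # Maps session_id -> {tool_call_key: tool_return_value}
    sessions: dict = field(default_factory=dict)

    def get(self, session_id: str, call_key: str):
        return self.sessions.get(session_id, {}).get(call_key)

    def put(self, session_id: str, call_key: str, result) -> None:
        self.sessions.setdefault(session_id, {})[call_key] = result

    def drop(self, session_id: str) -> None:
        # Discard everything cached for this session once the run ends.
        self.sessions.pop(session_id, None)
```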
It seems to be related to the WSL2 network or a reboot. I observed the following: whenever the computer sleeps, the network card is shut down and the WSL2...
Nice catch! I was thinking that another possible solution would be to add a prefix to all `__validators__`, like `a` -> `_validator_func_a`.

```python
if __validators__:
    for validator_name, validator in __validators__.items():
        ...
```
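A minimal user-side sketch of the same idea, assuming pydantic v2's `create_model` and `field_validator` (the `check_a` helper and the `_validator_func_a` key are just illustrative names): registering the validator under a prefixed key means it cannot collide with the field `a` it validates.

```python
from pydantic import create_model, field_validator

def check_a(cls, v):
    assert v.isalnum(), 'must be alphanumeric'
    return v

# Keeping the validator under `_validator_func_a` instead of `a`
# avoids shadowing the field `a` in the generated model namespace.
validators = {
    '_validator_func_a': field_validator('a')(check_a),
}

Model = create_model('Model', a=(str, ...), __validators__=validators)
print(Model(a='abc123'))
```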
> For the class definition example, we could also align with the create_model behavior and raise an error, although type checkers already raise an error, and implementing it might be...
related: https://github.com/pydantic/pydantic-ai/issues/509
Very nice PR! I was wondering if we could split it into two parts and release the `ThinkingPart` first, so developers can implement it for their own models. For...
> We've decided to not include the `ThinkingPart` into the requests for the time being.

I'm OK with this, and I think an agentic agent should *think* with a `thinking tool` rather...
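As a rough sketch of that direction (not anything from the PR; it assumes pydantic-ai's `Agent` and the `tool_plain` decorator, and the tool name and return value are illustrative), a dedicated thinking tool could be as simple as:

```python
from pydantic_ai import Agent

# Illustrative only: a "thinking" tool the model can call to reason out loud
# before acting, instead of carrying ThinkingPart in requests.
agent = Agent('openai:gpt-4o')

@agent.tool_plain
def think(thought: str) -> str:
    """Record an intermediate thought; the content is not shown to the user."""
    return 'Thought noted, continue.'
```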
I noticed a paper that seems to implement KV cache migration: https://arxiv.org/abs/2406.03243 (their project: https://github.com/AlibabaPAI/llumnix). Sorry, I'm just getting into vLLM and came across this issue. I'm curious how they did...
Making it a separate server extension seems more reasonable, as nbconvert needs more configuration when using LaTeX.
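For example (only a sketch of the kind of extra configuration involved; the exact trait names and defaults depend on the installed nbconvert version), a `jupyter_nbconvert_config.py` might pin the LaTeX engine and rerun count:

```python
# jupyter_nbconvert_config.py -- sketch only; verify trait names against
# your nbconvert version.
c = get_config()  # noqa: F821

# Pin the LaTeX engine invocation explicitly (xelatex handles Unicode/CJK well).
c.PDFExporter.latex_command = ['xelatex', '{filename}', '-quiet']

# How many times to rerun LaTeX so references and citations resolve.
c.PDFExporter.latex_count = 3
```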