Enhance AsyncLLM to match LLM functionality
- [x] This change is worth documenting at https://docs.all-hands.dev/
- [x] Include this change in the Release Notes. If checked, you must provide an end-user friendly description for your change below
End-user friendly description of the problem this fixes or functionality this introduces.
This PR brings the AsyncLLM class to feature parity with the LLM class, so the synchronous and asynchronous LLM implementations behave the same. This includes support for function calling, detailed logging, and proper handling of model-specific features.
Summarize what the PR does, explaining any non-trivial design decisions.
The PR makes the following changes:
- Updates AsyncLLM to support function calling in the same way as LLM
- Adds proper handling of tool calls and function arguments
- Implements detailed logging and completion tracking
- Adds support for model-specific features like reasoning effort
- Adds comprehensive unit tests for AsyncLLM functionality
The implementation closely follows the pattern established in the LLM class to ensure consistency and maintainability. The AsyncLLM class now handles both native function calling and prompt-based mocking of function calling for models that lack native support.
Link of any specific issues this addresses:
N/A
To run this PR locally, use the following command:

```shell
docker run -it --rm -p 3000:3000 -v /var/run/docker.sock:/var/run/docker.sock --add-host host.docker.internal:host-gateway -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:a09f5f8-nikolaik --name openhands-app-a09f5f8 docker.all-hands.dev/all-hands-ai/openhands:a09f5f8
```