[Feature Request] Add Asynchronous `run` function to BaseModelBackend
Required prerequisites
- [X] I have searched the Issue Tracker and Discussions to confirm this hasn't already been reported. (+1 or comment there if it has.)
- [ ] Consider asking first in a Discussion.
Motivation
Adding an asynchronous `run` function would enable full support for asynchronous LLM inference requests, covering both OpenAI models and open-source models.
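As a rough illustration of the idea (not CAMEL's actual API — the class shape, method names like `arun`, and the `EchoBackend` subclass below are all hypothetical), the base backend could ship a default async entry point that offloads the existing blocking `run` to a thread, so every subclass gains async support immediately while native-async backends (e.g. ones using an async OpenAI client) can override it:

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class BaseModelBackend(ABC):
    """Hypothetical, simplified backend sketch; names are illustrative."""

    @abstractmethod
    def run(self, messages: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Blocking inference call implemented by each backend."""
        ...

    async def arun(self, messages: List[Dict[str, Any]]) -> Dict[str, Any]:
        # Default async wrapper: run the blocking call in a worker thread
        # so the event loop is never blocked. Backends with a native async
        # client would override this method instead.
        return await asyncio.to_thread(self.run, messages)


class EchoBackend(BaseModelBackend):
    """Toy backend that echoes the last user message, for demonstration."""

    def run(self, messages: List[Dict[str, Any]]) -> Dict[str, Any]:
        return {"content": messages[-1]["content"]}


async def main() -> None:
    backend = EchoBackend()
    reply = await backend.arun([{"role": "user", "content": "hello"}])
    print(reply["content"])  # prints "hello"


if __name__ == "__main__":
    asyncio.run(main())
```

With a default like this, callers could `await backend.arun(...)` uniformly and issue many inference requests concurrently via `asyncio.gather`, regardless of whether the underlying model client is async-native.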
Solution
No response
Alternatives
No response
Additional context
No response