Add Runpod Provider
Why this PR
We want to add Runpod as a remote inference provider for Llama Stack. Runpod endpoints are OpenAI-compatible, so this provider is intended to be used with Runpod model serving endpoints.
What this PR includes
- Integration with the Distribution.
- Use of the OpenAI client for inference calls (see the sketch after this list).
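To make the approach concrete, here is a minimal, hedged sketch (not the adapter code in this PR) of calling a Runpod OpenAI-compatible endpoint with the standard openai Python client; the endpoint URL, API key, and sample prompt are placeholders:

```python
# Hedged illustration only: a Runpod OpenAI-compatible endpoint can be reached
# with the stock openai client. The URL and key below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-runpod-endpoint>/v1",  # corresponds to endpoint_url
    api_key="<your-runpod-api-key>",               # corresponds to api_key
)

response = client.chat.completions.create(
    model="Llama3.1-8B-Instruct",
    messages=[{"role": "user", "content": "hello world"}],
)
print(response.choices[0].message.content)
```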
How did we test?
After configuring the provider with the endpoint_url and api_key, and keeping the other settings at their defaults, we launched a server using:
llama stack run remote_runpod --port 8080
- Invoke the call (streaming):
curl -X POST http://localhost:8080/inference/chat_completion -H "Content-Type: application/json" -d '{"model":"Llama3.1-8B-Instruct","messages":[{"content":"hello world, write me a 2 sentence poem about the moon", "role": "user"}],"stream":true}'
Response:
data: {"event":{"event_type":"start","delta":"","logprobs":null,"stop_reason":null}}
data: {"event":{"event_type":"progress","delta":"","logprobs":null,"stop_reason":null}}
data: {"event":{"event_type":"progress","delta":"Here","logprobs":null,"stop_reason":null}}
data: {"event":{"event_type":"progress","delta":"'s","logprobs":null,"stop_reason":null}}
data: {"event":{"event_type":"complete","delta":"","logprobs":null,"stop_reason":"end_of_turn"}}
- Invoke the call (non-streaming):
curl -X POST http://localhost:8080/inference/chat_completion -H "Content-Type: application/json" -d '{"model":"Llama3.1-8B-Instruct","messages":[{"content":"hello world, write me a 2 sentence poem about the moon", "role": "user"}],"stream":false}'
Response:
data: {"completion_message":{"role":"assistant","content":"Here's a 2-sentence poem about the moon:\n\nThe moon glows softly in the midnight sky, \nA beacon of peace, as it drifts gently by.","stop_reason":"end_of_turn","tool_calls":[]},"logprobs":null}
@ashwinb @yanxi0830 @hardikjshah When can I expect a review? Thanks.
Thanks for the PR @pandyamarut! We are putting together a few tests in the repository now so we can make sure inference works reliably (especially w.r.t. tool calling, etc.) wherever we are dealing with OpenAI-compatible endpoints. Usually we vastly prefer a raw token API (e.g., HuggingFace's text-generation one: https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/adapters/inference/tgi/tgi.py#L136). Expect some changes around here in a couple of days; I will post an update when that happens. There are a couple of other inference-related PRs which are also languishing without review because of this.
@ashwinb Sure. Thanks for the update. Looking forward to getting this merged soon.
@ashwinb Is there any progress on this review? I would love to try this out. Thanks!
@rachfop Thanks for the reminder. This PR is a little stale now, unfortunately, and as I mentioned in my previous comment, the implementation will need to be slightly updated. You can look at the other inference providers and the openai_compat.py utility we now have in place, as well as the tests.
@ashwinb Let me update the implementation. Thanks!