
Add Runpod Provider

pandyamarut opened this issue 1 year ago • 6 comments

Why this PR

We want to add Runpod as a remote inference provider for Llama Stack. Runpod model serving endpoints are OpenAI-compatible, so this provider talks to them through the OpenAI client.

What does this PR include?

  1. Integration with the Distribution.
  2. OpenAI as the client.

How did we test?

After setting the configuration (providing endpoint_url and api_key, and leaving the other settings at their defaults), we launched a server with:

llama stack run remote_runpod --port 8080
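The run configuration itself isn't shown in the thread. As a sketch, the remote Runpod provider entry might look something like this; only the endpoint_url and api_key fields are confirmed above, and the surrounding structure is an assumption modeled on other remote inference providers:

```yaml
# Hypothetical excerpt of a remote_runpod run config.
# Only endpoint_url and api_key are mentioned in the thread;
# the rest mirrors the shape of other remote inference providers.
inference:
  provider_type: remote::runpod
  config:
    endpoint_url: https://your-endpoint.example.com  # assumption: your Runpod serving URL
    api_key: YOUR_RUNPOD_API_KEY                     # assumption: plain string credential
```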

  1. Invoke the call (streaming):

curl -X POST http://localhost:8080/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{"model":"Llama3.1-8B-Instruct","messages":[{"content":"hello world, write me a 2 sentence poem about the moon", "role": "user"}],"stream":true}'

Response:

data: {"event":{"event_type":"start","delta":"","logprobs":null,"stop_reason":null}}

data: {"event":{"event_type":"progress","delta":"","logprobs":null,"stop_reason":null}}

data: {"event":{"event_type":"progress","delta":"Here","logprobs":null,"stop_reason":null}}

data: {"event":{"event_type":"progress","delta":"'s","logprobs":null,"stop_reason":null}}

data: {"event":{"event_type":"complete","delta":"","logprobs":null,"stop_reason":"end_of_turn"}}
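The streamed response above is a series of server-sent-event `data:` lines. A minimal sketch of how a client might reassemble the deltas into the final text; the event shape is taken verbatim from the output above, but the helper name is ours:

```python
import json

def accumulate_stream(lines):
    """Collect the text deltas from chat_completion SSE lines.

    Each line looks like:
        data: {"event": {"event_type": ..., "delta": ..., "stop_reason": ...}}
    Returns the concatenated text and the final stop_reason.
    """
    chunks, stop_reason = [], None
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        event = json.loads(line[len("data: "):])["event"]
        chunks.append(event["delta"])
        if event["event_type"] == "complete":
            stop_reason = event["stop_reason"]
    return "".join(chunks), stop_reason

# Fed the five events shown above, this yields ("Here's", "end_of_turn").
```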
  2. Invoke the call (non-streaming):

curl -X POST http://localhost:8080/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{"model":"Llama3.1-8B-Instruct","messages":[{"content":"hello world, write me a 2 sentence poem about the moon", "role": "user"}],"stream":false}'

Response:

data: {"completion_message":{"role":"assistant","content":"Here's a 2-sentence poem about the moon:\n\nThe moon glows softly in the midnight sky, \nA beacon of peace, as it drifts gently by.","stop_reason":"end_of_turn","tool_calls":[]},"logprobs":null}
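On the client side, pulling the assistant text out of the non-streaming response body is a one-step parse. A small sketch; the response shape is taken from the output above, the helper name is ours, and it tolerates the `data:` prefix seen in the pasted output:

```python
import json

def extract_completion(raw):
    """Extract (content, stop_reason) from a non-streaming
    /inference/chat_completion response body.

    Strips an optional leading "data: " prefix, as seen in the
    pasted output above.
    """
    if raw.startswith("data: "):
        raw = raw[len("data: "):]
    msg = json.loads(raw)["completion_message"]
    return msg["content"], msg["stop_reason"]
```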

pandyamarut avatar Sep 30 '24 11:09 pandyamarut

@ashwinb @yanxi0830 @hardikjshah When can I expect a review? Thanks.

pandyamarut avatar Oct 02 '24 02:10 pandyamarut

Thanks for the PR @pandyamarut! We are putting together a few tests in the repository now so we can make sure inference works reliably (especially w.r.t. tool calling, etc.) wherever we are dealing with OpenAI-compatible endpoints. Usually we vastly prefer a raw token API (e.g., HuggingFace's text-generation one: https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/adapters/inference/tgi/tgi.py#L136). Expect some changes around here in a couple of days; I will post an update when that happens. There are a couple of other inference-related PRs that are also languishing without review because of this.

ashwinb avatar Oct 03 '24 18:10 ashwinb

@ashwinb Sure. Thanks for the update. Looking forward to getting this merged soon.

pandyamarut avatar Oct 07 '24 19:10 pandyamarut

@ashwinb Is there any progress on this review - I would love to try this out. Thanks!

rachfop avatar Oct 31 '24 17:10 rachfop

@rachfop thanks for the reminder. This PR is a little stale now, unfortunately, and as I mentioned in my previous comment, the implementation will need to be slightly updated. You can look at the other inference providers, the openai_compat.py utility we now have in place, and the tests.

ashwinb avatar Oct 31 '24 22:10 ashwinb

@ashwinb Let me update the implementation. Thanks!

pandyamarut avatar Oct 31 '24 22:10 pandyamarut