llama-stack
Add Runpod Provider + Distribution
Add Runpod as an inference provider for OpenAI-compatible managed endpoints.
Testing
- Configured llama stack from scratch, set `remote::runpod` as an inference provider.
- Added the Runpod endpoint URL and API key.
- Started the llama-stack server: `llama stack run my-local-stack --port 3000`
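For reference, enabling the provider in `run.yaml` might look roughly like the sketch below. The field names under `config` (`url`, `api_token`) are assumptions for illustration and should be checked against the provider's actual config class:

```yaml
# Hypothetical run.yaml fragment enabling the Runpod provider.
# Field names under `config` are illustrative assumptions.
providers:
  inference:
    - provider_id: runpod
      provider_type: remote::runpod
      config:
        url: <your-runpod-endpoint-url>
        api_token: <your-runpod-api-key>
```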
```
curl http://localhost:3000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Llama3.1-8B-Instruct",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Write me a 2 sentence poem about the moon"}
    ],
    "sampling_params": {"temperature": 0.7, "seed": 42, "max_tokens": 512}
  }'
```
@ashwinb Closed the stale PR and created a new one. Hoping this gets reviewed soon.
@ashwinb What is needed to get this PR reviewed? Stale PRs end up with merge conflicts, which then becomes yet another reason not to review or merge them. Really appreciate any guidance. Thanks.
Hey @raghotham, can you please respond to @pandyamarut soon?
@raghotham Really appreciate your reviews. Totally understand the priorities and timeline. Hoping to get this merged soon.