
Add Runpod Provider + Distribution

Open pandyamarut opened this issue 1 year ago • 4 comments

Add Runpod as an inference provider for OpenAI-compatible managed endpoints.

Testing

  • Configured llama-stack from scratch, setting remote::runpod as an inference provider.
  • Added the Runpod endpoint URL and API key.
  • Started the llama-stack server: llama stack run my-local-stack --port 3000
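For context, the provider wiring in the stack's run config might look roughly like this. This is a sketch only: the exact field names depend on llama-stack's remote-provider schema, and the URL and API token values are placeholders, not the merged implementation.

```yaml
providers:
  inference:
    - provider_id: runpod
      provider_type: remote::runpod   # provider type named in this issue
      config:
        url: <your-runpod-endpoint-url>        # placeholder: your managed endpoint URL
        api_token: ${env.RUNPOD_API_TOKEN}     # placeholder: API key via environment variable
```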
```shell
curl http://localhost:3000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Llama3.1-8B-Instruct",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Write me a 2 sentence poem about the moon"}
    ],
    "sampling_params": {"temperature": 0.7, "seed": 42, "max_tokens": 512}
  }'
```
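The same request can be built from Python using only the standard library. This is a minimal sketch mirroring the curl body above; the endpoint path and port assume the server started with `llama stack run my-local-stack --port 3000`, and the actual send is left commented out so the snippet does not require a running server.

```python
import json
import urllib.request

# Assumed endpoint, matching the curl example in this issue.
URL = "http://localhost:3000/inference/chat_completion"

# Request body copied from the curl example above.
payload = {
    "model": "Llama3.1-8B-Instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write me a 2 sentence poem about the moon"},
    ],
    "sampling_params": {"temperature": 0.7, "seed": 42, "max_tokens": 512},
}

def build_request(url: str, body: dict) -> urllib.request.Request:
    """Build a JSON POST request equivalent to the curl command."""
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(URL, payload)
# To actually send it against a running server:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```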

pandyamarut avatar Nov 04 '24 00:11 pandyamarut

@ashwinb Closed the stale PR and created a new one. Hoping this gets reviewed soon.

pandyamarut avatar Nov 05 '24 19:11 pandyamarut

@ashwinb What does it take to get this PR reviewed? Stale PRs end up with merge conflicts, which then becomes yet another reason not to review or merge them. Really appreciate any guidance. Thanks.

pandyamarut avatar Nov 22 '24 23:11 pandyamarut

Hey @raghotham, can you please respond to @pandyamarut soon?

ashwinb avatar Nov 24 '24 00:11 ashwinb

@raghotham Really appreciate your reviews. Totally understand the priorities and timeline. Hoping to get this merged soon.

pandyamarut avatar Nov 26 '24 22:11 pandyamarut