llama-stack
Composable building blocks to build Llama Apps
### System Info
121W in standby
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### 🐛 Describe the bug
The GPU...
Closes
- https://github.com/meta-llama/llama-stack/issues/334
### 🚀 The feature, motivation and pitch
Currently we support only 3.1-based models with Amazon AWS Bedrock inference, and since Bedrock has also added support for Llama 3.2 models...
### 🚀 The feature, motivation and pitch
Create a distribution for AMD ROCm GPUs, similar to distributions/meta-reference-gpu, which is based on NVIDIA GPUs.
### Alternatives
_No response_
### Additional context...
# Add Nutanix AI Endpoint
This PR adds Nutanix AI Endpoint as a provider. The distribution container is at https://hub.docker.com/repository/docker/jinanz/distribution-nutanix/general.
## Setup instructions
Please refer to `llama_stack/templates/nutanix/doc_template.md` for details.
##...
# What does this PR do?
Support HTTPS for the agents API client.

Closes # (issue)

## Feature/Issue validation/testing/test plan
```
python -m llama_stack.apis.agents.client https://llamastack-preview.fireworks.ai 443 tools_llama_3_1
/home/bchen/.local/lib/python3.10/site-packages/pydantic/_internal/_fields.py:172: UserWarning: Field name...
```
### System Info
It's using the versions downloaded by pip install during the llama stack build. I have an NVIDIA GPU.
### Information
- [X] The official example scripts
-...
### System Info
WSL2
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### 🐛 Describe the bug
Running Llama3.2-3B-Instruct and get the...
### System Info
On llama stack server: llama_models 0.0.45, llama_stack 0.0.45. Llama stack client is 0.0.47.
### Information
- [ ] The official example scripts
- [ ] My own...
### System Info
N/A
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### 🐛 Describe the bug
llama-stack-client: command not found which...