
Composable building blocks to build Llama Apps

Results 360 llama-stack issues

Continuation of https://github.com/meta-llama/llama-stack/pull/323 # TL;DR - Add LLM-as-judge meta-reference impl; judge using the Llama Stack inference_api. - Move ScoringFnDef to `*.json`-based files for easier registration. - Support dynamic...


### System Info Python version: 3.10.12 Pytorch version: llama_models version: 0.0.42 llama_stack version: 0.0.42 llama_stack_client version: 0.0.41 Hardware: 4xA100 (40GB VRAM/GPU) The local-gpu-run.yaml file content is as follows: ``` version: '2'...

### System Info NVIDIA A30 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### 🐛 Describe the bug I try to build by...

### System Info - PyTorch version: 2.2.2 - Is debug build: False - CUDA used to build PyTorch: None - ROCM used to build PyTorch: N/A - OS: macOS...

### 🚀 The feature, motivation and pitch Add a distribution sample supporting AMD ROCm GPUs, similar to `meta-reference-gpu`, which appears to support only Nvidia GPUs ### Alternatives _No response_ ###...

I am using the latest version; I just pip-installed 0.0.45. In my environment (Fedora 39) I have export DOCKER_BINARY="podman". When I build, pretty much following the example, I get...

### System Info Using Windows 11 ### Information - [X] The official example scripts - [X] My own modified scripts ### 🐛 Describe the bug When running: ``` docker run...

### System Info Ubuntu, CPU only, Conda, Python 3.10 ### Information - [x] The official example scripts - [ ] My own modified scripts ### 🐛 Describe the bug I...

### System Info Linux Ubuntu Anaconda ### Information - [X] The official example scripts - [ ] My own modified scripts ### 🐛 Describe the bug llama stack build /tmp/a/llama/anaconda/envs/stack/lib/python3.10/site-packages/pydantic/_internal/_fields.py:172:...