
Composable building blocks to build Llama Apps

Results 360 llama-stack issues

```python
data_url = data_url_from_image("dog.jpg")
print("The obtained data url is", data_url)
iterator = client.inference.chat_completion(
    model=model,
    messages=[
        {
            "role": "user",
            "content": [
                {"image": {"uri": data_url}},
                "Write a haiku describing...
```

question

Before patch:
![image](https://github.com/user-attachments/assets/3d5bc110-497d-42d6-8dab-99d49944fd2b)

After patch:
![image](https://github.com/user-attachments/assets/503d1c39-83eb-47d9-9bd1-8fff9ab209be)

### Command: `llama stack run Llama3.2-11B-Vision-Instruct --port 5000`

**Output:**
```
Using config `/Users/mac/.llama/builds/conda/Llama3.2-11B-Vision-Instruct-run.yaml`
Resolved 4 providers
 inner-inference => meta-reference
 models => __routing_table__
 inference => __autorouted__
 inspect => __builtin__
[2024-10-15 07:20:46,247]...
```

question

Even though I pip-installed llama-stack on Ubuntu 20.04, I am facing this issue. I also tried `sudo snap install`, but command-line tooling like that is unfamiliar to me...

question

I am getting this error:
```
ValueError: ProcessGroupNCCL is only supported with GPUs, no GPUs found!
```
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()...
```

question

How come the Docker image is 9 GB? That is not the model itself, right? It is odd to have a Docker image 20x larger than the model itself (e.g. 1B/3B INT4).

question

It is best practice to tag Docker images with information about the source code version. Both the GPU and CPU images have no tags, only an implicit `latest`. (e.g. for...

Right now the image is built for the same architecture as the current host.
```bash
$ ./llama-stack/bin/llama stack build --template local --image-type docker --name llama-stack
$ docker image inspect llamastack-llama-stack | grep Architecture...
```

good first issue

The Docker image build expects models to be stored in `/root/.llama/checkpoints/`; however, elsewhere in the code and documentation they are expected to be in `/.llama/checkpoints/<model name>`. Having `/root/...` is very odd. on...

question

I would like to run the server without agents or database functionality, but right now that is impossible, since the images are shipped with either sqlite, redis, or postgresql.
```
Configuring API `agents`... >...
```