
Composable building blocks to build Llama Apps

Results: 360 llama-stack issues, sorted by most recently updated

Here is the full trace of logs: > Enter a name for your Llama Stack (e.g. my-local-stack): test > Enter the image type you want your Llama Stack to be...

stale

Hey, I've launched the llama stack build with either Python 3.10 or 3.12 and got this error every time: > Enter a name for your Llama Stack (e.g....
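
For context, the interactive flow both reports describe begins like this; the prompt text is quoted from the issues above, while the sample answers and the image-type options shown are assumptions:

```
$ llama stack build
> Enter a name for your Llama Stack (e.g. my-local-stack): test
> Enter the image type you want your Llama Stack to be built as (e.g. conda or docker): conda
```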

Do you have plans to support Retrieval-Augmented Generation (RAG), i.e. constructing a vector database from personal data and then performing the corresponding retrieval and generation steps?
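
For illustration, a minimal generic sketch of that workflow (not a llama-stack API; `embed` and `generate` are hypothetical stand-ins for a real embedding model and LLM):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical embedding; swap in a real embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)

documents = ["note about taxes", "recipe for soup", "meeting minutes"]
index = np.stack([embed(d) for d in documents])  # the "vector database"

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    # Cosine similarity of the query against every stored vector.
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

def generate(query: str, context: list[str]) -> str:
    # Hypothetical generation step; a real app would call an LLM here.
    return f"Answer to {query!r} grounded in: {context}"

question = "what did we decide?"
print(generate(question, retrieve(question)))
```

A real implementation would persist the vectors in a proper vector store and call actual models at both marked points.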

Updated text references of `HuggingFace` to `Hugging Face`. Also fixed a minor typo, `donwload` to `download`. No change to parameter values. (And great work on llama-stack!)

CLA Signed

The `llama` CLI command can only be used after the pip install step (as also noted in the notebook), so it makes sense to put that step first.

CLA Signed
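
As a concrete ordering, the fix amounts to the following (the PyPI package name `llama-stack` is assumed to match the repo):

```
# Install first — the `llama` CLI entry point only exists after this step:
pip install llama-stack
# ...then the CLI can be invoked:
llama --help
```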

On Windows, it uses the default user profile path on the C drive. But often the C drive does not have enough space to download large models. Is there a...
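
One generic workaround, until a configurable path exists, is a plain Windows directory junction; the `.llama` default location under the user profile and both paths below are assumptions:

```
REM Redirect the default download directory to a larger drive (example paths):
mklink /J "C:\Users\%USERNAME%\.llama" "D:\llama-models"
```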

```
  File "/usr/local/Caskroom/miniconda/base/envs/llama/lib/python3.10/site-packages/llama_stack/providers/adapters/inference/together/__init__.py", line 7, in <module>
    from .config import TogetherImplConfig, TogetherHeaderExtractor
  File "/usr/local/Caskroom/miniconda/base/envs/llama/lib/python3.10/site-packages/llama_stack/providers/adapters/inference/together/config.py", line 11, in <module>
    from llama_stack.distribution.request_headers import annotate_header
ImportError: cannot import name 'annotate_header' from 'llama_stack.distribution.request_headers' (/usr/local/Caskroom/miniconda/base/envs/llama/lib/python3.10/site-packages/llama_stack/distribution/request_headers.py)
```
This...
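
One quick way to confirm the version skew behind such an ImportError is to inspect what the installed module actually exports (a diagnostic sketch, not a fix):

```python
# Lists the public names in the installed request_headers module; if
# 'annotate_header' is absent, the installed llama_stack version does not
# match the adapter that tries to import it.
import llama_stack.distribution.request_headers as rh

print([name for name in dir(rh) if not name.startswith("_")])
```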

…he llama_models version to 0.0.24, as the latest version 0.0.35 has the model descriptor name changed. I was getting the missing package error at runtime as well, hence added the...

CLA Signed
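
The pin described in that PR, expressed as a requirements constraint (the published distribution name is assumed to match the module name):

```
# 0.0.35 changed the model descriptor name, so stay on 0.0.24 for now.
llama_models==0.0.24
```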

`docker run llamastack/llamastack-local-gpu:latest` does nothing

stale
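
A container that "does nothing" is often missing a TTY, published ports, or GPU access. One fuller invocation to try; the flags are standard Docker, but the container port (5000) is an assumption:

```
docker run -it --gpus all -p 5000:5000 llamastack/llamastack-local-gpu:latest
```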

Every so often I'll get something like this in my output stream when doing async processing:
```json
{"event":{"event_type":"progress","delta":" errors","logprobs":null,"stop_reason":null}}
```
Here's the code reading and printing the response; it's a...
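
For reference, a self-contained sketch of consuming such a stream: each line is a JSON object whose `event.delta` carries the next text fragment. The event shape is copied from the issue; the `stream` iterator is a hypothetical stand-in for the real response:

```python
import asyncio
import json

async def stream():
    # Stand-in for the real async response stream.
    yield '{"event":{"event_type":"progress","delta":" errors","logprobs":null,"stop_reason":null}}'

async def main():
    parts = []
    async for line in stream():
        event = json.loads(line)["event"]
        if event["event_type"] == "progress":
            parts.append(event["delta"])  # accumulate streamed fragments
    print("".join(parts))

asyncio.run(main())
```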