
Composable building blocks to build Llama Apps

Results: 360 llama-stack issues

### 🚀 Describe the new functionality needed Needed to support RAG on documents that contain images, and image-based search. ### 💡 Why is this needed? What if we don't build...


### System Info M4 Pro MacBook Pro 16" on macOS 15.2. ### Information - [X] The official example scripts - [ ] My own modified scripts ### 🐛 Describe the bug...

### 🚀 The feature, motivation and pitch We have support for a vLLM inline/remote inference provider. We should create a distribution startup guide for vLLM in https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/index.html. ### Alternatives _No...
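A startup guide like the one requested above might center on a run-config fragment along these lines. This is only a sketch: the field names and the server URL are assumptions modeled on llama-stack's provider registry, not verified against the current release schema.

```yaml
# Hypothetical run-config fragment registering vLLM as a remote
# inference provider. Field names and the URL are assumptions.
providers:
  inference:
    - provider_id: vllm
      provider_type: remote::vllm
      config:
        url: http://localhost:8000/v1
```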

### System Info python -m torch.utils.collect_env /opt/homebrew/anaconda3/envs/llamastack-ollama/lib/python3.10/runpy.py:126: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour warn(RuntimeWarning(msg))...
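The `RuntimeWarning` quoted in the report above is emitted by `runpy` (the machinery behind `python -m`) whenever the target submodule is already in `sys.modules` before execution. A small sketch that reproduces the same warning, with `unittest.util` standing in for `torch.utils.collect_env` because executing it has no side effects:

```python
import runpy
import warnings

# Pre-importing the submodule leaves it in sys.modules, which is
# exactly the condition runpy warns about when it then re-executes
# the module as __main__ (as `python -m` does).
import unittest.util

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    runpy.run_module("unittest.util", run_name="__main__")

messages = [str(w.message) for w in caught]
print([m for m in messages if "found in sys.modules" in m][0])
```

The warning is benign for `collect_env`: it only signals that two copies of the module object exist, which matters when the module keeps mutable state.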

Summary: Updates guide links in the Distribution table

CLA Signed

### System Info amd64, 1 GPU ### Information - [ ] The official example scripts - [ ] My own modified scripts ### 🐛 Describe the bug `ValueError: Provider inline::llama-guard is...

question

### System Info `llama = ChatOpenAI(api_key="ollama", model="llama3.2:latest", base_url="http://127.0.0.1:11434/v1")` I found a bug when calling the GmailToolkit module using the ollama model llama3.2. The model's tool_calls message should be `'to': ['[email protected]']},`...
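The mismatch described above (the `to` field arriving in a different shape than the toolkit expects, e.g. a bare string instead of a list of addresses) can be patched over before the arguments reach the tool. This is a hypothetical workaround sketch, not part of GmailToolkit's API, and the address below is a placeholder:

```python
import json

def normalize_recipients(tool_args: dict) -> dict:
    """Coerce a bare-string 'to' field into the list form the tool expects."""
    to = tool_args.get("to")
    if isinstance(to, str):
        tool_args["to"] = [addr.strip() for addr in to.split(",")]
    return tool_args

# Placeholder tool-call arguments as a model might emit them.
raw = json.loads('{"to": "user@example.com", "subject": "hello"}')
print(normalize_recipients(raw))
# → {'to': ['user@example.com'], 'subject': 'hello'}
```

A shim like this sits between the model's parsed `tool_calls` and the tool invocation, so model-specific formatting quirks don't leak into the toolkit.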

wontfix

### System Info Collecting environment information... PyTorch version: 2.2.2 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 13.6.6 (x86_64) GCC...

bug

### 🚀 Describe the new functionality needed The Llama stack should leverage [Feast](https://github.com/feast-dev/feast) to enable the [Model Lifecycle](https://github.com/meta-llama/llama-stack/blob/main/rfcs/RFC-0001-llama-stack.md#model-lifecycle). Feast already plays an important role in the [AI/ML Lifecycle](https://www.kubeflow.org/docs/started/architecture/#kubeflow-components-in-the-ml-lifecycle) in Kubeflow...

enhancement