Derek Higgins

Results 21 comments of Derek Higgins

code_interpreter was covered here https://github.com/meta-llama/llama-stack/pull/1654

> providers/utils/inference/embedding_mixin.py > - blocking loading and usage of embedding model https://github.com/llamastack/llama-stack/pull/3335

I've added a PR to add vllm to the current integration jobs; can you take a look when possible?

I seem to have two separate problems trying this out. The first occurs when ~/.llama/distributions doesn't exist: ``` (.venv) (base) derekh@laptop:~/workarea/llama-stack$ uv run llama stack list-deps --format uv ci-tests Traceback (most...
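A likely workaround sketch for the first problem, assuming the traceback really does stem from nothing more than the missing directory (an assumption; the full traceback is truncated above):

```shell
# Pre-create the directory that list-deps appears to expect, so the
# command no longer trips over the missing path. Harmless if it exists.
mkdir -p ~/.llama/distributions
```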

Example output: ``` DEBUG 2025-11-28 14:18:57,828 llama_stack.core.server.auth:149 core::auth: Authentication successful: e91dfa83-0d63-4457-8a9d-562696c9b65d with 2 attributes DEBUG 2025-11-28 14:18:57,830 llama_stack.core.server.server:230 core::server: Incoming raw request body for POST /v1/chat/completions: { 'messages': [{'role': 'user',...

Likely introduced by https://github.com/meta-llama/llama-stack/pull/2516

Weird: I get the error when I run with the venv built by the `llama stack build` command (`./built/bin/llama run ...`) but don't when I use the main venv (`llama...`)

OK, I've just discovered $LLAMA_STACK_DIR; this solves my problem: LLAMA_STACK_DIR=$PWD llama stack build .... Which makes me wonder why I haven't needed it up to now for other local edits...

> OK, I've just discovered $LLAMA_STACK_DIR; this solves my problem: > > LLAMA_STACK_DIR=$PWD llama stack build .... > > Which makes me wonder why I haven't needed it up to now...
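The fix quoted above can be sketched as a small shell snippet. The remaining `llama stack build` arguments are elided in the original comment, so they are left elided here too; the assumption (from the behaviour described) is that setting this variable makes the build use the local checkout instead of a published package:

```shell
# Run from the llama-stack checkout. Exporting LLAMA_STACK_DIR pointed at
# the local source tree before building, so local edits are picked up.
export LLAMA_STACK_DIR="$PWD"
# then: llama stack build ...   (arguments elided in the original comment)
```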