Emilio Garcia
## Screenshots
Using LlamaStack from main: `llama stack run starter`
NOTE: The client span is identical because that came from the openai client, which I instrument.
### HTTP Post ###...
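For context, a rough sketch of the kind of client-side instrumentation being referred to, assuming manual OpenTelemetry spans around the OpenAI client; the tracer name, base URL, and model below are placeholders, not the actual patch:

```python
# Sketch: wrap an OpenAI client call in an OpenTelemetry span so the
# client-side span shows up alongside whatever the server emits.
from openai import OpenAI
from opentelemetry import trace

tracer = trace.get_tracer("openai-client")  # placeholder tracer name
client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")  # illustrative endpoint

with tracer.start_as_current_span("chat.completions.create") as span:
    span.set_attribute("llm.request.model", "llama3.2:3b")  # placeholder model
    resp = client.chat.completions.create(
        model="llama3.2:3b",
        messages=[{"role": "user", "content": "Hello"}],
    )
    span.set_attribute("llm.response.id", resp.id)
```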
@ehhuang take a look and let me know your thoughts. It looks like one thing we were not tracking during testing was the output from the model routing...
@leseb I addressed what remains of the telemetry API here. It should be resolved now, thanks for checking. Please take another look once CI is back up.
This is a slight deviation from the original vision: specifically, Llama Stack would no longer be responsible for instantiating and managing telemetry exporters and collectors. That would all be left up to...
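To make the deviation concrete, a minimal sketch of what "left up to the operator" could look like: the user configures their own OpenTelemetry exporter and collector endpoint, and the stack only emits spans through the OTel API. The endpoint and wiring below are illustrative assumptions, not the proposed implementation:

```python
# Sketch: the operator owns exporter/collector setup; the stack just emits spans.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Point the SDK at the operator's own collector (illustrative endpoint).
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)
```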
I fixed this by shuffling some of the changes back into #4127, which now addresses everything related to the telemetry config. This PR handles the telemetry_traceable object. PTAL once CI is...
Just seeing this now; we will get back to you after the holidays.
Would you mind just throwing it into a draft PR? Reviewing integrations takes us some time because we have to educate ourselves on the tool, and then on the library, and...
Hi, this is an interesting proposal. The way the agent determines how to weigh a transaction and the data within it is not tunable at the moment. We do think...
Of course @abhibongale! Responses API types are stored [here](https://github.com/llamastack/llama-stack/blob/main/llama_stack/apis/agents/openai_responses.py), which is where you can find the [response object](https://github.com/llamastack/llama-stack/blob/7c466a7ec5b5e3180db475e38b9f0aff5c7f3433/llama_stack/apis/agents/openai_responses.py#L323). This should be updated to contain `max_output_tokens` based on the [OpenAI...
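For illustration only, the change might look roughly like adding an optional field to the response model; the class shape below is an assumption, not the actual definition in openai_responses.py:

```python
# Sketch: surfacing max_output_tokens on the Responses API response object.
# Field kept optional so existing responses without it still validate.
from pydantic import BaseModel


class OpenAIResponseObject(BaseModel):  # illustrative subset of fields
    id: str
    model: str
    max_output_tokens: int | None = None
```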
Hi @gnunn1, this is not currently planned to be contributed back to the Agents API. That API was deprecated for reasons such as deviating strongly from industry standards and...