agenta
The all-in-one LLM developer platform: prompt management, evaluation, human feedback, and deployment all in one place.
Right now the user needs to explicitly return, from the traced function, a dict that contains the cost, message, and number of tokens. However, this information is simply the sum...
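A minimal sketch of the pattern described above, using hypothetical field names (`cost`, `message`, `total_tokens` are assumptions, not Agenta's actual schema): the traced function has to aggregate the per-call stats itself, even though each field is just a sum over the individual LLM calls.

```python
# Hypothetical sketch -- not Agenta's actual API. Shows the manual
# aggregation the traced function currently has to do by hand.
def summarize_calls(calls):
    """Build the dict the tracer expects from a list of per-call stats."""
    return {
        "cost": sum(c["cost"] for c in calls),
        "message": calls[-1]["message"] if calls else "",
        "usage": {"total_tokens": sum(c["total_tokens"] for c in calls)},
    }

calls = [
    {"cost": 0.002, "message": "partial", "total_tokens": 120},
    {"cost": 0.003, "message": "final answer", "total_tokens": 180},
]
result = summarize_calls(calls)
```

Since every field is a straightforward sum (or the last message), the SDK could plausibly compute this automatically instead of asking the user to return it.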
We would like instrumentation for the OpenAI library. First we need to assess the different options. I see the following implementation patterns: * Monkey patching (I think Baserun does that)...
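To make the monkey-patching option concrete, here is a generic sketch of the pattern (the `FakeClient` class and `recorded` list are illustrative stand-ins, not the OpenAI client or any Agenta code): the patcher replaces a client method with a wrapper that records the call and then delegates to the original.

```python
# Generic monkey-patching sketch, not a real OpenAI integration.
import functools

class FakeClient:
    """Stand-in for an LLM client whose method we want to trace."""
    def create(self, prompt):
        return {"text": prompt.upper()}

recorded = []

def patch_create(client_cls):
    """Replace client_cls.create with a tracing wrapper."""
    original = client_cls.create

    @functools.wraps(original)
    def wrapper(self, prompt):
        recorded.append(prompt)        # capture the call for the trace
        return original(self, prompt)  # then delegate unchanged

    client_cls.create = wrapper

patch_create(FakeClient)
out = FakeClient().create("hello")
```

The upside is that user code needs no changes; the downside is the usual fragility of patching, since the wrapper must track the library's method signatures across versions.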
We would like to automatically instrument LLM calls made with llm (without going through `ag.span`). This would allow us to capture all the information in the call. This would...
**Describe the bug** When you run `agenta init` & `agenta serve` for an app with name x, then delete it from the UI, then run `agenta init` & `agenta serve` again with the same...
**Describe the bug** For reasons obscure to me, when trying to build a variant from Windows (or WSL), the automatically created entrypoint.sh does not have execution permission. Meaning the server...
AxiosError: Network Error ![20240415202642](https://github.com/Agenta-AI/agenta/assets/162455810/4a1aaa40-bde7-4bbb-aac8-7618280c86cd) What's the problem? Thanks.
The following code results in an incorrect trace ``` import agenta import agenta as ag import litellm import asyncio from supported_models import get_all_supported_llm_models litellm.drop_params = True ag.init() tracing = ag.llm_tracing()...
Context: In entity recognition tasks, the user needs to evaluate multiple outputs. For instance, say the task is to extract the author and a date from a PDF. The user...
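A hedged sketch of what per-field evaluation could look like for the author/date example above (the `score_fields` helper and the exact-match scoring are assumptions for illustration, not an existing Agenta evaluator): each extracted field gets its own score instead of one string comparison over the whole output.

```python
# Illustrative per-field evaluator, not an Agenta API.
def score_fields(expected, predicted):
    """Exact-match score (1.0 or 0.0) for each expected field."""
    return {
        key: 1.0 if predicted.get(key) == value else 0.0
        for key, value in expected.items()
    }

expected = {"author": "Ada Lovelace", "date": "1843-09-01"}
predicted = {"author": "Ada Lovelace", "date": "1843"}
scores = score_fields(expected, predicted)
```

Splitting the score per field lets the user see that the author was extracted correctly while the date was not, which a single pass/fail on the whole output would hide.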
### Context: Some users are confused that mistral/mistral-medium, for instance, shows a cost of 0. They ask: "if I use model "mistral/mistral-medium" in the Playground, I get $0.0 cost for each...
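One plausible mechanism behind the $0.0 display (a guess for illustration; the table and function below are hypothetical, not Agenta's or LiteLLM's actual pricing code): a per-model price lookup that silently falls back to zero when the model name is missing from the table.

```python
# Hypothetical cost lookup illustrating the silent-zero failure mode.
PRICE_PER_1K_TOKENS = {
    "gpt-3.5-turbo": 0.0015,
    # "mistral/mistral-medium" is absent from the table
}

def estimate_cost(model, tokens):
    """Return estimated cost in dollars; unknown models fall back to 0."""
    return PRICE_PER_1K_TOKENS.get(model, 0.0) * tokens / 1000

known = estimate_cost("gpt-3.5-turbo", 2000)
unknown = estimate_cost("mistral/mistral-medium", 2000)
```

If this is roughly what happens, surfacing "price unknown" instead of "$0.0" for models missing from the table would remove the confusion.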
From [SyncLinear.com](https://synclinear.com) | [AGE-153](https://linear.app/agenta/issue/AGE-153/[bug]-sorting-by-timestamp-does-not-work-in-generation-and-trace-table)