
Results: 8 issues by lievan

## Description

`Test_DependencyEnable` is passing for Go, but it is not enabled. This PR enables it.

Labels: run-all-scenarios, mergequeue-status: error

The LLM Obs backend does not currently support ingesting the numerical metric type, so the SDK needs to be updated to (1) warn users not to submit this metric type and...
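A minimal sketch of the warn-and-redirect behavior described above. The helper name `validate_metric_type` and the set of supported types are assumptions for illustration, not the SDK's actual API:

```python
import warnings

# Assumed for illustration: the metric types the backend accepts.
SUPPORTED_METRIC_TYPES = ("categorical", "score")


def validate_metric_type(metric_type: str) -> str:
    """Warn on the unsupported 'numerical' type and fall back to 'score'.

    Hypothetical helper sketching the PR's described behavior; the real
    SDK entry point and fallback choice may differ.
    """
    if metric_type == "numerical":
        warnings.warn(
            "The 'numerical' metric type is not supported by the LLM Obs "
            "backend; submitting as 'score' instead."
        )
        return "score"
    return metric_type
```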

Labels: backport 2.9, backport 2.10

### What does this PR do? What is the motivation?

This PR documents how to set the `metadata` field for evaluation metrics in the SDK.

### Merge instructions - [...

Labels: Do Not Merge

### What does this PR do? What is the motivation?

This PR adds instructions on how to annotate a prompt on an LLM span. Not to be merged yet...

This PR adds the non-boilerplate code for the ragas faithfulness evaluator. The majority of LOC changes are from cassettes/requirements. The main logic is in `ddtrace/llmobs/_evaluators/ragas/faithfulness.py`. There are four...

Labels: changelog/no-changelog

Tracks the number of tokens read from and written to the prompt cache for the Bedrock Converse API (https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html). Bedrock returns `cacheWrite/ReadInputTokenCount` or `cacheWrite/ReadInputTokens` (not exactly sure why there are two names,...
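A minimal sketch of handling the two Bedrock field spellings mentioned above, normalizing whichever one appears in the usage payload to common snake_case keys. The function name and output key names are assumptions for illustration:

```python
def extract_bedrock_cache_tokens(usage: dict) -> dict:
    """Normalize Bedrock Converse cache-token usage fields.

    Bedrock may report either `cacheRead/WriteInputTokens` or the
    `...TokenCount` variants; prefer the former when both exist.
    Hypothetical helper; the real ddtrace mapping may differ.
    """
    out = {}
    field_candidates = {
        "cache_read_input_tokens": ("cacheReadInputTokens", "cacheReadInputTokenCount"),
        "cache_write_input_tokens": ("cacheWriteInputTokens", "cacheWriteInputTokenCount"),
    }
    for target, candidates in field_candidates.items():
        for name in candidates:
            if name in usage:
                out[target] = usage[name]
                break  # first match wins
    return out
```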

Tracks the number of tokens read from and written to the prompt cache for Anthropic (https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching). Anthropic returns `cache_creation/read_input_tokens` in their usage field. We map these to `cache_write/read_input_tokens` keys in our...
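A minimal sketch of the rename described above, mapping Anthropic's `cache_creation_input_tokens` / `cache_read_input_tokens` usage fields to `cache_write` / `cache_read` keys. The function name is an assumption for illustration:

```python
def extract_anthropic_cache_tokens(usage: dict) -> dict:
    """Map Anthropic cache usage fields to cache_write/read keys.

    Hypothetical helper sketching the mapping the PR describes; the
    real ddtrace key names may differ.
    """
    mapping = {
        "cache_creation_input_tokens": "cache_write_input_tokens",
        "cache_read_input_tokens": "cache_read_input_tokens",
    }
    return {dst: usage[src] for src, dst in mapping.items() if src in usage}
```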

Tracks the number of tokens read from the prompt cache for OpenAI chat completions. OpenAI does prompt caching by default and returns a `cached_tokens` field in `prompt_tokens_details` (https://platform.openai.com/docs/api-reference/chat/create). We rely on...
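A minimal sketch of reading the nested `cached_tokens` field from an OpenAI usage payload, tolerating responses where the details block is absent. The function name is an assumption for illustration:

```python
from typing import Optional


def extract_openai_cached_tokens(usage: dict) -> Optional[int]:
    """Return `usage.prompt_tokens_details.cached_tokens`, or None if absent.

    Hypothetical helper; older responses may omit `prompt_tokens_details`
    entirely, so fall back to an empty dict before the lookup.
    """
    details = usage.get("prompt_tokens_details") or {}
    return details.get("cached_tokens")
```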