Marc Klingen
### Discussed in https://github.com/orgs/langfuse/discussions/2111 Originally posted by **arthurGrigo** May 21, 2024 Not sure if this behaviour is intended, but when I use LangChain's LLM cache and `get_openai_callback()`...
### Discussed in https://github.com/orgs/langfuse/discussions/2097 Originally posted by **hburrichter** May 18, 2024 ### Describe the feature or potential improvement It would be great to see the rendered Markdown of a trace...
### Describe the bug Currently the `run_id` needs to be available while the callback handler is executed. This works fine when the callback handler is passed to the invoke/call/stream/... methods. However, if...
### Discussed in https://github.com/orgs/langfuse/discussions/2101 Originally posted by **Shekswess** May 20, 2024 ### Describe the feature or potential improvement The idea is to add AWS Cognito Authentication that could be easily...
### Discussed in https://github.com/orgs/langfuse/discussions/2072 Originally posted by **simon-hiel** May 16, 2024 ### Describe the feature or potential improvement **Context**: Trying to migrate hundreds of prompts from my database to Langfuse....
When using `generate`, the `GenerationChunk` object contains `usage_metadata`. It would be useful to capture the token counts from there. Example: ``` [[GenerationChunk([...] generation_info={'usage_metadata': {'prompt_token_count': 15, 'candidates_token_count': 647, 'total_token_count': 662}})]] ``` Docs:...
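A minimal sketch of what capturing those counts could look like, assuming only the `usage_metadata` dict shape shown in the example above (the `extract_usage` helper and its output field names are illustrative, not part of any SDK):

```python
# Hypothetical helper: map the usage_metadata keys from generation_info
# to generic input/output/total token-count fields.
def extract_usage(generation_info: dict) -> dict:
    usage = generation_info.get("usage_metadata", {})
    return {
        "input": usage.get("prompt_token_count"),
        "output": usage.get("candidates_token_count"),
        "total": usage.get("total_token_count"),
    }

# Using the values from the GenerationChunk example above.
info = {"usage_metadata": {"prompt_token_count": 15,
                           "candidates_token_count": 647,
                           "total_token_count": 662}}
print(extract_usage(info))  # {'input': 15, 'output': 647, 'total': 662}
```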
## Goal Results from LLM security tools can easily be monitored in Langfuse as scores. It would be helpful to include docs, a cookbook, and a blog post on how to do this....
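One way such a cookbook could frame it, as a minimal sketch: convert a security tool's risk estimate into a score payload attached to a trace. The function, field names, and threshold below are illustrative assumptions, not a specific tool's or SDK's API:

```python
# Hypothetical mapping from a security check result to a score payload.
# value 1 = the output passed the check, 0 = the check flagged it.
def security_score(trace_id: str, check_name: str,
                   risk: float, threshold: float = 0.5) -> dict:
    return {
        "trace_id": trace_id,
        "name": check_name,
        "value": 0 if risk > threshold else 1,
        "comment": f"risk={risk:.2f} (threshold {threshold})",
    }

# Example: a prompt-injection detector reporting a high risk estimate.
print(security_score("trace-123", "prompt-injection", risk=0.82))
```

A payload like this could then be reported as a score on the corresponding trace, making flagged generations filterable in the Langfuse UI.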