Results: 9 comments of Tommy He

Can confirm that when I add an `lf.flush()` call in the separate process, it just hangs.
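
For context, here is a minimal sketch of the pattern being described. It assumes the Langfuse Python SDK and a child process started with `multiprocessing`; the actual reproduction in the issue may differ.

```python
import multiprocessing

from langfuse import Langfuse  # assumes the Langfuse Python SDK, configured via environment variables


def worker() -> None:
    # Create a client and record a trace inside the separate process.
    lf = Langfuse()
    lf.trace(name="child-process-trace")
    # Reported behavior: this flush never returns when called in the child process.
    lf.flush()


if __name__ == "__main__":
    # The start method ("fork" vs. "spawn") may matter on macOS.
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()
```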

Since @0xskl mentioned it might be tied to the OS, I'm running on an ARM Mac (`macOS 13.5.2 22G91 arm64`).

I'd be interested to hear from anyone who has gotten Hypercorn to work with OpenTelemetry! I'm having trouble getting it to send any traces at all, even with 1 worker. Here's a minimal...

Metrics seem to work, but I can't get traces to show up. Is there anything wrong with the configuration in my example? Or do you have a public repo showcasing...
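
For reference, here is a minimal sketch of the kind of setup in question, assuming a bare ASGI app served by Hypercorn, the OpenTelemetry Python SDK, and an OTLP gRPC collector listening on `localhost:4317`. It is an illustration only, not the configuration from the original comment.

```python
# app.py — assumes: pip install hypercorn opentelemetry-sdk \
#   opentelemetry-exporter-otlp-proto-grpc opentelemetry-instrumentation-asgi
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.asgi import OpenTelemetryMiddleware
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure the tracer provider with an OTLP exporter pointing at a local collector.
provider = TracerProvider(resource=Resource.create({"service.name": "hypercorn-demo"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)


async def app(scope, receive, send):
    # Minimal ASGI app returning a plain-text response.
    if scope["type"] != "http":
        return
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"hello"})


# Wrap the app so each incoming request produces a server span.
app = OpenTelemetryMiddleware(app)

# Run with: hypercorn app:app --workers 1
```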

Can confirm that I receive the same error when OpenAI errors out due to rate limiting, too:

```
Giving up execute_task_with_backoff(...) after 3 tries (langfuse.request.APIError: Invalid JSON (400): None)
```

@maxdeichmann I'm not using the langchain integration. We have our own function that calls an LLM API, and it's wrapped in `@observe(as_type="generation")`; our outer functions are wrapped in `@observe()`. Luckily...
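
To illustrate the layout described above, here is a rough sketch using the Langfuse `@observe` decorators. The function names and the stubbed LLM call are placeholders, not the actual project code.

```python
from langfuse.decorators import observe  # Langfuse Python SDK decorator API


def fake_llm_api(prompt: str) -> str:
    # Stand-in for the project's own LLM API call (hypothetical).
    return f"response to: {prompt}"


@observe(as_type="generation")
def call_llm(prompt: str) -> str:
    # Inner function: recorded by Langfuse as a generation.
    return fake_llm_api(prompt)


@observe()
def answer_question(question: str) -> str:
    # Outer function: recorded as a regular observation wrapping the generation above.
    return call_llm(f"Answer concisely: {question}")


if __name__ == "__main__":
    print(answer_question("What is ASGI?"))
```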

Refer to https://github.com/psycopg/psycopg/issues/858 as well as the warning for macOS ARM chips in https://www.psycopg.org/psycopg3/docs/basic/install.html#binary-installation. The issue seems to be the difficulty of builds targeting older versions of macOS on ARM chips...

Hello! Wanted to follow up to see if there's been any progress on a way to disable tracing for a section of code. For us, the use case is...

I am trying to do so with tags, but I'm not sure why it isn't working. Any help would be appreciated! This is what I'm doing, but happy to...
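
For reference, a minimal sketch of one way tags can be attached to the current trace with the Langfuse decorator API; the tag value and function below are purely illustrative, not the code elided from the comment above.

```python
from langfuse.decorators import langfuse_context, observe


@observe()
def handle_request(payload: dict) -> dict:
    # Tag the current trace; the tag name here is a placeholder.
    langfuse_context.update_current_trace(tags=["no-trace-section"])
    return {"ok": True, "echo": payload}


if __name__ == "__main__":
    handle_request({"q": "example"})
```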