Robert Craigie
Yes, you can ignore the errors, either by disabling the diagnostic rule or on an individual, case-by-case basis, but I would rather not do either of these if...
Thank you for investigating this and the incredibly detailed issue, I really appreciate it! This is definitely something that we should handle gracefully.
> I am still concerned about requests getting stuck in the maximum timeout once in a while, which doesn't appear to be related to the python client since it was...
Thanks @domenicrosati, what did you bump the pool limit to? Additionally, what timeout are you using? We have a pretty long timeout by default which, especially if your API calls...
@tobq I think the majority of these errors seem to be caused by an issue with your setup, can you share your tsconfig?
@khanakia can you share more about your setup please? Such as:
- Operating system
- Node version
- Full `tsconfig.json`
Thanks for the detailed report and for bisecting @palvarezcordoba! Unfortunately, I don't think there's anything we can realistically do here, and the fact that OpenTelemetry worked independently of the...
Ah of course, you shouldn't have to touch anything in `_base_client`, here's an example that should work (I haven't tested it):

```py
import httpx
from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor
from openai import OpenAI

# Instrument all httpx clients created after this call
HTTPXClientInstrumentor().instrument()

client = OpenAI(http_client=httpx.Client())
```
Ah sorry, it comes from this example: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-httpx#using-transports-directly

```py
import httpx
from opentelemetry.instrumentation.httpx import (
    SyncOpenTelemetryTransport,
)

transport = httpx.HTTPTransport()
telemetry_transport = SyncOpenTelemetryTransport(transport)
```
@Alexjkz please see this comment https://github.com/openai/openai-python/issues/645#issuecomment-1761251481