openllmetry
fix(langchain): improve callbacks
Addresses issues from #1426 (DONE). This basically just extends the tests; the main PRs were #1522 and #1452.
ToDo: decide how to handle async callbacks (DONE).
- [x] I have added tests that cover my changes.
- [ ] If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
- [x] PR name follows conventional commits format: `feat(instrumentation): ...` or `fix(instrumentation): ...`
- [ ] (If applicable) I have updated the documentation accordingly.
Hi @nirga, I have added some more checks to the tests regarding span relationships. The async tests are broken; I still need to figure out why. Could it be that the example you mentioned in #1426 was also async? I could not reproduce it in the sync case: the openai.chat (but also bedrock and cohere) spans line up nicely.
I found this: https://github.com/traceloop/openllmetry/blob/main/packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/chat_wrappers.py#L62
On another note: I ran an example via Traceloop, and the UI shows me 2 separate traces. However, when I click on StrOutputParser, I end up in the 2nd trace; this is also confirmed by the tests. Do you have an explanation?
Right! @tibor-reiss, I ran this: https://github.com/traceloop/openllmetry/blob/main/packages/sample-app/sample_app/langchain_lcel.py, which is indeed async.
@nirga Updated this branch with your work from #1452
@tibor-reiss ok, everything is now merged! I like that you added tests. Do you want to merge this with main, or just close this?
@nirga If I'm seeing this correctly, you already took some of this / fixed it yourself in the previous PR, so it's up to you whether you want to add these extras :)