openinference
feat: google-adk instrumentation.
Closes #1506
From examples/tools.py, the expected span sequence is:
- LLM (with function call output)
- TOOL
- LLM (with text output)
Thanks so much for contributing @daavoo! If it's okay, as I review I might push some fixes into your branch if it's low-hanging fruit, just to get a few more things over the line. I see some missing attributes like token counts and just want to ship a good first version for you!
No problemo, go ahead. I wanted to open the PR to check with you whether the implementation (wrapping the callback properties) makes sense, but I wasn't sure if I was filling the fields correctly (or if some important ones are missing).
Thanks for this PR @daavoo. I think the current monkey-patching approach might run into problems when users create their own callbacks (as they're encouraged to do by the docs here). The main issue is that instance-level callbacks would override the instrumentor callbacks we added at the class level. For instance, if a user sets their callbacks to `None` during instantiation, the instrumentor would stop working. I'm looking into another solution that should fix these issues.
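To make the shadowing problem concrete, here is a minimal pure-Python sketch; the `Agent` class and `before_model_callback` name are simplified stand-ins for illustration, not ADK's real implementation:

```python
def instrumented_callback(*args, **kwargs):
    # stand-in for the callback the instrumentor patches in
    print("emit OpenInference span")


class Agent:
    # class-level monkey-patch, as in the PR's current approach
    before_model_callback = instrumented_callback

    def __init__(self, before_model_callback=None):
        # the instance attribute shadows the class-level patch, so a user
        # passing nothing (or None) silently disables instrumentation
        self.before_model_callback = before_model_callback


agent = Agent()
print(agent.before_model_callback)  # None -> instrumentor callback is gone
```

Python attribute lookup finds the instance attribute before the class attribute, which is why the class-level patch never fires for agents constructed this way.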
Yeah 😅, I noticed that. I just put together something that worked in our context (any-agent) because we can choose to always call instrument manually after agent instantiation. But I agree it's not optimal for general use cases 🙏
> I'm looking into another solution that should fix these issues.
In case it helps, I also looked into:
- ADK's telemetry module. It felt like the traces it currently generates are missing info OpenInference wants. I was going to open an issue in Google ADK asking them to provide something like OpenAI's custom processors.
- Events. It would be more tedious, but we could wrap the main event class and try to infer the type of trace to generate based on properties of the event.
- Just patch internal framework calls.
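The event-based option could look roughly like this sketch, where the `Event` shape and its field names are assumptions for illustration, not ADK's actual event class:

```python
from dataclasses import dataclass, field


@dataclass
class Event:
    # assumed fields; ADK's real event class exposes richer accessors
    function_calls: list = field(default_factory=list)
    function_responses: list = field(default_factory=list)
    text: str = ""


def infer_span_kind(event: Event) -> str:
    """Heuristically map an event to an OpenInference span kind."""
    if event.function_responses:
        return "TOOL"  # a tool produced output
    return "LLM"       # model output, with or without function calls


print(infer_span_kind(Event(function_calls=[{"name": "get_weather"}])))  # LLM
print(infer_span_kind(Event(function_responses=[{"result": "sunny"}])))  # TOOL
```

This matches the LLM/TOOL/LLM sequence in the examples/tools.py flow, but a real wrapper would need to handle streaming and partial events, which is part of why this route felt tedious.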
Closing this; as commented above, the callback patch is not the right approach for the OpenInference use case.
@daavoo if you don't mind, I think we can keep this PR open as we update the approach here, given what we have so far.