logfire
Support for instrumenting Anthropic models
Originally posted by @salahaz as a discussion #88
Is there going to be support for instrumenting other AI models, such as Anthropic, in the near future?
Related to https://github.com/pydantic/logfire/issues/109 since they already support Anthropic.
@salahaz #98 shows how you can already do this with Mirascope while we patiently wait for this feature.
@samuelcolvin I'd love to take this feature request on if you'd like :)
We've already implemented the majority of it in our library following how you instrument OpenAI, so it shouldn't be too difficult to port over (and we can use the llm tag so it shows up pretty in the UI with no additional changes).
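The "port the OpenAI instrumentation over" idea boils down to wrapping the client's request method in a tracing span. Here's a minimal, library-agnostic sketch of that pattern; the `span`, `instrument`, and `FakeAnthropicClient` names are all hypothetical stand-ins, not logfire's real API:

```python
import functools
from contextlib import contextmanager


# Hypothetical stand-in for a tracing span; logfire's real span API differs.
@contextmanager
def span(name, **attributes):
    print(f"span start: {name} {attributes}")
    try:
        yield
    finally:
        print(f"span end: {name}")


def instrument(client, method_name, provider):
    """Wrap a client method so each call is recorded inside a span."""
    original = getattr(client, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        # Record the provider and requested model as span attributes.
        with span(f"{provider} request", model=kwargs.get("model")):
            return original(*args, **kwargs)

    setattr(client, method_name, wrapper)


# Hypothetical client used only to demonstrate the wrapping.
class FakeAnthropicClient:
    def create(self, *, model, prompt):
        return f"response from {model}"


client = FakeAnthropicClient()
instrument(client, "create", "anthropic")
result = client.create(model="claude-3", prompt="hi")
```

The same `instrument` helper works for any provider whose client exposes a plain method to patch, which is why porting from the OpenAI integration is mostly a matter of mapping method names and request/response attributes.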
@willbakst I think that'd be very appreciated
@alexmojaki Do you have a contribution guide I can follow?
https://github.com/pydantic/logfire/blob/main/CONTRIBUTING.md
@alexmojaki I just ran through the guide:
- There are 7 skipped tests and 2 xfailed tests. Is this expected?
- Running `make docs` aborts with 9 warnings in strict mode. Is this expected? (I can set strict mode to `false` locally to get it to work.)
Yes and yes. Our docs use a closed-source version of mkdocs with special features for sponsors/insiders, so external contributors can't fully build them. The contribution guide should really reflect this.
Update: I have a locally working version with tests, but a lot of the code is duplicated given the similarity with OpenAI. Going to spend some time refactoring to reduce code/test duplication and hopefully make it easier to instrument additional providers in the future (assuming they use the same SDK generation provider).
This will take a little longer to get done, but I believe it will be well worth it.
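One way the refactor above could reduce duplication is to pull the per-provider details (which method to wrap, how to extract the model name) into a config table shared by a single instrumentation core. This is purely an illustrative sketch; the names and method paths are assumptions, not logfire's actual design:

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ProviderConfig:
    name: str                          # provider tag shown in the UI
    method_path: str                   # dotted path to the client method to wrap
    get_model: Callable[[dict], Any]   # extract the model name from call kwargs


# Hypothetical registry of providers sharing one instrumentation core.
CONFIGS = {
    "openai": ProviderConfig(
        "openai", "chat.completions.create", lambda kw: kw.get("model")
    ),
    "anthropic": ProviderConfig(
        "anthropic", "messages.create", lambda kw: kw.get("model")
    ),
}


def span_name(provider: str, kwargs: dict) -> str:
    """Build the span name the shared core would use for any provider."""
    cfg = CONFIGS[provider]
    return f"{cfg.name}: {cfg.get_model(kwargs)}"
```

With a table like this, adding a third provider that uses the same SDK generation style would mostly mean adding one `ProviderConfig` entry rather than duplicating the wrapping and test code.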