[Feature Request]: Add CometAPI support to Helicone
The Feature
Add CometAPI as a supported provider in Helicone's gateway and observability platform, enabling users to monitor, analyze, and manage their CometAPI usage through Helicone's comprehensive LLM observability features.
Motivation, pitch
Helicone currently supports many LLM providers for observability and monitoring. Adding CometAPI would:
- Expand Helicone's provider ecosystem with another OpenAI-compatible option
- Enable users to monitor CometAPI usage alongside other providers in a unified dashboard
- Provide cost tracking, latency monitoring, and quality evaluation for CometAPI requests
- Support CometAPI's multimodal capabilities through Helicone's observability features
- Offer caching, rate limiting, and security features through Helicone's gateway
This integration would benefit users who want comprehensive observability for their CometAPI usage, especially those already using Helicone for other providers.
CometAPI Resources
Implementation Offer
We can implement this integration and submit a PR according to your project's standards and guidelines if you'd like.
Hi @TensorNull, the AI Gateway is still in beta and we are focusing on making the initial providers stable before adding more. Happy to revisit CometAPI in the future!
If you want a manual integration with Helicone, feel free!
Since all responses are OAI-compatible, it should be straightforward. You can start with worker/src/index.ts to support a comet.helicone.ai endpoint (or whatever you'd like to call it).
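For anyone picking this up, the routing step could look roughly like the sketch below: map the incoming gateway hostname to the upstream base URL while preserving the OpenAI-compatible path. The `comet.helicone.ai` name comes from this thread; the CometAPI base URL, the helper name, and the lookup-table shape are assumptions for illustration, not Helicone's actual worker code.

```typescript
// Hypothetical hostname → upstream mapping for the gateway worker.
// The CometAPI base URL below is an assumption, not confirmed.
const PROVIDER_BASE_URLS: Record<string, string> = {
  "oai.helicone.ai": "https://api.openai.com",
  "comet.helicone.ai": "https://api.cometapi.com", // assumed upstream URL
};

// Resolve the upstream URL a gateway request should be proxied to,
// keeping the OpenAI-compatible path and query string intact.
function resolveUpstreamUrl(requestUrl: string): string {
  const url = new URL(requestUrl);
  const base = PROVIDER_BASE_URLS[url.hostname];
  if (!base) {
    throw new Error(`Unsupported gateway host: ${url.hostname}`);
  }
  return `${base}${url.pathname}${url.search}`;
}
```

For example, a request to `https://comet.helicone.ai/v1/chat/completions` would be forwarded to `https://api.cometapi.com/v1/chat/completions`, with auth headers and body passed through unchanged.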
You'll also need to update our costs package (packages/cost/).
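The cost-package side might be sketched as follows: entries that match a model name and carry per-token prices, plus a lookup that computes request cost. The interface shape, model names, and prices here are placeholders for illustration; the real `packages/cost/` types and CometAPI pricing should be taken from the repo and provider docs.

```typescript
// Hypothetical cost entry shape; field names and prices are assumptions.
interface ModelRow {
  model: { operator: "equals" | "startsWith" | "includes"; value: string };
  cost: { prompt_token: number; completion_token: number };
}

const cometCosts: ModelRow[] = [
  {
    // Example model name and made-up per-token prices, not real pricing.
    model: { operator: "equals", value: "gpt-4o-mini" },
    cost: { prompt_token: 0.00000015, completion_token: 0.0000006 },
  },
];

// Find the matching row and compute the cost of one request.
function costOf(model: string, promptTokens: number, completionTokens: number): number {
  const row = cometCosts.find((r) => {
    switch (r.model.operator) {
      case "equals": return r.model.value === model;
      case "startsWith": return model.startsWith(r.model.value);
      case "includes": return model.includes(r.model.value);
    }
  });
  if (!row) return 0; // unknown model: no cost attributed
  return promptTokens * row.cost.prompt_token + completionTokens * row.cost.completion_token;
}
```

With an entry like this in place, Helicone's dashboard could attribute dollar costs to CometAPI requests the same way it does for other OpenAI-compatible providers.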
Keep me updated!