gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with 1 fast & friendly API.
**Triton Support**
- This branch contains the changes for Triton support. With these changes, Portkey can now call a Triton host.
- The implementation is based on the Ollama changes...
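A rough sketch of what this enables, assuming the gateway's default local port (8787) and that the new provider is exposed under a `triton` slug reached via the gateway's custom-host header (both are assumptions for illustration, not confirmed configuration):

```ts
// Route an OpenAI-style chat completion through the local gateway to a
// self-hosted Triton server. The "triton" provider slug and the custom-host
// value below are placeholders.
const response = await fetch("http://localhost:8787/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-portkey-provider": "triton",
    "x-portkey-custom-host": "http://my-triton-host:8000", // hypothetical Triton endpoint
  },
  body: JSON.stringify({
    model: "llama-3-8b", // whichever model is deployed on the Triton server
    messages: [{ role: "user", content: "Hello" }],
  }),
});
console.log(await response.json());
```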
**Title:**
- Make the Anthropic usage object in streaming mode compliant with the OpenAI response
**Description:**
- Moved prompt_tokens to the streamState object
- Used the streamState object to access prompt_tokens
**Related...
### What Happened?
The Anthropic transform currently sends completion_tokens and prompt_tokens in separate chunks.
### What Should Have Happened?
The OpenAI format is to send them together in the last chunk.
###...
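For reference, an OpenAI-compatible stream reports both counts together in a single usage object on the final chunk. A minimal sketch of that chunk (the id and token counts are made-up placeholders):

```ts
// Illustrative final stream chunk in the OpenAI format: prompt_tokens and
// completion_tokens arrive together rather than in separate chunks.
const finalChunk = {
  id: "chatcmpl-123", // placeholder id
  object: "chat.completion.chunk",
  choices: [], // the usage-only chunk carries no content delta
  usage: {
    prompt_tokens: 12,
    completion_tokens: 34,
    total_tokens: 46,
  },
};
```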
**Title: Deprecate /generate endpoint for Cohere provider**
- Updates the Cohere endpoints in favour of deprecating the /generate endpoint.
**Motivation:** (optional)
- Why this change is necessary or beneficial
**Related Issues:** (optional)...
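For context, this tracks Cohere's own move from its legacy text-generation endpoint to the chat endpoint. A sketch of the request shape on the replacement endpoint, as I understand Cohere's public API (the model name and message are placeholders):

```ts
// Legacy endpoint being retired:  POST https://api.cohere.ai/v1/generate  { model, prompt, ... }
// Replacement endpoint:           POST https://api.cohere.ai/v1/chat      { model, message, ... }
const res = await fetch("https://api.cohere.ai/v1/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.COHERE_API_KEY}`,
  },
  body: JSON.stringify({
    model: "command-r", // example model name
    message: "Write a haiku about gateways.",
  }),
});
console.log(await res.json());
```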
**Title:**
- Support the Llama hosted API model on Vertex: https://console.cloud.google.com/vertex-ai/publishers/meta/model-garden/llama3-405b-instruct-maas
**Description:** (optional)
- Created a transform and config for Llama inside Vertex
- Handled the JSON streaming response
**Related Issues:** (optional) -...
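Not the PR's actual transform, but a generic sketch of handling a JSON streaming response, assuming the upstream emits one JSON object per line (newline-delimited JSON) with an OpenAI-like chunk shape; both of those details are assumptions here:

```ts
// Read a streaming HTTP body and parse newline-delimited JSON chunks as they arrive.
async function readJsonStream(res: Response): Promise<void> {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Split on newlines, keeping the last (possibly partial) line in the buffer.
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";

    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      // The field path below is an assumed, OpenAI-like chunk shape.
      process.stdout.write(chunk.choices?.[0]?.delta?.content ?? "");
    }
  }
}
```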
### What Happened?
I'm running the gateway locally using `npx @portkey-ai/gateway`, and when calling it through the NodeJS SDK I got weird connection errors:

```ts
const portkey = new Portkey({ apiKey: " ",...
```
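For comparison, a sketch of how I would expect the NodeJS SDK to be pointed at the locally running gateway rather than the hosted endpoint. The `baseURL`, `provider`, and `Authorization` option names are as I recall them from the SDK, and port 8787 with a `/v1` path is the local gateway's default as far as I know; all of these should be checked against the SDK docs:

```ts
import Portkey from "portkey-ai";

// Point the SDK at the local gateway; option names and defaults are assumptions.
const portkey = new Portkey({
  baseURL: "http://localhost:8787/v1", // local gateway instead of the hosted endpoint
  provider: "openai",                  // upstream provider the gateway should route to
  Authorization: "Bearer sk-...",      // the upstream provider's key, not a Portkey key
});

const completion = await portkey.chat.completions.create({
  model: "gpt-4o-mini", // placeholder model
  messages: [{ role: "user", content: "ping" }],
});
console.log(completion.choices[0]?.message?.content);
```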
This will also support [GitHub Models](https://github.com/marketplace/models). More info: https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/introducing-the-azure-ai-model-inference-api/ba-p/4144292