[Help]: Integration with PortKey - OpenAI Compatible Endpoint
What happened?
Description:
I attempted to integrate PortKey with LiteLLM in two ways. Here’s a summary of the steps and outcomes:
Attempt 1: Configuring PortKey as an OpenAI-Compatible Endpoint.
Followed the steps to set environment variables (PORTKEY_API_KEY, PORTKEY_API_BASE), updated the compose.yaml, and configured config.yaml as per the documentation.
When testing the cURL request to http://localhost:4000/v1/chat/completions, I encountered a 400 error. The response indicated that the headers x-portkey-config or x-portkey-provider are required.
Despite adding these headers, the error persists.
Attempt 2: Using PortKey as a LiteLLM Proxy (LLM Gateway).
This approach worked (using Python): I set up Portkey with a configuration that includes x-portkey-config, and I successfully made requests via LiteLLM.
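For reference, the working gateway approach can be sketched roughly like this. The config ID, model name, and keys below are placeholders, and the exact call shape is an assumption based on the LiteLLM SDK's `completion` API, not the thread's actual code:

```python
import os

# Portkey routes requests based on its x-portkey-* headers; the config ID
# below is a placeholder, not a real value from this thread.
portkey_headers = {
    "x-portkey-api-key": os.environ.get("PORTKEY_API_KEY", "pk-placeholder"),
    "x-portkey-config": "pc-portke-xxxxx",  # hypothetical Portkey config ID
}

def ask(question: str):
    """Send one chat request through Portkey's gateway via the LiteLLM SDK."""
    import litellm  # imported lazily so the header setup is testable offline

    return litellm.completion(
        model="openai/gpt-4o",                  # placeholder model
        api_base="https://api.portkey.ai/v1",   # Portkey gateway endpoint
        api_key="dummy",                        # Portkey authenticates via the headers above
        extra_headers=portkey_headers,
        messages=[{"role": "user", "content": question}],
    )
```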
Question: Could you provide guidance on the necessary LiteLLM configuration and the JSON payload to send to http://localhost:4000/v1/chat/completions so that the OpenAI-compatible endpoint works?
Thanks in advance!
Relevant log output
{
"error": {
"message": "litellm.BadRequestError: OpenAIException - Error code: 400 - {'status': 'failure', 'message': 'Either x-portkey-config or x-portkey-provider header is required'}\nReceived Model Group=portkey/gpt-4o-mini\nAvailable Model Group Fallbacks=None",
"type": null,
"param": null,
"code": "400"
}
}
> set environment variables (PORTKEY_API_KEY, PORTKEY_API_BASE)
can you share how you're trying to add this? via config.yaml or .completion?
- Add the environment variables to the .env file:
PORTKEY_API_KEY=lx************************
PORTKEY_API_BASE=https://api.portkey.ai/v1
- Add them to the compose.yaml (as I have the ones of the other providers):
PORTKEY_API_BASE: ${PORTKEY_API_BASE}
PORTKEY_API_KEY: ${PORTKEY_API_KEY}
- Set config.yaml:
# PortKey - unknown model names
- model_name: "portkey/*"
  litellm_params:
    model: "openai/portkey/*" # Use OpenAI route
    api_base: os.environ/PORTKEY_API_BASE
    api_key: os.environ/PORTKEY_API_KEY
It doesn't work either when specifying only one model:
# PortKey
- model_name: "portkey/gpt-4o"
  litellm_params:
    model: "openai/portkey/gpt-4o" # Use OpenAI route
    api_base: os.environ/PORTKEY_API_BASE
    api_key: os.environ/PORTKEY_API_KEY
- Test (the same error occurs whether or not the x-portkey-config header is included):
curl --request POST \
  --url http://localhost:4000/v1/chat/completions \
  --header 'Content-Type: application/json' \
  --header 'x-portkey-config: pc-portke-*****' \
  --data '{
    "model": "portkey/gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant. Answer the user'\''s question."
      },
      {
        "role": "user",
        "content": "how are you today?"
      }
    ],
    "stream": false
  }'
- Error response:
{
"error": {
"message": "litellm.BadRequestError: OpenAIException - Error code: 400 - {'status': 'failure', 'message': 'Either x-portkey-config or x-portkey-provider header is required'}\nReceived Model Group=portkey/gpt-4o-mini\nAvailable Model Group Fallbacks=None",
"type": null,
"param": null,
"code": "400"
}
}
Hi @krrishdholakia, could you take a look at this please? Thanks in advance!
> "message": "litellm.BadRequestError: OpenAIException - Error code: 400 - {'status': 'failure', 'message': 'Either x-portkey-config or x-portkey-provider header is required'}\nReceived Model Group=portkey/gpt-4o-mini\nAvailable Model Group Fallbacks=None",
hey @mvrodrig your error indicates you're missing some Portkey headers
you can add these to your config like this:
- model_name: "portkey/*"
  litellm_params:
    model: "openai/portkey/*" # Use OpenAI route
    api_base: os.environ/PORTKEY_API_BASE
    api_key: os.environ/PORTKEY_API_KEY
    extra_headers: {"x-portkey-provider": ..}
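Fully expanded, that entry might look like the sketch below. The provider value "openai" is an assumed example, not something confirmed in this thread, and the rest mirrors the config already shown above:

```yaml
# Sketch: wildcard Portkey route with an assumed provider header.
# "openai" is an example value, not taken from this thread.
- model_name: "portkey/*"
  litellm_params:
    model: "openai/portkey/*" # Use OpenAI route
    api_base: os.environ/PORTKEY_API_BASE
    api_key: os.environ/PORTKEY_API_KEY
    extra_headers:
      x-portkey-provider: "openai"
```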
Hey @krrishdholakia ,
I tried this in two ways in my config; neither of them worked:
# PortKey
- model_name: "portkey/gpt-4o"
  litellm_params:
    model: "openai/portkey/gpt-4o" # Use OpenAI route
    api_base: os.environ/PORTKEY_API_BASE
    api_key: os.environ/PORTKEY_API_KEY
    extra_headers: {"x-portkey-config": "pc-portke-******"}
and:
# PortKey
- model_name: "portkey/gpt-4o"
  litellm_params:
    model: "openai/portkey/gpt-4o" # Use OpenAI route
    api_base: os.environ/PORTKEY_API_BASE
    api_key: os.environ/PORTKEY_API_KEY
    extra_headers:
      x-portkey-config: "pc-portke-******"
Error I get:
{
"error": {
"message": "litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': 'openai error: invalid model ID', 'type': 'invalid_request_error', 'param': None, 'code': None}, 'provider': 'openai'}\nReceived Model Group=portkey/gpt-4o\nAvailable Model Group Fallbacks=None",
"type": null,
"param": null,
"code": "400"
}
}
It doesn't work with the x-portkey-provider header either.
Would it be too complicated for you to add Portkey as a provider? Thank you for your support!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
This works for me. I am using a config to define the model and its configuration. I am using this with CrewAI, and CrewAI uses LiteLLM behind the scenes.
from crewai import LLM
from portkey_ai import createHeaders
def get_llm():
    return LLM(
        model="openai/portkey/*",
        base_url="https://api.portkey.ai/v1",
        api_key="dummy",
        extra_headers=createHeaders(
            api_key="