chatGPTBox
Independent Claude model
Is there any way to make the PaLM model independent of Poe? Poe has limited access for free users. It'd be great if independent PaLM and Claude Instant models were embedded in ChatGPTBox.
I have been using openrouter.ai for this in my projects alongside Poe; they have quite a variety of models, but unfortunately they require custom headers in order to use their proxy. I've been working with them to try to get this changed (this extension is one of the reasons), but for the time being it can't be used fully interchangeably.
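For context, here is a minimal sketch of what those custom headers look like against OpenRouter's OpenAI-compatible chat endpoint. The header names (`HTTP-Referer`, `X-Title`) and the model slug are my understanding of their requirements at the time and may have changed:

```python
# Sketch: OpenRouter's OpenAI-compatible endpoint expects extra headers
# beyond Authorization, which is what blocks drop-in use in extensions.
# Header names and the model slug reflect my understanding and may be outdated.
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    payload = {
        "model": "anthropic/claude-instant-v1",  # assumed model slug
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
        # The non-standard headers OpenRouter asks for:
        "HTTP-Referer": "https://example.com/my-app",  # your app's URL
        "X-Title": "My App",                           # your app's name
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(), headers=headers
    )

req = build_request("Hello, how are you?")
# urllib.request.urlopen(req) would then send it with a real key set.
```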
Hi @Aaarkay1 @zeddyemmy, I believe I can help with this issue. I'm the maintainer of LiteLLM https://github.com/BerriAI/litellm - we allow you to use any LLM as a drop-in replacement for gpt-3.5-turbo.
You can use LiteLLM in the following ways:
With your own API KEY:
This calls the provider API directly
from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-key"
os.environ["COHERE_API_KEY"] = "your-key"
messages = [{"content": "Hello, how are you?", "role": "user"}]
# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
# cohere call
response = completion(model="command-nightly", messages=messages)
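Since the original request is about Claude specifically, the same pattern should work with your own Anthropic key. A sketch, assuming LiteLLM's `claude-instant-1` model alias and the `ANTHROPIC_API_KEY` environment variable it documents:

```python
# Hypothetical sketch: calling Claude Instant through LiteLLM with your own
# Anthropic key. The model alias "claude-instant-1" is an assumption based
# on LiteLLM's docs at the time and may have changed.
import os

os.environ["ANTHROPIC_API_KEY"] = "your-key"  # replace with a real key

messages = [{"content": "Hello, how are you?", "role": "user"}]

def ask_claude(messages):
    from litellm import completion  # requires `pip install litellm`
    return completion(model="claude-instant-1", messages=messages)

# Only call out when a real key has been supplied:
if os.environ["ANTHROPIC_API_KEY"] != "your-key":
    response = ask_claude(messages)
```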
Using the LiteLLM Proxy with a LiteLLM Key
This is great if you don't have access to Claude but want to use the open-source LiteLLM proxy to access Claude.
from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your openai key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your cohere key
messages = [{"content": "Hello, how are you?", "role": "user"}]
# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
# cohere call
response = completion(model="command-nightly", messages=messages)
https://github.com/josStorer/chatGPTBox/releases/tag/v2.4.1
Hi @josStorer - What was litellm missing to be useful to you? Any feedback here would be helpful.