ArxivDigest
Add support for Azure, OpenAI, PaLM, Anthropic, Cohere Models - using litellm
I'm the maintainer of litellm (https://github.com/BerriAI/litellm) - a simple & light package for calling the OpenAI, Azure, Cohere, and Anthropic API endpoints.
This PR adds support for models from all of the providers mentioned above.
Here's a sample of how it's used:
import os
from litellm import completion

## set ENV variables
# ENV variables can also be set in a .env file; see .env.example
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
# cohere call
response = completion(model="command-nightly", messages=messages)
# anthropic call
response = completion(model="claude-instant-1", messages=messages)
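Since litellm normalizes every provider's response to the OpenAI chat-completion shape (an assumption based on the library's docs), downstream code can read the generated text the same way regardless of which model produced it. The hand-built dict below mimics that shape so the accessor can be shown without a live API call:

```python
# A hand-built response in the OpenAI chat-completion format that
# litellm returns for every provider (structure assumed from its docs).
sample_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "I'm doing well, thanks!"}}
    ]
}

def extract_content(response):
    """Pull the assistant's text out of an OpenAI-format response dict."""
    return response["choices"][0]["message"]["content"]

print(extract_content(sample_response))  # I'm doing well, thanks!
```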
@rmfan would love your feedback on this 😊
Can you verify that for all the models, the responses are in the format:
({"Relevancy score": "an integer score out of 10", "Reasons for match": "1-2 sentence short reasonings"})
We've only tested this with gpt-3.5-turbo; before we officially support any other LLM, we need to make sure it will follow that format given the prompt in src/relevancy_prompt.txt.
For some lower-quality models, the output is not guaranteed to always be ({"Relevancy score": "an integer score out of 10", "Reasons for match": "1-2 sentence short reasonings"}).
Perhaps we can add a check on the output text to verify that it is
isinstance(output, dict) and ('Relevancy score' in output and 'Reasons for match' in output)
This way, you can add support for new models while maintaining your output guarantees.
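A minimal sketch of such a check (the function name and the JSON-parsing step are my own; it assumes the model's raw text is expected to parse as JSON):

```python
import json

def is_valid_relevancy_output(text):
    """Return True if the model's raw output parses to the expected dict.

    Expected shape: {"Relevancy score": ..., "Reasons for match": ...}
    """
    try:
        output = json.loads(text)
    except (json.JSONDecodeError, TypeError):
        return False
    return (
        isinstance(output, dict)
        and "Relevancy score" in output
        and "Reasons for match" in output
    )
```

Responses from untested models could be dropped (or retried) whenever this returns False, so only well-formed results reach the digest.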