Alternative models in OpenAI
Hi, after using the module, I realized that your code only accepts specific OpenAI models, namely GPT-3.5 and GPT-4 (not turbo).
Since there are multiple other models, and potentially future models with different names (for example, OpenAI just announced GPT-4o), could you update the code to accept those models?
You can edit the `api_call` function in `utils.py`. Removing the check `if deployment_name in ["gpt-35-turbo", "gpt-4", "gpt-3.5-turbo"]:` and using a try/except instead may be a good solution for what you want:
"""
Call API (OpenAI, Azure, Perplexity) and return response
- prompt: prompt template
- deployment_name: name of the deployment to use (e.g. gpt-4, gpt-3.5-turbo, etc.)
- temperature: temperature parameter
- max_tokens: max tokens parameter
- top_p: top p parameter
"""
time.sleep(5) # Change to avoid rate limit
try:
response = client.chat.completions.create(
model=deployment_name,
temperature=float(temperature),
max_tokens=int(max_tokens),
top_p=float(top_p),
messages=[
{"role": "system", "content": ""},
{"role": "user", "content": prompt},
],
)
return response.choices[0].message.content
except Exception as e:
# Handle any exceptions here
print(f"An error occurred: {e}")
return None # Or you can raise an exception if required
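To illustrate why the try/except approach generalizes to new model names, here is a minimal self-contained sketch. The `StubClient` class is a hypothetical stand-in for the real OpenAI client (so the example runs without network access); it shows that a hard-coded whitelist rejects `gpt-4o` while the exception-based version accepts any model the API itself supports:

```python
class StubClient:
    """Hypothetical stub standing in for the OpenAI client; the real SDK
    call is client.chat.completions.create(model=..., messages=...)."""
    SUPPORTED = {"gpt-3.5-turbo", "gpt-4", "gpt-4o"}  # server-side knowledge

    def create(self, model, **kwargs):
        if model not in self.SUPPORTED:
            raise ValueError(f"model {model!r} not found")
        return f"response from {model}"

def api_call_whitelist(client, model):
    # Old approach: a hard-coded list rejects any model released later.
    if model not in ["gpt-35-turbo", "gpt-4", "gpt-3.5-turbo"]:
        return None
    return client.create(model=model)

def api_call_try(client, model):
    # Suggested approach: let the API decide which models exist.
    try:
        return client.create(model=model)
    except Exception as e:
        print(f"An error occurred: {e}")
        return None

client = StubClient()
print(api_call_whitelist(client, "gpt-4o"))  # None: blocked by the whitelist
print(api_call_try(client, "gpt-4o"))        # accepted, since the API supports it
```

With the try/except version, an unknown model name still fails gracefully (the API's own error is printed and `None` is returned), so no client-side list needs to be maintained.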
Actually, sorry. Several functions in the code need to be changed to fix this issue.
Sorry for the late response! In our latest release, we have refactored the code to support the newest OpenAI API models (along with VertexAI and vLLM).