codeinterpreter-api
Check for LLM usage
Analyze the project and check how efficiently it uses the OpenAI API, and LLMs in general. GPT-4 is expensive, so every token counts. Think about improvements and write them down as issues.
For this problem, I think we could add an API budget variable, like Auto-GPT has.
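A minimal sketch of that idea, assuming the session's OpenAI calls go through LangChain so that get_openai_callback can observe token usage and estimated cost; the budget constant, the exception, and the helper name are made-up for illustration and not part of this repo:

from langchain.callbacks import get_openai_callback

MAX_COST_USD = 1.00  # hypothetical per-session spending cap


class BudgetExceededError(RuntimeError):
    """Raised when the accumulated OpenAI spend goes over the cap."""


def run_with_budget(chain, prompt: str) -> str:
    # get_openai_callback tracks tokens and estimated cost for all calls
    # made inside the context manager.
    with get_openai_callback() as cb:
        result = chain.run(prompt)
        if cb.total_cost > MAX_COST_USD:
            raise BudgetExceededError(
                f"Spent ${cb.total_cost:.4f}, over the ${MAX_COST_USD:.2f} budget"
            )
    return result

This mirrors the budget variable idea mentioned above: check the running total after each call and stop once the cap is reached.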
Yeah, good idea, but I was thinking there might be a better implementation that avoids performance trade-offs.
GPT-4 also has a rate limit (50 hits), so I switched to GPT-3.5 Turbo, which also gives good results.
Hi, I'm fairly new to coding and I came across this issue. I'd like to change the LLM to GPT-3.5 Turbo, but I'm having a hard time doing so. I went into session.py and changed the following line to "gpt-3.5-turbo", but I'm still having issues. Would you be able to help out?
def _choose_llm(
    self, model: str = "gpt-4", openai_api_key: Optional[str] = None, **kwargs
):
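For context, a hedged guess at why editing that default may not be enough: _choose_llm presumably only supplies a fallback, so if the session is constructed with an explicit model argument, the edited default is never used. The sketch below shows two ways to select GPT-3.5 Turbo at the call site instead of editing the library; both keyword names (model and llm) are assumptions about the CodeInterpreterSession constructor in your installed version, not something confirmed in this thread:

from codeinterpreterapi import CodeInterpreterSession
from langchain.chat_models import ChatOpenAI

# Option A (assumed keyword): let the session's own _choose_llm build the model.
session = CodeInterpreterSession(model="gpt-3.5-turbo")

# Option B (assumed keyword): build the LangChain chat model yourself and
# hand it to the session, bypassing _choose_llm entirely.
session = CodeInterpreterSession(llm=ChatOpenAI(model_name="gpt-3.5-turbo"))

If neither keyword exists in your version, the edited default in session.py should still take effect as long as nothing passes model explicitly, so it may be worth checking where _choose_llm is called from.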