
Issue with get_openai_answer


The max_tokens parameter being hardcoded to 1000 is an issue. With multiple sources (with long URLs) and larger webpages, that budget is quickly eaten up. When the token limit is exceeded, no warning is given other than the error from OpenAI:

openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!

```python
def get_openai_answer(self, prompt):
    messages = []
    messages.append({"role": "user", "content": prompt})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        temperature=0,
        max_tokens=1000,
        top_p=1,
    )
    return response["choices"][0]["message"]["content"]
```
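
For illustration, here is a minimal sketch (not the project's actual code) of how the call could take a configurable max_tokens and warn the caller before the context window is exhausted. The 4096-token window, the default value, and the tiktoken-based estimate are assumptions:

```python
import logging

import openai
import tiktoken

MODEL = "gpt-3.5-turbo-0613"
CONTEXT_WINDOW = 4096  # assumed context window for this model


def get_openai_answer(prompt: str, max_tokens: int = 1000) -> str:
    # Estimate the prompt size up front so the caller gets an explicit
    # warning instead of only an opaque error from OpenAI.
    encoding = tiktoken.encoding_for_model(MODEL)
    prompt_tokens = len(encoding.encode(prompt))
    if prompt_tokens + max_tokens > CONTEXT_WINDOW:
        logging.warning(
            "Prompt uses %d tokens; with max_tokens=%d this exceeds the "
            "%d-token context window.",
            prompt_tokens, max_tokens, CONTEXT_WINDOW,
        )

    # Same call as the snippet above, but with max_tokens configurable.
    response = openai.ChatCompletion.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        max_tokens=max_tokens,
        top_p=1,
    )
    return response["choices"][0]["message"]["content"]
```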

NSuer avatar Jun 22 '23 17:06 NSuer

Do you want it to be configurable/higher by default, or do you want the error message to be clearer?

cachho avatar Jun 23 '23 16:06 cachho

I think both would be beneficial. Of course, have a default.

NSuer avatar Jun 25 '23 21:06 NSuer

@taranjeet @deshraj Can I pick this up if nobody is working on it?

Dev-Khant avatar Aug 02 '23 12:08 Dev-Khant

@Dev-Khant: this support is already there. You can use QueryConfig.
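
For anyone landing here, a minimal usage sketch based on this comment; the exact import path, the max_tokens keyword, and the query signature are assumptions to check against the current embedchain docs:

```python
from embedchain import App
from embedchain.config import QueryConfig  # import path assumed

app = App()
app.add("web_page", "https://example.com/some-long-page")

# Assumed: QueryConfig exposes max_tokens (per this comment), so the
# hardcoded 1000-token limit can be raised for a given query.
config = QueryConfig(max_tokens=2000)
print(app.query("Summarize the page.", config))
```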

Also, I think we need to improve our docs to make this clearer to users.

Closing this issue. The docs improvement is tracked in https://github.com/embedchain/embedchain/issues/301

taranjeet avatar Aug 12 '23 01:08 taranjeet