Auto-GPT-ZH
The code at line 137 of autogpt/llm_utils.py could use some optimization
```python
return openai.Embedding.create(
    input=[text], model="text-embedding-ada-002"
)["data"][0]["embedding"]
```
This can cause the following error:

```
File "/media/Data/data/bone_seg/Auto-GPT-ZH-0.2.1/autogpt/memory/local.py", line 75, in add
    embedding = create_embedding_with_ada(text)
File "/media/Data/data/bone_seg/Auto-GPT-ZH-0.2.1/autogpt/llm_utils.py", line 137, in create_embedding_with_ada
    return openai.Embedding.create(
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 11300 tokens (11300 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
```
Suggested fixes (see the sketch below): 1. cap the length of `text` here, i.e. enforce a hard length limit before calling the API; 2. or simply wrap this call in a try/except so the error does not crash the agent.
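A minimal sketch combining both ideas, not the project's actual patch. It assumes the pre-1.0 `openai` SDK seen in the traceback plus the `tiktoken` package; `EMBEDDING_TOKEN_LIMIT` is a hypothetical constant taken from the 8191-token limit in the error message.

```python
import openai
import tiktoken

# Hypothetical constant: max context length reported in the error message above.
EMBEDDING_TOKEN_LIMIT = 8191


def create_embedding_with_ada(text: str) -> list:
    """Sketch of fixes 1 + 2: truncate overly long input, and fall back
    gracefully if the API still rejects the request."""
    encoding = tiktoken.get_encoding("cl100k_base")  # tokenizer used by ada-002
    tokens = encoding.encode(text)
    if len(tokens) > EMBEDDING_TOKEN_LIMIT:
        # Fix 1: hard cap the prompt length before calling the API.
        text = encoding.decode(tokens[:EMBEDDING_TOKEN_LIMIT])
    try:
        return openai.Embedding.create(
            input=[text], model="text-embedding-ada-002"
        )["data"][0]["embedding"]
    except openai.error.InvalidRequestError:
        # Fix 2: catch the context-length error instead of crashing.
        return []
```

Truncation loses some of the memory text, but that is usually preferable to the agent aborting mid-run on a single oversized chunk.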
There is also another error.
Worth considering. I checked and the upstream project has the same problem; if they don't fix it tomorrow, I'll patch it here myself.
As for the response error: the response is whatever the remote side returns, and there are too many possible causes (no network connectivity, the remote site being down, etc.), so it won't be handled for now.