Cryptic error when no API key is set in the system
Describe the bug
Interpreter fails if there is no API key set in a system environment variable, even though I self-host the model and don't need any key.
Open Interpreter will require approval before running code.
Use interpreter -y to bypass this.
Press CTRL-C to exit.
> hello
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Traceback (most recent call last):
File "E:\Python\3.10\lib\site-packages\interpreter\core\respond.py", line 44, in respond
for chunk in interpreter._llm(messages_for_llm):
File "E:\Python\3.10\lib\site-packages\interpreter\llm\convert_to_coding_llm.py", line 19, in coding_llm
for chunk in text_llm(messages):
File "E:\Python\3.10\lib\site-packages\interpreter\llm\setup_text_llm.py", line 117, in base_llm
return litellm.completion(**params)
File "E:\Python\3.10\lib\site-packages\litellm\utils.py", line 798, in wrapper
raise e
File "E:\Python\3.10\lib\site-packages\litellm\utils.py", line 757, in wrapper
result = original_function(*args, **kwargs)
File "E:\Python\3.10\lib\site-packages\litellm\timeout.py", line 53, in wrapper
result = future.result(timeout=local_timeout_duration)
File "E:\Python\3.10\lib\concurrent\futures\_base.py", line 458, in result
return self.__get_result()
File "E:\Python\3.10\lib\concurrent\futures\_base.py", line 403, in __get_result
raise self._exception
File "E:\Python\3.10\lib\site-packages\litellm\timeout.py", line 42, in async_func
return func(*args, **kwargs)
File "E:\Python\3.10\lib\site-packages\litellm\main.py", line 1195, in completion
raise exception_type(
File "E:\Python\3.10\lib\site-packages\litellm\utils.py", line 3173, in exception_type
raise e
File "E:\Python\3.10\lib\site-packages\litellm\utils.py", line 2530, in exception_type
raise original_exception
File "E:\Python\3.10\lib\site-packages\litellm\main.py", line 447, in completion
raise e
File "E:\Python\3.10\lib\site-packages\litellm\main.py", line 429, in completion
response = openai.ChatCompletion.create(
File "E:\Python\3.10\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "E:\Python\3.10\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 151, in create
) = cls.__prepare_create_request(
File "E:\Python\3.10\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 108, in __prepare_create_request
requestor = api_requestor.APIRequestor(
File "E:\Python\3.10\lib\site-packages\openai\api_requestor.py", line 139, in __init__
self.api_key = key or util.default_api_key()
File "E:\Python\3.10\lib\site-packages\openai\util.py", line 186, in default_api_key
raise openai.error.AuthenticationError(
openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\Python\3.10\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "E:\Python\3.10\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "E:\Python\3.10\Scripts\interpreter.exe\__main__.py", line 7, in <module>
File "E:\Python\3.10\lib\site-packages\interpreter\core\core.py", line 26, in cli
cli(self)
File "E:\Python\3.10\lib\site-packages\interpreter\cli\cli.py", line 254, in cli
interpreter.chat()
File "E:\Python\3.10\lib\site-packages\interpreter\core\core.py", line 88, in chat
for _ in self._streaming_chat(message=message, display=display):
File "E:\Python\3.10\lib\site-packages\interpreter\core\core.py", line 109, in _streaming_chat
yield from terminal_interface(self, message)
File "E:\Python\3.10\lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 62, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "E:\Python\3.10\lib\site-packages\interpreter\core\core.py", line 117, in _streaming_chat
yield from self._respond()
File "E:\Python\3.10\lib\site-packages\interpreter\core\core.py", line 143, in _respond
yield from respond(self)
File "E:\Python\3.10\lib\site-packages\interpreter\core\respond.py", line 91, in respond
raise Exception(f"{output}\n\nThere might be an issue with your API key(s).\n\nTo reset your API key (we'll use OPENAI_API_KEY for this example, but you may need to reset your ANTHROPIC_API_KEY, HUGGINGFACE_API_KEY, etc):\n Mac/Linux: 'export OPENAI_API_KEY=your-key-here',\n Windows: 'setx OPENAI_API_KEY your-key-here' then restart terminal.\n\n")
Exception: Traceback (most recent call last):
File "E:\Python\3.10\lib\site-packages\interpreter\core\respond.py", line 44, in respond
for chunk in interpreter._llm(messages_for_llm):
File "E:\Python\3.10\lib\site-packages\interpreter\llm\convert_to_coding_llm.py", line 19, in coding_llm
for chunk in text_llm(messages):
File "E:\Python\3.10\lib\site-packages\interpreter\llm\setup_text_llm.py", line 117, in base_llm
return litellm.completion(**params)
File "E:\Python\3.10\lib\site-packages\litellm\utils.py", line 798, in wrapper
raise e
File "E:\Python\3.10\lib\site-packages\litellm\utils.py", line 757, in wrapper
result = original_function(*args, **kwargs)
File "E:\Python\3.10\lib\site-packages\litellm\timeout.py", line 53, in wrapper
result = future.result(timeout=local_timeout_duration)
File "E:\Python\3.10\lib\concurrent\futures\_base.py", line 458, in result
return self.__get_result()
File "E:\Python\3.10\lib\concurrent\futures\_base.py", line 403, in __get_result
raise self._exception
File "E:\Python\3.10\lib\site-packages\litellm\timeout.py", line 42, in async_func
return func(*args, **kwargs)
File "E:\Python\3.10\lib\site-packages\litellm\main.py", line 1195, in completion
raise exception_type(
File "E:\Python\3.10\lib\site-packages\litellm\utils.py", line 3173, in exception_type
raise e
File "E:\Python\3.10\lib\site-packages\litellm\utils.py", line 2530, in exception_type
raise original_exception
File "E:\Python\3.10\lib\site-packages\litellm\main.py", line 447, in completion
raise e
File "E:\Python\3.10\lib\site-packages\litellm\main.py", line 429, in completion
response = openai.ChatCompletion.create(
File "E:\Python\3.10\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "E:\Python\3.10\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 151, in create
) = cls.__prepare_create_request(
File "E:\Python\3.10\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 108, in __prepare_create_request
requestor = api_requestor.APIRequestor(
File "E:\Python\3.10\lib\site-packages\openai\api_requestor.py", line 139, in __init__
self.api_key = key or util.default_api_key()
File "E:\Python\3.10\lib\site-packages\openai\util.py", line 186, in default_api_key
raise openai.error.AuthenticationError(
openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.
There might be an issue with your API key(s).
To reset your API key (we'll use OPENAI_API_KEY for this example, but you may need to reset your ANTHROPIC_API_KEY, HUGGINGFACE_API_KEY, etc):
Mac/Linux: 'export OPENAI_API_KEY=your-key-here',
Windows: 'setx OPENAI_API_KEY your-key-here' then restart terminal.
Reproduce
This works:
interpreter -ab "http://localhost:1234/v1" -m openai/TheBloke/dolphin-2.1-mistral-7B-GGUF --context_window 4000 --max_tokens 2000 --api_key NONE
This doesn't:
interpreter -ab "http://localhost:1234/v1" -m openai/TheBloke/dolphin-2.1-mistral-7B-GGUF --context_window 4000 --max_tokens 2000
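For the Python API, a sketch of the same working configuration (the attribute names are assumptions inferred from the 0.1.x CLI flags above; newer releases move these under interpreter.llm, as in the fix quoted at the end of this thread):

import interpreter

# Mirror the working CLI invocation above. Attribute names are hypothetical,
# inferred from the 0.1.x flags; adjust them to your installed version.
interpreter.api_base = "http://localhost:1234/v1"  # self-hosted LM Studio server
interpreter.model = "openai/TheBloke/dolphin-2.1-mistral-7B-GGUF"
interpreter.context_window = 4000
interpreter.max_tokens = 2000
interpreter.api_key = "NONE"  # placeholder key; the local server never checks it

interpreter.chat()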
Expected behavior
Open Interpreter should start with a default/empty key and connect to the local server, since the key is not required.
Screenshots
No response
Open Interpreter version
0.1.10
Python version
3.10.10
Operating System name and version
Windows 11
Additional context
LLM is self-hosted in LM Studio.
@DjWarmonger when you make the call via openai, the openai SDK requires a key to be set, so setting it even to 'my-fake-key' would solve the problem.
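For illustration, a minimal sketch of that point against the pre-1.0 openai SDK shown in the traceback above (the endpoint and model name are the ones from this report):

import openai

# Point the SDK at the self-hosted server.
openai.api_base = "http://localhost:1234/v1"

# Any non-empty string passes the check: with no key set, APIRequestor.__init__
# falls through to util.default_api_key(), which raises AuthenticationError
# before a single request is made (see the traceback above).
openai.api_key = "my-fake-key"

response = openai.ChatCompletion.create(
    # model name from this report, without litellm's openai/ routing prefix
    model="TheBloke/dolphin-2.1-mistral-7B-GGUF",
    messages=[{"role": "user", "content": "hello"}],
)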
I'm facing the exact same issue. How do I set the key if I want to use open-source models? Can you show the steps?
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Scripts\interpreter.exe\__main__.py", line 7, in <module>
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\core.py", line 26, in cli
cli(self)
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\cli\cli.py", line 254, in cli
interpreter.chat()
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\core.py", line 88, in chat
for _ in self._streaming_chat(message=message, display=display):
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\core.py", line 109, in _streaming_chat
yield from terminal_interface(self, message)
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 62, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\core.py", line 117, in _streaming_chat
yield from self._respond()
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\core.py", line 143, in _respond
yield from respond(self)
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\respond.py", line 44, in respond
for chunk in interpreter._llm(messages_for_llm):
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\llm\setup_openai_coding_llm.py", line 94, in coding_llm
response = litellm.completion(**params)
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\utils.py", line 798, in wrapper
raise e
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\utils.py", line 757, in wrapper
result = original_function(*args, **kwargs)
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\timeout.py", line 53, in wrapper
result = future.result(timeout=local_timeout_duration)
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 456, in result
return self.__get_result()
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\timeout.py", line 42, in async_func
return func(*args, **kwargs)
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\main.py", line 1195, in completion
raise exception_type(
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\utils.py", line 3173, in exception_type
raise e
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\utils.py", line 2530, in exception_type
raise original_exception
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\main.py", line 447, in completion
raise e
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\main.py", line 429, in completion
response = openai.ChatCompletion.create(
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 155, in create
response, _, api_key = requestor.request(
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 710, in _interpret_response
self._interpret_response_line(
File "C:\Users\liuchen\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 775, in _interpret_response_line
raise self.handle_error_response(
openai.error.PermissionError: Request not allowed. Please try again later.
I got the error above. I have already set OPENAI_API_KEY in my system variables, as well as OPENAI_API_BASE, which I set to https://ai.fakeopen.com/v1, but I still get the error. How can I solve this?
Great, I kept hitting errors too; only after reading your settings did I realize that /v1 has to be appended to the base. It works for me now.
Haha, I helped you by accident; my problem has also been solved.
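The takeaway from the exchange above is that the base URL must include the /v1 path segment. A sketch of setting both variables on Windows, mirroring the setx instruction from the error message earlier in this thread (the URL is the one that user reported; restart the terminal afterwards for setx to take effect):

setx OPENAI_API_BASE https://ai.fakeopen.com/v1
setx OPENAI_API_KEY your-key-here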
Adding --api_key fake-key to the command works.
Or in Python:
interpreter.llm.api_key = "fake-key"
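Putting it together with the command from the Reproduce section above:

interpreter -ab "http://localhost:1234/v1" -m openai/TheBloke/dolphin-2.1-mistral-7B-GGUF --context_window 4000 --max_tokens 2000 --api_key fake-key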