Can't initialize models without providing API Key for OpenAI embedding model. Is there an alternative?

TheLocalLab opened this issue 1 year ago • 6 comments

Describe the bug

OpenAI's embedding model seems to require an API key in order to load and use the model. I wanted to use other LLM models with this project, but I don't see how when it defaults to OpenAI's embeddings. Is there a way to switch to a local embedding model instead?

Steps to Reproduce

  1. Do a fresh install of the program
  2. run uvicorn opendevin.server.listen:app --port 3000
  3. run npm run start -- --port 3001

Expected behavior

Run other LLMs without an OpenAI API key.

Actual behavior

Can't initialize models without providing an API key for the OpenAI embedding model.

Additional context

INFO:     connection closed
INFO:     ('127.0.0.1', 64122) - "WebSocket /ws" [accepted]
INFO:     connection open
Task exception was never retrieved
future: <Task finished name='create controller' coro=<Session.create_controller() done, defined at C:\programs\opendevin\opendevin\server\session.py:116> exception=ValueError("\n******\nCould not load OpenAI embedding model. If you intended to use OpenAI, please check your OPENAI_API_KEY.\nOriginal error:\nNo API key found for OpenAI.\nPlease set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.\nAPI keys can be found or created at https://platform.openai.com/account/api-keys\n\nConsider using embed_model='local'.\nVisit our documentation for more embedding options: https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#modules\n******")>
Traceback (most recent call last):
  File "C:\Users\Paul\anaconda4\envs\opendevin\Lib\site-packages\llama_index\core\embeddings\utils.py", line 59, in resolve_embed_model
    validate_openai_api_key(embed_model.api_key)
  File "C:\Users\Paul\anaconda4\envs\opendevin\Lib\site-packages\llama_index\embeddings\openai\utils.py", line 104, in validate_openai_api_key
    raise ValueError(MISSING_API_KEY_ERROR_MESSAGE)
ValueError: No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\programs\opendevin\opendevin\server\session.py", line 132, in create_controller
    self.agent = AgentCls(llm)
                 ^^^^^^^^^^^^^
  File "C:\programs\opendevin\agenthub\langchains_agent\__init__.py", line 75, in __init__
    self.memory = LongTermMemory()
                  ^^^^^^^^^^^^^^^^
  File "C:\programs\opendevin\agenthub\langchains_agent\utils\memory.py", line 15, in __init__
    self.index = VectorStoreIndex.from_vector_store(vector_store)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Paul\anaconda4\envs\opendevin\Lib\site-packages\llama_index\core\indices\vector_store\base.py", line 104, in from_vector_store
    return cls(
           ^^^^
  File "C:\Users\Paul\anaconda4\envs\opendevin\Lib\site-packages\llama_index\core\indices\vector_store\base.py", line 71, in __init__
    else embed_model_from_settings_or_context(Settings, service_context)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Paul\anaconda4\envs\opendevin\Lib\site-packages\llama_index\core\settings.py", line 274, in embed_model_from_settings_or_context
    return settings.embed_model
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Paul\anaconda4\envs\opendevin\Lib\site-packages\llama_index\core\settings.py", line 67, in embed_model
    self._embed_model = resolve_embed_model("default")
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Paul\anaconda4\envs\opendevin\Lib\site-packages\llama_index\core\embeddings\utils.py", line 66, in resolve_embed_model
    raise ValueError(
ValueError:


Could not load OpenAI embedding model. If you intended to use OpenAI, please check your OPENAI_API_KEY. Original error: No API key found for OpenAI. Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization. API keys can be found or created at https://platform.openai.com/account/api-keys

Consider using embed_model='local'. Visit our documentation for more embedding options: https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#modules


TheLocalLab avatar Mar 26 '24 22:03 TheLocalLab

Have you tried to set a different LLM_MODEL this way? https://github.com/OpenDevin/OpenDevin/blob/main/README.md#picking-a-model
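For a non-OpenAI model, that would look something like this (model name and key here are placeholders; adapt them to your provider):

export LLM_MODEL="ollama/llama2"   # placeholder; any LiteLLM-supported model string
export LLM_API_KEY="na"            # placeholder; a dummy value for local models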

enyst avatar Mar 26 '24 22:03 enyst

Have you tried to set a different LLM_MODEL this way? https://github.com/OpenDevin/OpenDevin/blob/main/README.md#picking-a-model

Yes, but since I'm on Windows using PowerShell, I used:

$env:LLM_API_KEY = $env:LLM_MODEL =

instead.
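Spelled out with placeholder values, the shape of those commands is:

$env:LLM_MODEL = "ollama/llama2"   # placeholder; whatever model you picked
$env:LLM_API_KEY = "na"            # placeholder; your key, or a dummy for local models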

TheLocalLab avatar Mar 26 '24 23:03 TheLocalLab

I'm running into the exact same problem and have been beating my head against the wall with both LM Studio and Ollama running the Llama 2 7B LLM (or any other LLM I've tried, for that matter).

I'm using the following environment variables:

OPENAI_API_BASE=http://127.0.0.1:11434
OPENAI_API_KEY=NA

I also tried setting the model environment variable: LLM_MODEL=LLAMA2 and LLM_MODEL=OLLAMA.

That environment variable doesn't seem to have any impact on the issue I'm seeing, since it always goes straight for the embeddings endpoint, which apparently doesn't exist under LM Studio or Ollama.

-----[ Output from OpenDevin server failing at each step to get embeddings and never sending the questions ]------

STEP 4
Retrying llama_index.embeddings.openai.base.get_embeddings in 0.7409428528349281 seconds as it raised NotFoundError: Error code: 404 - {'error': "model 'text-embedding-ada-002' not found, try pulling it first"}.
Retrying llama_index.embeddings.openai.base.get_embeddings in 1.9352484594475547 seconds as it raised NotFoundError: Error code: 404 - {'error': "model 'text-embedding-ada-002' not found, try pulling it first"}.
Retrying llama_index.embeddings.openai.base.get_embeddings in 1.0679875113897173 seconds as it raised NotFoundError: Error code: 404 - {'error': "model 'text-embedding-ada-002' not found, try pulling it first"}.
Retrying llama_index.embeddings.openai.base.get_embeddings in 6.227532973841202 seconds as it raised NotFoundError: Error code: 404 - {'error': "model 'text-embedding-ada-002' not found, try pulling it first"}.
Retrying llama_index.embeddings.openai.base.get_embeddings in 1.10524917225532 seconds as it raised NotFoundError: Error code: 404 - {'error': "model 'text-embedding-ada-002' not found, try pulling it first"}.
------------------

------[ Output from Ollama console when OpenDevin keeps trying to get embeddings ]-------

[GIN] 2024/03/26 - 15:56:53 | 200 | 18.669764s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/03/26 - 16:00:25 | 404 | 0s | 127.0.0.1 | GET "/embeddings"
[GIN] 2024/03/26 - 16:00:27 | 404 | 0s | 127.0.0.1 | GET "/embeddings"
[GIN] 2024/03/26 - 16:00:27 | 404 | 0s | 127.0.0.1 | GET "/embeddings"
[GIN] 2024/03/26 - 16:00:27 | 404 | 0s | 127.0.0.1 | GET "/embeddings"
[GIN] 2024/03/26 - 16:00:27 | 404 | 0s | 127.0.0.1 | GET "/embeddings"
[GIN] 2024/03/26 - 16:00:28 | 404 | 0s | 127.0.0.1 | GET "/embeddings"
[GIN] 2024/03/26 - 16:00:28 | 404 | 0s | 127.0.0.1 | GET "/embeddings"

-----[ Proof the embeddings endpoint doesn't exist while Ollama is running & responding ]-----------

(env) D:\GitHub\OpenDevin>curl http://localhost:11434/embeddings
404 page not found

(env) D:\GitHub\OpenDevin>curl http://localhost:11434/apiembeddings
404 page not found

(env) D:\GitHub\OpenDevin>curl http://localhost:11434/
Ollama is running
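Worth noting: if I'm reading Ollama's API docs right, it does expose an embeddings route, but it's POST /api/embeddings rather than GET /embeddings, which would explain the 404s above. Something like this should answer (assuming llama2 is pulled):

curl http://localhost:11434/api/embeddings -d '{"model": "llama2", "prompt": "test"}'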

I'm not even sure how important embeddings are to OpenDevin, but they don't appear to be supported endpoints under the various local hosts people use to serve third-party LLMs with OpenAI-style REST APIs. I'm also not sure why it's looking for model 'text-embedding-ada-002' when the model is set to LLAMA2 🤔 I'd love to get this working, since I've had horrible luck with GPT-Pilot (Pythagora), which really only seems to work with OpenAI specifically; with anything else it just keeps asking questions and never gets to the coding part.

Barnacules avatar Mar 26 '24 23:03 Barnacules

Whoops 😬 https://github.com/OpenDevin/OpenDevin/pull/231 should fix part of the issue.

I had a fix for the embeddings issue at one point, but it doesn't seem to be checked in... 👀

rbren avatar Mar 27 '24 00:03 rbren

Ok, in #231 I added two env vars:

  • LLM_EMBEDDING_MODEL
  • LLM_EMBEDDING_MODEL_BASE_URL

https://github.com/OpenDevin/OpenDevin/pull/231/files

Can y'all check out the branch rb/remove-key-assertion and see if that works for Ollama? I'll try adding the local option too.
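For Ollama, those two new vars would look something like this (the URL assumes Ollama's default port, and llama2 is just an example):

export LLM_EMBEDDING_MODEL="llama2"
export LLM_EMBEDDING_MODEL_BASE_URL="http://localhost:11434"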

rbren avatar Mar 27 '24 00:03 rbren

Ok, LLM_EMBEDDING_MODEL can be set to local or llama2. I've tested local and it seems OK.

rbren avatar Mar 27 '24 00:03 rbren

I'm still encountering the issue:

These are my env vars:

LLM_MODEL="huggingface/mistralai/Mistral-7B-Instruct-v0.2"
LLM_API_KEY="hf_xxxxxxxxxxxxxxxxxxxxxx"
LLM_EMBEDDING_MODEL="local"

Here is my output:

Task exception was never retrieved
future: <Task finished name='create controller' coro=<Session.create_controller() done, defined at /home/solomon/Documents/hfOpenAi/OpenDevin/opendevin/server/session.py:116> exception=ValueError("\n******\nCould not load OpenAI embedding model. If you intended to use OpenAI, please check your OPENAI_API_KEY.\nOriginal error:\nNo API key found for OpenAI.\nPlease set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.\nAPI keys can be found or created at https://platform.openai.com/account/api-keys\n\nConsider using embed_model='local'.\nVisit our documentation for more embedding options: https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#modules\n******")>
Traceback (most recent call last):
  File "/home/solomon/genaienv/lib/python3.11/site-packages/llama_index/core/embeddings/utils.py", line 59, in resolve_embed_model
    validate_openai_api_key(embed_model.api_key)
  File "/home/solomon/genaienv/lib/python3.11/site-packages/llama_index/embeddings/openai/utils.py", line 104, in validate_openai_api_key
    raise ValueError(MISSING_API_KEY_ERROR_MESSAGE)
ValueError: No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/solomon/Documents/hfOpenAi/OpenDevin/opendevin/server/session.py", line 132, in create_controller
    self.agent = AgentCls(llm)
                 ^^^^^^^^^^^^^
  File "/home/solomon/Documents/hfOpenAi/OpenDevin/agenthub/langchains_agent/__init__.py", line 75, in __init__
    self.memory = LongTermMemory()
                  ^^^^^^^^^^^^^^^^
  File "/home/solomon/Documents/hfOpenAi/OpenDevin/agenthub/langchains_agent/utils/memory.py", line 15, in __init__
    self.index = VectorStoreIndex.from_vector_store(vector_store)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/solomon/genaienv/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 104, in from_vector_store
    return cls(
           ^^^^
  File "/home/solomon/genaienv/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 71, in __init__
    else embed_model_from_settings_or_context(Settings, service_context)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/solomon/genaienv/lib/python3.11/site-packages/llama_index/core/settings.py", line 274, in embed_model_from_settings_or_context
    return settings.embed_model
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/solomon/genaienv/lib/python3.11/site-packages/llama_index/core/settings.py", line 67, in embed_model
    self._embed_model = resolve_embed_model("default")
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/solomon/genaienv/lib/python3.11/site-packages/llama_index/core/embeddings/utils.py", line 66, in resolve_embed_model
    raise ValueError(
ValueError: 
******
Could not load OpenAI embedding model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys

Consider using embed_model='local'.
Visit our documentation for more embedding options: https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#modules
******

I don't have an API key for OpenAI and am trying to configure it with LiteLLM and HuggingFace models.

Solonce avatar Mar 27 '24 02:03 Solonce

Just to be sure--are you on the rb/remove-key-assertion branch?

git clone https://github.com/OpenDevin/OpenDevin
cd OpenDevin
git checkout rb/remove-key-assertion
# normal installation

rbren avatar Mar 27 '24 03:03 rbren

My bad, moved to the rb/remove-key-assertion branch.

However, based on the #231 fix, I think I'm running into a sort of 'Module Not Found Error' referencing llama_index.embeddings. I also ran pip install llama-index for good measure, but it was already installed.

Error:

Traceback (most recent call last):
  File "/home/solomon/genaienv/bin/uvicorn", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/solomon/genaienv/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/solomon/genaienv/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/home/solomon/genaienv/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/solomon/genaienv/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/solomon/genaienv/lib/python3.11/site-packages/uvicorn/main.py", line 409, in main
    run(
  File "/home/solomon/genaienv/lib/python3.11/site-packages/uvicorn/main.py", line 575, in run
    server.run()
  File "/home/solomon/genaienv/lib/python3.11/site-packages/uvicorn/server.py", line 65, in run
    return asyncio.run(self.serve(sockets=sockets))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/home/solomon/genaienv/lib/python3.11/site-packages/uvicorn/server.py", line 69, in serve
    await self._serve(sockets)
  File "/home/solomon/genaienv/lib/python3.11/site-packages/uvicorn/server.py", line 76, in _serve
    config.load()
  File "/home/solomon/genaienv/lib/python3.11/site-packages/uvicorn/config.py", line 433, in load
    self.loaded_app = import_from_string(self.app)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/solomon/genaienv/lib/python3.11/site-packages/uvicorn/importer.py", line 22, in import_from_string
    raise exc from None
  File "/home/solomon/genaienv/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string
    module = importlib.import_module(module_str)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/solomon/Documents/hfOpenAi/OpenDevin/opendevin/server/listen.py", line 3, in <module>
    import agenthub # noqa F401 (we import this to get the agents registered)
    ^^^^^^^^^^^^^^^
  File "/home/solomon/Documents/hfOpenAi/OpenDevin/agenthub/__init__.py", line 5, in <module>
    from . import langchains_agent # noqa: E402
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/solomon/Documents/hfOpenAi/OpenDevin/agenthub/langchains_agent/__init__.py", line 9, in <module>
    from agenthub.langchains_agent.utils.memory import LongTermMemory
  File "/home/solomon/Documents/hfOpenAi/OpenDevin/agenthub/langchains_agent/utils/memory.py", line 23, in <module>
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding
ModuleNotFoundError: No module named 'llama_index.embeddings.huggingface'

Solonce avatar Mar 27 '24 03:03 Solonce

Can you make sure llama-index-embeddings-huggingface is installed?

pip install llama-index-embeddings-huggingface

https://github.com/OpenDevin/OpenDevin/blob/adc317de4c006683c2c5bb4e8823b03208d1dfa1/requirements.txt#L19
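A quick way to verify the import resolves (assuming it runs in the same virtualenv as uvicorn):

python -c "from llama_index.embeddings.huggingface import HuggingFaceEmbedding; print('ok')"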

enyst avatar Mar 27 '24 07:03 enyst

Hi, I did git checkout rb/remove-key-assertion and set the variables below:

set WORKSPACE_DIR=test
set LLM_EMBEDDING_MODEL = llama2
set LLM_EMBEDDING_MODEL_BASE_URL = http://127.0.0.1:11434

Still, when I add a prompt, this error comes up at step 0:

STEP 0
Retrying llama_index.embeddings.openai.base.get_embeddings in 0.17373776383871053 seconds as it raised APIConnectionError: Connection error..

    AGENT ERROR:
    Connection error.

Traceback (most recent call last):
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 69, in map_httpcore_exceptions
    yield
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\connection_pool.py", line 216, in handle_request
    raise exc from None
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\connection.py", line 101, in handle_request
    return self._connection.handle_request(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\http11.py", line 143, in handle_request
    raise exc
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\http11.py", line 93, in handle_request
    self._send_request_headers(**kwargs)
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\http11.py", line 151, in _send_request_headers
    with map_exceptions({h11.LocalProtocolError: LocalProtocolError}):
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.LocalProtocolError: Illegal header value b'Bearer '
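One possible wrinkle: with cmd.exe, set keeps spaces around = as part of the variable name and value, so the commands above may need to be written without spaces for the variables to be picked up at all:

set LLM_EMBEDDING_MODEL=llama2
set LLM_EMBEDDING_MODEL_BASE_URL=http://127.0.0.1:11434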

ghanshyamv1 avatar Mar 27 '24 09:03 ghanshyamv1

Hi, I did git checkout rb/remove-key-assertion and set the variables below:

set WORKSPACE_DIR=test
set LLM_EMBEDDING_MODEL = llama2
set LLM_EMBEDDING_MODEL_BASE_URL = http://127.0.0.1:11434

Still, when I add a prompt, this error comes up at step 0:

STEP 0
Retrying llama_index.embeddings.openai.base.get_embeddings in 0.17373776383871053 seconds as it raised APIConnectionError: Connection error..

    AGENT ERROR:
    Connection error.

Traceback (most recent call last):
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 69, in map_httpcore_exceptions
    yield
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\connection_pool.py", line 216, in handle_request
    raise exc from None
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\connection.py", line 101, in handle_request
    return self._connection.handle_request(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\http11.py", line 143, in handle_request
    raise exc
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\http11.py", line 93, in handle_request
    self._send_request_headers(**kwargs)
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_sync\http11.py", line 151, in _send_request_headers
    with map_exceptions({h11.LocalProtocolError: LocalProtocolError}):
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "C:\Users\Ghanshyam\AppData\Local\Programs\Python\Python311\Lib\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.LocalProtocolError: Illegal header value b'Bearer '

C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\site-packages\onnxruntime\capi\onnxruntime_validation.py:26: UserWarning: Unsupported Windows version (11). ONNX Runtime supports Windows 10 and above, only.
  warnings.warn(
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Scripts\uvicorn.exe\__main__.py", line 7, in <module>
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\site-packages\uvicorn\main.py", line 410, in main
    run(
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\site-packages\uvicorn\main.py", line 578, in run
    server.run()
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\site-packages\uvicorn\server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 685, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\site-packages\uvicorn\server.py", line 68, in serve
    config.load()
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\site-packages\uvicorn\config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\AppData\Local\Programs\Python\Python312\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 995, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "C:\Users\Sanket Roy\OpenDevin\opendevin\server\listen.py", line 3, in <module>
    import agenthub # noqa F401 (we import this to get the agents registered)
    ^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\OpenDevin\agenthub\__init__.py", line 6, in <module>
    from . import codeact_agent # noqa: E402
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sanket Roy\OpenDevin\agenthub\codeact_agent\__init__.py", line 22, in <module>
    assert (
AssertionError: Please set the OPENAI_API_KEY environment variable.

Any fix??

R3verseIN avatar Mar 27 '24 10:03 R3verseIN

Made some progress: I didn't have llama-index-embeddings-huggingface installed.

Now I've arrived at the frontend page, but when given a prompt I receive this error:

EDIT: This was after I input a response on the frontend page. This error below was being repeated as a response from the bot in the chat.

Oops. Something went wrong: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided:
hf_wZdQE*************************FdyQ. You can find your API key at https://platform.openai.com/account/api-keys.',
'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

I have LLM_MODEL, LLM_API_KEY, and LLM_EMBEDDING_MODEL all set. I have also set and unset OPENAI_API_KEY, and it appears to make no difference. Perhaps a logical error in handling HuggingFace LiteLLM models?

Also, I was reading memory.py, and it seems like line 25 references a static HuggingFace model rather than taking in LLM_MODEL. Maybe a design decision, but I thought it was worth mentioning.

Solonce avatar Mar 27 '24 12:03 Solonce

Thanks all for the comments. I think we have two PRs we need to get in to fix the problems here:

https://github.com/OpenDevin/OpenDevin/pull/244

https://github.com/OpenDevin/OpenDevin/pull/231

rbren avatar Mar 27 '24 13:03 rbren

#244 is merged.

Can someone try #231 (git checkout rb/remove-key-assertion) and see if it works now? Note that LLM_EMBEDDING_MODEL_BASE_URL is now just LLM_BASE_URL
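So an Ollama setup on this branch would now look something like this (example values):

export LLM_MODEL="ollama/llama2"
export LLM_BASE_URL="http://localhost:11434"
export LLM_EMBEDDING_MODEL="local"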

rbren avatar Mar 27 '24 13:03 rbren

I tried the rb/remove-key-assertion branch, but I get errors when connecting. I am trying to make use of Ollama.

Error 1: trying to access the frontend from a remote client with npm run start -- --port 3001 --host

(screenshot of the error)

Error 2: when accessing the frontend from the local frontend host:
INFO:     Started server process [1285733]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO:     ('127.0.0.1', 51112) - "WebSocket /ws" [accepted]
INFO:     connection open
Task exception was never retrieved
future: <Task finished name='create controller' coro=<Session.create_controller() done, defined at /ssd1tb/project/OpenDevin/opendevin/server/session.py:116> exception=DockerException("Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))")>
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 700, in urlopen
    httplib_response = self._make_request(
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 395, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.10/http/client.py", line 1283, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1329, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output
    self.send(msg)
  File "/usr/lib/python3.10/http/client.py", line 976, in send
    self.connect()
  File "/usr/lib/python3/dist-packages/docker/transport/unixconn.py", line 30, in connect
    sock.connect(self.unix_socket)
PermissionError: [Errno 13] Permission denied

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 756, in urlopen
    retries = retries.increment(
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 532, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/lib/python3/dist-packages/six.py", line 718, in reraise
    raise value.with_traceback(tb)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 700, in urlopen
    httplib_response = self._make_request(
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 395, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.10/http/client.py", line 1283, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1329, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output
    self.send(msg)
  File "/usr/lib/python3.10/http/client.py", line 976, in send
    self.connect()
  File "/usr/lib/python3/dist-packages/docker/transport/unixconn.py", line 30, in connect
    sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', PermissionError(13, 'Permission denied'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 214, in _retrieve_server_version
    return self.version(api_version=False)["ApiVersion"]
  File "/usr/lib/python3/dist-packages/docker/api/daemon.py", line 181, in version
    return self._result(self._get(url), json=True)
  File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 46, in inner
    return f(self, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 237, in _get
    return self.get(url, **self._set_request_timeout(kwargs))
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 602, in get
    return self.request("GET", url, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', PermissionError(13, 'Permission denied'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/ssd1tb/project/OpenDevin/opendevin/server/session.py", line 133, in create_controller
    self.controller = AgentController(self.agent, workdir=directory, callbacks=[self.on_agent_event])
  File "/ssd1tb/project/OpenDevin/opendevin/controller/__init__.py", line 37, in __init__
    self.command_manager = CommandManager(workdir)
  File "/ssd1tb/project/OpenDevin/opendevin/controller/command_manager.py", line 9, in __init__
    self.shell = DockerInteractive(id="default", workspace_dir=dir)
  File "/ssd1tb/project/OpenDevin/opendevin/sandbox/sandbox.py", line 92, in __init__
    self.restart_docker_container()
  File "/ssd1tb/project/OpenDevin/opendevin/sandbox/sandbox.py", line 159, in restart_docker_container
    self.stop_docker_container()
  File "/ssd1tb/project/OpenDevin/opendevin/sandbox/sandbox.py", line 143, in stop_docker_container
    docker_client = docker.from_env()
  File "/usr/lib/python3/dist-packages/docker/client.py", line 96, in from_env
    return cls(
  File "/usr/lib/python3/dist-packages/docker/client.py", line 45, in __init__
    self.api = APIClient(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 197, in __init__
    self._version = self._retrieve_server_version()
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 221, in _retrieve_server_version
    raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))

My environment variables:

export OPENAI_API_KEY=""
export WORKSPACE_DIR="/somedir"
export LLM_MODEL="mixtral:instruct"
export LLM_BASE_URL="http://10.7.0.18:11434"

lewismacnow avatar Mar 27 '24 14:03 lewismacnow

@lewismacnow it looks like your docker service isn't running, or the server can't connect to it. What does docker ps show?
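If docker ps itself fails with a permission error on the socket, adding your user to the docker group usually sorts it out (takes effect after logging out and back in):

sudo usermod -aG docker $USER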

rbren avatar Mar 27 '24 15:03 rbren

@lewismacnow it looks like your docker service isn't running, or the server can't connect to it. What does docker ps show?

I managed to get further by running as superuser.

INFO:     Started server process [1313393]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:3000 (Press CTRL+C to quit)
INFO:     ('127.0.0.1', 34528) - "WebSocket /ws" [accepted]
INFO:     connection open
Workspace directory /ssd1tb/project/OpenDevin/workspace does not exist. Creating it...


==============
STEP 0
Retrying llama_index.embeddings.openai.base.get_embeddings in 0.2544940113123313 seconds as it raised APIConnectionError: Connection error..
	
	AGENT ERROR:
	Connection error.
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection.py", line 101, in handle_request
    return self._connection.handle_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/http11.py", line 143, in handle_request
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/http11.py", line 93, in handle_request
    self._send_request_headers(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/http11.py", line 151, in _send_request_headers
    with map_exceptions({h11.LocalProtocolError: LocalProtocolError}):
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.LocalProtocolError: Illegal header value b'Bearer '

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 926, in _request
    response = self._client.send(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 914, in send
    response = self._send_handling_auth(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1015, in _send_single_request
    response = transport.handle_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 232, in handle_request
    with map_httpcore_exceptions():
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.LocalProtocolError: Illegal header value b'Bearer '

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/ssd1tb/project/OpenDevin/opendevin/controller/__init__.py", line 85, in step
    action = self.agent.step(state)
  File "/ssd1tb/project/OpenDevin/agenthub/langchains_agent/__init__.py", line 123, in step
    self._initialize()
  File "/ssd1tb/project/OpenDevin/agenthub/langchains_agent/__init__.py", line 119, in _initialize
    self._add_event(d)
  File "/ssd1tb/project/OpenDevin/agenthub/langchains_agent/__init__.py", line 82, in _add_event
    self.memory.add_event(event)
  File "/ssd1tb/project/OpenDevin/agenthub/langchains_agent/utils/memory.py", line 61, in add_event
    self.index.insert(doc)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/base.py", line 242, in insert
    self.insert_nodes(nodes, **insert_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/vector_store/base.py", line 329, in insert_nodes
    self._insert(nodes, **insert_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/vector_store/base.py", line 312, in _insert
    self._add_nodes_to_index(self._index_struct, nodes, **insert_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/vector_store/base.py", line 233, in _add_nodes_to_index
    nodes_batch = self._get_node_with_embedding(nodes_batch, show_progress)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/vector_store/base.py", line 141, in _get_node_with_embedding
    id_to_embed_map = embed_nodes(
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/utils.py", line 138, in embed_nodes
    new_embeddings = embed_model.get_text_embedding_batch(
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/dispatcher.py", line 102, in wrapper
    self.span_drop(*args, id=id, err=e, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/dispatcher.py", line 77, in span_drop
    h.span_drop(*args, id=id, err=err, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/span_handlers/base.py", line 48, in span_drop
    span = self.prepare_to_drop_span(*args, id=id, err=err, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/span_handlers/null.py", line 35, in prepare_to_drop_span
    raise err
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/dispatcher.py", line 100, in wrapper
    result = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/base/embeddings/base.py", line 280, in get_text_embedding_batch
    embeddings = self._get_text_embeddings(cur_batch)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/embeddings/openai/base.py", line 427, in _get_text_embeddings
    return get_embeddings(
  File "/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
  File "/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/embeddings/openai/base.py", line 180, in get_embeddings
    data = client.embeddings.create(input=list_of_text, model=engine, **kwargs).data
  File "/usr/local/lib/python3.10/dist-packages/openai/resources/embeddings.py", line 113, in create
    return self._post(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1208, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 897, in request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 950, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 950, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 950, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 950, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 950, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 950, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 950, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 950, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 950, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 950, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 960, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
	
	OBSERVATION:
	Connection error.


==============
STEP 1

lewismacnow avatar Mar 27 '24 15:03 lewismacnow

Whoops didn't mean to close this.

rbren avatar Mar 27 '24 15:03 rbren

@lewismacnow it looks like you need to set LLM_EMBEDDING_MODEL=local in your env

rbren avatar Mar 27 '24 15:03 rbren

@lewismacnow I changed the default to local if you want to try pulling latest

rbren avatar Mar 27 '24 15:03 rbren

@lewismacnow I changed the default to local if you want to try pulling latest

Yes, you are right. I got it working with Ollama, with a local agent. (Very impressive!)

My solution to run with Ollama was to edit the /etc/environment file

export OPENAI_API_KEY=""
export WORKSPACE_DIR="/tmp"
export LLM_EMBEDDING_MODEL="local"
export LLM_BASE_URL="http://10.7.0.18:11434"
export LLM_MODEL="ollama_chat/mixtral:instruct"

Then I ran:

sudo uvicorn opendevin.server.listen:app --port 3000 --host 0.0.0.0

and

npm run start -- --port 3001 --host
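Side note: sudo normally drops exported env vars, which is presumably why the /etc/environment route was needed; sudo -E, which preserves the caller's environment, would be another way to keep them:

sudo -E uvicorn opendevin.server.listen:app --port 3000 --host 0.0.0.0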

I started getting errors at around step 48, where I got agent errors:

==============
STEP 47
	
	ACTION:
	AgentThinkAction(thought='I have considered installing PhantomJS or Puppeteer using their respective installation files, but I still need to find a way to download these files without requiring user input. I will search for a command-line tool that can automate browser tasks and use it to download the installation files.', runnable=False)


==============
STEP 48
	
	AGENT ERROR:
	Error condensing thoughts: Invalid json output: {
	"new\_monologue": [
	{"action": "summarize", "args": {"summary": "I need to find a command-line tool that can automate browser tasks and download installation files for PhantomJS or Puppeteer without requiring user input."}}
	]
	}
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/output_parsers/json.py", line 212, in parse_result
    return parse_json_markdown(text)
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/output_parsers/json.py", line 157, in parse_json_markdown
    parsed = parser(json_str)
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/output_parsers/json.py", line 125, in parse_partial_json
    return json.loads(s, strict=strict)
  File "/usr/lib/python3.10/json/__init__.py", line 359, in loads
    return cls(**kw).decode(s)
  File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.10/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid \escape: line 2 column 5 (char 6)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/ssd1tb/project/OpenDevin/agenthub/langchains_agent/utils/monologue.py", line 32, in condense
    self.thoughts = prompts.parse_summary_response(summary_resp)
  File "/ssd1tb/project/OpenDevin/agenthub/langchains_agent/utils/prompts.py", line 177, in parse_summary_response
    parsed = parser.parse(response)
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/output_parsers/json.py", line 218, in parse
    return self.parse_result([Generation(text=text)])
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/output_parsers/json.py", line 215, in parse_result
    raise OutputParserException(msg, llm_output=text) from e
langchain_core.exceptions.OutputParserException: Invalid json output: {
"new\_monologue": [
{"action": "summarize", "args": {"summary": "I need to find a command-line tool that can automate browser tasks and download installation files for PhantomJS or Puppeteer without requiring user input."}}
]
}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/ssd1tb/project/OpenDevin/opendevin/controller/__init__.py", line 85, in step
    action = self.agent.step(state)
  File "/ssd1tb/project/OpenDevin/agenthub/langchains_agent/__init__.py", line 162, in step
    self._add_event(d)
  File "/ssd1tb/project/OpenDevin/agenthub/langchains_agent/__init__.py", line 84, in _add_event
    self.monologue.condense(self.llm)
  File "/ssd1tb/project/OpenDevin/agenthub/langchains_agent/utils/monologue.py", line 35, in condense
    raise RuntimeError(f"Error condensing thoughts: {e}")
RuntimeError: Error condensing thoughts: Invalid json output: {
"new\_monologue": [
{"action": "summarize", "args": {"summary": "I need to find a command-line tool that can automate browser tasks and download installation files for PhantomJS or Puppeteer without requiring user input."}}
]
}
	
	OBSERVATION:
	Error condensing thoughts: Invalid json output: {
	"new\_monologue": [
	{"action": "summarize", "args": {"summary": "I need to find a command-line tool that can automate browser tasks and download installation files for PhantomJS or Puppeteer without requiring user input."}}
	]
	}

lewismacnow avatar Mar 27 '24 16:03 lewismacnow

@rbren this invalid JSON error seems to be unrelated to the original bug. I'll create a separate issue. Thanks for the assistance.

lewismacnow avatar Mar 27 '24 16:03 lewismacnow

🎉 looking much better! thanks

rbren avatar Mar 27 '24 18:03 rbren

This should be fixed now--if y'all are still struggling with embeddings please open a new issue.

Thanks all for the help chasing this one down!

rbren avatar Mar 27 '24 19:03 rbren

Process SpawnProcess-1:
Traceback (most recent call last):
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2288.0_x64__qbz5n2kfra8p0\Lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2288.0_x64__qbz5n2kfra8p0\Lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\uvicorn\_subprocess.py", line 78, in subprocess_started
    target(sockets=sockets)
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\uvicorn\server.py", line 65, in run
    return asyncio.run(self.serve(sockets=sockets))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2288.0_x64__qbz5n2kfra8p0\Lib\asyncio\runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2288.0_x64__qbz5n2kfra8p0\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2288.0_x64__qbz5n2kfra8p0\Lib\asyncio\base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\uvicorn\server.py", line 69, in serve
    await self._serve(sockets)
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\uvicorn\server.py", line 76, in _serve
    config.load()
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\uvicorn\config.py", line 433, in load
    self.loaded_app = import_from_string(self.app)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\uvicorn\importer.py", line 19, in import_from_string
    module = importlib.import_module(module_str)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2288.0_x64__qbz5n2kfra8p0\Lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\Desktop\OpenDevin\opendevin\server\listen.py", line 3, in <module>
    import agenthub # noqa F401 (we import this to get the agents registered)
    ^^^^^^^^^^^^^^^
  File "C:\Users\Desktop\OpenDevin\agenthub\__init__.py", line 5, in <module>
    from . import langchains_agent # noqa: E402
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Desktop\OpenDevin\agenthub\langchains_agent\__init__.py", line 2, in <module>
    from .langchains_agent import LangchainsAgent
  File "C:\Users\Desktop\OpenDevin\agenthub\langchains_agent\langchains_agent.py", line 8, in <module>
    from agenthub.langchains_agent.utils.memory import LongTermMemory
  File "C:\Users\Desktop\OpenDevin\agenthub\langchains_agent\utils\memory.py", line 38, in <module>
    embed_model = HuggingFaceEmbedding(
                  ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\llama_index\embeddings\huggingface\base.py", line 86, in __init__
    self._model = SentenceTransformer(
                  ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\sentence_transformers\SentenceTransformer.py", line 191, in __init__
    modules = self._load_sbert_model(
              ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\sentence_transformers\SentenceTransformer.py", line 1163, in _load_sbert_model
    with open(config_sentence_transformers_json_path) as fIn:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\\AppData\Local\llama_index\models--BAAI--bge-small-en-v1.5\snapshots\5c38ec7c405ec4b44b94cc5a9bb96e735b38267a\config_sentence_transformers.json'

Yemresalcan avatar Mar 29 '24 10:03 Yemresalcan

@Yemresalcan can you open a new issue, and include all of your settings, and the commands you ran?

rbren avatar Mar 29 '24 15:03 rbren