[Bug]: Inconsistency between successful pip installation and inability to locate the installed package
Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
- [X] I have checked the existing issues.
Describe the bug
pip reports a successful installation, indicating that the library should be available, yet subsequent attempts to import or use it fail because the package cannot be found.
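For context: this symptom usually means pip installed into a different Python environment than the one executing the code. A minimal sketch that pins the install to the running interpreter (the package name `requests` is just a placeholder for illustration, not taken from the report):

```python
import subprocess
import sys

# Which interpreter is this code actually running under?
print("interpreter:", sys.executable)

# Installing via `python -m pip` with sys.executable guarantees the
# package lands in the environment this interpreter uses, rather than
# whatever `pip` resolves to in a subshell.
subprocess.check_call([sys.executable, "-m", "pip", "install", "requests"])

import requests  # should now succeed in the same environment
print("requests", requests.__version__)
```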
Current Version
ghcr.io/opendevin/opendevin:0.5
Installation and Configuration
docker run \
--pull=always \
-e SANDBOX_USER_ID=$(id -u) \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
ghcr.io/opendevin/opendevin:0.5
Model and Agent
- Model: groq/llama3-8b-8192
- Agent: CodeActAgent
Reproduction Steps
I asked OpenDevin for an example Discord bot. The pip installation step completed successfully, but afterwards the installed library could not be used.
Logs, Errors, Screenshots, and Additional Context
Could you check the latest version?
============== STEP 6
06:10:41 - opendevin:ERROR: agent_controller.py:147 - Error in loop
Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 427, in completion
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 386, in completion
    response = openai_client.chat.completions.create(**data, timeout=timeout)  # type: ignore
  File "/app/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 590, in create
    return self._post(
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1240, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 921, in request
    return self._request(
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1005, in _request
    return self._retry_request(
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1053, in _retry_request
    return self._request(
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1020, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'Please reduce the length of the messages or completion.', 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 1053, in completion
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 1026, in completion
    response = openai_chat_completions.completion(
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 433, in completion
    raise OpenAIError(status_code=e.status_code, message=str(e))
litellm.llms.openai.OpenAIError: Error code: 400 - {'error': {'message': 'Please reduce the length of the messages or completion.', 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/app/opendevin/controller/agent_controller.py", line 142, in _run
    finished = await self.step(i)
  File "/app/opendevin/controller/agent_controller.py", line 256, in step
    action = self.agent.step(self.state)
  File "/app/agenthub/codeact_agent/codeact_agent.py", line 223, in step
    response = self.llm.completion(
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 330, in wrapped_f
    return self(f, *args, **kw)
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 467, in __call__
    do = self.iter(retry_state=retry_state)
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 368, in iter
    result = action(retry_state)
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 390, in <lambda>
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.
Provider List: https://docs.litellm.ai/docs/providers
INFO: 192.168.65.1:50167 - "GET /api/select-file?file=main.py HTTP/1.1" 200 OK
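Separate from the pip problem, the log above ends with a context_length_exceeded error: groq/llama3-8b-8192 has an 8,192-token context window, and the agent's accumulated message history no longer fits. A minimal sketch of pre-call trimming, using litellm's token_counter utility (illustrative only, not OpenDevin's actual condensing logic):

```python
import litellm

MODEL = "groq/llama3-8b-8192"
CONTEXT_WINDOW = 8192       # the model's context size, in tokens
RESERVED_FOR_REPLY = 1024   # head-room for the model's completion

def trim_history(messages: list[dict]) -> list[dict]:
    """Drop the oldest non-system turns until the prompt fits the window."""
    while (
        len(messages) > 1
        and litellm.token_counter(model=MODEL, messages=messages)
        > CONTEXT_WINDOW - RESERVED_FOR_REPLY
    ):
        del messages[1]  # keep messages[0] (system prompt), drop the oldest turn
    return messages
```

Any strategy that bounds the prompt (summarizing or dropping old turns) would avoid the 400 error shown above.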
The LLM should restart the kernel, or the agent should tell the LLM to use the %pip magic.
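For reference, the reason %pip helps: `!pip install` runs pip in a subshell that may target a different Python environment than the kernel, while the %pip magic installs into the running kernel's own environment. A small illustration, assuming a Jupyter/IPython cell:

```python
# `%pip` routes the install through the kernel's own interpreter,
# unlike `!pip`, which runs in a subshell and may point at a
# different Python environment.
%pip install matplotlib
# Note: %pip may still advise a kernel restart for packages that
# were already imported in this session.

import matplotlib  # verify the kernel can now see the package
print(matplotlib.__version__)
```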
--
Matplotlib installation still fails after restarting the kernel and using %pip magic.
What is the output of the kernel restart?
"Matplotlib is not installed."
That means the kernel was not restarted; when the kernel is not restarted, it shows a message like that.
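One hypothetical way to check what a freshly restarted kernel actually sees (not from the original thread):

```python
import importlib.util

# A None spec means this kernel's interpreter cannot find the package,
# which is exactly when "Matplotlib is not installed." would be reported.
spec = importlib.util.find_spec("matplotlib")
if spec:
    print("matplotlib found at", spec.origin)
else:
    print("Matplotlib is not installed.")
```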
Sometimes it succeeds but other times it fails. I can't seem to reproduce the bug consistently.
Could you test that PR?
Closing this issue as the PR has been merged. Please reopen if it did not resolve the problem.