
[Bug]: Inconsistency between successful pip installation and inability to locate the installed package

Open · CmetankaJDD opened this issue 1 year ago · 9 comments

Is there an existing issue for the same bug?

  • [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
  • [X] I have checked the existing issues.

Describe the bug

The pip installation finishes successfully, indicating that the library has been installed, but subsequent attempts to import or use the library fail because the package cannot be found.
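
For illustration, a minimal sketch of the symptom under one plausible explanation: the `pip` that runs is not the one backing the interpreter that later imports the package. The library name is an assumption based on the Discord bot request in the reproduction steps.

# Sketch of the reported symptom (assumptions: the library is discord.py,
# and the `pip` on PATH serves a different environment than this interpreter).
import subprocess
import sys

# Exits 0 and prints "Successfully installed ...", but may target another env.
subprocess.run(["pip", "install", "discord.py"], check=True)

try:
    import discord  # noqa: F401
except ModuleNotFoundError:
    print("pip reported success, but the module is invisible to", sys.executable)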

Current Version

ghcr.io/opendevin/opendevin:0.5

Installation and Configuration

# Start the OpenDevin 0.5 container; WORKSPACE_BASE must be exported in the host shell first.
docker run \
    --pull=always \
    -e SANDBOX_USER_ID=$(id -u) \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    ghcr.io/opendevin/opendevin:0.5

Model and Agent

  • Model: groq/llama3-8b-8192
  • Agent: CodeActAgent

Reproduction Steps

I asked OpenDevin for an example of a Discord bot. The installation process completed successfully; however, I cannot use the installed library afterwards.

Logs, Errors, Screenshots, and Additional Context

[Screenshot 2024-05-07 at 08:17:23] [Screenshot 2024-05-07 at 08:18:42] [Screenshot 2024-05-07 at 08:18:55]

CmetankaJDD · May 07 '24 05:05

Could you check the latest version?

SmartManoj · May 11 '24 13:05

> Could you check the latest version?

[Screenshot 2024-05-12 at 09:13:21]

============== STEP 6

06:10:41 - opendevin:ERROR: agent_controller.py:147 - Error in loop
Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 427, in completion
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 386, in completion
    response = openai_client.chat.completions.create(**data, timeout=timeout)  # type: ignore
  File "/app/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 590, in create
    return self._post(
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1240, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 921, in request
    return self._request(
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1005, in _request
    return self._retry_request(
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1053, in _retry_request
    return self._request(
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1020, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'Please reduce the length of the messages or completion.', 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 1053, in completion
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 1026, in completion
    response = openai_chat_completions.completion(
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 433, in completion
    raise OpenAIError(status_code=e.status_code, message=str(e))
litellm.llms.openai.OpenAIError: Error code: 400 - {'error': {'message': 'Please reduce the length of the messages or completion.', 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/opendevin/controller/agent_controller.py", line 142, in _run
    finished = await self.step(i)
  File "/app/opendevin/controller/agent_controller.py", line 256, in step
    action = self.agent.step(self.state)
  File "/app/agenthub/codeact_agent/codeact_agent.py", line 223, in step
    response = self.llm.completion(
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 330, in wrapped_f
    return self(f, *args, **kw)
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 467, in __call__
    do = self.iter(retry_state=retry_state)
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 368, in iter
    result = action(retry_state)
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 390, in <lambda>
    self._add_action_func(lambda rs: rs.outcome.result())
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 470, in __call__
    result = fn(*args, **kwargs)
  File "/app/opendevin/llm/llm.py", line 188, in wrapper
    resp = completion_unwrapped(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 3222, in wrapper
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 3116, in wrapper
    result = original_function(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2226, in completion
    raise exception_type(
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 9233, in exception_type
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8000, in exception_type
    raise BadRequestError(
litellm.exceptions.BadRequestError: GroqException - Error code: 400 - {'error': {'message': 'Please reduce the length of the messages or completion.', 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Model: llama3-8b-8192
Messages: [{'role': 'system', 'content': 'A chat between a curious user and an artificial intelligence assista
06:10:41 - opendevin:INFO: agent_controller.py:190 - Setting agent(CodeActAgent) state from AgentState.RUNNING to AgentState.ERROR

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.

Provider List: https://docs.litellm.ai/docs/providers

INFO: 192.168.65.1:50167 - "GET /api/select-file?file=main.py HTTP/1.1" 200 OK

CmetankaJDD · May 12 '24 06:05
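
As a side note, the stack trace above is a separate failure from the pip problem: llama3-8b-8192 has an 8,192-token context window, and the accumulated message history overflowed it. A hypothetical guard, with illustrative names and not OpenDevin's actual handling, could trim old turns before each call:

# Hypothetical sketch of trimming history to fit the model's context
# window -- illustrative only, not how OpenDevin handles this.
import litellm

CONTEXT_WINDOW = 8192  # llama3-8b-8192's window, per the model name

def fit_messages(messages, model="groq/llama3-8b-8192"):
    messages = list(messages)
    # Drop the oldest non-system turns until the prompt fits.
    while (len(messages) > 2
           and litellm.token_counter(model=model, messages=messages) > CONTEXT_WINDOW):
        messages.pop(1)  # index 0 holds the system prompt
    return messages

history = [
    {"role": "system", "content": "A chat between a curious user and an AI assistant."},
    {"role": "user", "content": "Write an example Discord bot."},
]
response = litellm.completion(model="groq/llama3-8b-8192", messages=fit_messages(history))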

The LLM should restart the kernel, or the agent should tell the LLM to use the %pip magic.

[screenshot]


SmartManoj · May 15 '24 06:05
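
For reference, a minimal sketch of what the %pip suggestion buys, assuming the sandbox runs an IPython kernel: the magic installs into the kernel's own environment, and the subprocess call below is its plain-Python equivalent.

# Plain-Python equivalent of `%pip install matplotlib` (a sketch, not
# OpenDevin code): pin the install to the interpreter the kernel runs,
# so the subsequent import resolves in the same environment.
import subprocess
import sys

subprocess.run([sys.executable, "-m", "pip", "install", "matplotlib"], check=True)

import matplotlib  # a restart may still be needed if an old copy was already imported
print(matplotlib.__version__)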

Matplotlib installation still fails after restarting the kernel and using %pip magic.

[screenshot]

steventangbc · May 15 '24 09:05

What is the output of the kernel restart?

SmartManoj · May 15 '24 09:05

"Matplotlib is not installed."

[screenshot]

steventangbc · May 15 '24 10:05

The kernel was not restarted. If it had been, it would show like this: [screenshot]

SmartManoj · May 15 '24 10:05
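
One way to check whether the kernel and pip even agree on an environment (an illustrative diagnostic, not taken from the thread):

# Illustrative diagnostic: if the kernel's interpreter and the pip that
# performed the install point at different environments, the package
# "installs" successfully yet stays invisible to the kernel.
import importlib.util
import sys

print("kernel interpreter:", sys.executable)
spec = importlib.util.find_spec("matplotlib")
print("matplotlib resolves to:", spec.origin if spec else "not found in this environment")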

Sometimes it succeeds but other times it fails. I can't seem to reproduce the bug consistently.

[screenshot]

steventangbc · May 15 '24 11:05

Could you test that PR?

SmartManoj · May 15 '24 12:05

Closing this issue as the PR has been merged. Please reopen it if this did not resolve the issue.

mamoodi · May 29 '24 19:05