
Stuck on Step 0

JayLZhou opened this issue 1 year ago • 32 comments

I can open the webpage at http://localhost:3001/ (screenshot attached).

But it gets stuck at this step and doesn't output anything else.

JayLZhou avatar Apr 08 '24 12:04 JayLZhou

Are you on a Mac?

enyst avatar Apr 08 '24 12:04 enyst

Are you on a Mac?

Yes, an M2.

JayLZhou avatar Apr 08 '24 12:04 JayLZhou

So am I, and there's a problem that is being fixed right now; we need this branch: https://github.com/OpenDevin/OpenDevin/pull/891

Can you restart docker, then pull that branch to run it?
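For example, something along these lines should fetch that PR branch (a sketch using GitHub's pull-request refs; adjust the remote name if yours isn't origin):

git fetch origin pull/891/head:pr-891   # fetch PR #891 into a local branch (branch name is arbitrary)
git checkout pr-891
make build                              # rebuild so the code and sandbox image are in sync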

enyst avatar Apr 08 '24 12:04 enyst

So am I, and there's a problem that is being fixed right now; we need this branch: #891

Can you restart docker, then pull that branch to run it?

Sorry, it is still stuck at this step; I don't know why.

JayLZhou avatar Apr 08 '24 12:04 JayLZhou

What is the last commit shown in git log?

enyst avatar Apr 08 '24 12:04 enyst

What is the last commit shown in git log?

commit 55760ec4ddc669daf4a0b8b36028d2e73c9ab17a
Author: Xingyao Wang <[email protected]>
Date:   Mon Apr 8 12:59:18 2024 +0800

feat(sandbox): Support sshd-based stateful docker session (#847)

* support sshd-based stateful docker session

* use .getLogger to avoid same logging message to get printed twice

* update poetry lock for dependency

* fix ruff

* bump docker image version with sshd

* set-up random user password and only allow localhost connection for sandbox

* fix poetry

* move apt install up

commit 6e3b554317de7bc5d96ef81b4097287e05c0c4d0
Author: RaGe <[email protected]>
Date:   Sun Apr 7 15:57:31 2024 -0400

JayLZhou avatar Apr 08 '24 12:04 JayLZhou

Please do git pull again and restart Docker. It will pick up a hotfix; worth trying.

But the full fix is on the branch I linked, and you need to pull it specifically.

enyst avatar Apr 08 '24 12:04 enyst

No, it is still not working. Actually, I have pulled your linked branch, but it still gets stuck at this step.

JayLZhou avatar Apr 08 '24 13:04 JayLZhou

Maybe ssh is not running on your machine? Also, if you start the backend and frontend separately, with make start-backend and make start-frontend, we will see what the backend doesn't like. Alternatively, there should be a log file in ./logs.
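For example, something like this (the log filename pattern is an assumption based on the opendevin_xxx.log mentioned further down):

# terminal 1
make start-backend

# terminal 2
make start-frontend

# then watch the backend log while you send a task from the UI
tail -f logs/opendevin_*.log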

enyst avatar Apr 08 '24 13:04 enyst

[email protected] start
vite --port 3001

VITE v5.2.8 ready in 357 ms

➜ Local: http://localhost:3001/
➜ Network: use --host to expose
➜ press h + enter to show help
INFO: ('127.0.0.1', 63890) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiJlMzk5MWFkZS0xZWRlLTRlZDctYjFlYS04MjNjMWJkMWQzYjQifQ.XYFIdAi8Vhbw7n0iEEWhzZdue9WIJ4TqsKY68s5DFoc" [accepted]
Starting loop_recv for sid: e3991ade-1ede-4ed7-b1ea-823c1bd1d3b4, False
INFO: connection open
INFO: ('127.0.0.1', 63902) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiJiNTU4MDg3Zi00NGFhLTQwOWItOWUyYy04YmI0MmE5NDEwMmMifQ.D4JH7BYER9ttOiKlsswN61kf1wYHz_aHt_WYQgunQ1Y" [accepted]
Starting loop_recv for sid: b558087f-44aa-409b-9e2c-8bb42a94102c, False
INFO: connection open
INFO: ('127.0.0.1', 63904) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiJiNTU4MDg3Zi00NGFhLTQwOWItOWUyYy04YmI0MmE5NDEwMmMifQ.D4JH7BYER9ttOiKlsswN61kf1wYHz_aHt_WYQgunQ1Y" [accepted]
Starting loop_recv for sid: b558087f-44aa-409b-9e2c-8bb42a94102c, False
INFO: connection open
INFO: 127.0.0.1:63910 - "GET /messages/total HTTP/1.1" 200 OK
INFO: 127.0.0.1:63906 - "GET /refresh-files HTTP/1.1" 200 OK
INFO: 127.0.0.1:63909 - "GET /configurations HTTP/1.1" 200 OK
INFO: 127.0.0.1:63913 - "GET /refresh-files HTTP/1.1" 200 OK
21:23:39 - opendevin:INFO: sandbox.py:119 - Using workspace directory: /Users/zhouxiaolun/Projects/OpenDevin/workspace
21:23:39 - opendevin:INFO: sandbox.py:320 - Container stopped
Darwin
21:23:39 - opendevin:WARNING: sandbox.py:336 - Using port forwarding for Mac OS. Server started by OpenDevin will not be accessible from the host machine at the moment. See https://github.com/OpenDevin/OpenDevin/issues/897 for more information.
21:23:39 - opendevin:INFO: sandbox.py:356 - Container started
21:23:40 - opendevin:INFO: sandbox.py:372 - waiting for container to start: 1, container status: running
21:23:40 - opendevin:INFO: sandbox.py:198 - Connecting to opendevin@localhost via ssh. If you encounter any issues, you can try ssh -v -p 2222 opendevin@localhost with the password '5c04f12b-b7a3-4c9b-9c19-d5ff35b12f8b' and report the issue on GitHub.

JayLZhou avatar Apr 08 '24 13:04 JayLZhou

FYI, the llm directory in logs is empty (both response and prompt), and opendevin_xxx.log shows:

21:25:42 - opendevin:INFO: sandbox.py:119 - Using workspace directory: /Users/zhouxiaolun/Projects/OpenDevin/workspace
21:25:42 - opendevin:INFO: sandbox.py:320 - Container stopped
21:25:42 - opendevin:WARNING: sandbox.py:336 - Using port forwarding for Mac OS. Server started by OpenDevin will not be accessible from the host machine at the moment. See https://github.com/OpenDevin/OpenDevin/issues/897 for more information.
21:25:42 - opendevin:INFO: sandbox.py:356 - Container started
21:25:43 - opendevin:INFO: sandbox.py:372 - waiting for container to start: 1, container status: running
21:25:44 - opendevin:INFO: sandbox.py:198 - Connecting to opendevin@localhost via ssh. If you encounter any issues, you can try ssh -v -p 2222 opendevin@localhost with the password 'b0df15b7-d8dc-4b2d-baaa-eecdc7196354' and report the issue on GitHub.

JayLZhou avatar Apr 08 '24 13:04 JayLZhou

That... looks good? If there's no error when it tries to ssh, that's good news: it connects when the frontend starts a task. What happens if you attempt to access localhost:3001?

enyst avatar Apr 08 '24 13:04 enyst

That... looks good? If there's no error when it tries to ssh, that's good news: it connects when the frontend starts a task. What happens if you attempt to access localhost:3001?

But it still does not move to the next step. I mean, I am still stuck at the first step: no new plan output, no new response...

JayLZhou avatar Apr 08 '24 13:04 JayLZhou

[plugin:vite:import-analysis] Failed to resolve import "../i18n/declaration" from "src/components/Workspace.tsx". Does the file exist?

C:/Users/xprat/PycharmProjects/devin ai/OpenDevin/frontend/src/components/ChatInterface.tsx:8:24
C:/Users/xprat/PycharmProjects/devin ai/OpenDevin/frontend/src/components/SettingModal.tsx:27:24
C:/Users/xprat/PycharmProjects/devin ai/OpenDevin/frontend/src/components/Workspace.tsx:7:24

22 | import Earth from "../assets/earth";
23 | import Pencil from "../assets/pencil";
24 | import { I18nKey } from "../i18n/declaration";
   |                         ^
25 | import { AllTabs, TabOption } from "../types/TabOption";
26 | import Browser from "./Browser";

    at formatError (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:50863:46)
    at TransformContext.error (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:50857:19)
    at normalizeUrl (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:66092:33)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:66247:47
    at async Promise.all (index 10)
    at async TransformContext.transform (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:66168:13)
    at async Object.transform (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:51172:30)
    at async loadAndTransform (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:53923:29)

(two near-identical stack traces follow for the other two imports, at async Promise.all index 8 and index 9, each ending in viteTransformMiddleware)

Click outside, press Esc key, or fix the code to dismiss. You can also disable this overlay by setting server.hmr.overlay to false in vite.config.ts.

googooanime avatar Apr 08 '24 14:04 googooanime

Still exists (screenshots attached).

UnclePi979 avatar Apr 08 '24 14:04 UnclePi979

@hongdongyue2012 at that stage, I would make sure to wipe the container and even the image in Docker, then run make build: it will re-download and rebuild.
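A rough sketch of that cleanup (container and image names vary per setup, so look them up first):

docker ps -a                  # find the OpenDevin sandbox container
docker rm -f <container_id>   # remove it
docker images                 # find the sandbox image
docker rmi <image_id>         # remove it so the next build pulls/builds a fresh copy
make build                    # re-downloads and rebuilds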

@JayLZhou is the ssh daemon started? I would refresh the page and try to send a task, to see if the container gets more activity or errors. But maybe the best option is the same: try to clear the image and redo, just to be sure; there have been successive updates today, both to the image and the code, until it worked on Mac. It is also worth doing what the screenshot above suggests: when you see the message about the password to test with, run that test.
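That test is just the command the backend log prints, e.g. (take the port and password from your own log output):

ssh -v -p 2222 opendevin@localhost   # enter the one-time password shown in the backend log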

enyst avatar Apr 08 '24 23:04 enyst

try to clear the image and redo, just to be sure; there have been successive updates today, both to the image and the code

Do you mean we need to re-pull and rebuild the repo?

JayLZhou avatar Apr 09 '24 02:04 JayLZhou

Yes, after clearing the docker image

SmartManoj avatar Apr 09 '24 05:04 SmartManoj

Yes, after clearing the docker image

Let me try it again

JayLZhou avatar Apr 09 '24 05:04 JayLZhou

@enyst @SmartManoj Actually, I have (1) cleared the image and rebuilt the repo, and (2) run the backend and frontend separately, but it still does not work...

JayLZhou avatar Apr 09 '24 06:04 JayLZhou

I have the same issue here.

DEM1TASSE avatar Apr 09 '24 07:04 DEM1TASSE

Run this to check LLM response time.

import warnings
warnings.filterwarnings("ignore")

import tomllib as toml  # tomllib needs Python 3.11+
from datetime import datetime

from litellm import completion

# read the LLM settings from OpenDevin's config.toml
file_path = r'config.toml'
config = toml.load(open(file_path, 'rb'))

messages = [{"content": "What is the meaning of life?", "role": "user"}]

dt = datetime.now()
response = completion(
    model=config['LLM_MODEL'],
    api_key=config['LLM_API_KEY'],
    base_url=config.get('LLM_BASE_URL'),
    messages=messages,
)
print(response.choices[0].message.content)

dt2 = datetime.now()
print(f"Time taken: {(dt2 - dt).total_seconds():.1f}s")
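To try it, save the script next to config.toml (as, say, check_llm.py — the name is arbitrary) and run it with the same Python environment the backend uses, e.g.:

poetry run python check_llm.py   # or plain `python check_llm.py` if litellm is installed in your environment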

SmartManoj avatar Apr 09 '24 08:04 SmartManoj

Run this to check LLM response time.

import warnings
warnings.filterwarnings("ignore")

import tomllib as toml
from litellm import completion

file_path=r'config.toml'
config = toml.load(open(file_path,'rb'))

messages = [{ "content": "What is the meaning of life?","role": "user"}]

response = completion(
    model=config['LLM_MODEL'],
    api_key=config['LLM_API_KEY'],
    base_url=config.get('LLM_BASE_URL'),
    messages=messages,
)
print(response.choices[0].message.content)

I think my API key is fine, since I can use the same key in MetaGPT.

JayLZhou avatar Apr 09 '24 08:04 JayLZhou

θΏθ‘Œζ­€ε‘½δ»€ζ₯ζ£€ζŸ₯ LLM 响应既间。

import warnings
warnings.filterwarnings("ignore")

import tomllib as toml
from litellm import completion
from datetime import datetime
file_path=r'config.toml'
config = toml.load(open(file_path,'rb'))

messages = [{ "content": "What is the meaning of life?","role": "user"}]
dt = datetime.now()
response = completion(
    model=config['LLM_MODEL'],
    api_key=config['LLM_API_KEY'],
    base_url=config.get('LLM_BASE_URL'),
    messages=messages,
)

print(response.choices[0].message.content)

dt2 = datetime.now()
print(f"Time taken: {(dt2-dt).total_seconds():.1f}s")

I have the same issue, also stuck at step 0 in #908, and this script runs successfully:

The meaning of life is a deep and complex philosophical question that has been debated for centuries. Different people and cultures have different beliefs about the purpose and meaning of life. Some may find meaning in relationships, personal achievements, or spiritual fulfillment, while others may find meaning in contributing to the well-being of others or pursuing knowledge and understanding. Ultimately, the meaning of life is a deeply personal and subjective concept that each individual must explore and define for themselves.
Time taken: 2.9s

DEM1TASSE avatar Apr 09 '24 08:04 DEM1TASSE

Add the following code to opendevin/llm/llm.py and run it:

if __name__ == '__main__':
    # quick manual check: send one message through OpenDevin's LLM wrapper
    llm = LLM()
    messages = [{"content": "42?", "role": "user"}]
    response = llm.completion(messages=messages)
    print('\n' * 4 + '--' * 20)
    print(response['choices'][0]['message']['content'])
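Then run that module directly, e.g. (assuming the file sits at opendevin/llm/llm.py under the repo root, as above):

poetry run python -m opendevin.llm.llm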

SmartManoj avatar Apr 09 '24 09:04 SmartManoj

@JayLZhou with make start-backend running separately, after you connect the frontend and enter a question, what errors do you see?

@SmartManoj why do you suspect that it's the response time or the key? Did you experience something related to those?

enyst avatar Apr 09 '24 14:04 enyst

@enyst On some low-end devices with 8 GB RAM, even generating "Hello" took around 3 minutes with a ~6 GB model.

SmartManoj avatar Apr 10 '24 01:04 SmartManoj

Add the following code to opendevin/llm/llm.py and run it:

if __name__ == '__main__':
    llm = LLM()
    messages = [{"content": "42?", "role": "user"}]
    response = llm.completion(messages=messages)
    print('\n' * 4 + '--' * 20)
    print(response['choices'][0]['message']['content'])
    

Commented here by mistake; this was meant for @DEM1TASSE in #908.

SmartManoj avatar Apr 10 '24 01:04 SmartManoj

@enyst On some low-end devices with 8 GB RAM, even generating "Hello" took around 3 minutes with a ~6 GB model.

Was the browser window used before, in this example? Or, simply, was the browser used for multiple messages? There is a recently added history-saving feature which attempts to restore sessions if it finds them. That can end up taking a lot of time, because I think it adds embeddings to the local vector store...

If you experience that yourself, can you make sure to clear the browser's local storage and close all tabs used with the frontend?

enyst avatar Apr 10 '24 04:04 enyst

@enyst On some low-end devices with 8 GB RAM, even generating "Hello" took around 3 minutes with a ~6 GB model.

That was when testing the LLM manually.

So I thought the user might stop the program after a few minutes, thinking it is stuck.

SmartManoj avatar Apr 10 '24 04:04 SmartManoj