OpenHands
`make run` failed with `libcudart.so.12: cannot open shared object file`
Describe the bug
`make run` failed with `OSError: libcudart.so.12: cannot open shared object file: No such file or directory`
Setup and configuration
Current version:
$ git log -n 1
commit baa981cda761907ae53419921e1aa5d8048d18d9 (HEAD -> main, upstream/main, origin/main, origin/HEAD)
Author: mashiro <[email protected]>
Date: Thu Apr 4 21:44:07 2024 +0800
fix: block input send event while ime composition (#701)
* fix: trigger send event while ime composition & separate input element & disable input event while initializing
* fix: eslint react plugin setting
---------
Co-authored-by: Jim Su <[email protected]>
My config.toml and environment vars (be sure to redact API keys):
My model and agent (you can see these settings in the UI):
- Model:
- Agent:
Commands I ran to install and run OpenDevin:
make build
make run
Steps to Reproduce:
- make build
- make run
Logs, error messages, and screenshots:
$ make run
Running the app...
make[1]: Entering directory '/repo/github/functicons/OpenDevin'
Starting backend...
make[1]: Entering directory '/repo/github/functicons/OpenDevin'
Starting frontend...
[email protected] start vite
VITE v5.2.8 ready in 519 ms
➜ Local: http://localhost:3001/
➜ Network: use --host to expose
Traceback (most recent call last):
  File "/.local/share/virtualenvs/OpenDevin-fz-TgslG/lib/python3.11/site-packages/torch/__init__.py", line 176, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/usr/lib/python3.11/ctypes/__init__.py", line 376, in __init__
    self._handle = _dlopen(self._name, mode)
    ^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: libcudart.so.12: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/.local/share/virtualenvs/OpenDevin-fz-TgslG/bin/uvicorn", line 8, in <module>
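The failure mode can be reproduced outside of torch with a few lines of ctypes: torch preloads the CUDA runtime libraries at import time, and the `OSError` above is exactly what `ctypes.CDLL` raises when the dynamic loader cannot resolve the library. A minimal sketch (the library name is taken from the traceback; the helper function is mine, not torch's):

```python
# Check whether the dynamic loader can resolve a shared library, the same
# way torch's _load_global_deps does: ctypes.CDLL raises OSError when the
# library is not on the loader's search path.
import ctypes


def can_load(libname: str) -> bool:
    """Return True if the dynamic loader can resolve libname."""
    try:
        ctypes.CDLL(libname, mode=ctypes.RTLD_GLOBAL)
        return True
    except OSError:
        return False


if __name__ == "__main__":
    # False on machines without the CUDA toolkit installed
    print(can_load("libcudart.so.12"))
```

If this prints `False`, the error is an environment problem (no CUDA runtime visible to the loader), not an OpenDevin bug, and either installing the CUDA toolkit or switching to a CPU-only torch build should resolve it.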
Additional Context
Same problem here. I'm on WSL Ubuntu 22 under Windows 11 Pro.
function install_python() {
ver="$1" ; if [[ -z "$ver" ]] || [[ "$ver" = "-h" ]] || [[ "$ver" = "--help" ]]
then
curl -fsSL https://www.python.org/ftp/python > /tmp/pv && \
pvs=$(egrep -o '"([^"]*)"' /tmp/pv | grep '[0-9]' | grep -v '[a-z]' | sed 's+"\|/++g' | tr '\n' ' ') && \
printf "Usage: $0 VERSION\n\nVersions: $pvs\n" && exit 0
fi
function __install_python(){
apt-get install -y build-essential
apt-get install -y uuid-dev tk-dev liblzma-dev libgdbm-dev libsqlite3-dev \
libbz2-dev libreadline-dev zlib1g-dev libncursesw5-dev libssl-dev libffi-dev libllvm-dev clang
rm -r /tmp/Python* 2> /dev/null ; curl -kfSL https://www.python.org/ftp/python/$ver/Python-$ver.tgz | \
tar xzv -C /tmp/ -- && cd /tmp/Python-$ver
./configure --enable-optimizations --enable-shared CC=clang CXX=clang++
make -j$(nproc) ; make altinstall ; ldconfig
python$(echo $ver|cut -d. -f1-2) <(curl -sL https://bootstrap.pypa.io/get-pip.py)
}
__install_python $ver
}
install_python 3.10.9 ; sed -i 's+python+python3.10+g' Makefile ; rm Pipfile.lock ; make build
Ensure that you have already installed the NVIDIA CUDA toolkit. If you have not yet installed it, please refer to the official NVIDIA documentation for a step-by-step guide on installing the CUDA toolkit on your system: https://docs.nvidia.com/cuda/wsl-user-guide/index.html
Could you explain what your code is doing? Why are you installing Python 3.10 if the requirements specify 3.11?
> Ensure that you have already installed the NVIDIA CUDA toolkit. If you have not yet installed it, please refer to the official NVIDIA documentation for a step-by-step guide on installing the CUDA toolkit on your system: https://docs.nvidia.com/cuda/wsl-user-guide/index.html
Can you elaborate on how NVIDIA CUDA is connected to OpenDevin? Thanks.
OpenDevin doesn't use it directly; it's likely a transitive dependency of OpenDevin's explicit dependencies. My strongest suspect is torch.
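One way to check this suspicion is to ask the installed package metadata which distributions declare torch as a requirement. A sketch, assuming a standard virtualenv and Python 3.8+ (`importlib.metadata` in the stdlib); the helper name is mine:

```python
# List installed distributions that declare a given package as a dependency.
# Useful to confirm torch arrives transitively (e.g. via sentence-transformers)
# rather than being a direct OpenDevin dependency.
import re
from importlib import metadata


def requirers_of(pkg: str) -> list[str]:
    """Names of installed distributions whose requirements include pkg."""
    hits = set()
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # A requirement string starts with the package name, e.g.
            # "torch>=2.0 ; python_version >= '3.8'"
            m = re.match(r"[A-Za-z0-9_.-]+", req)
            if m and m.group(0).lower() == pkg.lower():
                hits.add(dist.metadata["Name"])
                break
    return sorted(hits)


if __name__ == "__main__":
    print(requirers_of("torch"))
```

In this thread's traceback the chain is visible directly: llama_index → sentence_transformers → torch.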
I had the same problem. To make it work, I uninstalled torch, deleted Pipfile.lock, renamed Pipfile.torchidx to Pipfile, and ran make build again.
Same problem here.
config.toml:
LLM_API_KEY="ollama"
LLM_MODEL="ollama/codellama:13b"
LLM_EMBEDDING_MODEL="local"
LLM_BASE_URL="http://127.0.0.1:11434"
WORKSPACE_DIR="./workspace"
ollama running in backend on port 11434
Getting this error:
Starting backend...
Traceback (most recent call last):
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/torch/__init__.py", line 176, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/usr/lib/python3.11/ctypes/__init__.py", line 376, in __init__
self._handle = _dlopen(self._name, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: libcudart.so.12: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/bin/uvicorn", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/uvicorn/main.py", line 409, in main
run(
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/uvicorn/main.py", line 575, in run
server.run()
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/uvicorn/server.py", line 65, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/uvicorn/server.py", line 69, in serve
await self._serve(sockets)
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/uvicorn/server.py", line 76, in _serve
config.load()
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/uvicorn/config.py", line 433, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/maity/OpenDevin/opendevin/server/listen.py", line 4, in <module>
import agenthub # noqa F401 (we import this to get the agents registered)
^^^^^^^^^^^^^^^
File "/home/maity/OpenDevin/agenthub/__init__.py", line 5, in <module>
from . import monologue_agent # noqa: E402
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maity/OpenDevin/agenthub/monologue_agent/__init__.py", line 2, in <module>
from .agent import MonologueAgent
File "/home/maity/OpenDevin/agenthub/monologue_agent/agent.py", line 28, in <module>
from agenthub.monologue_agent.utils.memory import LongTermMemory
File "/home/maity/OpenDevin/agenthub/monologue_agent/utils/memory.py", line 2, in <module>
from llama_index.core import Document
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/core/__init__.py", line 19, in <module>
from llama_index.core.indices import (
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/core/indices/__init__.py", line 4, in <module>
from llama_index.core.indices.composability.graph import ComposableGraph
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/core/indices/composability/__init__.py", line 4, in <module>
from llama_index.core.indices.composability.graph import ComposableGraph
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/core/indices/composability/graph.py", line 7, in <module>
from llama_index.core.indices.base import BaseIndex
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/core/indices/base.py", line 12, in <module>
from llama_index.core.ingestion import run_transformations
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/core/ingestion/__init__.py", line 2, in <module>
from llama_index.core.ingestion.pipeline import (
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/core/ingestion/pipeline.py", line 31, in <module>
from llama_index.core.ingestion.api_utils import get_client
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/core/ingestion/api_utils.py", line 23, in <module>
from llama_index.core.ingestion.transformations import (
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/core/ingestion/transformations.py", line 267, in <module>
ConfigurableTransformations = build_configurable_transformation_enum()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/core/ingestion/transformations.py", line 247, in build_configurable_transformation_enum
from llama_index.embeddings.huggingface import (
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/embeddings/huggingface/__init__.py", line 1, in <module>
from llama_index.embeddings.huggingface.base import (
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/llama_index/embeddings/huggingface/base.py", line 27, in <module>
from sentence_transformers import SentenceTransformer
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/sentence_transformers/__init__.py", line 3, in <module>
from .datasets import SentencesDataset, ParallelSentencesDataset
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/sentence_transformers/datasets/__init__.py", line 1, in <module>
from .DenoisingAutoEncoderDataset import DenoisingAutoEncoderDataset
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/sentence_transformers/datasets/DenoisingAutoEncoderDataset.py", line 1, in <module>
from torch.utils.data import Dataset
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/torch/__init__.py", line 236, in <module>
_load_global_deps()
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/torch/__init__.py", line 197, in _load_global_deps
_preload_cuda_deps(lib_folder, lib_name)
File "/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages/torch/__init__.py", line 162, in _preload_cuda_deps
raise ValueError(f"{lib_name} not found in the system path {sys.path}")
ValueError: libcublas.so.*[0-9] not found in the system path ['', '/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/bin', '/usr/lib/python311.zip', '/usr/lib/python3.11', '/usr/lib/python3.11/lib-dynload', '/home/maity/.local/share/virtualenvs/OpenDevin-umIhOGa5/lib/python3.11/site-packages', '/home/maity/OpenDevin']
make: *** [Makefile:33: start-backend] Error 1
Solved by installing torch: `pip install torch torchvision torchaudio -f https://download.pytorch.org/whl/torch_stable.html`
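After reinstalling, it may be worth verifying which torch build you ended up with: CUDA wheels report a CUDA version string, while CPU-only wheels report `None`. A small check (the helper name is mine; it assumes nothing beyond torch's documented `torch.version.cuda` attribute):

```python
# Report whether the installed torch build is CUDA-enabled, CPU-only,
# or missing entirely. torch.version.cuda is None on CPU-only wheels.
def torch_build() -> str:
    """Return 'cuda', 'cpu', or 'not installed'."""
    try:
        import torch
    except ImportError:
        return "not installed"
    return "cuda" if torch.version.cuda else "cpu"


if __name__ == "__main__":
    print(torch_build())
```

A CUDA build on a machine without the CUDA runtime is exactly the situation that produces the `libcudart.so.12` / `libcublas` errors above, so `cpu` is the safe answer on WSL setups without the toolkit.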
Can you try to pull the new main and let me know if there are any updates?
I am not able to verify it now, because the project just switched from Pipenv to Poetry, and there are errors with Poetry.
You can now pull the latest main and try to run make build. If you get any errors, you can post the logs here.
Should be fixed with the new Docker installation method.