Langchain-Chatchat
ImportError: dlopen: cannot load any more object with static TLS or Segmentation fault
问题描述 / Problem Description
Machine: RTX 3080 × 10 GPUs. No matter how I try to run the program — launching webui.py, cli_demo.py, or via Docker — each attempt fails with a different error.
环境信息 / Environment Information
- langchain-ChatGLM version / commit: master
- Deployed with Docker (yes/no): attempted
- Model used (ChatGLM-6B / ClueAI/ChatYuan-large-v2, etc.): ChatGLM-6B
- Embedding model used (GanymedeNil/text2vec-large-chinese, etc.): text2vec-large-chinese
- 操作系统及版本 / Operating system and version: CentOS 7.9
- Python 版本 / Python version: 3.9
- 其他相关环境信息 / Other relevant environment information:

```
[root@OneThingEdgeCloud langchain-ChatGLM]# nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_Mar__8_18:18:20_PST_2022
Cuda compilation tools, release 11.6, V11.6.124
Build cuda_11.6.r11.6/compiler.31057947_0
```
实际结果 / Actual Result

1. `python cli_demo.py` fails with:
```
│ /mnt/data/deploy/venv_modelscope/lib/python3.9/site-packages/sklearn/__check_build/__init__.py:46 in <module> │

Contents of /mnt/data/deploy/venv_modelscope/lib/python3.9/site-packages/sklearn/__check_build:
setup.py  _check_build.cpython-39-x86_64-linux-gnu.so  __init__.py  __pycache__

It seems that scikit-learn has not been built correctly.

If you have installed scikit-learn from source, please do not forget
to build the package before using it: run `python setup.py install` or
`make` in the source directory.

If you have used an installer, please check that it is suited for your
Python version, your operating system and your platform.
```
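Not part of the original report: a minimal sketch of how one might confirm whether the compiled extension is really the problem before forcing a clean reinstall (`pip install --force-reinstall scikit-learn`). The helper name is mine; the module path comes from the traceback above.

```python
import importlib


def native_build_ok(module: str) -> bool:
    """Return True if a compiled extension module imports cleanly.

    The scikit-learn error above means its native extension
    (sklearn.__check_build._check_build) failed to import.
    """
    try:
        importlib.import_module(module)
        return True
    except ImportError:
        return False


if __name__ == "__main__":
    if not native_build_ok("sklearn.__check_build._check_build"):
        print("broken build; try: pip install --force-reinstall scikit-learn")
```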
2. Running `python webui.py` fails with:

```
INFO 2023-05-15 20:18:23,709-1d: loading model config
llm device: cuda
embedding device: cuda
dir: /mnt/data/code/langchain-ChatGLM
flagging username: 5dad1e8b5ad24af191765ae04478b45d
/mnt/data/deploy/venv_modelscope/lib/python3.9/site-packages/paddle/fluid/core.py:219: UserWarning: Load /lib64/libgomp.so.1 failed
  warnings.warn("Load {} failed".format(dso_absolute_path))
Segmentation fault
```
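Not in the original report, a diagnostic sketch: the libgomp warning followed by the segfault matches the static-TLS dlopen failure in the issue title, and a commonly reported workaround on CentOS 7 is to preload the library up front (e.g. `LD_PRELOAD=/lib64/libgomp.so.1 python webui.py`). The helper below only checks whether a library can be dlopen()ed from Python at all; its name is mine.

```python
import ctypes
from typing import Optional


def can_dlopen(libname: Optional[str]) -> bool:
    """Return True if the shared library loads via dlopen (ctypes.CDLL).

    Passing None loads the running process's own namespace, which
    always succeeds on Linux and serves as a sanity check.
    """
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False


if __name__ == "__main__":
    # Path taken from the paddle warning above.
    print("libgomp loads:", can_dlopen("/lib64/libgomp.so.1"))
```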
3. Running via Docker fails with:
```
/usr/local/lib/python3.8/dist-packages/torch/cuda/__init__.py:107: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0
INFO 2023-05-15 11:54:05,132-1d: loading model config
llm device: cpu
embedding device: cpu
dir: /chatGLM
flagging username: 72ad90a0c30a4c50abb3307e4eeaa224
Traceback (most recent call last):
  File "webui.py", line 4, in
```
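Outside the original report: CUDA Error 804 ("forward compatibility was attempted on non supported HW") typically means the CUDA runtime inside the container is newer than what the host driver supports. Absent NVIDIA's separate forward-compatibility package, a driver can run any runtime at or below the CUDA version it ships with; the hypothetical helper below encodes that rule. The host's `nvcc -V` output above reports CUDA 11.6, so a container image built against a newer CUDA would hit exactly this error.

```python
def _ver(v: str) -> tuple:
    """Parse a dotted CUDA version string like "11.6" into (11, 6)."""
    return tuple(int(part) for part in v.split("."))


def cuda_runtime_supported(driver_cuda: str, container_cuda: str) -> bool:
    """True if a host driver advertising `driver_cuda` can run a container
    built against `container_cuda`, without the forward-compat package."""
    return _ver(container_cuda) <= _ver(driver_cuda)


if __name__ == "__main__":
    # The host nvcc output above reports CUDA 11.6.
    print(cuda_runtime_supported("11.6", "11.6"))  # True: same version is fine
    print(cuda_runtime_supported("11.6", "11.8"))  # False: container too new
```

If the check fails, the usual fixes are upgrading the host driver or pulling a container image built for the host's CUDA version.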