[Bug]: Failed to import from vllm._C with ImportError('/usr/local/lib/python3.8/dist-packages/vllm/_C.abi3.so: undefined symbol: _ZN5torch7LibraryC1ENS0_4KindESsSt8optionalIN3c1011DispatchKeyEEPKcj')
Your current environment
The output of `python collect_env.py`
import vllm
WARNING 06-13 11:42:20 _custom_ops.py:11] Failed to import from vllm._C with ImportError('/usr/local/lib/python3.8/dist-packages/vllm/_C.abi3.so: undefined symbol: _ZN5torch7LibraryC1ENS0_4KindESsSt8optionalIN3c1011DispatchKeyEEPKcj')
🐛 Describe the bug
import vllm
WARNING 06-13 11:42:20 _custom_ops.py:11] Failed to import from vllm._C with ImportError('/usr/local/lib/python3.8/dist-packages/vllm/_C.abi3.so: undefined symbol: _ZN5torch7LibraryC1ENS0_4KindESsSt8optionalIN3c1011DispatchKeyEEPKcj')
Does pip install vllm support torch 2.3.0? If not, why isn't that mentioned in the README?
cc @bnellnm, this should be related to the recent change in the binding system.
Older versions of pytorch use c10::optional instead of std::optional, so I'm guessing torch 2.3.0 is not installed in this case. Maybe there's some way to improve the warning message from _custom_ops.py?
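For example (just a sketch, not the actual _custom_ops.py code; the try/except placement and message wording here are assumptions), the warning could also report the installed torch version, since an undefined-symbol error usually means the wheel was compiled against a different torch build than the one at runtime:

import logging

import torch

logger = logging.getLogger(__name__)

try:
    import vllm._C  # noqa: F401  # compiled extension; fails on an ABI mismatch
except ImportError as e:
    # Include the runtime torch version so a build-vs-runtime mismatch is
    # obvious at a glance.
    logger.warning(
        "Failed to import from vllm._C with %r (installed torch==%s; "
        "prebuilt vllm wheels target a specific torch version).",
        e, torch.__version__)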
@MonolithFoundation can you verify your pytorch version and maybe how you installed vllm?
It's not just that it can't find symbols; it breaks the import entirely, and vllm cannot be used. Why not just fix it rather than throw another nice warning?
I am using torch 2.3, as stated in my issue template.
@youkaichao @bnellnm Hello, would you consider fixing this for torch 2.3 as soon as possible? I currently get the same error whether I install with pip from PyPI or build from source. Note that I cleaned up the PyPI install before building from source.
I'm able to pip install vllm and run from a fresh virtualenv without any trouble. I'm guessing there is something not quite right in your environment that is causing problems. Can you provide the results of the collect_env.py script?
Is it possible you have multiple versions of pytorch installed? Or a custom built version of pytorch 2.3.0?
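A quick way to check both (just a sketch):

import importlib.metadata

import torch

# Version and location of the torch that actually gets imported.
print("imported torch:", torch.__version__, "from", torch.__file__)
# Version recorded in the installed distribution metadata. If these two
# disagree, there is likely a second torch install (e.g. pip vs. conda, or a
# leftover user-site package) shadowing the one vllm was built against.
print("metadata torch:", importlib.metadata.version("torch"))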
@bnellnm you just mentioned that torch 2.3.1 has the c10::optional difference from older versions; is that a problem here? Can the vllm master branch be built with torch 2.3.1? I am pretty sure I don't have multiple versions of either torch or vllm. I noticed several other users reporting this issue just yesterday as well.
vllm is built with torch 2.3.0 and it has that torch::Library symbol defined
From my local install:
> python3 -c "import torch; print(torch.__version__)"
2.3.0+cu121
The symbol is defined in libtorch_cpu.so
> nm -AC --defined-only .local/lib/python3.10/site-packages/torch/lib/*.so | grep "torch::Library::Library"
.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so:0000000001b2a4e0 T torch::Library::Library(torch::Library::Kind, std::string, std::optional<c10::DispatchKey>, char const*, unsigned int)
.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so:0000000001b2a4e0 T torch::Library::Library(torch::Library::Kind, std::string, std::optional<c10::DispatchKey>, char const*, unsigned int)
.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so:00000000010f0518 t torch::Library::Library(torch::Library::Kind, std::string, std::optional<c10::DispatchKey>, char const*, unsigned int) [clone .cold]
And vllm's _C.abi3.so is linked against libtorch_cpu.so
> ldd .local/lib/python3.10/site-packages/vllm/_C.abi3.so | grep libtorch_cpu
libtorch_cpu.so => /home/bnellnm/.local/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so (0x00007f56a8e86000)
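If you want to check whether that exact mangled symbol is exported by the libtorch_cpu.so you have installed, something like this should work (a sketch; importing torch first means CDLL just returns a handle to the already-loaded library):

import ctypes
import os

import torch

# Attribute lookup on a CDLL handle does a dlsym on the (mangled) name and
# raises AttributeError if the symbol is not exported.
lib_path = os.path.join(os.path.dirname(torch.__file__), "lib", "libtorch_cpu.so")
lib = ctypes.CDLL(lib_path)
sym = "_ZN5torch7LibraryC1ENS0_4KindESsSt8optionalIN3c1011DispatchKeyEEPKcj"
try:
    getattr(lib, sym)
    print("symbol is defined in", lib_path)
except AttributeError:
    print("symbol is NOT defined in", lib_path)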
@bnellnm Hi, is this symbol defined?
undefined symbol: _ZN5torch7LibraryC1ENS0_4KindESsSt8optionalIN3c1011DispatchKeyEEPKcj
Yes, this symbol is defined in libtorch_cpu.so in pytorch 2.3.0
Hi @bnellnm
My pytorch version is: 2.3.0+cu121
and my cuda version is: CUDA Version: 12.0
I am still getting the issue when running:
from vllm import LLM
Error: RuntimeError: Tried to instantiate class '_core_C.ScalarType', but it does not exist! Ensure that it is registered via torch::class_
@titu1992 It sounds like you've got a problem with your install of vLLM. Check and see if there's a _core_C.abi3.so file that has been added recently. That's where ScalarType is defined.
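Something like this will list the compiled extensions in your vllm install without fully importing it (a sketch):

import glob
import os
from importlib.util import find_spec

# Locate the installed vllm package and list its compiled extension modules.
# Per the comment above, _core_C.abi3.so (where ScalarType is registered)
# should be present alongside _C.abi3.so.
pkg_dir = os.path.dirname(find_spec("vllm").origin)
print(sorted(os.path.basename(p) for p in glob.glob(os.path.join(pkg_dir, "*.abi3.so"))))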
I have the same issue here. After running pip install torch==2.3.0, I was able to import vllm and torch.
If you are on the latest version of vLLM, you'll need PyTorch 2.4.
Thank you for the reply. I am recompiling vLLM from source with PyTorch 2.3.0 because I need to edit vLLM itself.
Compiling vllm_flash_attn and vllm from source solves the problem. Here are my versions:
> pip show vllm vllm_flash_attn
WARNING: Ignoring invalid distribution -riton (/home/ain/miniconda3/envs/hip/lib/python3.10/site-packages)
Name: vllm
Version: 0.5.4+cu125
Summary: A high-throughput and memory-efficient inference and serving engine for LLMs
Home-page: https://github.com/vllm-project/vllm
Author: vLLM Team
Author-email:
License: Apache 2.0
Location: /home/ain/miniconda3/envs/hip/lib/python3.10/site-packages
Editable project location: /home/ain/library/hip-attention/third_party/vllm
Requires: aiohttp, cmake, fastapi, filelock, gguf, lm-format-enforcer, ninja, numpy, nvidia-ml-py, openai, outlines, pillow, prometheus-client, prometheus-fastapi-instrumentator, psutil, py-cpuinfo, pydantic, pyzmq, ray, requests, sentencepiece, tiktoken, tokenizers, torch, torchvision, tqdm, transformers, typing-extensions, uvicorn, vllm-flash-attn, xformers
Required-by:
---
Name: vllm-flash-attn
Version: 2.6.1+cu125
Summary: Forward-only flash-attn
Home-page: https://github.com/vllm-project/flash-attention.git
Author: vLLM Team
Author-email:
License:
Location: /home/ain/library/hip-attention/third_party/flash-attention
Editable project location: /home/ain/library/hip-attention/third_party/flash-attention
Requires: torch
Required-by: vllm
Hey, I'm hitting the same issue. How do I deal with it?
Same problem here.
Same problem when installing vllm on Kaggle
I think I solved it by downgrading to a lower version.
Same problem; I am running pytorch 2.4.0 with vllm 0.6.1.post2.
I encountered the same problem when using pytorch=2.3.0=cuda118 installed with Conda. It seems the pytorch installed with pip, which the official vLLM builds are compiled against, differs from the pytorch installed with Conda.
I built vLLM from source instead, and that solved it.
Same issue here using
+ vllm==0.6.1.post2
+ vllm-flash-attn==2.6.1
Kaggle seems to have bad installs of some dependencies. For me, forcing an upgrade of torchvision to 0.19.1, which is then subsequently downgraded back to 0.19 when installing vllm, did the trick:
!pip install torchvision==0.19.1
!pip install vllm
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!
I have the same problem. Have you solved it?