[Installation]: Core dumped after updating vllm 0.6.2 to 0.6.3
Your current environment
The output of `python collect_env.py`
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800 80GB PCIe
GPU 1: NVIDIA A800 80GB PCIe
GPU 2: NVIDIA A800 80GB PCIe
GPU 3: NVIDIA A800 80GB PCIe
GPU 4: NVIDIA A800 80GB PCIe
GPU 5: NVIDIA A800 80GB PCIe
GPU 6: NVIDIA A800 80GB PCIe
GPU 7: NVIDIA A800 80GB PCIe
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60GHz
Stepping: 6
CPU MHz: 877.468
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-ml-py==12.560.30
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.45.0
[pip3] triton==3.0.0
[conda] blas 1.0 mkl defaults
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344 defaults
[conda] mkl-service 2.4.0 py310h5eee18b_1 defaults
[conda] mkl_fft 1.3.10 py310h5eee18b_0 defaults
[conda] mkl_random 1.2.7 py310h1128e8f_0 defaults
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-ml-py 12.560.30 pypi_0 pypi
[conda] pytorch 2.4.0 py3.10_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pyzmq 26.2.0 pypi_0 pypi
[conda] torchaudio 2.4.0 py310_cu124 pytorch
[conda] torchtriton 3.0.0 py310 pytorch
[conda] torchvision 0.19.0 py310_cu124 pytorch
[conda] transformers 4.45.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: N/A
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 CPU Affinity NUMA Affinity
GPU0 X PIX NV8 PIX SYS SYS SYS SYS NODE PIX 0-15,64-79 0
GPU1 PIX X PIX NV8 SYS SYS SYS SYS NODE PIX 0-15,64-79 0
GPU2 NV8 PIX X PIX SYS SYS SYS SYS NODE PIX 0-15,64-79 0
GPU3 PIX NV8 PIX X SYS SYS SYS SYS NODE PIX 0-15,64-79 0
GPU4 SYS SYS SYS SYS X PIX PIX NV6 SYS SYS 32-47,96-111 2
GPU5 SYS SYS SYS SYS PIX X NV4 PIX SYS SYS 32-47,96-111 2
GPU6 SYS SYS SYS SYS PIX NV4 X PIX SYS SYS 32-47,96-111 2
GPU7 SYS SYS SYS SYS NV6 PIX PIX X SYS SYS 32-47,96-111 2
NIC0 NODE NODE NODE NODE SYS SYS SYS SYS X NODE
NIC1 PIX PIX PIX PIX SYS SYS SYS SYS NODE X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
Model Input Dumps
No response
🐛 Describe the bug
I want to update vLLM from 0.6.2 to 0.6.3. Following https://docs.vllm.ai/en/latest/getting_started/installation.html, I ran python python_only_dev.py --quit-dev but got an error. So I ran pip uninstall vllm, then pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl, cloned the latest code, and ran python python_only_dev.py. After that, I tried to run the vLLM demo code but got "Aborted (core dumped)". I rebuilt my conda env and the "core dumped" error disappeared, but then I got an NCCL error:
INFO 10-18 14:42:48 utils.py:1009] Found nccl from library libnccl.so.2
ERROR 10-18 14:42:48 pynccl_wrapper.py:196] Failed to load NCCL library from libnccl.so.2 .It is expected if you are not running on NVIDIA/AMD GPUs.Otherwise, the nccl library might not exist, be corrupted or it does not support the current platform Linux-5.15.0-50-generic-x86_64-with-glibc2.31.If you already have the library, please set the environment variable VLLM_NCCL_SO_PATH to point to the correct nccl library path.
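For context, a quick way to check whether a standalone libnccl.so.2 exists at all on this machine (the site-packages layout below assumes the nvidia-nccl-cu12 wheel; adjust if your install differs):
python -c "import site, glob, os; print([p for d in site.getsitepackages() for p in glob.glob(os.path.join(d, 'nvidia', 'nccl', 'lib', 'libnccl.so*'))])"
ldconfig -p | grep libnccl   # any system-wide copies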
Then I checked my setup by running CUDA_VISIBLE_DEVICES=6,7 NCCL_DEBUG=TRACE torchrun --nproc-per-node=2 test_vllm_env.py; it seems that NCCL is not found by vLLM. I had run the same check before updating vLLM from 0.6.2 to 0.6.3, and everything was fine then.
W1018 14:44:39.010000 23456244184256 torch/distributed/run.py:779]
W1018 14:44:39.010000 23456244184256 torch/distributed/run.py:779] *****************************************
W1018 14:44:39.010000 23456244184256 torch/distributed/run.py:779] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W1018 14:44:39.010000 23456244184256 torch/distributed/run.py:779] *****************************************
node05:265363:265363 [0] NCCL INFO Bootstrap : Using ibs110:192.168.99.105<0>
node05:265363:265363 [0] NCCL INFO NET/Plugin : dlerror=libnccl-net.so: cannot open shared object file: No such file or directory No plugin found (libnccl-net.so), using internal implementation
node05:265363:265363 [0] NCCL INFO cudaDriverVersion 12010
NCCL version 2.20.5+cuda12.4
node05:265364:265364 [1] NCCL INFO cudaDriverVersion 12010
node05:265364:265364 [1] NCCL INFO Bootstrap : Using ibs110:192.168.99.105<0>
node05:265364:265364 [1] NCCL INFO NET/Plugin : dlerror=libnccl-net.so: cannot open shared object file: No such file or directory No plugin found (libnccl-net.so), using internal implementation
node05:265363:265402 [0] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB [1]mlx5_1:1/IB [RO]; OOB ibs110:192.168.99.105<0>
node05:265363:265402 [0] NCCL INFO Using non-device net plugin version 0
node05:265363:265402 [0] NCCL INFO Using network IB
node05:265364:265404 [1] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB [1]mlx5_1:1/IB [RO]; OOB ibs110:192.168.99.105<0>
node05:265364:265404 [1] NCCL INFO Using non-device net plugin version 0
node05:265364:265404 [1] NCCL INFO Using network IB
node05:265363:265402 [0] NCCL INFO comm 0x6460c980 rank 0 nranks 2 cudaDev 0 nvmlDev 6 busId 9c000 commId 0xeb63a64af00d1fa4 - Init START
node05:265364:265404 [1] NCCL INFO comm 0x6482bb70 rank 1 nranks 2 cudaDev 1 nvmlDev 7 busId 9e000 commId 0xeb63a64af00d1fa4 - Init START
node05:265364:265404 [1] NCCL INFO Setting affinity for GPU 7 to ffff,00000000,0000ffff,00000000
node05:265363:265402 [0] NCCL INFO Setting affinity for GPU 6 to ffff,00000000,0000ffff,00000000
node05:265364:265404 [1] NCCL INFO comm 0x6482bb70 rank 1 nRanks 2 nNodes 1 localRanks 2 localRank 1 MNNVL 0
node05:265363:265402 [0] NCCL INFO comm 0x6460c980 rank 0 nRanks 2 nNodes 1 localRanks 2 localRank 0 MNNVL 0
node05:265364:265404 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [2] -1/-1/-1->1->0 [3] 0/-1/-1->1->-1
node05:265364:265404 [1] NCCL INFO P2P Chunksize set to 131072
node05:265363:265402 [0] NCCL INFO Channel 00/04 : 0 1
node05:265363:265402 [0] NCCL INFO Channel 01/04 : 0 1
node05:265363:265402 [0] NCCL INFO Channel 02/04 : 0 1
node05:265363:265402 [0] NCCL INFO Channel 03/04 : 0 1
node05:265363:265402 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [2] 1/-1/-1->0->-1 [3] -1/-1/-1->0->1
node05:265363:265402 [0] NCCL INFO P2P Chunksize set to 131072
node05:265363:265402 [0] NCCL INFO Channel 00/0 : 0[6] -> 1[7] via P2P/CUMEM
node05:265363:265402 [0] NCCL INFO Channel 01/0 : 0[6] -> 1[7] via P2P/CUMEM
node05:265363:265402 [0] NCCL INFO Channel 02/0 : 0[6] -> 1[7] via P2P/CUMEM
node05:265364:265404 [1] NCCL INFO Channel 00/0 : 1[7] -> 0[6] via P2P/CUMEM
node05:265363:265402 [0] NCCL INFO Channel 03/0 : 0[6] -> 1[7] via P2P/CUMEM
node05:265364:265404 [1] NCCL INFO Channel 01/0 : 1[7] -> 0[6] via P2P/CUMEM
node05:265364:265404 [1] NCCL INFO Channel 02/0 : 1[7] -> 0[6] via P2P/CUMEM
node05:265364:265404 [1] NCCL INFO Channel 03/0 : 1[7] -> 0[6] via P2P/CUMEM
node05:265363:265402 [0] NCCL INFO Connected all rings
node05:265363:265402 [0] NCCL INFO Connected all trees
node05:265364:265404 [1] NCCL INFO Connected all rings
node05:265364:265404 [1] NCCL INFO Connected all trees
node05:265364:265404 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
node05:265364:265404 [1] NCCL INFO 4 coll channels, 0 collnet channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
node05:265363:265402 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
node05:265363:265402 [0] NCCL INFO 4 coll channels, 0 collnet channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
node05:265364:265404 [1] NCCL INFO comm 0x6482bb70 rank 1 nranks 2 cudaDev 1 nvmlDev 7 busId 9e000 commId 0xeb63a64af00d1fa4 - Init COMPLETE
node05:265363:265402 [0] NCCL INFO comm 0x6460c980 rank 0 nranks 2 cudaDev 0 nvmlDev 6 busId 9c000 commId 0xeb63a64af00d1fa4 - Init COMPLETE
PyTorch NCCL is successful!PyTorch NCCL is successful!
PyTorch GLOO is successful!PyTorch GLOO is successful!
INFO 10-18 14:44:45 utils.py:1009] Found nccl from library libnccl.so.2
ERROR 10-18 14:44:45 pynccl_wrapper.py:196] Failed to load NCCL library from libnccl.so.2 .It is expected if you are not running on NVIDIA/AMD GPUs.Otherwise, the nccl library might not exist, be corrupted or it does not support the current platform Linux-5.15.0-50-generic-x86_64-with-glibc2.31.If you already have the library, please set the environment variable VLLM_NCCL_SO_PATH to point to the correct nccl library path.
INFO 10-18 14:44:45 utils.py:1009] Found nccl from library libnccl.so.2
ERROR 10-18 14:44:45 pynccl_wrapper.py:196] Failed to load NCCL library from libnccl.so.2 .It is expected if you are not running on NVIDIA/AMD GPUs.Otherwise, the nccl library might not exist, be corrupted or it does not support the current platform Linux-5.15.0-50-generic-x86_64-with-glibc2.31.If you already have the library, please set the environment variable VLLM_NCCL_SO_PATH to point to the correct nccl library path.
[rank0]: Traceback (most recent call last):
[rank0]: File "/localnvme/application/sc_new/myy_world_consistency/test_vllm_env.py", line 34, in <module>
[rank0]: pynccl.all_reduce(data, stream=s)
[rank0]: File "/localnvme/application/sc_new/miniconda3/envs/cwc2/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl.py", line 113, in all_reduce
[rank0]: assert tensor.device == self.device, (
[rank0]: AttributeError: 'PyNcclCommunicator' object has no attribute 'device'
[rank1]: Traceback (most recent call last):
[rank1]: File "/localnvme/application/sc_new/myy_world_consistency/test_vllm_env.py", line 34, in <module>
[rank1]: pynccl.all_reduce(data, stream=s)
[rank1]: File "/localnvme/application/sc_new/miniconda3/envs/cwc2/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl.py", line 113, in all_reduce
[rank1]: assert tensor.device == self.device, (
[rank1]: AttributeError: 'PyNcclCommunicator' object has no attribute 'device'
W1018 14:44:47.131000 23456244184256 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 265364 closing signal SIGTERM
E1018 14:44:47.245000 23456244184256 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 265363) of binary: /localnvme/application/sc_new/miniconda3/envs/cwc2/bin/python
Traceback (most recent call last):
File "/localnvme/application/sc_new/miniconda3/envs/cwc2/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==2.4.0', 'console_scripts', 'torchrun')())
File "/localnvme/application/sc_new/miniconda3/envs/cwc2/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
return f(*args, **kwargs)
File "/localnvme/application/sc_new/miniconda3/envs/cwc2/lib/python3.10/site-packages/torch/distributed/run.py", line 901, in main
run(args)
File "/localnvme/application/sc_new/miniconda3/envs/cwc2/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in run
elastic_launch(
File "/localnvme/application/sc_new/miniconda3/envs/cwc2/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/localnvme/application/sc_new/miniconda3/envs/cwc2/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
test_vllm_env.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-10-18_14:44:47
host : node05
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 265363)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
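Given the mix of python_only_dev.py and wheel installs, one quick sanity check is to confirm which vLLM tree the test script actually imports:
python -c "import vllm; print(vllm.__version__, vllm.__file__)"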
Before submitting a new issue...
- [X] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
Personally, I resolved this via a full build (i.e. python -m pip install -e .). I think some of the kernels/ops changed in this update, so they have to be rebuilt. @dtrifiro is this intended behavior?
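For reference, a minimal sketch of that full rebuild, assuming a local clone of the vllm repository (the directory name and the uninstall step are just my setup):
pip uninstall -y vllm
cd vllm                      # local clone of https://github.com/vllm-project/vllm
python -m pip install -e .   # rebuilds the compiled kernels against the current torch/CUDA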
Is my base environment (including the driver, NCCL, etc.) corrupted? My LLM training jobs, which are unrelated to vLLM, are now showing new warnings. @DarkLight1337
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0), only 1.0.0 is known to be compatible
Are you using the same conda environment for multiple applications? If yes, that may indeed be the case.
@DarkLight1337 I'm not sure about the specific error, but it looks like something is wrong with the nvidia-nccl-cu12 dependency.
@takagi97 could you reinstall vllm in a fresh venv (or try force-reinstalling with pip install --force-reinstall nvidia-nccl-cu12)?
I ran pip install --force-reinstall nvidia-nccl-cu12 but still got a core dump. After reinstalling vllm in a new conda env, I got an NCCL error:
INFO 10-18 14:42:48 utils.py:1009] Found nccl from library libnccl.so.2
ERROR 10-18 14:42:48 pynccl_wrapper.py:196] Failed to load NCCL library from libnccl.so.2 .It is expected if you are not running on NVIDIA/AMD GPUs.Otherwise, the nccl library might not exist, be corrupted or it does not support the current platform Linux-5.15.0-50-generic-x86_64-with-glibc2.31.If you already have the library, please set the environment variable VLLM_NCCL_SO_PATH to point to the correct nccl library path.
Running python -c "import torch; print(torch.cuda.nccl.version())" returns (2, 20, 5), so it seems NCCL itself works?
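Note that torch.cuda.nccl.version() only reports the NCCL version torch was built against; it does not by itself prove that a standalone libnccl.so.2 can be loaded the way vLLM's pynccl wrapper loads it. A rough approximation of that load (not the exact lookup logic):
python -c "import ctypes; ctypes.CDLL('libnccl.so.2'); print('libnccl.so.2 loaded OK')"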
I have tested a similar conda env on a new server. My conclusion is that the following warnings come from the conda env itself:
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0), only 1.0.0 is known to be compatible
However, I don't know which package is responsible. The other error seems to be caused by changes in the new vLLM release, i.e., 0.6.3:
AttributeError: 'PyNcclCommunicator' object has no attribute 'device'
My base environment, including NCCL, is functioning normally.
I don't think simply reinstalling nvidia-nccl-cu12 will solve the issue. Based on the code in the vLLM repository https://github.com/vllm-project/vllm/blob/main/vllm/utils.py#L988-L1010 and https://github.com/vllm-project/vllm/blob/main/vllm/distributed/device_communicators/pynccl_wrapper.py#L186-L205 , it looks like vLLM prioritizes torch's built-in nccl.
It appears that the NCCL in your torch installation may be corrupted. Either of the following may help:
- Reinstall torch, or completely reinstall all vLLM dependencies with pip install --force-reinstall -r requirements-cuda.txt.
- If you have already reinstalled nvidia-nccl-cu12, you can manually locate the libnccl.so file and then set the env VLLM_NCCL_SO_PATH to that path (see the sketch below).
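A sketch of that second option; the search location is an assumption about where pip put the nvidia-nccl-cu12 wheel, and the exported path must be replaced with whatever the search prints:
find "$(python -c 'import site; print(site.getsitepackages()[0])')" -name 'libnccl.so*'
export VLLM_NCCL_SO_PATH=/path/to/libnccl.so.2   # substitute the path printed above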
@cermeng nvidia-nccl-cu12 is a torch dependency. torch itself does not bundle libnccl.so.2.
@takagi97
Could you try doing the following (not using conda for now, just the system python)?
python -m venv tmp_venv
source tmp_venv/bin/activate
pip install vllm==0.6.3.post1
and then either run the code you were trying earlier, or run vllm serve facebook/opt-125m and see whether the server starts up and you can query localhost:8000/v1/completions.
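Once vllm serve facebook/opt-125m is up, a completion request along these lines should work (prompt and max_tokens are arbitrary):
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "facebook/opt-125m", "prompt": "San Francisco is a", "max_tokens": 16}'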
The [WARNING] messages I posted earlier turned out to come from DeepSpeed 0.14.4; after updating to 0.15.2, they disappeared. Also, thanks for the suggestions above! I rebuilt my conda env, and now it works properly.
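For anyone hitting the same DeepSpeed warnings, the upgrade that cleared them here was simply:
pip install deepspeed==0.15.2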