[Bug]: vLLM CPU mode only uses a single core on a multi-core CPU
Your current environment
The output of `python collect_env.py`
root@075d96cb53c1:/workspace# wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
--2024-12-06 23:49:25-- https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
Connecting to 192.168.0.109:7895... connected.
Proxy request sent, awaiting response... 200 OK
Length: 26218 (26K) [text/plain]
Saving to: 'collect_env.py'
collect_env.py 100%[==========================================================================================>] 25.60K --.-KB/s in 0s
2024-12-06 23:49:26 (116 MB/s) - 'collect_env.py' saved [26218/26218]
root@075d96cb53c1:/workspace# python collect_env.py
bash: python: command not found
root@075d96cb53c1:/workspace# python3 collect_env.py
Collecting environment information...
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 28
On-line CPU(s) list: 0-27
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-14700KF
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 768 KiB (20 instances)
L1i cache: 1 MiB (20 instances)
L2 cache: 28 MiB (11 instances)
L3 cache: 33 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-27
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.5.0
[pip3] numpy==1.26.4
[pip3] pyzmq==26.2.0
[pip3] torch==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[pip3] transformers==4.46.3
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.4.post2.dev254+gdcdc3faf
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect
VLLM_CPU_KVCACHE_SPACE=10
VLLM_CPU_OMP_THREADS_BIND=0,2,4,6,8,10,12,14,16-26
LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/cv2/../../lib64:
Model Input Dumps
No response
🐛 Describe the bug
docker build -f Dockerfile.cpu -t vllm-cpu-env --shm-size=4g .
sudo docker run --env "VLLM_CPU_KVCACHE_SPACE=10" --env "VLLM_CPU_OMP_THREADS_BIND=0,2,4,6,8,10,12,14,16-26" --privileged=true --ipc=host vllm-cpu-env --model "meta-llama/Llama-3.2-1B-Instruct" --max-model-len "4096"
After running it, even with many concurrent requests, vLLM only uses a single core (instead of, e.g., 10 cores).
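(For anyone reproducing this: a quick way to confirm which cores the engine process is actually allowed to use and which its threads are running on. This is just a generic Linux sketch, not from the original report; <worker_pid> is a placeholder for the PID that pgrep returns.)
# Find the vLLM engine/worker process on the host
pgrep -af vllm
# Show the CPU affinity (list of allowed cores) for that process
taskset -cp <worker_pid>
# One row per thread: thread id, core it last ran on, CPU% — run this while requests are in flight
ps -T -o tid,psr,pcpu,comm -p <worker_pid>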
Before submitting a new issue...
- [X] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
I think you can try adding --dtype=bfloat16; it seems some CPUs don't handle fp16 properly.
@Isotr0py Thank you! But that does not seem to work for me :(
My CPU is intel i7-14700KF. Is that unsupported?
p.s. the command (only new thing is the dtype):
... vllm-cpu-env --model "meta-llama/Llama-3.2-1B-Instruct" --max-model-len "4096" --dtype "bfloat16"
Hmmm, that's odd, because I'm using an even older Intel CPU (Xeon Silver 4116), and it can still use all 24 cores... (built from source instead)
I'm not sure if this is related to the Docker build workflow...
cc @zhouyuan and @bigPYJ1151 Do you have any ideas about this issue?
Hmm then that looks really weird...
Random thought: the Xeon seems to support AVX512 (https://www.intel.com/content/www/us/en/products/sku/120481/intel-xeon-silver-4116-processor-16-5m-cache-2-10-ghz/specifications.html), while the 14700KF only supports AVX2, not AVX512 (https://www.intel.com/content/www/us/en/products/sku/236789/intel-core-i7-processor-14700kf-33m-cache-up-to-5-60-ghz/specifications.html). The vLLM docs (https://docs.vllm.ai/en/latest/getting_started/cpu-installation.html) say AVX512 is recommended (though not required). Is that somehow related?
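(Side note, in case others want to compare: a quick way to check which AVX-family flags a CPU actually exposes. This is just a generic /proc/cpuinfo check, not specific to vLLM.)
# List the AVX-family feature flags the CPU exposes; no avx512* entries means AVX2 only
grep -o 'avx[0-9a-z_]*' /proc/cpuinfo | sort -u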
Looks like a thread-binding problem; it is probably due to the OS kernel version or some system services.
The root cause may be difficult to investigate. I would suggest using --cpuset-cpus with docker run if VLLM_CPU_OMP_THREADS_BIND is not working.
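(To make that suggestion concrete, something like the following would pin the whole container to a fixed core set. This is just the docker run command from this issue with --cpuset-cpus added; the 0-13 core list is an example, not a verified fix.)
sudo docker run --cpuset-cpus="0-13" --env "VLLM_CPU_KVCACHE_SPACE=10" --privileged=true --ipc=host vllm-cpu-env --model "meta-llama/Llama-3.2-1B-Instruct" --max-model-len "4096" --dtype "bfloat16"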
@bigPYJ1151 Thanks for the reply! I also tried without any thread binding, i.e. no VLLM_CPU_OMP_THREADS_BIND env var, but that did not work either.
Hmm, please try adding the env var LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4" and removing VLLM_CPU_OMP_THREADS_BIND. Let's check whether intel-omp is the cause.
Ok I will do that experiment. Thank you!
@bigPYJ1151
sudo docker run -v ... -p ... --env "LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4" --privileged=true --ipc=host vllm-cpu-env --model "meta-llama/Llama-3.2-1B-Instruct" --max-model-len "4096" --dtype "bfloat16"
Still no luck; it still only uses roughly a single core.
Btw, when using vLLM on GPU, it also pegs a single core at 100%, like:
497220 root 20 0 41.0g 7.6g 6.4g R 100.0 12.1 3:44.54 /usr/bin/python3 -c from multiprocessing.spawn import spawn_main; spawn_m+
I am facing the same issue. I am also using the NVIDIA Docker image. Ryzen 5600 on a B350 board, Ubuntu 24.02, kernel 6.8.0-50-generic.
When using --cpu-offload-gb I'm getting the same result: 2 threads used out of 48 total. Example run command:
VLLM_CPU_KVCACHE_SPACE=10 VLLM_CPU_OMP_THREADS_BIND=0-31 HF_HUB_OFFLINE=1 REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=2,4 vllm serve kosbu/QVQ-72B-Preview-AWQ --tensor-parallel-size 2 --quantization awq_marlin --max-model-len 12768 --gpu-memory-utilization 0.95 --api-key aaaaa --cpu-offload-gb 10
The GPUs are dual RTX 3090s and the CPU is an AMD EPYC 7642. I have tried the LD export per the manual and tried removing VLLM_CPU_KVCACHE_SPACE=10 VLLM_CPU_OMP_THREADS_BIND=0-31: no difference. Is there anything wrong with the run command, or does vLLM with CPU offload always use a number of threads equal to --tensor-parallel-size?
Same issue here. vLLM only utilized 1 CPU core for each GPU, and each utilized CPU core stayed at 100% usage.
I’m encountering the same issue. How can I resolve it?
My CPU is an E5-2680 v4, and I've compiled and installed a CPU-only build. The startup command is:
OMP_NUM_THREADS=13 VLLM_CPU_OMP_THREADS_BIND=0-12 vllm serve DeepSeek-R1-Distill-Qwen-14B --max-model-len 1024 --dtype bfloat16 --distributed-executor-backend mp
The log shows:
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP threads binding of Process 7722:
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 7722, core 0
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8043, core 1
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8044, core 2
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8045, core 3
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8046, core 4
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8047, core 5
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8048, core 6
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8049, core 7
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8050, core 8
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8051, core 9
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8052, core 10
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8053, core 11
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8054, core 12
INFO 03-04 00:23:36 [cpu_worker.py:215] OMP tid: 8055, core 13
When monitoring CPU usage with top, I see that all 13 cores are utilized during model loading. However, during inference requests only 1 core is at 100% load, while the remaining cores are completely idle.
I re-ran the following commands:
VLLM_CPU_OMP_THREADS_BIND="0-13|14-27" vllm serve DeepSeek-R1-Distill-Qwen-14B --max-model-len 1024 --dtype bfloat16 --served-model-name DeepSeek -tp=2 --distributed-executor-backend mp
With tensor parallelism set to 2, the result is that cores 0 and 14 are fully loaded, while the other cores are almost idle.
VLLM_CPU_OMP_THREADS_BIND="0|1|2|3|4|5|6|7" vllm serve DeepSeek-R1-Distill-Qwen-14B --max-model-len 1024 --dtype bfloat16 --served-model-name DeepSeek -tp 8 --distributed-executor-backend mp
With tensor parallelism set to 8, cores 0-7 are fully loaded, the speed is usable, and inference works normally.
When using tensor parallelism, it can utilize multiple cores, but the downside is that it consumes more memory. How can I use multiple cores for inference without relying on tensor parallelism?
Same issue here; I tried all the fixes proposed in previous posts and am still getting 1 CPU core used per GPU. Within the vLLM container I can see all CPUs, and they are used when loading the model, but not during inference 😕
Can confirm, there is no way to utilize more than one thread per GPU. With TP, it's 1 thread per GPU max.
What if I use pipeline parallelism (pp > 1)? Would it then use more CPU cores than the number of GPUs?
VLLM_CPU_OMP_THREADS_BIND="0|1|2|3|4|5|6|7" vllm serve DeepSeek-R1-Distill-Qwen-14B --max-model-len 1024 --dtype bfloat16 --served-model-name DeepSeek -tp 8 --distributed-executor-backend mp
@LBJ6666 Can you please share how many output tokens/sec you were able to get with tensor parallelism set to 8, and how much memory it consumes for you? I am trying to run a 14B Qwen LLM without a GPU on a c7a.32xlarge (128 vCPUs | 256 GB DDR5 memory) EC2 instance.
@ImmarKarim I changed to a 16-core 32-thread CPU
VLLM_CPU_OMP_THREADS_BIND="0|1|2|3|4|5|6|7" vllm serve DeepSeek-R1-Distill-Qwen-14B --max-model-len 1024 --dtype bfloat16 --served-model-name DeepSeek
The result is that core 0 is fully loaded, using around 34GB of memory, and the inference speed is 0.4 t/s.
VLLM_CPU_OMP_THREADS_BIND="0|1|2|3|4|5|6|7" vllm serve DeepSeek-R1-Distill-Qwen-14B --max-model-len 1024 --dtype bfloat16 --served-model-name DeepSeek -tp 8 --distributed-executor-backend mp
The result is that cores 0 to 7 are fully loaded, using around 64GB of memory, and the inference speed is 1.7 t/s.
Is it because of the Python GIL that a single GPU can only use one CPU core, and are there any solutions to this problem now?
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
I've had the same issue