
[Bug]: Llama 4 EOFError

Open w013nad opened this issue 1 year ago • 16 comments

Your current environment

The output of `python collect_env.py`
root@b33811284aa6:/home/ndurkee# python3 collect_env.py
INFO 04-06 06:49:09 [__init__.py:239] Automatically detected platform cuda.
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 4.0.0
Libc version: glibc-2.35

Python version: 3.12.9 (main, Feb  5 2025, 08:49:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB

Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   43 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          256
On-line CPU(s) list:             0-255
Vendor ID:                       AuthenticAMD
Model name:                      AMD EPYC 7742 64-Core Processor
CPU family:                      23
Model:                           49
Thread(s) per core:              2
Core(s) per socket:              64
Socket(s):                       2
Stepping:                        0
Frequency boost:                 enabled
CPU max MHz:                     2250.0000
CPU min MHz:                     1500.0000
BogoMIPS:                        4491.45
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization:                  AMD-V
L1d cache:                       4 MiB (128 instances)
L1i cache:                       4 MiB (128 instances)
L2 cache:                        64 MiB (128 instances)
L3 cache:                        512 MiB (32 instances)
NUMA node(s):                    8
NUMA node0 CPU(s):               0-15,128-143
NUMA node1 CPU(s):               16-31,144-159
NUMA node2 CPU(s):               32-47,160-175
NUMA node3 CPU(s):               48-63,176-191
NUMA node4 CPU(s):               64-79,192-207
NUMA node5 CPU(s):               80-95,208-223
NUMA node6 CPU(s):               96-111,224-239
NUMA node7 CPU(s):               112-127,240-255
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] flashinfer-python==0.2.1.post2+cu124torch2.6
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.4.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] transformers==4.51.0
[pip3] triton==3.2.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.8.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    NIC6    NIC7    NIC8    NIC9    CPU Affinity    NUMA Affinity
GPU0     X      NV12    NV12    NV12    PXB     PXB     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     48-63,176-191   3
GPU1    NV12     X      NV12    NV12    SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS     SYS     SYS     16-31,144-159   1
GPU2    NV12    NV12     X      NV12    SYS     SYS     SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS     112-127,240-255 7
GPU3    NV12    NV12    NV12     X      SYS     SYS     SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS     112-127,240-255 7
NIC0    PXB     SYS     SYS     SYS      X      PXB     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS
NIC1    PXB     SYS     SYS     SYS     PXB      X      SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS
NIC2    SYS     PXB     SYS     SYS     SYS     SYS      X      PXB     SYS     SYS     SYS     SYS     SYS     SYS
NIC3    SYS     PXB     SYS     SYS     SYS     SYS     PXB      X      SYS     SYS     SYS     SYS     SYS     SYS
NIC4    SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS      X      PXB     SYS     SYS     SYS     SYS
NIC5    SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS     PXB      X      SYS     SYS     SYS     SYS
NIC6    SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      PXB     SYS     SYS
NIC7    SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     PXB      X      SYS     SYS
NIC8    SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      PIX
NIC9    SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     PIX      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5
  NIC6: mlx5_6
  NIC7: mlx5_7
  NIC8: mlx5_8
  NIC9: mlx5_9

NVIDIA_VISIBLE_DEVICES=1,3,4,5
NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536
NCCL_VERSION=2.20.5-1
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NVIDIA_PRODUCT_NAME=CUDA
VLLM_USAGE_SOURCE=production-docker-image
CUDA_VERSION=12.4.0
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY

🐛 Describe the bug

Unable to run Llama 4 on 4x A100. Using the 0.8.3 official docker image and the Scout-Instruct version from Meta.

Using this command:

sudo docker run --rm --name ndurkee_main --shm-size=10.24gb --gpus '"device=1,3,4,5"' -p 15010:15001 -p 15001:15001 -v /raid/ndurkee:/home/ndurkee arti.bsf.ball.com/docker-group/vllm/vllm-openai:v0.8.3 --model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/ --max-model-len 32000 -tp 4 --gpu-memory-utilization 0.90 --port 15001 --max-log-len 10 --enable-prefix-caching

Note that I tried running this command both with and without fp8 quantization, and both failed. I also checked the safetensors files by themselves, and they all loaded properly.
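For reference, a per-shard check like the one mentioned can be done with the safetensors library. This is only a minimal sketch (the exact check used above isn't shown in the issue; the path mirrors the command above):

# Open each safetensors shard individually to confirm its header and tensors parse.
# Assumes the safetensors package is installed.
import glob
from safetensors import safe_open

for shard in sorted(glob.glob("/home/ndurkee/Llama-4-Scout-17B-16E-Instruct/*.safetensors")):
    with safe_open(shard, framework="pt", device="cpu") as f:
        n = len(f.keys())
    print(f"{shard}: {n} tensors OK")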

[[email protected]@lscoaec-dgx0001 Llama-4-Scout-17B-16E-Instruct]$ sudo docker run --rm --name ndurkee_main --shm-size=10.24gb --gpus '"device=1,3,4,5"' -p 15010:15001 -p 15001:15001 -v /raid/ndurkee:/home/ndurkee arti.bsf.ball.com/docker-group/vllm/vllm-openai:v0.8.3 --model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/ --max-model-len 32000 -tp 4 --gpu-memory-utilization 0.90 --port 15001 --max-log-len 10 --enable-prefix-caching
[sudo] password for [email protected]:
INFO 04-06 06:44:35 [__init__.py:239] Automatically detected platform cuda.
INFO 04-06 06:44:38 [api_server.py:1034] vLLM API server version 0.8.3
INFO 04-06 06:44:38 [api_server.py:1035] args: Namespace(host=None, port=15001, uvicorn_log_level='info', disable_uvicorn_access_log=False, allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/home/ndurkee/Llama-4-Scout-17B-16E-Instruct/', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=32000, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=4, data_parallel_size=1, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=True, prefix_caching_hash_algo='builtin', disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_config=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_cascade_attn=False, disable_log_requests=False, max_log_len=10, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False)
INFO 04-06 06:44:44 [config.py:600] This model supports multiple tasks: {'generate', 'embed', 'reward', 'classify', 'score'}. Defaulting to 'generate'.
INFO 04-06 06:44:44 [config.py:1600] Defaulting to use mp for distributed inference
INFO 04-06 06:44:44 [config.py:1780] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 04-06 06:44:49 [__init__.py:239] Automatically detected platform cuda.
INFO 04-06 06:44:51 [core.py:61] Initializing a V1 LLM engine (v0.8.3) with config: model='/home/ndurkee/Llama-4-Scout-17B-16E-Instruct/', speculative_config=None, tokenizer='/home/ndurkee/Llama-4-Scout-17B-16E-Instruct/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32000, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto,  device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/home/ndurkee/Llama-4-Scout-17B-16E-Instruct/, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
WARNING 04-06 06:44:51 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 04-06 06:44:51 [shm_broadcast.py:264] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_451ec02c'), local_subscribe_addr='ipc:///tmp/0349da12-7899-4119-acf5-3c7bca857b78', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 04-06 06:44:54 [__init__.py:239] Automatically detected platform cuda.
WARNING 04-06 06:44:58 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f5d00dbe480>
(VllmWorker rank=0 pid=221) INFO 04-06 06:44:58 [shm_broadcast.py:264] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_e526fa01'), local_subscribe_addr='ipc:///tmp/4097c250-76b6-4073-83aa-a703633d034c', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 04-06 06:45:01 [__init__.py:239] Automatically detected platform cuda.
WARNING 04-06 06:45:03 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7ff2c4096d20>
(VllmWorker rank=1 pid=238) INFO 04-06 06:45:03 [shm_broadcast.py:264] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_1f7d2e87'), local_subscribe_addr='ipc:///tmp/7d449291-5f2a-415e-9cdb-f805f1454d12', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 04-06 06:45:07 [__init__.py:239] Automatically detected platform cuda.
WARNING 04-06 06:45:09 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f952c09ab10>
(VllmWorker rank=2 pid=257) INFO 04-06 06:45:09 [shm_broadcast.py:264] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_50acc578'), local_subscribe_addr='ipc:///tmp/7dc49ecc-1706-4eba-94d0-389fbf0dc9c2', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 04-06 06:45:12 [__init__.py:239] Automatically detected platform cuda.
WARNING 04-06 06:45:15 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7fab21f9f170>
(VllmWorker rank=3 pid=281) INFO 04-06 06:45:15 [shm_broadcast.py:264] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_bd618765'), local_subscribe_addr='ipc:///tmp/33c1e0cb-1870-449d-b57c-be834d857c47', remote_subscribe_addr=None, remote_addr_ipv6=False)
(VllmWorker rank=1 pid=238) INFO 04-06 06:45:15 [utils.py:990] Found nccl from library libnccl.so.2
(VllmWorker rank=3 pid=281) INFO 04-06 06:45:15 [utils.py:990] Found nccl from library libnccl.so.2
(VllmWorker rank=1 pid=238) INFO 04-06 06:45:15 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorker rank=3 pid=281) INFO 04-06 06:45:15 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorker rank=0 pid=221) INFO 04-06 06:45:15 [utils.py:990] Found nccl from library libnccl.so.2
(VllmWorker rank=0 pid=221) INFO 04-06 06:45:15 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorker rank=2 pid=257) INFO 04-06 06:45:15 [utils.py:990] Found nccl from library libnccl.so.2
(VllmWorker rank=2 pid=257) INFO 04-06 06:45:15 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorker rank=0 pid=221) INFO 04-06 06:45:16 [custom_all_reduce_utils.py:206] generating GPU P2P access cache in /root/.cache/vllm/gpu_p2p_access_cache_for_0,1,2,3.json
(VllmWorker rank=0 pid=221) INFO 04-06 06:45:40 [custom_all_reduce_utils.py:244] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_0,1,2,3.json
(VllmWorker rank=2 pid=257) INFO 04-06 06:45:40 [custom_all_reduce_utils.py:244] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_0,1,2,3.json
(VllmWorker rank=1 pid=238) INFO 04-06 06:45:40 [custom_all_reduce_utils.py:244] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_0,1,2,3.json
(VllmWorker rank=3 pid=281) INFO 04-06 06:45:40 [custom_all_reduce_utils.py:244] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_0,1,2,3.json
(VllmWorker rank=0 pid=221) INFO 04-06 06:45:40 [shm_broadcast.py:264] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_7fad12d8'), local_subscribe_addr='ipc:///tmp/3d4c53c6-20c9-4481-a9e1-bde6fd832508', remote_subscribe_addr=None, remote_addr_ipv6=False)
(VllmWorker rank=2 pid=257) INFO 04-06 06:45:40 [parallel_state.py:957] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
(VllmWorker rank=3 pid=281) INFO 04-06 06:45:40 [parallel_state.py:957] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
(VllmWorker rank=0 pid=221) INFO 04-06 06:45:40 [parallel_state.py:957] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
(VllmWorker rank=1 pid=238) INFO 04-06 06:45:40 [parallel_state.py:957] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
(VllmWorker rank=2 pid=257) INFO 04-06 06:45:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
(VllmWorker rank=3 pid=281) INFO 04-06 06:45:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
(VllmWorker rank=0 pid=221) INFO 04-06 06:45:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
(VllmWorker rank=1 pid=238) INFO 04-06 06:45:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
(VllmWorker rank=0 pid=221) INFO 04-06 06:45:44 [gpu_model_runner.py:1258] Starting to load model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/...
(VllmWorker rank=2 pid=257) INFO 04-06 06:45:44 [gpu_model_runner.py:1258] Starting to load model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/...
(VllmWorker rank=3 pid=281) INFO 04-06 06:45:44 [gpu_model_runner.py:1258] Starting to load model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/...
(VllmWorker rank=0 pid=221) INFO 04-06 06:45:45 [config.py:3334] cudagraph sizes specified by model runner [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 264, 272, 280, 288, 296, 304, 312, 320, 328, 336, 344, 352, 360, 368, 376, 384, 392, 400, 408, 416, 424, 432, 440, 448, 456, 464, 472, 480, 488, 496, 504, 512] is overridden by config [512, 384, 256, 128, 4, 2, 1, 392, 264, 136, 8, 400, 272, 144, 16, 408, 280, 152, 24, 416, 288, 160, 32, 424, 296, 168, 40, 432, 304, 176, 48, 440, 312, 184, 56, 448, 320, 192, 64, 456, 328, 200, 72, 464, 336, 208, 80, 472, 344, 216, 88, 120, 480, 352, 248, 224, 96, 488, 504, 360, 232, 104, 496, 368, 240, 112, 376]
(VllmWorker rank=1 pid=238) INFO 04-06 06:45:45 [gpu_model_runner.py:1258] Starting to load model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/...
(VllmWorker rank=2 pid=257) INFO 04-06 06:45:45 [config.py:3334] cudagraph sizes specified by model runner [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 264, 272, 280, 288, 296, 304, 312, 320, 328, 336, 344, 352, 360, 368, 376, 384, 392, 400, 408, 416, 424, 432, 440, 448, 456, 464, 472, 480, 488, 496, 504, 512] is overridden by config [512, 384, 256, 128, 4, 2, 1, 392, 264, 136, 8, 400, 272, 144, 16, 408, 280, 152, 24, 416, 288, 160, 32, 424, 296, 168, 40, 432, 304, 176, 48, 440, 312, 184, 56, 448, 320, 192, 64, 456, 328, 200, 72, 464, 336, 208, 80, 472, 344, 216, 88, 120, 480, 352, 248, 224, 96, 488, 504, 360, 232, 104, 496, 368, 240, 112, 376]
(VllmWorker rank=3 pid=281) INFO 04-06 06:45:45 [config.py:3334] cudagraph sizes specified by model runner [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 264, 272, 280, 288, 296, 304, 312, 320, 328, 336, 344, 352, 360, 368, 376, 384, 392, 400, 408, 416, 424, 432, 440, 448, 456, 464, 472, 480, 488, 496, 504, 512] is overridden by config [512, 384, 256, 128, 4, 2, 1, 392, 264, 136, 8, 400, 272, 144, 16, 408, 280, 152, 24, 416, 288, 160, 32, 424, 296, 168, 40, 432, 304, 176, 48, 440, 312, 184, 56, 448, 320, 192, 64, 456, 328, 200, 72, 464, 336, 208, 80, 472, 344, 216, 88, 120, 480, 352, 248, 224, 96, 488, 504, 360, 232, 104, 496, 368, 240, 112, 376]
(VllmWorker rank=1 pid=238) INFO 04-06 06:45:45 [config.py:3334] cudagraph sizes specified by model runner [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 264, 272, 280, 288, 296, 304, 312, 320, 328, 336, 344, 352, 360, 368, 376, 384, 392, 400, 408, 416, 424, 432, 440, 448, 456, 464, 472, 480, 488, 496, 504, 512] is overridden by config [512, 384, 256, 128, 4, 2, 1, 392, 264, 136, 8, 400, 272, 144, 16, 408, 280, 152, 24, 416, 288, 160, 32, 424, 296, 168, 40, 432, 304, 176, 48, 440, 312, 184, 56, 448, 320, 192, 64, 456, 328, 200, 72, 464, 336, 208, 80, 472, 344, 216, 88, 120, 480, 352, 248, 224, 96, 488, 504, 360, 232, 104, 496, 368, 240, 112, 376]
(VllmWorker rank=0 pid=221) WARNING 04-06 06:45:45 [config.py:3785] `torch.compile` is turned on, but the model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/ does not support it. Please open an issue on GitHub if you want it to be supported.
(VllmWorker rank=0 pid=221) WARNING 04-06 06:45:45 [config.py:3785] `torch.compile` is turned on, but the model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/ does not support it. Please open an issue on GitHub if you want it to be supported.
(VllmWorker rank=3 pid=281) WARNING 04-06 06:45:45 [config.py:3785] `torch.compile` is turned on, but the model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/ does not support it. Please open an issue on GitHub if you want it to be supported.
(VllmWorker rank=3 pid=281) WARNING 04-06 06:45:45 [config.py:3785] `torch.compile` is turned on, but the model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/ does not support it. Please open an issue on GitHub if you want it to be supported.
(VllmWorker rank=2 pid=257) WARNING 04-06 06:45:45 [config.py:3785] `torch.compile` is turned on, but the model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/ does not support it. Please open an issue on GitHub if you want it to be supported.
(VllmWorker rank=2 pid=257) WARNING 04-06 06:45:45 [config.py:3785] `torch.compile` is turned on, but the model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/ does not support it. Please open an issue on GitHub if you want it to be supported.
(VllmWorker rank=1 pid=238) WARNING 04-06 06:45:45 [config.py:3785] `torch.compile` is turned on, but the model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/ does not support it. Please open an issue on GitHub if you want it to be supported.
(VllmWorker rank=1 pid=238) WARNING 04-06 06:45:45 [config.py:3785] `torch.compile` is turned on, but the model /home/ndurkee/Llama-4-Scout-17B-16E-Instruct/ does not support it. Please open an issue on GitHub if you want it to be supported.
(VllmWorker rank=0 pid=221) Process SpawnProcess-1:1:
CRITICAL 04-06 06:45:45 [multiproc_executor.py:49] MulitprocExecutor got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
CRITICAL 04-06 06:45:45 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1121, in <module>
    uvloop.run(run_server(args))
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1069, in run_server
    async with build_async_engine_client(args) as engine_client:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 146, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 178, in build_async_engine_client_from_engine_args
    async_llm = AsyncLLM.from_vllm_config(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 136, in from_vllm_config
    return cls(
           ^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 102, in __init__
    self.engine_core = EngineCoreClient.make_client(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 69, in make_client
    return AsyncMPClient(vllm_config, executor_class, log_stats)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 570, in __init__
    super().__init__(
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 401, in __init__
    engine.proc_handle.wait_for_startup()
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/utils.py", line 127, in wait_for_startup
    if self.reader.recv()["status"] != "READY":
       ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
          ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/multiprocessing/connection.py", line 430, in _recv_bytes
    buf = self._recv(4)
          ^^^^^^^^^^^^^
  File "/usr/lib/python3.12/multiprocessing/connection.py", line 399, in _recv
    raise EOFError
EOFError

Before submitting a new issue...

  • [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

w013nad avatar Apr 06 '25 13:04 w013nad

Wait, is this just the error when you're OOM? I thought you had another error message for that?

Also, is fp8 still loaded all at once and then quantized?

As a bit of extra information, it seems to max out memory on the first GPU and then crash; the other GPUs were at around 1.6 GB.
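One way to watch per-GPU usage from Python while the server loads is sketched below (assumes the pynvml / nvidia-ml-py package; purely an observation aid, not part of vLLM):

# Print used/total memory for each visible GPU.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {mem.used / 2**30:.1f} / {mem.total / 2**30:.1f} GiB used")
pynvml.nvmlShutdown()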

w013nad avatar Apr 06 '25 14:04 w013nad

+1

bwhartlove avatar Apr 06 '25 15:04 bwhartlove

Is it the 40GB or the 80GB A100? Also, the A100 doesn't support fp8.

Could you confirm you are using meta-llama/Llama-4-Scout-17B-16E-Instruct? Could you download the model locally with huggingface-cli first and try again?

houseroad avatar Apr 06 '25 16:04 houseroad

Is it the 40GB or the 80GB A100? Also, the A100 doesn't support fp8.

Could you confirm you are using meta-llama/Llama-4-Scout-17B-16E-Instruct? Could you download the model locally with huggingface-cli first and try again?

  1. It's the 40GB version. That's not enough memory for bf16, but it should be enough for fp8 if the layers are quantized sequentially as they load. Normally you get an explicit OOM error if the model can't fit on your GPUs.
  2. vLLM should support fp8 via Marlin kernels; I've used Marlin kernels with many other models.
  3. Yes, it is meta-llama/Llama-4-Scout-17B-16E-Instruct.
  4. I used the Hugging Face snapshot_download to download the model and transferred it to the server (a minimal sketch of that step is below).
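Sketch of that download step (assumes huggingface_hub is installed and access to the gated repo has been granted; the local_dir is illustrative):

from huggingface_hub import snapshot_download

# Download all model files (config, tokenizer, safetensors shards) to a local directory,
# which can then be copied over to the serving machine.
snapshot_download(
    repo_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    local_dir="./Llama-4-Scout-17B-16E-Instruct",
)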

w013nad avatar Apr 06 '25 16:04 w013nad

  1. With 40GB A100s, you should need 8 cards to serve bf16 with 16 experts (Llama 4 Scout).
  2. Yes, vLLM supports fp8, but A100 doesn't.
  3. Good
  4. Thanks for confirming that.

houseroad avatar Apr 06 '25 17:04 houseroad

Getting a very similar error with bitsandbytes 4-bit quantization on 2x A6000 GPUs, same model (Llama-4-Scout-17B-16E-Instruct).

from vllm import LLM  # fn below is a local model path
llm = LLM(model=fn, trust_remote_code=True, quantization="bitsandbytes")

etemiz avatar Apr 06 '25 18:04 etemiz

+1

umbe95 avatar Apr 06 '25 20:04 umbe95

I have the same problem on a 2x H100-80GB config with the bnb 8-bit quant model from Unsloth.

andrei-aa avatar Apr 06 '25 20:04 andrei-aa

I believe this is due to insufficient memory. You would most likely need eight 40GB A100 GPUs, as mentioned above.

As for the EOF error message, please see my reply at https://github.com/vllm-project/vllm/issues/16197#issuecomment-2784855811

sarckk avatar Apr 07 '25 23:04 sarckk

I get this error with any model after upgrading to 0.8.3. Try your model with version 0.7.3. There is something wrong with the newest versions.

manitadayon avatar Apr 08 '25 01:04 manitadayon

@manitadayon In vLLM version 0.7.3, the transformers version appears to be incompatible.

yeoV avatar Apr 11 '25 07:04 yeoV

@manitadayon In vLLM version 0.7.3, the transformers version appears to be incompatible.

What is your transformers version, and what is your use case? You can install 4.49.0, which is compatible with vLLM 0.7.3.

manitadayon avatar Apr 11 '25 07:04 manitadayon

Llama4 requires vllm >= 0.8.3, transformers >= 4.51.0
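A quick way to check whether an environment meets these minimums (a sketch using only the standard library; package names are as published on PyPI):

from importlib.metadata import version

print("vllm:", version("vllm"))                  # needs >= 0.8.3 for Llama 4
print("transformers:", version("transformers"))  # needs >= 4.51.0 for Llama 4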

houseroad avatar Apr 11 '25 07:04 houseroad

Llama4 requires vllm >= 0.8.3, transformers >= 4.51.0

True, my answer above was mainly about the compatibility of the transformers version with vLLM 0.7.3.

manitadayon avatar Apr 11 '25 07:04 manitadayon

Oh, I mean the Llama 4 Scout model doesn't seem to run on vLLM 0.7.3 because of the transformers version requirement.

yeoV avatar Apr 11 '25 07:04 yeoV

Hi, I also met this issue when using Qwen 2.5 models (3B). I made sure that the GPU memory for my program is sufficient (H100 96GB). If I switch to version 0.7.3, it works.

SnowCharmQ avatar Apr 12 '25 09:04 SnowCharmQ

I ran Llama 4 with 8x H20 on vLLM 0.8.4 and encountered similar issues. It's not an insufficient-memory problem, as half of the memory is still unused in my case.

OceanF0rever avatar Apr 15 '25 06:04 OceanF0rever

Hello

We are running 2x H100 (192GB of VRAM) on the latest master branch (15.04.2025), with

extraArgs: [
  "--enforce-eager",                # disable CUDA graphs; lowers tokens/s, reduces memory
  "--max_num_seqs=1",               # number of concurrent requests; reduces KV cache size
  "--gpu-memory-utilization=0.99",
  "--max-model-len=1000",           # context length; reduces KV cache size
  "--tensor-parallel-size=2",
  "--kv-cache-dtype=fp8"            # dynamic quantization of the KV cache
 ]
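For reference, these flags map roughly onto vLLM's offline Python API as below (a sketch only; the model name and values are taken from the comment above, not a verified deployment):

from vllm import LLM

# Memory-saving settings mirroring the server flags above.
llm = LLM(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    enforce_eager=True,           # disable CUDA graphs; slower, but less memory
    max_num_seqs=1,               # fewer concurrent sequences -> smaller KV cache
    gpu_memory_utilization=0.99,
    max_model_len=1000,           # short context length to shrink the KV cache
    tensor_parallel_size=2,
    kv_cache_dtype="fp8",         # dynamic KV-cache quantization
)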

Getting

(VllmWorker rank=1 pid=253) INFO 04-16 01:24:44 [custom_all_reduce_utils.py:244] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_0,1.json
(VllmWorker rank=0 pid=236) INFO 04-16 01:24:44 [shm_broadcast.py:264] vLLM message queue communication handle: Handle(local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_271adea1'), local_subscribe_addr='ipc:///tmp/dcdacfc8-b044-45d2-a72e-7a568d7b7a8c', remote_subscribe_addr=None, remote_addr_ipv6=False)
(VllmWorker rank=1 pid=253) INFO 04-16 01:24:44 [parallel_state.py:959] rank 1 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 1
(VllmWorker rank=0 pid=236) INFO 04-16 01:24:44 [parallel_state.py:959] rank 0 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 0
(VllmWorker rank=1 pid=253) INFO 04-16 01:24:44 [cuda.py:221] Using Flash Attention backend on V1 engine.
(VllmWorker rank=0 pid=236) INFO 04-16 01:24:44 [cuda.py:221] Using Flash Attention backend on V1 engine.
(VllmWorker rank=1 pid=253) INFO 04-16 01:24:48 [gpu_model_runner.py:1278] Starting to load model meta-llama/Llama-4-Scout-17B-16E-Instruct...
(VllmWorker rank=1 pid=253) INFO 04-16 01:24:48 [config.py:3521] cudagraph sizes specified by model runner [] is overridden by config []
(VllmWorker rank=1 pid=253) Process SpawnProcess-1:2:
CRITICAL 04-16 01:24:49 [multiproc_executor.py:49] MulitprocExecutor got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
CRITICAL 04-16 01:24:49 [core_client.py:359] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
(VllmWorker rank=1 pid=253) Traceback (most recent call last):
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1129, in <module>
    uvloop.run(run_server(args))
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1077, in run_server
    async with build_async_engine_client(args) as engine_client:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 146, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 178, in build_async_engine_client_from_engine_args
    async_llm = AsyncLLM.from_vllm_config(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 136, in from_vllm_config
    return cls(
           ^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 102, in __init__
    self.engine_core = EngineCoreClient.make_client(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 71, in make_client
    return AsyncMPClient(vllm_config, executor_class, log_stats)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 604, in __init__
    super().__init__(
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 404, in __init__
    self._wait_for_engine_startup()
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 426, in _wait_for_engine_startup
    raise RuntimeError("Engine core initialization failed. "
RuntimeError: Engine core initialization failed. See root cause above.

Any ideas ?

bernardgut avatar Apr 16 '25 08:04 bernardgut

@w013nad, not sure if it was the same issue as yours, but just for your reference: I fixed it by upgrading the fabric manager to match the NVIDIA GPU driver. Details below:

I hit a similar issue while loading Llama-4-Scout-17B-16E-Instruct with vLLM 0.8.3/0.8.4 on an 8x V100 environment, CUDA 12.4:

...
ERROR 04-16 00:59:44 [core.py:390] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.25 GiB. GPU 0 has a total capacity of 79.14 GiB of which 74.75 MiB is free. Process 508352 has 79.06 GiB memory in use. Of the allocated memory 78.50 GiB is allocated by PyTorch, and 78.61 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
ERROR 04-16 00:59:44 [core.py:390]
CRITICAL 04-16 00:59:44 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1121, in <module>
    uvloop.run(run_server(args))
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1069, in run_server
    async with build_async_engine_client(args) as engine_client:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 146, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 178, in build_async_engine_client_from_engine_args
    async_llm = AsyncLLM.from_vllm_config(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 136, in from_vllm_config
    return cls(
           ^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 102, in __init__
    self.engine_core = EngineCoreClient.make_client(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 69, in make_client
    return AsyncMPClient(vllm_config, executor_class, log_stats)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 570, in __init__
    super().__init__(
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 401, in __init__
    engine.proc_handle.wait_for_startup()
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/utils.py", line 127, in wait_for_startup
    if self.reader.recv()["status"] != "READY":
       ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
          ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/multiprocessing/connection.py", line 430, in _recv_bytes
    buf = self._recv(4)
          ^^^^^^^^^^^^^
  File "/usr/lib/python3.12/multiprocessing/connection.py", line 399, in _recv
    raise EOFError
EOFError

And the simple command below could reproduce the issue:

$ python -c "import torch; torch.cuda.init(); print(torch.cuda.device_count())"
 
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/raid/encryagent/miniforge3/envs/vllm-py312/lib/python3.12/site-packages/torch/cuda/__init__.py", line 286, in init
    _lazy_init()
  File "/raid/encryagent/miniforge3/envs/vllm-py312/lib/python3.12/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 802: system not yet initialized

Finally, we found that the nvidia-fabricmanager service was failing (see below), and we had to upgrade the fabric manager to match the NVIDIA GPU driver. After that, the issue was fixed.

$ sudo systemctl status nvidia-fabricmanager
[sudo] password for llm:
 nvidia-fabricmanager.service - NVIDIA fabric manager service
     Loaded: loaded (/lib/systemd/system/nvidia-fabricmanager.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2025-04-15 20:41:11 PDT; 48min ago
    Process: 371578 ExecStart=/usr/bin/nv-fabricmanager -c /usr/share/nvidia/nvswitch/fabricmanager.cfg (code=exited, status=1/FAILURE)

Apr 15 20:41:11 BST1425 systemd[1]: Starting NVIDIA fabric manager service...
Apr 15 20:41:11 BST1425 nv-fabricmanager[371595]: fabric manager NVIDIA GPU driver interface version 525.147.05 don't match with driver version 550.54.14. Please update with matching NVIDIA driver package.
Apr 15 20:41:11 BST1425 nv-fabricmanager[371595]: fabric manager NVIDIA GPU driver interface version 525.147.05 don't match with driver version 550.54.14. Please update with matching NVIDIA driver package.
Apr 15 20:41:11 BST1425 systemd[1]: nvidia-fabricmanager.service: Control process exited, code=exited, status=1/FAILURE
Apr 15 20:41:11 BST1425 systemd[1]: nvidia-fabricmanager.service: Failed with result 'exit-code'.
Apr 15 20:41:11 BST1425 systemd[1]: Failed to start NVIDIA fabric manager service.

davidfeb avatar Apr 20 '25 13:04 davidfeb

If folks encountering this issue can try with the latest main branch, the root cause error should hopefully no longer be hidden.

njhill avatar Apr 22 '25 17:04 njhill

I have hit a similar issue while trying to run the latest vLLM docker image on several H100s, on a fully updated Ubuntu 22.04 machine. nvidia-smi 'seems' to work.

trying to use Qwen2.5 3B

But I have to say it is not the only model inference platform not working... whisper-ws is not working either, although Ollama is.

:~$ python3 -c "import torch; torch.cuda.init(); print(torch.cuda.device_count())"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/luis/.local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 286, in init
    _lazy_init()
  File "/home/luis/.local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 802: system not yet initialized

LuisMalhadas avatar Apr 23 '25 14:04 LuisMalhadas

My H100s have this problem. Downgrading to 0.8.1 solves this.

TimeLovercc avatar Apr 23 '25 14:04 TimeLovercc

Running on H100s (tried 2x and 4x), getting RuntimeError: Engine core initialization failed. Using v0.8.4. Does anyone have anything on this?

ilyabcodin avatar Apr 28 '25 13:04 ilyabcodin

Met this issue when running vLLM 0.8.5 with meta-llama/Llama-4-Scout-17B-16E-Instruct (downloaded from https://www.llama.com/) on 4 H100s.

(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435] WorkerProc failed to start.
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435] Traceback (most recent call last):
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/v1/executor/multiproc_executor.py", line 409, in worker_main
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     worker = WorkerProc(*args, **kwargs)
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/v1/executor/multiproc_executor.py", line 306, in __init__
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     self.worker.load_model()
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 162, in load_model
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     self.model_runner.load_model()
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1332, in load_model
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     self.model = get_model(vllm_config=self.vllm_config)
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/model_executor/model_loader/__init__.py", line 14, in get_model
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     return loader.load_model(vllm_config=vllm_config)
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 452, in load_model
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     model = _initialize_model(vllm_config=vllm_config)
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 133, in _initialize_model
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     return model_class(vllm_config=vllm_config, prefix=prefix)
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 496, in __init__
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     self.model = self._init_model(vllm_config=vllm_config,
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 542, in _init_model
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     return LlamaModel(vllm_config=vllm_config,
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/compilation/decorators.py", line 151, in __init__
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 321, in __init__
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     self.start_layer, self.end_layer, self.layers = make_layers(
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]                                                     ^^^^^^^^^^^^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 609, in make_layers
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     [PPMissingLayer() for _ in range(start_layer)] + [
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]                                                      ^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 610, in <listcomp>
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 323, in <lambda>
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     lambda prefix: layer_type(config=config,
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]                    ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 239, in __init__
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     self.self_attn = LlamaAttention(
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]                      ^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]   File "/net/storage149/mnt/md0/zhuoran/miniconda3/envs/fmwork-v22-cu126-vllm084/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 135, in __init__
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]     self.q_size = self.num_heads * self.head_dim
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435]                   ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
(VllmWorker rank=0 pid=60515) ERROR 04-29 19:43:23 [multiproc_executor.py:435] TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
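For context, the failing line multiplies num_heads by head_dim; this TypeError is what you get if head_dim resolves to None (for example, if it is absent from the checkpoint's config). A hypothetical, isolated reproduction, not the actual vLLM code path:

# Hypothetical illustration of the TypeError above; values are made up.
num_heads = 40
head_dim = None                # e.g. head_dim missing from the model config
q_size = num_heads * head_dim  # TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'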

WarningRan avatar Apr 29 '25 19:04 WarningRan

@WarningRan this looks like a different problem? Could you open a separate issue for it? Looks like it might be model config related.

njhill avatar Apr 29 '25 20:04 njhill

Opened a new issue #17412

WarningRan avatar Apr 29 '25 20:04 WarningRan