
[Bug]: Error while running inference with LLava 1.6 in v0.5.1

Open sindhuvahinis opened this issue 7 months ago • 9 comments

Your current environment

Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1049-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10G
GPU 1: NVIDIA A10G
GPU 2: NVIDIA A10G
GPU 3: NVIDIA A10G

Nvidia driver version: 535.104.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      48 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             96
On-line CPU(s) list:                0-95
Vendor ID:                          AuthenticAMD
Model name:                         AMD EPYC 7R32
CPU family:                         23
Model:                              49
Thread(s) per core:                 2
Core(s) per socket:                 48
Socket(s):                          1
Stepping:                           0
BogoMIPS:                           5599.99
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Hypervisor vendor:                  KVM
Virtualization type:                full
L1d cache:                          1.5 MiB (48 instances)
L1i cache:                          1.5 MiB (48 instances)
L2 cache:                           24 MiB (48 instances)
L3 cache:                           192 MiB (12 instances)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] onnx==1.16.1
[pip3] onnxruntime-gpu==1.18.0
[pip3] sentence-transformers==3.0.1
[pip3] torch==2.3.0+cu121
[pip3] torchvision==0.18.0+cu121
[pip3] transformers==4.42.3
[pip3] triton==2.3.0
[pip3] vllm_nccl_cu12==2.18.1.0.4.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0	GPU1	GPU2	GPU3	CPU Affinity	NUMA Affinity	GPU NUMA ID
GPU0	 X 	PHB	PHB	PHB	0-95		N/A		N/A
GPU1	PHB	 X 	PHB	PHB	0-95		N/A		N/A
GPU2	PHB	PHB	 X 	PHB	0-95		N/A		N/A
GPU3	PHB	PHB	PHB	 X 	0-95		N/A		N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

My LLM configuration: I have set enforce_eager=True and enable_prefix_caching=False.

Initializing an LLM engine (v0.5.1) with config: model='llava-hf/llava-v1.6-34b-hf', speculative_config=None, tokenizer='llava-hf/llava-v1.6-34b-hf', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=llava-hf/llava-v1.6-34b-hf, use_v2_block_manager=False, enable_prefix_caching=False)

The code we used is something like this:

from io import BytesIO

import requests
from PIL import Image


def fetch_image_from_url(image_url: str) -> Image.Image:
    with requests.get(url=image_url) as response:
        response.raise_for_status()
        image_raw = response.content
    # Open the image with Pillow, but do not decode it into memory yet
    # (image.load()), since frameworks like vLLM do that anyway.
    image = Image.open(BytesIO(image_raw))
    return image
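If you prefer to fail fast on a bad download instead of deferring the decode to vLLM's image processor, a variant could force the decode up front (this helper is hypothetical and not part of our code):

def fetch_image_eager(image_url: str) -> Image.Image:
    # Hypothetical variant: decode the image immediately so malformed
    # downloads raise here rather than later inside the engine.
    with requests.get(url=image_url) as response:
        response.raise_for_status()
        image = Image.open(BytesIO(response.content))
        image.load()  # force Pillow to decode now instead of lazily
    return image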

args = EngineArgs(
    # Same settings as in the engine config log above; None-valued defaults omitted.
    model='llava-hf/llava-v1.6-34b-hf',
    tokenizer='llava-hf/llava-v1.6-34b-hf',
    skip_tokenizer_init=False,
    tokenizer_mode='auto',
    trust_remote_code=False,
    dtype='bfloat16',
    max_model_len=4096,
    load_format='auto',
    tensor_parallel_size=4,
    pipeline_parallel_size=1,
    disable_custom_all_reduce=False,
    enforce_eager=True,
    kv_cache_dtype='auto',
    guided_decoding_backend='outlines',
    seed=0,
    served_model_name='llava-hf/llava-v1.6-34b-hf',
    use_v2_block_manager=False,
    enable_prefix_caching=False,
)
engine = LLMEngine.from_engine_args(args)
sampling_params = SamplingParams(max_tokens=100)
prompt_inputs = {'prompt': "<|im_start|>user\n<image>\nWhat's in this image?<|im_end|>\n",
                 'multi_modal_data': {'image': fetch_image_from_url('https://h2o-release.s3.amazonaws.com/h2ogpt/bigben.jpg')}}
engine.add_request(request_id=request_id,
                   inputs=prompt_inputs,
                   params=sampling_params,
                   **request_params)

request_outputs = engine.step()  # we keep calling this until the request is finished
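For completeness, the surrounding loop looks roughly like this (a minimal sketch of how we drive the engine; error handling and our own request bookkeeping are omitted):

# Drive the engine until every queued request (here, just ours) is finished.
final_output = None
while engine.has_unfinished_requests():
    for request_output in engine.step():
        if request_output.finished:
            final_output = request_output

if final_output is not None:
    # The generated completion for the request added above.
    print(final_output.outputs[0].text)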

My PromptInputs dict is:

{'prompt': "<|im_start|>user\n<image>\nWhat's in this image?<|im_end|>\n", 'multi_modal_data': {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=422x629 at 0x7F7AC038F9A0>}}

My input image is https://h2o-release.s3.amazonaws.com/h2ogpt/bigben.jpg. I get the same error described in https://github.com/vllm-project/vllm/issues/6176.

The error we got:

ValueError: Attempted to assign 2160 = 2160 image tokens to 0 placeholders
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method start_worker_execution_loop: Attempted to assign 2160 = 2160 image tokens to 0 placeholders, Traceback (most recent call last):
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]     output = executor(*args, **kwargs)
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 64, in start_worker_execution_loop
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]     output = self.execute_model(execute_model_req=None)
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 271, in execute_model
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]     output = self.model_runner.execute_model(
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1243, in execute_model
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]     hidden_or_intermediate_states = model_executable(
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/llava_next.py", line 494, in forward
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]     inputs_embeds = merge_vision_embeddings(
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 35, in merge_vision_embeddings
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]     raise ValueError(
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226] ValueError: Attempted to assign 2160 = 2160 image tokens to 0 placeholders
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5550) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226]
INFO  PyProcess W-5532-test-stdout: (VllmWorkerProcess pid=5548) ERROR 07-08 21:44:17 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method start_worker_execution_loop: Attempted to assign 2160 = 2160 image tokens to 0 placeholders, Traceback (most recent call last):
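From the traceback, the failure comes from the placeholder check in vllm/model_executor/models/utils.py (merge_vision_embeddings). Roughly paraphrased (a sketch of the check, not the exact vLLM source), it counts how many image-placeholder token positions appear in the prompt's input_ids and requires that count to match the number of vision tokens produced for the image; here 2160 vision tokens were produced but 0 placeholder positions were found:

import torch

def check_image_placeholders(input_ids: torch.Tensor,
                             vision_embeddings: torch.Tensor,
                             image_token_id: int) -> None:
    # Positions in the tokenized prompt where image features should be spliced in.
    mask = input_ids == image_token_id
    num_placeholders = int(mask.sum().item())
    # Total number of vision tokens produced by the image tower (2160 in our case).
    num_image_tokens = vision_embeddings.numel() // vision_embeddings.shape[-1]
    if num_placeholders != num_image_tokens:
        raise ValueError(
            f"Attempted to assign {num_image_tokens} image tokens "
            f"to {num_placeholders} placeholders")

This is consistent with the error text: image features were computed for the request, but no placeholder token positions were found in the tokenized prompt.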
