
[Feature]: Support for the Qwen2.5-VL-72B-Instruct-unsloth-bnb-4bit and Qwen2.5-VL-72B-Instruct-bnb-4bit series models.

Open moshilangzi opened this issue 10 months ago • 1 comment

🚀 The feature, motivation and pitch

https://modelscope.cn/models/unsloth/Qwen2.5-VL-72B-Instruct-unsloth-bnb-4bit https://modelscope.cn/models/unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit

Alternatives

https://modelscope.cn/models/unsloth/Qwen2.5-VL-72B-Instruct-unsloth-bnb-4bit https://modelscope.cn/models/unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit

Additional context

https://modelscope.cn/models/unsloth/Qwen2.5-VL-72B-Instruct-unsloth-bnb-4bit https://modelscope.cn/models/unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit

Before submitting a new issue...

  • [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

moshilangzi avatar Feb 16 '25 04:02 moshilangzi

I am running unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit successfully; note that the dynamic quant one is not yet working, at least not for me.

You might want to build vLLM from source or grab the latest build.
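In case it helps, the two options look roughly like this (a rough sketch, not my exact commands; adjust for your CUDA/Python environment):

```
# Option A: install a recent pre-release wheel instead of building it yourself
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly

# Option B: build from source (needs a CUDA toolchain; compiling takes a while)
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```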

bbss avatar Feb 16 '25 12:02 bbss

> I am running unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit successfully; note that the dynamic quant one is not yet working, at least not for me.
>
> You might want to build vLLM from source or grab the latest build.

Thank you! Could you help me with the command to install vLLM so that I can successfully run the model at https://modelscope.cn/models/unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit?

moshilangzi avatar Feb 17 '25 02:02 moshilangzi

> I am running unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit successfully; note that the dynamic quant one is not yet working, at least not for me.
>
> You might want to build vLLM from source or grab the latest build.

Please help me. This is the command I ran:

```
CUDA_VISIBLE_DEVICES=4,5,6,7 vllm serve unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit \
    --quantization bitsandbytes \
    --dtype half \
    --load-format bitsandbytes \
    --max-model-len 32768 \
    --max_num_batched_tokens 32768 \
    --max_num_seqs 20 \
    --pipeline_parallel_size 4 \
    --swap_space 10 \
    --num_scheduler_steps 6
```

and this is the log it produces (pasted as-is; it fails while loading the weights with a RuntimeError):

```
/home/anaconda3/envs/xinference/lib/python3.11/site-packages/transformers/utils/hub.py:106: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead. warnings.warn( INFO 02-17 14:36:01 init.py:190] Automatically detected platform cuda. INFO 02-17 14:36:02 api_server.py:840] vLLM API server version 0.7.2 INFO 02-17 14:36:02 api_server.py:841] args: Namespace(subparser='serve', model_tag='unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit', config='', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=[''], allowed_methods=[''], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, enable_reasoning=False, reasoning_parser=None, tool_call_parser=None, tool_parser_plugin='', model='unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='bitsandbytes', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='half', kv_cache_dtype='auto', max_model_len=32768, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=4, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=10.0, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=32768, max_num_seqs=20, max_logprobs=20, disable_log_stats=False, quantization='bitsandbytes', rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=6, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, 
scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, dispatch_function=<function serve at 0x7f1cf4dcc860>) WARNING 02-17 14:36:02 config.py:2386] Casting torch.bfloat16 to torch.float16. INFO 02-17 14:36:09 config.py:542] This model supports multiple tasks: {'generate', 'reward', 'embed', 'score', 'classify'}. Defaulting to 'generate'. WARNING 02-17 14:36:10 config.py:621] bitsandbytes quantization is not fully optimized yet. The speed can be slower than non-quantized models. INFO 02-17 14:36:10 config.py:1401] Defaulting to use mp for distributed inference WARNING 02-17 14:36:10 config.py:669] Async output processing can not be enabled with pipeline parallel INFO 02-17 14:36:10 llm_engine.py:234] Initializing a V0 LLM engine (v0.7.2) with config: model='unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit', speculative_config=None, tokenizer='unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.BITSANDBYTES, tensor_parallel_size=1, pipeline_parallel_size=4, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit, num_scheduler_steps=6, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[24,16,8,4,2,1],"max_capture_size":24}, use_cached_outputs=False, WARNING 02-17 14:36:10 multiproc_worker_utils.py:300] Reducing Torch parallelism from 64 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed. INFO 02-17 14:36:10 custom_cache_manager.py:19] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager INFO 02-17 14:36:10 cuda.py:230] Using Flash Attention backend. WARNING 02-17 14:36:10 registry.py:340] mm_limits has already been set for model=unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit, and will be overwritten by the new values. /home/anaconda3/envs/xinference/lib/python3.11/site-packages/transformers/utils/hub.py:106: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead. warnings.warn( /home/anaconda3/envs/xinference/lib/python3.11/site-packages/transformers/utils/hub.py:106: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead. warnings.warn( /home/anaconda3/envs/xinference/lib/python3.11/site-packages/transformers/utils/hub.py:106: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead. 
warnings.warn( INFO 02-17 14:36:14 init.py:190] Automatically detected platform cuda. INFO 02-17 14:36:14 init.py:190] Automatically detected platform cuda. INFO 02-17 14:36:14 init.py:190] Automatically detected platform cuda. (VllmWorkerProcess pid=1532564) INFO 02-17 14:36:15 multiproc_worker_utils.py:229] Worker ready; awaiting tasks (VllmWorkerProcess pid=1532563) INFO 02-17 14:36:15 multiproc_worker_utils.py:229] Worker ready; awaiting tasks (VllmWorkerProcess pid=1532562) INFO 02-17 14:36:15 multiproc_worker_utils.py:229] Worker ready; awaiting tasks (VllmWorkerProcess pid=1532564) INFO 02-17 14:36:16 cuda.py:230] Using Flash Attention backend. (VllmWorkerProcess pid=1532564) WARNING 02-17 14:36:16 registry.py:340] mm_limits has already been set for model=unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit, and will be overwritten by the new values. (VllmWorkerProcess pid=1532563) INFO 02-17 14:36:16 cuda.py:230] Using Flash Attention backend. (VllmWorkerProcess pid=1532563) WARNING 02-17 14:36:16 registry.py:340] mm_limits has already been set for model=unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit, and will be overwritten by the new values. (VllmWorkerProcess pid=1532562) INFO 02-17 14:36:16 cuda.py:230] Using Flash Attention backend. (VllmWorkerProcess pid=1532562) WARNING 02-17 14:36:16 registry.py:340] mm_limits has already been set for model=unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit, and will be overwritten by the new values. [W217 14:36:18.634291549 CUDAAllocatorConfig.h:28] Warning: expandable_segments not supported on this platform (function operator()) [W217 14:36:18.634291387 CUDAAllocatorConfig.h:28] Warning: expandable_segments not supported on this platform (function operator()) [W217 14:36:18.654875990 CUDAAllocatorConfig.h:28] Warning: expandable_segments not supported on this platform (function operator()) [W217 14:36:18.658306610 CUDAAllocatorConfig.h:28] Warning: expandable_segments not supported on this platform (function operator()) INFO 02-17 14:36:18 utils.py:950] Found nccl from library libnccl.so.2 (VllmWorkerProcess pid=1532562) INFO 02-17 14:36:18 utils.py:950] Found nccl from library libnccl.so.2 INFO 02-17 14:36:18 pynccl.py:69] vLLM is using nccl==2.21.5 (VllmWorkerProcess pid=1532564) INFO 02-17 14:36:18 utils.py:950] Found nccl from library libnccl.so.2 (VllmWorkerProcess pid=1532563) INFO 02-17 14:36:18 utils.py:950] Found nccl from library libnccl.so.2 (VllmWorkerProcess pid=1532562) INFO 02-17 14:36:18 pynccl.py:69] vLLM is using nccl==2.21.5 (VllmWorkerProcess pid=1532564) INFO 02-17 14:36:18 pynccl.py:69] vLLM is using nccl==2.21.5 (VllmWorkerProcess pid=1532563) INFO 02-17 14:36:18 pynccl.py:69] vLLM is using nccl==2.21.5 INFO 02-17 14:36:18 model_runner.py:1110] Starting to load model unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit... (VllmWorkerProcess pid=1532563) INFO 02-17 14:36:18 model_runner.py:1110] Starting to load model unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit... (VllmWorkerProcess pid=1532564) INFO 02-17 14:36:18 model_runner.py:1110] Starting to load model unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit... (VllmWorkerProcess pid=1532562) INFO 02-17 14:36:18 model_runner.py:1110] Starting to load model unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit... WARNING 02-17 14:36:18 vision.py:94] Current vllm-flash-attn has a bug inside vision module, so we use xformers backend instead. You can run pip install flash-attn to use flash-attention backend. 
(VllmWorkerProcess pid=1532564) WARNING 02-17 14:36:18 vision.py:94] Current vllm-flash-attn has a bug inside vision module, so we use xformers backend instead. You can run pip install flash-attn to use flash-attention backend. (VllmWorkerProcess pid=1532563) WARNING 02-17 14:36:18 vision.py:94] Current vllm-flash-attn has a bug inside vision module, so we use xformers backend instead. You can run pip install flash-attn to use flash-attention backend. (VllmWorkerProcess pid=1532562) WARNING 02-17 14:36:18 vision.py:94] Current vllm-flash-attn has a bug inside vision module, so we use xformers backend instead. You can run pip install flash-attn to use flash-attention backend. INFO 02-17 14:36:25 config.py:2992] cudagraph sizes specified by model runner [1, 2, 4, 8, 16, 24] is overridden by config [1, 2, 4, 8, 16, 24] (VllmWorkerProcess pid=1532564) INFO 02-17 14:36:25 config.py:2992] cudagraph sizes specified by model runner [1, 2, 4, 8, 16, 24] is overridden by config [1, 2, 4, 8, 16, 24] INFO 02-17 14:36:25 loader.py:1102] Loading weights with BitsAndBytes quantization. May take a while ... Loading safetensors checkpoint shards: 0% Completed | 0/9 [00:00<?, ?it/s] (VllmWorkerProcess pid=1532564) INFO 02-17 14:36:25 loader.py:1102] Loading weights with BitsAndBytes quantization. May take a while ... (VllmWorkerProcess pid=1532562) INFO 02-17 14:36:25 config.py:2992] cudagraph sizes specified by model runner [1, 2, 4, 8, 16, 24] is overridden by config [1, 2, 4, 8, 16, 24] (VllmWorkerProcess pid=1532563) INFO 02-17 14:36:25 config.py:2992] cudagraph sizes specified by model runner [1, 2, 4, 8, 16, 24] is overridden by config [1, 2, 4, 8, 16, 24] (VllmWorkerProcess pid=1532562) INFO 02-17 14:36:25 loader.py:1102] Loading weights with BitsAndBytes quantization. May take a while ... (VllmWorkerProcess pid=1532563) INFO 02-17 14:36:25 loader.py:1102] Loading weights with BitsAndBytes quantization. May take a while ... Loading safetensors checkpoint shards: 11% Completed | 1/9 [00:00<00:04, 1.79it/s] Loading safetensors checkpoint shards: 22% Completed | 2/9 [00:01<00:07, 1.05s/it] Loading safetensors checkpoint shards: 33% Completed | 3/9 [00:03<00:06, 1.13s/it] Loading safetensors checkpoint shards: 44% Completed | 4/9 [00:04<00:06, 1.20s/it] Loading safetensors checkpoint shards: 56% Completed | 5/9 [00:05<00:04, 1.12s/it] Loading safetensors checkpoint shards: 67% Completed | 6/9 [00:06<00:03, 1.00s/it] Loading safetensors checkpoint shards: 78% Completed | 7/9 [00:07<00:02, 1.06s/it] Loading safetensors checkpoint shards: 89% Completed | 8/9 [00:08<00:01, 1.07s/it] Loading safetensors checkpoint shards: 100% Completed | 9/9 [00:09<00:00, 1.00s/it] Loading safetensors checkpoint shards: 100% Completed | 9/9 [00:09<00:00, 1.04s/it]

Loading safetensors checkpoint shards: 0% Completed | 0/9 [00:00<?, ?it/s] Loading safetensors checkpoint shards: 11% Completed | 1/9 [00:00<00:03, 2.00it/s] Loading safetensors checkpoint shards: 22% Completed | 2/9 [00:01<00:07, 1.04s/it] Loading safetensors checkpoint shards: 33% Completed | 3/9 [00:03<00:06, 1.14s/it] Loading safetensors checkpoint shards: 44% Completed | 4/9 [00:04<00:06, 1.22s/it] Loading safetensors checkpoint shards: 56% Completed | 5/9 [00:05<00:04, 1.22s/it] Loading safetensors checkpoint shards: 67% Completed | 6/9 [00:06<00:03, 1.19s/it] (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] Exception in worker VllmWorkerProcess while processing method load_model. (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] Traceback (most recent call last): (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/executor/multiproc_worker_utils.py", line 236, in _run_worker_process (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] output = run_method(worker, method, args, kwargs) (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/utils.py", line 2220, in run_method (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] return func(*args, **kwargs) (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/worker/worker.py", line 183, in load_model (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] self.model_runner.load_model() (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/worker/multi_step_model_runner.py", line 652, in load_model (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] self._base_model_runner.load_model() (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 1112, in load_model (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] self.model = get_model(vllm_config=self.vllm_config) (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/model_loader/init.py", line 14, in get_model (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] return loader.load_model(vllm_config=vllm_config) (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 1225, in 
load_model (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] self._load_weights(model_config, model) (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 1135, in _load_weights (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] loaded_weights = model.load_weights(qweight_iterator) (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1124, in load_weights (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] return loader.load_weights(weights, mapper=self.hf_to_vllm_mapper) (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 235, in load_weights (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] autoloaded_weights = set(self._load_module("", self.module, weights)) (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 196, in _load_module (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] yield from self._load_module(prefix, (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 173, in _load_module (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] loaded_params = module_load_weights(weights) (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_5_vl.py", line 672, in load_weights (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] loaded_weight = loaded_weight.view(3, visual_num_heads, (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/torch/utils/_device.py", line 106, in torch_function (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] return func(*args, **kwargs) (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532563) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] RuntimeError: shape '[3, 16, 80, 1280]' is invalid for input of size 2457600 (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 
multiproc_worker_utils.py:242] Exception in worker VllmWorkerProcess while processing method load_model. (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] Traceback (most recent call last): (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/executor/multiproc_worker_utils.py", line 236, in _run_worker_process (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] output = run_method(worker, method, args, kwargs) (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/utils.py", line 2220, in run_method (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] return func(*args, **kwargs) (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/worker/worker.py", line 183, in load_model (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] self.model_runner.load_model() (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/worker/multi_step_model_runner.py", line 652, in load_model (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] self._base_model_runner.load_model() (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 1112, in load_model (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] self.model = get_model(vllm_config=self.vllm_config) (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/model_loader/init.py", line 14, in get_model (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] return loader.load_model(vllm_config=vllm_config) (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 1225, in load_model (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] self._load_weights(model_config, model) (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 1135, in _load_weights (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] loaded_weights = model.load_weights(qweight_iterator) (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
(VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1124, in load_weights (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] return loader.load_weights(weights, mapper=self.hf_to_vllm_mapper) (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 235, in load_weights (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] autoloaded_weights = set(self._load_module("", self.module, weights)) (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 196, in _load_module (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] yield from self._load_module(prefix, (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 173, in _load_module (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] loaded_params = module_load_weights(weights) (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_5_vl.py", line 672, in load_weights (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] loaded_weight = loaded_weight.view(3, visual_num_heads, (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/torch/utils/_device.py", line 106, in torch_function (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] return func(*args, **kwargs) (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^ (VllmWorkerProcess pid=1532564) ERROR 02-17 14:36:42 multiproc_worker_utils.py:242] RuntimeError: shape '[3, 16, 80, 1280]' is invalid for input of size 2457600 [rank0]: Traceback (most recent call last): [rank0]: File "/home/anaconda3/envs/xinference/bin/vllm", line 8, in [rank0]: sys.exit(main()) [rank0]: ^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/scripts.py", line 204, in main [rank0]: args.dispatch_function(args) [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/scripts.py", line 44, in serve [rank0]: uvloop.run(run_server(args)) [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/uvloop/init.py", line 105, in run [rank0]: return runner.run(wrapper()) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: File 
"/home/anaconda3/envs/xinference/lib/python3.11/asyncio/runners.py", line 118, in run [rank0]: return self._loop.run_until_complete(task) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/uvloop/init.py", line 61, in wrapper [rank0]: return await main [rank0]: ^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 875, in run_server [rank0]: async with build_async_engine_client(args) as engine_client: [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/contextlib.py", line 210, in aenter [rank0]: return await anext(self.gen) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 136, in build_async_engine_client [rank0]: async with build_async_engine_client_from_engine_args( [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/contextlib.py", line 210, in aenter [rank0]: return await anext(self.gen) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 160, in build_async_engine_client_from_engine_args [rank0]: engine_client = AsyncLLMEngine.from_engine_args( [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 644, in from_engine_args [rank0]: engine = cls( [rank0]: ^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 594, in init [rank0]: self.engine = self._engine_class(*args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 267, in init [rank0]: super().init(*args, **kwargs) [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 273, in init [rank0]: self.model_executor = executor_class(vllm_config=vllm_config, ) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 262, in init [rank0]: super().init(*args, **kwargs) [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 51, in init [rank0]: self._init_executor() [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/executor/mp_distributed_executor.py", line 125, in _init_executor [rank0]: self._run_workers("load_model", [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/executor/mp_distributed_executor.py", line 185, in _run_workers [rank0]: driver_worker_output = run_method(self.driver_worker, sent_method, [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/utils.py", line 2220, in run_method [rank0]: return func(*args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/worker/worker.py", line 183, in load_model [rank0]: self.model_runner.load_model() [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/worker/multi_step_model_runner.py", line 652, in 
load_model [rank0]: self._base_model_runner.load_model() [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 1112, in load_model [rank0]: self.model = get_model(vllm_config=self.vllm_config) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/model_loader/init.py", line 14, in get_model [rank0]: return loader.load_model(vllm_config=vllm_config) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 1225, in load_model [rank0]: self._load_weights(model_config, model) [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 1135, in _load_weights [rank0]: loaded_weights = model.load_weights(qweight_iterator) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1124, in load_weights [rank0]: return loader.load_weights(weights, mapper=self.hf_to_vllm_mapper) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 235, in load_weights [rank0]: autoloaded_weights = set(self._load_module("", self.module, weights)) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 196, in _load_module [rank0]: yield from self._load_module(prefix, [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 173, in _load_module [rank0]: loaded_params = module_load_weights(weights) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_5_vl.py", line 672, in load_weights [rank0]: loaded_weight = loaded_weight.view(3, visual_num_heads, [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/home/anaconda3/envs/xinference/lib/python3.11/site-packages/torch/utils/_device.py", line 106, in torch_function [rank0]: return func(*args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: RuntimeError: shape '[3, 16, 80, 1280]' is invalid for input of size 2457600 ERROR 02-17 14:36:43 multiproc_worker_utils.py:124] Worker VllmWorkerProcess pid 1532564 died, exit code: -15 INFO 02-17 14:36:43 multiproc_worker_utils.py:128] Killing local vLLM worker processes Loading safetensors checkpoint shards: 67% Completed | 6/9 [00:09<00:04, 1.56s/it]

[rank0]:[W217 14:36:44.729902043 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
/home/anaconda3/envs/xinference/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 3 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
```

moshilangzi avatar Feb 17 '25 06:02 moshilangzi

You don't need to build the nightly yourself; you can install it using:

```
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```

hmellor avatar Feb 17 '25 13:02 hmellor

@bbss Is the dynamic quant one working now? It still fails for me:

```
from vllm import LLM
INFO 03-20 12:43:57 [init.py:256] Automatically detected platform cuda.
import torch
model_id = "/opt/Qwen2.5-VL-72B-Instruct-unsloth-bnb-4bit"
llm = LLM(model=model_id, dtype=torch.bfloat16, trust_remote_code=True,
... quantization="bitsandbytes", load_format="bitsandbytes") INFO 03-20 12:44:39 [config.py:583] This model supports multiple tasks: {'classify', 'generate', 'embed', 'reward', 'score'}. Defaulting to 'generate'. WARNING 03-20 12:44:40 [config.py:662] bitsandbytes quantization is not fully optimized yet. The speed can be slower than non-quantized models. WARNING 03-20 12:44:40 [arg_utils.py:1765] --quantization bitsandbytes is not supported by the V1 Engine. Falling back to V0. INFO 03-20 12:44:40 [llm_engine.py:241] Initializing a V0 LLM engine (v0.8.1) with config: model='/opt/Qwen2.5-VL-72B-Instruct-unsloth-bnb-4bit', speculative_config=None, tokenizer='/opt/Qwen2.5-VL-72B-Instruct-unsloth-bnb-4bit', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=bitsandbytes, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/opt/Qwen2.5-VL-72B-Instruct-unsloth-bnb-4bit, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=None, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=False, INFO 03-20 12:44:41 [cuda.py:285] Using Flash Attention backend. INFO 03-20 12:44:41 [parallel_state.py:967] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0 INFO 03-20 12:44:41 [model_runner.py:1110] Starting to load model /opt/Qwen2.5-VL-72B-Instruct-unsloth-bnb-4bit... INFO 03-20 12:44:41 [config.py:3222] cudagraph sizes specified by model runner [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256] is overridden by config [256, 128, 2, 1, 4, 136, 8, 144, 16, 152, 24, 160, 32, 168, 40, 176, 48, 184, 56, 192, 64, 200, 72, 208, 80, 216, 88, 120, 224, 96, 232, 104, 240, 112, 248] INFO 03-20 12:44:41 [loader.py:1137] Loading weights with BitsAndBytes quantization. May take a while ... 
Loading safetensors checkpoint shards: 0% Completed | 0/9 [00:00<?, ?it/s] Loading safetensors checkpoint shards: 11% Completed | 1/9 [00:00<00:06, 1.27it/s] Loading safetensors checkpoint shards: 22% Completed | 2/9 [00:01<00:05, 1.26it/s] Loading safetensors checkpoint shards: 33% Completed | 3/9 [00:02<00:04, 1.29it/s] Loading safetensors checkpoint shards: 44% Completed | 4/9 [00:03<00:04, 1.24it/s] Loading safetensors checkpoint shards: 56% Completed | 5/9 [00:04<00:03, 1.20it/s] Loading safetensors checkpoint shards: 67% Completed | 6/9 [00:04<00:02, 1.18it/s] Loading safetensors checkpoint shards: 78% Completed | 7/9 [00:05<00:01, 1.16it/s] Loading safetensors checkpoint shards: 89% Completed | 8/9 [00:06<00:00, 1.15it/s] Loading safetensors checkpoint shards: 100% Completed | 9/9 [00:07<00:00, 1.16it/s] Loading safetensors checkpoint shards: 100% Completed | 9/9 [00:07<00:00, 1.19it/s] Loading safetensors checkpoint shards: 0% Completed | 0/9 [00:00<?, ?it/s] Loading safetensors checkpoint shards: 11% Completed | 1/9 [00:00<00:06, 1.22it/s] [rank0]: Traceback (most recent call last): [rank0]: File "", line 1, in [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/utils.py", line 1031, in inner [rank0]: return fn(*args, **kwargs) [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/llm.py", line 242, in init [rank0]: self.llm_engine = LLMEngine.from_engine_args( [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 520, in from_engine_args [rank0]: return engine_cls.from_vllm_config( [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 496, in from_vllm_config [rank0]: return cls( [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 280, in init [rank0]: self.model_executor = executor_class(vllm_config=vllm_config, ) [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 52, in init [rank0]: self._init_executor() [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/executor/uniproc_executor.py", line 47, in _init_executor [rank0]: self.collective_rpc("load_model") [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc [rank0]: answer = run_method(self.driver_worker, method, args, kwargs) [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/utils.py", line 2216, in run_method [rank0]: return func(*args, **kwargs) [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 183, in load_model [rank0]: self.model_runner.load_model() [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1113, in load_model [rank0]: self.model = get_model(vllm_config=self.vllm_config) [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/init.py", line 14, in get_model [rank0]: return loader.load_model(vllm_config=vllm_config) [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 1260, in load_model [rank0]: self._load_weights(model_config, model) [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 1170, in _load_weights [rank0]: loaded_weights = model.load_weights(qweight_iterator) [rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1098, in load_weights [rank0]: return loader.load_weights(weights, 
mapper=self.hf_to_vllm_mapper)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 235, in load_weights
[rank0]:     autoloaded_weights = set(self._load_module("", self.module, weights))
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 196, in _load_module
[rank0]:     yield from self._load_module(prefix,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 173, in _load_module
[rank0]:     loaded_params = module_load_weights(weights)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 490, in load_weights
[rank0]:     return loader.load_weights(weights)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 235, in load_weights
[rank0]:     autoloaded_weights = set(self._load_module("", self.module, weights))
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 196, in _load_module
[rank0]:     yield from self._load_module(prefix,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 173, in _load_module
[rank0]:     loaded_params = module_load_weights(weights)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 388, in load_weights
[rank0]:     weight_loader(param, loaded_weight, shard_id)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/linear.py", line 688, in weight_loader
[rank0]:     assert param_data.shape == loaded_weight.shape
[rank0]: AssertionError
```

junyan-zg avatar Mar 20 '25 04:03 junyan-zg

> I am running unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit successfully; note that the dynamic quant one is not yet working, at least not for me.
>
> You might want to build vLLM from source or grab the latest build.

When I set `-tp=2`, it gives me:

ValueError: Prequant BitsAndBytes models with tensor parallelism is not supported. Please try with pipeline parallelism.
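For reference, the pipeline-parallel variant that the error message suggests would look roughly like the sketch below (flag values are only examples, not a verified config):

```
vllm serve unsloth/Qwen2.5-VL-72B-Instruct-bnb-4bit \
    --quantization bitsandbytes \
    --load-format bitsandbytes \
    --pipeline-parallel-size 2 \
    --max-model-len 32768
```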

QwertyJack avatar Mar 25 '25 06:03 QwertyJack