
[Bug]: VPTQ quantized model inference error

Status: Open. ZanePoe opened this issue 9 months ago · 8 comments

### Your current environment

The output of `python env.py`:

```text
Collecting environment information... PyTorch version: 2.4.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.35

Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.6.85 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti GPU 1: NVIDIA GeForce RTX 2080 Ti GPU 2: NVIDIA GeForce RTX 2080 Ti GPU 3: NVIDIA GeForce RTX 2080 Ti

Nvidia driver version: 565.57.01 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.1 /usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.1 /usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.1 /usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.1 /usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.1 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn.so.8.9.7 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True

CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 56 On-line CPU(s) list: 0-55 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz CPU family: 6 Model: 79 Thread(s) per core: 2 Core(s) per socket: 14 Socket(s): 2 Stepping: 1 CPU max MHz: 3300.0000 CPU min MHz: 1200.0000 BogoMIPS: 4788.78 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi md_clear flush_l1d Virtualization: VT-x L1d cache: 896 KiB (28 instances) L1i cache: 896 KiB (28 instances) L2 cache: 7 MiB (28 instances) L3 cache: 70 MiB (2 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-13,28-41 NUMA node1 CPU(s): 14-27,42-55 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable

Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] nvidia-nccl-cu12==2.20.5 [pip3] pyzmq==26.2.1 [pip3] torch==2.4.0 [pip3] torchvision==0.19.0 [pip3] transformers==4.45.2 [pip3] triton==3.0.0 [conda] numpy 1.26.4 pypi_0 pypi [conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi [conda] pyzmq 26.2.1 pypi_0 pypi [conda] torch 2.4.0 pypi_0 pypi [conda] torchvision 0.19.0 pypi_0 pypi [conda] transformers 4.45.2 pypi_0 pypi [conda] triton 3.0.0 pypi_0 pypi ROCM Version: Could not collect Neuron SDK Version: N/A Aphrodite Version: 0.6.7 Aphrodite Build Flags: CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled GPU Topology: GPU0 GPU1 GPU2 GPU3 CPU Affinity NUMA Affinity GPU NUMA ID GPU0 X SYS SYS SYS 0-13,28-41 0 N/A GPU1 SYS X PHB PHB 14-27,42-55 1 N/A GPU2 SYS PHB X PHB 14-27,42-55 1 N/A GPU3 SYS PHB PHB X 14-27,42-55 1 N/A

Legend:

X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks


```


### Model Input Dumps

_No response_

### 🐛 Describe the bug

Running inference with a VPTQ-quantized model fails with an error.
My command:

aphrodite run Qwen2.5-72B-Instruct-v8-k1024-512-woft -tp 4 --host 0.0.0.0 --port 6668 --max-model-len 20480 --guided-decoding-backend xgrammar --enable-prefix-caching --gpu-memory-utilization 0.7 --trust-remote-code --dtype=half --quantization vptq

The model was downloaded from here: [Qwen2.5-72B-Instruct-v8-k1024-512-woft](https://huggingface.co/VPTQ-community/Qwen2.5-72B-Instruct-v8-k1024-512-woft).
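For what it's worth, the same load path should also be reachable without the server frontend through the offline Python API (a minimal sketch, assuming Aphrodite keeps the vLLM-style `LLM` entry point; the arguments simply mirror the CLI flags above):

```python
# Hedged sketch: assumes Aphrodite exposes a vLLM-style offline `LLM` API.
from aphrodite import LLM

# Mirrors the CLI flags above; the failure happens while the model is being
# loaded, before any request is served.
llm = LLM(
    model="Qwen2.5-72B-Instruct-v8-k1024-512-woft",
    quantization="vptq",
    tensor_parallel_size=4,
    dtype="half",
    max_model_len=20480,
    gpu_memory_utilization=0.7,
    trust_remote_code=True,
)
print(llm.generate(["Hello, my name is"]))
```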

The error message is as follows:

ERROR: Worker AphroditeWorkerProcess pid 3686142 died, exit code: -15
ERROR: Worker AphroditeWorkerProcess pid 3686194 died, exit code: -15
INFO: Killing local Aphrodite worker processes
Traceback (most recent call last):
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/engine/multiprocessing/engine.py", line 371, in run_mp_engine
    engine = MQAphroditeEngine.from_engine_args(engine_args=engine_args,
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/engine/multiprocessing/engine.py", line 139, in from_engine_args
    return cls(
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/engine/multiprocessing/engine.py", line 75, in __init__
    self.engine = AphroditeEngine(*args,
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/engine/aphrodite_engine.py", line 334, in __init__
    self.model_executor = executor_class(
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/executor/distributed_gpu_executor.py", line 25, in __init__
    super().__init__(*args, **kwargs)
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/executor/executor_base.py", line 46, in __init__
    self._init_executor()
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/executor/multiproc_gpu_executor.py", line 111, in _init_executor
    self._run_workers("load_model",
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/executor/multiproc_gpu_executor.py", line 191, in _run_workers
    driver_worker_output = driver_worker_method(*args, **kwargs)
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/worker/worker.py", line 157, in load_model
    self.model_runner.load_model()
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/worker/model_runner.py", line 1038, in load_model
    self.model = get_model(model_config=self.model_config,
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/model_loader/__init__.py", line 20, in get_model
    return loader.load_model(model_config=model_config,
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/model_loader/loader.py", line 404, in load_model
    model = _initialize_model(model_config, self.load_config,
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/model_loader/loader.py", line 172, in _initialize_model
    return build_model(
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/model_loader/loader.py", line 157, in build_model
    return model_class(config=hf_config,
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/qwen2.py", line 400, in __init__
    self.model = Qwen2Model(config, cache_config, quant_config)
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/qwen2.py", line 256, in __init__
    self.start_layer, self.end_layer, self.layers = make_layers(
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/utils.py", line 404, in make_layers
    maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/qwen2.py", line 258, in <lambda>
    lambda prefix: Qwen2DecoderLayer(config=config,
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/qwen2.py", line 176, in __init__
    self.self_attn = Qwen2Attention(
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/qwen2.py", line 118, in __init__
    self.qkv_proj = QKVParallelLinear(
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/layers/linear.py", line 727, in __init__
    super().__init__(input_size=input_size,
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/layers/linear.py", line 293, in __init__
    super().__init__(input_size, output_size, skip_bias_add, params_dtype,
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/layers/linear.py", line 184, in __init__
    self.quant_method = quant_config.get_quant_method(self,
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/quantization/vptq.py", line 360, in get_quant_method
    quant_config = self.get_config_for_key(base_name, linear_name)
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/quantization/vptq.py", line 340, in get_config_for_key
    raise ValueError(f"Cannot find config for ({prefix}, {key})")
ValueError: Cannot find config for (, )
Traceback (most recent call last):
  File "/home/zane/miniconda3/envs/aphrodite-engine/bin/aphrodite", line 8, in <module>
    sys.exit(main())
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/endpoints/cli.py", line 229, in main
    args.dispatch_function(args)
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/endpoints/cli.py", line 32, in serve
    uvloop.run(run_server(args))
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/endpoints/openai/api_server.py", line 1194, in run_server
    async with build_engine_client(args) as engine_client:
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/endpoints/openai/api_server.py", line 121, in build_engine_client
    async with build_engine_client_from_engine_args(
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
  File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/endpoints/openai/api_server.py", line 203, in build_engine_client_from_engine_args
    raise RuntimeError(
RuntimeError: Engine process failed to start
/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/multiprocessing/resource_tracker.py:255: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

ZanePoe, Mar 12 '25

I found in your Supported Models documentation that Qwen2.5 is not listed yet. Is that the cause? vLLM has supported Qwen2.5 for a long time, so I don't think this should be the problem.

ZanePoe, Mar 12 '25

I am now fairly sure the problem is that aphrodite-engine does not support Qwen2.5. I tested the VPTQ version of Llama 3.1 (VPTQ-community/Meta-Llama-3.1) and it works very well on Aphrodite. I hope Aphrodite can support Qwen2.5 as soon as possible; it is very popular and is already widely supported by sglang, vLLM, and others.

ZanePoe, Mar 12 '25

We do support Qwen2.5. It's not listed in the supported models list because it uses the same architecture as Qwen2: Qwen2ForCausalLM. See here https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/495f39366efef23836d0cfae4fbe635880d2be31/config.json#L3

We don't support Qwen2.5-Vision yet, because that's a different architecture from Qwen2-Vision.
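As a quick check of the architecture claim (a small illustration using the Transformers config loader; no weights are downloaded):

```python
from transformers import AutoConfig

# Qwen2.5 checkpoints declare the same architecture class as Qwen2.
cfg = AutoConfig.from_pretrained("Qwen/Qwen2.5-72B-Instruct")
print(cfg.architectures)  # ['Qwen2ForCausalLM']
```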

Anyways, this error log is incomplete. Can you run with --disable-frontend-multiprocessing and send the full aphrodite logs? Thank you.

AlpinDale, Mar 12 '25

> We do support Qwen2.5. It's not listed in the supported models list because it uses the same architecture as Qwen2: Qwen2ForCausalLM. See here https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/495f39366efef23836d0cfae4fbe635880d2be31/config.json#L3
>
> We don't support Qwen2.5-Vision yet, because that's a different architecture from Qwen2-Vision.
>
> Anyways, this error log is incomplete. Can you run with --disable-frontend-multiprocessing and send the full aphrodite logs? Thank you.

Thank you very much for your reply. I added --disable-frontend-multiprocessing as you suggested; the complete command and log output are below.

(aphrodite-engine) zane@zane-desktop:~/workspace/models$ aphrodite run Qwen2.5-32B-Instruct-v8-k65536-256-woft -tp 4 --host 0.0.0.0 --port 6668 --max-model-len 20480 --guided-decoding-backend xgrammar --dtype=half --enable-prefix-caching --gpu-memory-utilization 0.84 --disable-frontend-multiprocessing
WARNING:  Casting torch.bfloat16 to torch.float16.
WARNING:  vptq quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO:     Defaulting to use mp for distributed inference.
INFO:     -------------------------------------------------------------------------------------
INFO:     Initializing Aphrodite Engine (v0.6.7 commit e64075b8) with the following non-default config:
INFO:     cache.enable_prefix_caching=True
INFO:     cache.gpu_memory_utilization=0.84
INFO:     device.device=device(type='cuda')
INFO:     model.dtype=torch.float16
INFO:     model.max_model_len=20480
INFO:     model.max_seq_len_to_capture=20480
INFO:     model.model='Qwen2.5-32B-Instruct-v8-k65536-256-woft'
INFO:     model.quantization='vptq'
INFO:     parallel.distributed_executor_backend='mp'
INFO:     parallel.tensor_parallel_size=4
INFO:     scheduler.max_num_batched_tokens=20480
INFO:     -------------------------------------------------------------------------------------
WARNING:  Reducing Torch parallelism from 28 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO:     Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO:     Using XFormers backend.
/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_fwd")
/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_bwd")
(AphroditeWorkerProcess pid=461983) INFO:     Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(AphroditeWorkerProcess pid=461983) INFO:     Using XFormers backend.
(AphroditeWorkerProcess pid=461981) INFO:     Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(AphroditeWorkerProcess pid=461981) INFO:     Using XFormers backend.
(AphroditeWorkerProcess pid=461983) /home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
(AphroditeWorkerProcess pid=461983)   @torch.library.impl_abstract("xformers_flash::flash_fwd")
(AphroditeWorkerProcess pid=461982) INFO:     Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(AphroditeWorkerProcess pid=461982) INFO:     Using XFormers backend.
(AphroditeWorkerProcess pid=461981) /home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
(AphroditeWorkerProcess pid=461981)   @torch.library.impl_abstract("xformers_flash::flash_fwd")
(AphroditeWorkerProcess pid=461983) /home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
(AphroditeWorkerProcess pid=461983)   @torch.library.impl_abstract("xformers_flash::flash_bwd")
(AphroditeWorkerProcess pid=461982) /home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
(AphroditeWorkerProcess pid=461982)   @torch.library.impl_abstract("xformers_flash::flash_fwd")
(AphroditeWorkerProcess pid=461981) /home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
(AphroditeWorkerProcess pid=461981)   @torch.library.impl_abstract("xformers_flash::flash_bwd")
(AphroditeWorkerProcess pid=461982) /home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
(AphroditeWorkerProcess pid=461982)   @torch.library.impl_abstract("xformers_flash::flash_bwd")
(AphroditeWorkerProcess pid=461983) INFO:     Worker ready; awaiting tasks
(AphroditeWorkerProcess pid=461981) INFO:     Worker ready; awaiting tasks
(AphroditeWorkerProcess pid=461982) INFO:     Worker ready; awaiting tasks
WARNING:  Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
INFO:     Loading model Qwen2.5-32B-Instruct-v8-k65536-256-woft...
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/bin/aphrodite", line 8, in <module>
[rank0]:     sys.exit(main())
[rank0]:              ^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/endpoints/cli.py", line 229, in main
[rank0]:     args.dispatch_function(args)
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/endpoints/cli.py", line 32, in serve
[rank0]:     uvloop.run(run_server(args))
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
[rank0]:     return __asyncio.run(
[rank0]:            ^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/asyncio/runners.py", line 195, in run
[rank0]:     return runner.run(main)
[rank0]:            ^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/asyncio/runners.py", line 118, in run
[rank0]:     return self._loop.run_until_complete(task)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
[rank0]:     return await main
[rank0]:            ^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/endpoints/openai/api_server.py", line 1194, in run_server
[rank0]:     async with build_engine_client(args) as engine_client:
[rank0]:                ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/contextlib.py", line 210, in __aenter__
[rank0]:     return await anext(self.gen)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/endpoints/openai/api_server.py", line 121, in build_engine_client
[rank0]:     async with build_engine_client_from_engine_args(
[rank0]:                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/contextlib.py", line 210, in __aenter__
[rank0]:     return await anext(self.gen)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/endpoints/openai/api_server.py", line 154, in build_engine_client_from_engine_args
[rank0]:     engine_client = await asyncio.get_running_loop().run_in_executor(
[rank0]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/concurrent/futures/thread.py", line 59, in run
[rank0]:     result = self.fn(*self.args, **self.kwargs)
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/engine/async_aphrodite.py", line 633, in from_engine_args
[rank0]:     engine = cls(
[rank0]:              ^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/engine/async_aphrodite.py", line 526, in __init__
[rank0]:     self.engine = self._engine_class(*args, **kwargs)
[rank0]:                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/engine/async_aphrodite.py", line 263, in __init__
[rank0]:     super().__init__(*args, **kwargs)
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/engine/aphrodite_engine.py", line 334, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:                           ^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/executor/multiproc_gpu_executor.py", line 214, in __init__
[rank0]:     super().__init__(*args, **kwargs)
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/executor/distributed_gpu_executor.py", line 25, in __init__
[rank0]:     super().__init__(*args, **kwargs)
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/executor/executor_base.py", line 46, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/executor/multiproc_gpu_executor.py", line 111, in _init_executor
[rank0]:     self._run_workers("load_model",
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/executor/multiproc_gpu_executor.py", line 191, in _run_workers
[rank0]:     driver_worker_output = driver_worker_method(*args, **kwargs)
[rank0]:                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/worker/worker.py", line 157, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/worker/model_runner.py", line 1038, in load_model
[rank0]:     self.model = get_model(model_config=self.model_config,
[rank0]:                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/model_loader/__init__.py", line 20, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/model_loader/loader.py", line 404, in load_model
[rank0]:     model = _initialize_model(model_config, self.load_config,
[rank0]:             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/model_loader/loader.py", line 172, in _initialize_model
[rank0]:     return build_model(
[rank0]:            ^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/model_loader/loader.py", line 157, in build_model
[rank0]:     return model_class(config=hf_config,
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/qwen2.py", line 400, in __init__
[rank0]:     self.model = Qwen2Model(config, cache_config, quant_config)
[rank0]:                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/qwen2.py", line 256, in __init__
[rank0]:     self.start_layer, self.end_layer, self.layers = make_layers(
[rank0]:                                                     ^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/utils.py", line 404, in make_layers
[rank0]:     maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
[rank0]:                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/qwen2.py", line 258, in <lambda>
[rank0]:     lambda prefix: Qwen2DecoderLayer(config=config,
[rank0]:                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/qwen2.py", line 176, in __init__
[rank0]:     self.self_attn = Qwen2Attention(
[rank0]:                      ^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/models/qwen2.py", line 118, in __init__
[rank0]:     self.qkv_proj = QKVParallelLinear(
[rank0]:                     ^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/layers/linear.py", line 727, in __init__
[rank0]:     super().__init__(input_size=input_size,
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/layers/linear.py", line 293, in __init__
[rank0]:     super().__init__(input_size, output_size, skip_bias_add, params_dtype,
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/modeling/layers/linear.py", line 184, in __init__
[rank0]:     self.quant_method = quant_config.get_quant_method(self,
[rank0]:                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/quantization/vptq.py", line 360, in get_quant_method
[rank0]:     quant_config = self.get_config_for_key(base_name, linear_name)
[rank0]:                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/site-packages/aphrodite/quantization/vptq.py", line 340, in get_config_for_key
[rank0]:     raise ValueError(f"Cannot find config for ({prefix}, {key})")
[rank0]: ValueError: Cannot find config for (, )
/home/zane/miniconda3/envs/aphrodite-engine/lib/python3.12/multiprocessing/resource_tracker.py:255: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

Looking forward to your reply!

ZanePoe, Mar 12 '25

This seems to be an issue with the quantized model, looks like one of (or all) the layers doesn't have a config defined for it. Maybe @wejoncy has an idea?
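One way to check that is to look at which per-layer entries the checkpoint's quantization config actually defines and compare them against the module names the loader builds (a rough sketch; `config_for_layers` is an assumption about the VPTQ config layout, so adjust the field name to whatever `quantization_config` actually contains):

```python
import json
from huggingface_hub import hf_hub_download

repo = "VPTQ-community/Qwen2.5-32B-Instruct-v8-k65536-256-woft"
with open(hf_hub_download(repo, "config.json")) as f:
    quant_cfg = json.load(f).get("quantization_config", {})

print(sorted(quant_cfg.keys()))

# If a per-layer table is present, list a few entries to compare against the
# names Aphrodite constructs (e.g. "model.layers.0.self_attn.qkv_proj").
layer_table = quant_cfg.get("config_for_layers", {})  # field name assumed
for name in list(layer_table)[:5]:
    print(name)
```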

AlpinDale, Mar 12 '25

> This seems to be an issue with the quantized model, looks like one of (or all) the layers doesn't have a config defined for it. Maybe @wejoncy has an idea?

I'm using the latest Qwen2.5-series models released by the VPTQ-community; one of them is linked here: Qwen2.5-32B-Instruct-v8-k65536-65536-woft

ZanePoe, Mar 12 '25

> This seems to be an issue with the quantized model, looks like one of (or all) the layers doesn't have a config defined for it. Maybe @wejoncy has an idea?

Will look at it and resolve it ASAP.

wejoncy, Mar 13 '25

Hi @AlpinDale @ZanePoe. For now, a quantized model requires a prefix to indicate which layer a module belongs to, just like vLLM/sglang do: https://github.com/vllm-project/vllm/blob/54a8804455a14234ba246f7cbaf29fb5e8587d64/vllm/model_executor/models/qwen2.py#L80C1-L81C1.

Seems like we might need a PR that passes the `prefix` string through for qwen/qwen2 and other models.
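To illustrate the point (a self-contained toy, not Aphrodite's actual code): the VPTQ config is keyed by module path, so a linear layer built without its prefix ends up asking for an empty key, which is exactly the `Cannot find config for (, )` error in the log above.

```python
# Toy sketch of the per-layer lookup: keys are module paths, so an empty
# prefix/name pair cannot match anything.
class ToyVPTQConfig:
    def __init__(self, config_for_layers: dict):
        self.config_for_layers = config_for_layers

    def get_config_for_key(self, prefix: str, key: str) -> dict:
        full_name = f"{prefix}.{key}" if prefix else key
        if full_name in self.config_for_layers:
            return self.config_for_layers[full_name]
        raise ValueError(f"Cannot find config for ({prefix}, {key})")

cfg = ToyVPTQConfig({"model.layers.0.self_attn.qkv_proj": {"example": True}})

# Works when the model threads the prefix down to the layer...
print(cfg.get_config_for_key("model.layers.0.self_attn", "qkv_proj"))
# ...and reproduces the reported error when the prefix is never passed.
cfg.get_config_for_key("", "")  # ValueError: Cannot find config for (, )
```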

wejoncy, Mar 15 '25