
[Bug]: unable to load 14B Qwen2.5 GGUF with newest version (0.6.2.post1)

Open NeoChen1024 opened this issue 1 year ago • 4 comments

Your current environment

The output of `python env.py`:

```
Collecting environment information...
WARNING: Failed to import from aphrodite._C with No module named 'aphrodite._C'
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.36

Python version: 3.11.2 (main, Aug 26 2024, 07:20:54) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.1.0-26-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla P40
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 9
CPU(s) scaling MHz: 96%
CPU max MHz: 4200.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.45.2
[pip3] triton==3.0.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
Aphrodite Version: 0.6.2
Aphrodite Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      0-7             0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks


```


### 🐛 Describe the bug

Aphrodite can't load Qwen2.5-based GGUF models. For example: https://huggingface.co/SakuraLLM/Sakura-14B-Qwen2.5-v1.0-GGUF/blob/main/sakura-14b-qwen2.5-v1.0-q4km.gguf


backtrace:

⠋ Loading modules... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/917 0% 0:00:00
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/endpoints/openai/rpc/server.py", line 209, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, rpc_path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/endpoints/openai/rpc/server.py", line 24, in __init__
    self.engine = AsyncAphrodite.from_engine_args(async_engine_args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/engine/async_aphrodite.py", line 601, in from_engine_args
    engine = cls(
             ^^^^
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/engine/async_aphrodite.py", line 510, in __init__
    self.engine = self._init_engine(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/engine/async_aphrodite.py", line 694, in _init_engine
    return engine_class(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/engine/aphrodite_engine.py", line 261, in __init__
    self.model_executor = executor_class(
                          ^^^^^^^^^^^^^^^
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/executor/executor_base.py", line 45, in __init__
    self._init_executor()
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/executor/gpu_executor.py", line 36, in _init_executor
    self.driver_worker.load_model()
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/task_handler/worker.py", line 153, in load_model
    self.model_runner.load_model()
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/task_handler/model_runner.py", line 888, in load_model
    self.model = get_model(model_config=self.model_config,
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/modeling/model_loader/__init__.py", line 20, in get_model
    return loader.load_model(model_config=model_config,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/modeling/model_loader/loader.py", line 1035, in load_model
    model.load_weights(
  File "/home/user/aphrodite-venv/lib/python3.11/site-packages/aphrodite/modeling/models/qwen2.py", line 431, in load_weights
    param = params_dict[name]
            ~~~~~~~~~~~^^^^^^
KeyError: 'model.embed_tokens.qweight_type'
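For anyone debugging this: the `model.embed_tokens.qweight_type` name in the KeyError appears to be synthesized by the GGUF loading path rather than being a tensor that exists under that name in the file. A quick, hedged way to see what the checkpoint actually contains is to read it with the `gguf` pip package (the filename is the example model from this report; adjust the path):

```python
# Hedged sketch: list the tensors a GGUF checkpoint actually contains,
# using the `gguf` pip package. Adjust the path to your local file.
from gguf import GGUFReader

reader = GGUFReader("sakura-14b-qwen2.5-v1.0-q4km.gguf")

# Each entry is a ReaderTensor; .name is the GGUF-side name (e.g.
# "token_embd.weight") and .tensor_type is its ggml quantization type
# (Q4_K, Q6_K, F32, ...). There is no literal "*.qweight_type" tensor
# in the file; that suffix only shows up after the engine maps GGUF
# names onto HF-style parameter names.
for tensor in reader.tensors:
    print(tensor.name, tensor.tensor_type.name, list(tensor.shape))
```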

NeoChen1024 · Oct 23 '24 18:10

Looks like the current Python script doesn't support Qwen2.5 yet. @AlpinDale After looking deeper, it might have something to do with GGUF 0.10.
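To rule a gguf package version in or out, a standard-library check of what the environment actually resolves to:

```python
# Print the installed version of the `gguf` package (standard library only).
from importlib.metadata import version

print(version("gguf"))
```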

sorasoras · Oct 25 '24 20:10

I'm getting the same issue when trying to use the DeepSeek R1 Qwen 32B distilled model. R1-Llama-8B works as expected. The problem persists with gguf==0.14. Is there anything else I could try?

SP00kYDuD3 · Feb 08 '25 01:02

Can confirm this is happening here too.

@AlpinDale since Qwen2.5-based models are so very common, have you seen this in the wild yet?

Invocation:

CUDA_VISIBLE_DEVICES=0 aphrodite run /opt/llm_models/gguf/Lamarckvergence-14B.Q4_K_M.gguf --speculative-model '[ngram]' --num-speculative-tokens 5 --ngram-prompt-lookup-max 4 --use-v2-block-manager -q deepspeedfp --deepspeed-fp-bits 6 --max-model-len 4096 --port 4000 --gpu_memory_utilization 0.9 --enforce-eager

Or:

CUDA_VISIBLE_DEVICES=0 aphrodite run /opt/llm_models/gguf/Lamarckvergence-14B.Q4_K_M.gguf --max-model-len 4096

Tail of output:

INFO:     Loading model /opt/llm_models/gguf/Lamarckvergence-14B.Q4_K_M.gguf...
INFO:     Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO:     Using XFormers backend.
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/root/miniconda3/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/endpoints/openai/rpc/server.py", line 229, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, rpc_path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/endpoints/openai/rpc/server.py", line 39, in __init__
    self.engine = AsyncAphrodite.from_engine_args(async_engine_args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/engine/async_aphrodite.py", line 741, in from_engine_args
    engine = cls(
             ^^^^
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/engine/async_aphrodite.py", line 630, in __init__
    self.engine = self._init_engine(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/engine/async_aphrodite.py", line 840, in _init_engine
    return engine_class(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/engine/async_aphrodite.py", line 263, in __init__
    super().__init__(*args, **kwargs)
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/engine/aphrodite_engine.py", line 294, in __init__
    self.model_executor = executor_class(
                          ^^^^^^^^^^^^^^^
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/executor/executor_base.py", line 46, in __init__
    self._init_executor()
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/executor/gpu_executor.py", line 38, in _init_executor
    self.driver_worker.init_device()
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/spec_decode/spec_decode_worker.py", line 264, in init_device
    self.scorer_worker.load_model()
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/task_handler/worker.py", line 157, in load_model
    self.model_runner.load_model()
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/task_handler/model_runner.py", line 913, in load_model
    self.model = get_model(model_config=self.model_config,
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/modeling/model_loader/__init__.py", line 20, in get_model
    return loader.load_model(model_config=model_config,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/modeling/model_loader/loader.py", line 1080, in load_model
    model.load_weights(
  File "/opt/aphrodite/dove/lib/python3.12/site-packages/aphrodite/modeling/models/qwen2.py", line 428, in load_weights
    param = params_dict[name]
            ~~~~~~~~~~~^^^^^^
KeyError: 'model.embed_tokens.qweight_type'
ERROR:    RPCServer process died before responding to readiness probe

cassettesgoboom · Feb 23 '25 02:02

As of #1215 and #1216 this issue should be fixed.
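For anyone stuck on an older build: the general shape of the problem is a weight-loading loop that doesn't tolerate GGUF's per-tensor metadata entries for layers that never registered a matching parameter. A minimal illustrative sketch of that kind of guard (this is not the actual code from those PRs) could look like:

```python
# Illustrative only -- not the actual change from the PRs above.
from collections.abc import Iterable

import torch


def load_weights(params_dict: dict[str, torch.nn.Parameter],
                 weights: Iterable[tuple[str, torch.Tensor]]) -> None:
    for name, loaded in weights:
        if name not in params_dict and name.endswith(".qweight_type"):
            # GGUF checkpoints surface a companion "<param>.qweight_type"
            # entry holding the ggml quantization type id. If the target
            # layer is not registered with a GGUF-aware quant method (the
            # embedding table in the tracebacks above), there is no
            # parameter to receive it, so skip it instead of raising
            # KeyError.
            continue
        params_dict[name].data.copy_(loaded)
```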

Also, your first command is incorrect, @cassettesgoboom. You can't load a GGUF model with another quantization format (deepspeedfp in your case). The second command is correct.

AlpinDale · Feb 23 '25 03:02