
[Bug]: Cannot load Mixtral GGUF model?

Open Nero10578 opened this issue 8 months ago • 13 comments

Your current environment

Collecting environment information...
PyTorch version: 2.3.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (conda-forge gcc 11.3.0-19) 11.3.0
Clang version: Could not collect 
CMake version: version 3.29.3
Libc version: glibc-2.35
Python version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: Tesla P40
GPU 1: Tesla P40

Nvidia driver version: 545.29.06
cuDNN version: Could not collect 
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      46 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             12
On-line CPU(s) list:                0-11
Vendor ID:                          GenuineIntel
Model name:                         Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz
CPU family:                         6
Model:                              85
Thread(s) per core:                 2
Core(s) per socket:                 6
Socket(s):                          1
Stepping:                           4
CPU max MHz:                        4500.0000
CPU min MHz:                        1200.0000
BogoMIPS:                           7399.70
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi md_clear flush_l1d arch_capabilities
Virtualization:                     VT-x
L1d cache:                          192 KiB (6 instances)
L1i cache:                          192 KiB (6 instances)
L2 cache:                           6 MiB (6 instances)
L3 cache:                           8.3 MiB (1 instance)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit:        KVM: Mitigation: VMX disabled
Vulnerability L1tf:                 Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds:                  Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown:             Mitigation; PTI
Vulnerability Mmio stale data:      Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed:             Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.3.0
[pip3] triton==2.3.0
[conda] blas                      2.16                        mkl    conda-forge
[conda] libblas                   3.8.0                    16_mkl    conda-forge
[conda] libcblas                  3.8.0                    16_mkl    conda-forge
[conda] liblapack                 3.8.0                    16_mkl    conda-forge
[conda] liblapacke                3.8.0                    16_mkl    conda-forge
[conda] mkl                       2020.2                      256  
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] pytorch                   2.3.0           py3.11_cuda12.1_cudnn8.9.2_0    pytorch
[conda] pytorch-cuda              12.1                 ha16c6d3_5    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torchtriton               2.3.0                     py311    pytorch
ROCM Version: Could not collect 
Aphrodite Version: 0.5.3
Aphrodite Build Flags:
CUDA Archs: Not Set; ROCm: Disabled

🐛 Describe the bug

It seems like it's saying Mixtral isn't supported? Is that only the case for GGUF?
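In case it helps narrow things down, here's a quick (untested) sketch for checking which architecture string the GGUF metadata actually reports, using the gguf package from llama.cpp; the field-decoding details are my best guess at that reader's API:

# Untested sketch: read the GGUF header to see which architecture the file claims.
# Assumes `pip install gguf` and that GGUFReader exposes fields/parts/data this way.
from gguf import GGUFReader

reader = GGUFReader("/home/owen/models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf")
field = reader.fields["general.architecture"]
# String fields keep their raw bytes in one of the parts; field.data points at it.
arch = bytes(field.parts[field.data[0]]).decode("utf-8")
print(arch)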

INFO:     Extracting config from GGUF...
WARNING:  gguf quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO:     Using fp8 data type to store kv cache. It reduces the GPU memory footprint and boosts the performance. But it may 
cause slight accuracy drop without scaling factors. FP8_E5M2 (without scaling) is only supported on cuda version greater than 
11.8. On ROCm (AMD GPU), FP8_E4M3 is instead supported for common inference criteria.
2024-05-25 20:40:25,511 INFO worker.py:1749 -- Started a local Ray instance.
INFO:     Initializing the Aphrodite Engine (v0.5.3) with the following config:
INFO:     Model = '/home/owen/models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf'
INFO:     Speculative Config = None
INFO:     DataType = torch.float16
INFO:     Model Load Format = auto
INFO:     Number of GPUs = 2
INFO:     Disable Custom All-Reduce = False
INFO:     Quantization Format = gguf
INFO:     Context Length = 8192
INFO:     Enforce Eager Mode = True
INFO:     KV Cache Data Type = fp8
INFO:     KV Cache Params Path = None
INFO:     Device = cuda
INFO:     Guided Decoding Backend = DecodingConfig(guided_decoding_backend='outlines')
INFO:     Converting tokenizer from GGUF...
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs.
INFO:     Using XFormers backend.
(RayWorkerAphrodite pid=28850) INFO:     Cannot use FlashAttention backend for Volta and Turing GPUs.
(RayWorkerAphrodite pid=28850) INFO:     Using XFormers backend.
INFO:     Aphrodite is using nccl==2.21.5
(RayWorkerAphrodite pid=28850) INFO:     Aphrodite is using nccl==2.21.5
INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped
INFO:     reading GPU P2P access cache from /home/owen/.config/aphrodite/gpu_p2p_access_cache_for_0,1.json
(RayWorkerAphrodite pid=28850) INFO:     NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped
(RayWorkerAphrodite pid=28850) INFO:     reading GPU P2P access cache from /home/owen/.config/aphrodite/gpu_p2p_access_cache_for_0,1.json
[rank0]: Traceback (most recent call last):
[rank0]:   File "<frozen runpy>", line 198, in _run_module_as_main
[rank0]:   File "<frozen runpy>", line 88, in _run_code
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/endpoints/openai/api_server.py", line 562, in <module>
[rank0]:     run_server(args)
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/endpoints/openai/api_server.py", line 519, in run_server
[rank0]:     engine = AsyncAphrodite.from_engine_args(engine_args)
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 358, in from_engine_args
[rank0]:     engine = cls(engine_config.parallel_config.worker_use_ray,
[rank0]:              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 323, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/async_aphrodite.py", line 429, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/engine/aphrodite_engine.py", line 131, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:                           ^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/executor_base.py", line 39, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 45, in _init_executor
[rank0]:     self._init_workers_ray(placement_group)
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 193, in _init_workers_ray
[rank0]:     self._run_workers(
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/executor/ray_gpu_executor.py", line 309, in _run_workers
[rank0]:     driver_worker_output = getattr(self.driver_worker,
[rank0]:                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/task_handler/worker.py", line 125, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/task_handler/model_runner.py", line 179, in load_model
[rank0]:     self.model = get_model(
[rank0]:                  ^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/loader.py", line 103, in get_model
[rank0]:     model.load_weights(model_config.model, model_config.download_dir,
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/models/mixtral.py", line 515, in load_weights
[rank0]:     for name, loaded_weight in hf_model_weights_iterator(
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/hf_downloader.py", line 318, in hf_model_weights_iterator
[rank0]:     for name, param in convert_gguf_to_state_dict(model_name_or_path,
[rank0]:                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/home/owen/aphrodite-engine/aphrodite/modeling/hf_downloader.py", line 246, in convert_gguf_to_state_dict
[rank0]:     raise RuntimeError(f"Unknown model_type: {model_type}")
[rank0]: RuntimeError: Unknown model_type: mixtral
(RayWorkerAphrodite pid=28850) ERROR:    Error executing method load_model. This might cause deadlock in distributed execution.
[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
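From the last frame, it looks like convert_gguf_to_state_dict in hf_downloader.py only builds a state dict for a fixed set of model_type values and raises for anything else, which would explain why the Mixtral config extracted from the GGUF dies here. Roughly this shape (my own illustration of the pattern as I read the traceback, not the actual Aphrodite source; the supported set is a guess):

# Illustrative sketch only -- not the real aphrodite code.
def convert_gguf_to_state_dict(checkpoint, config):
    model_type = config.model_type  # comes out as "mixtral" for this file
    # Presumably only architectures with a GGUF -> HF tensor-name mapping are handled:
    if model_type not in ("llama",):  # guessing at the supported set
        raise RuntimeError(f"Unknown model_type: {model_type}")
    # ... map GGUF tensor names to HF state-dict keys for the supported types ...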

Nero10578 · May 26 '24 03:05