Unable to load Hubble-4B
Describe the bug
Model URL: https://huggingface.co/bartowski/Hubble-4B-v1-GGUF/discussions/1
```
llama_model_loader: - kv  26: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  27: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv  28: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv  29: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv  30: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv  31: general.quantization_version u32 = 2
llama_model_loader: - kv  32: quantize.imatrix.file str = /models_out/Hubble-4B-v1-GGUF/Hubble-...
llama_model_loader: - kv  33: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
llama_model_loader: - kv  34: quantize.imatrix.entries_count i32 = 224
llama_model_loader: - kv  35: quantize.imatrix.chunks_count i32 = 125
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
llm_load_vocab: special tokens cache size = 258
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128258
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 9216
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 4.51 B
llm_load_print_meta: model size       = 2.58 GiB (4.91 BPW)
llm_load_print_meta: general.name     = Hubble 4B V1C
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
llm_load_print_meta: PAD token        = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128257 '<|im_end|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.14 MiB
llama_model_load: error loading model: check_tensor_dims: tensor 'rope_freqs.weight' has wrong shape; expected 48, got 64, 1, 1, 1
llama_load_model_from_file: failed to load model
```
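Aside: the loader metadata itself is internally consistent; the failure is only the `rope_freqs.weight` shape check at the end. As a quick sanity check, the reported bits-per-weight and GQA ratio follow directly from other values in the log above (all numbers below are taken from the log, nothing is assumed):

```python
# Cross-check two derived figures from the llama.cpp loader log above.

# "model size = 2.58 GiB (4.91 BPW)" together with "model params = 4.51 B":
model_size_gib = 2.58
model_params = 4.51e9
bpw = model_size_gib * 1024**3 * 8 / model_params  # GiB -> bits, divided per weight
print(f"{bpw:.2f}")  # 4.91, matching the logged BPW

# "n_gqa = 4" is simply n_head / n_head_kv (grouped-query attention ratio):
n_head, n_head_kv = 32, 8
print(n_head // n_head_kv)  # 4
```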
Steps to reproduce
No response
LLMUnity version
No response
Operating System
Windows
Hi, thanks for the request! LLMUnity uses llama.cpp build b3617 (released three weeks ago), which does not support this model. It will be fixed when I update to a later version; however, I will wait a few weeks, because the build process for all the different architectures is quite involved.
Hubble-4B now works with the latest release (v2.4.0) :+1: