llama.cpp
Misc. bug: llama-run segmentation fault
Name and Version
version: 4754 (de8b5a36)
built with Apple clang version 16.0.0 (clang-1600.0.26.6) for arm64-apple-darwin24.2.0
The issue also reproduces on the current main branch.
Operating systems
Mac
Which llama.cpp modules do you know to be affected?
Other (Please specify in the next section)
Command line
$ llama-run -v tinyllama
[1] 41777 segmentation fault ./build/bin/llama-run -v tinyllama
Problem description & steps to reproduce
Simply run the command llama-run -v tinyllama; it crashes with a segmentation fault right after model initialization.
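To help pinpoint where the crash happens, a backtrace can be captured by running the binary under lldb. A minimal sketch, using the same build path and model alias as above:

$ lldb -- ./build/bin/llama-run -v tinyllama
(lldb) run
# ... wait for the crash (EXC_BAD_ACCESS) ...
(lldb) bt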
First Bad Commit
If I revert commit https://github.com/ggml-org/llama.cpp/commit/0d559580a0a74c842c3a876035ba96a338aabfb2, llama-run works again and I get the prompt back:
./build/bin/llama-run tinyllama
>
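For reference, a minimal sketch of the revert-and-rebuild steps used to verify this (assuming an existing CMake build directory named build):

$ git revert --no-edit 0d559580a0a74c842c3a876035ba96a338aabfb2
$ cmake --build build
$ ./build/bin/llama-run tinyllama
>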
Relevant log output
Loading model
llama_model_load_from_file_impl: using device Metal (Apple M2 Max) - 73727 MiB free
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from tinyllama (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = TinyLlama
llama_model_loader: - kv 2: llama.context_length u32 = 2048
llama_model_loader: - kv 3: llama.embedding_length u32 = 2048
llama_model_loader: - kv 4: llama.block_count u32 = 22
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 45 tensors
llama_model_loader: - type q4_0: 155 tensors
llama_model_loader: - type q6_K: 1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_0
print_info: file size = 606.53 MiB (4.63 BPW)
init_tokenizer: initializing tokenizer for type 1
load: control token: 1 '<s>' is not marked as EOG
load: control token: 2 '</s>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 2048
print_info: n_embd = 2048
print_info: n_layer = 22
print_info: n_head = 32
print_info: n_head_kv = 4
print_info: n_rot = 64
print_info: n_swa = 0
print_info: n_embd_head_k = 64
print_info: n_embd_head_v = 64
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 256
print_info: n_embd_v_gqa = 256
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: n_ff = 5632
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 2048
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 1B
print_info: model params = 1.10 B
print_info: general.name = TinyLlama
print_info: vocab type = SPM
print_info: n_vocab = 32000
print_info: n_merges = 0
print_info: BOS token = 1 '<s>'
print_info: EOS token = 2 '</s>'
print_info: UNK token = 0 '<unk>'
print_info: PAD token = 2 '</s>'
print_info: LF token = 13 '<0x0A>'
print_info: EOG token = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: layer 0 assigned to device Metal
load_tensors: layer 1 assigned to device Metal
load_tensors: layer 2 assigned to device Metal
load_tensors: layer 3 assigned to device Metal
load_tensors: layer 4 assigned to device Metal
load_tensors: layer 5 assigned to device Metal
load_tensors: layer 6 assigned to device Metal
load_tensors: layer 7 assigned to device Metal
load_tensors: layer 8 assigned to device Metal
load_tensors: layer 9 assigned to device Metal
load_tensors: layer 10 assigned to device Metal
load_tensors: layer 11 assigned to device Metal
load_tensors: layer 12 assigned to device Metal
load_tensors: layer 13 assigned to device Metal
load_tensors: layer 14 assigned to device Metal
load_tensors: layer 15 assigned to device Metal
load_tensors: layer 16 assigned to device Metal
load_tensors: layer 17 assigned to device Metal
load_tensors: layer 18 assigned to device Metal
load_tensors: layer 19 assigned to device Metal
load_tensors: layer 20 assigned to device Metal
load_tensors: layer 21 assigned to device Metal
load_tensors: layer 22 assigned to device Metal
load_tensors: tensor 'token_embd.weight' (q4_0) (and 0 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
ggml_backend_metal_log_allocated_size: allocated buffer, size = 606.55 MiB, ( 606.61 / 73728.00)
load_tensors: offloading 22 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 23/23 layers to GPU
load_tensors: Metal_Mapped model buffer size = 606.53 MiB
load_tensors: CPU_Mapped model buffer size = 35.16 MiB
......................................................................................
llama_init_from_model: n_seq_max = 1
llama_init_from_model: n_ctx = 2048
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch = 2048
llama_init_from_model: n_ubatch = 512
llama_init_from_model: flash_attn = 0
llama_init_from_model: freq_base = 10000.0
llama_init_from_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Max
ggml_metal_init: picking default device: Apple M2 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name: Apple M2 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets = true
ggml_metal_init: has bfloat = true
ggml_metal_init: use bfloat = false
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 77309.41 MB
ggml_metal_init: loaded kernel_add 0x12a209820 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_add_row 0x12a209e80 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sub 0x12a20a4e0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sub_row 0x12a20ab40 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul 0x12a20b1a0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_row 0x12a20b800 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_div 0x12a20be60 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_div_row 0x12a20c4c0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_repeat_f32 0x12a20ca70 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_repeat_f16 0x12a20d020 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_repeat_i32 0x12a20d5d0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_repeat_i16 0x12a20dcf0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_scale 0x12a20e560 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_scale_4 0x12a20edd0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_clamp 0x12a20f6a0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_tanh 0x12a20fe80 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_relu 0x12a210660 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sigmoid 0x12a210e40 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_gelu 0x12a211620 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_gelu_4 0x12a211f70 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_gelu_quick 0x12a212750 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_gelu_quick_4 0x12a212f30 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_silu 0x12a213710 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_silu_4 0x12a214070 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_elu 0x12a214850 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_soft_max_f16 0x12a214db0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_soft_max_f16_4 0x12a215310 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_soft_max_f32 0x12a215a70 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_soft_max_f32_4 0x12a215fd0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_diag_mask_inf 0x12a216530 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_diag_mask_inf_8 0x12a216790 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_f32 0x12a216ca0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_f16 0x12a2174a0 | th_max = 1024 | th_width = 32
ggml_metal_init: skipping kernel_get_rows_bf16 (not supported)
ggml_metal_init: loaded kernel_get_rows_q4_0 0x12a217a00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_q4_1 0x12a217f60 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_q5_0 0x12a2184c0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_q5_1 0x12a218c90 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_q8_0 0x12a2191f0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_q2_K 0x12a219750 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_q3_K 0x12a219cb0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_q4_K 0x12a21a210 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_q5_K 0x12a21a770 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_q6_K 0x12a21acd0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_iq2_xxs 0x12a21af30 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_iq2_xs 0x12a21b440 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_iq3_xxs 0x12a21ba70 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_iq3_s 0x12a21c0a0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_iq2_s 0x12a21cd20 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_iq1_s 0x12a21d280 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_iq1_m 0x12a21d7e0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_iq4_nl 0x12a21da40 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_iq4_xs 0x12a21df50 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_get_rows_i32 0x12a21e580 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_rms_norm 0x12a21eba0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_group_norm 0x12a21f1c0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_norm 0x12a21f7e0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_ssm_conv_f32 0x12a21fe00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_ssm_scan_f32 0x12a220420 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_f32_f32 0x12a220f00 | th_max = 1024 | th_width = 32
ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported)
ggml_metal_init: loaded kernel_mul_mv_f16_f32 0x12a2214b0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_f16_f32_1row 0x12a2218f0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_f16_f32_l4 0x12a221ea0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_f16_f16 0x12a222450 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_q4_0_f32 0x12a222a00 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_q4_1_f32 0x12a222fb0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_q5_0_f32 0x12a223560 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_q5_1_f32 0x12a223b10 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_q8_0_f32 0x12a2240c0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_f16_f32_r1_2 0x12a224720 | th_max = 896 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_f16_f32_r1_3 0x12a224d80 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_f16_f32_r1_4 0x12a2253e0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_f16_f32_r1_5 0x12a225a40 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_0_f32_r1_2 0x12a2260a0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_0_f32_r1_3 0x12a226700 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_0_f32_r1_4 0x12a226d60 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_0_f32_r1_5 0x12a2273c0 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_1_f32_r1_2 0x12a227a20 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_1_f32_r1_3 0x12a228080 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_1_f32_r1_4 0x12a2286e0 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_1_f32_r1_5 0x12a228d40 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_0_f32_r1_2 0x12a2293a0 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_0_f32_r1_3 0x12a229be0 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_0_f32_r1_4 0x12a22a240 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_0_f32_r1_5 0x12a22a8a0 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_1_f32_r1_2 0x12a22af00 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_1_f32_r1_3 0x12a22b560 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_1_f32_r1_4 0x12a22bbc0 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_1_f32_r1_5 0x12a22c220 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q8_0_f32_r1_2 0x12a22c880 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q8_0_f32_r1_3 0x12a22cee0 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q8_0_f32_r1_4 0x12a22d540 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q8_0_f32_r1_5 0x12a22dba0 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_K_f32_r1_2 0x12a22e200 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_K_f32_r1_3 0x12a22e860 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_K_f32_r1_4 0x12a22eec0 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q4_K_f32_r1_5 0x12a22f520 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_K_f32_r1_2 0x12a22fb80 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_K_f32_r1_3 0x12a22fde0 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_K_f32_r1_4 0x12a230470 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q5_K_f32_r1_5 0x12a2306d0 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q6_K_f32_r1_2 0x12a230f10 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q6_K_f32_r1_3 0x12a231570 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q6_K_f32_r1_4 0x12a231bd0 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_q6_K_f32_r1_5 0x12a232230 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_iq4_nl_f32_r1_2 0x12a232890 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_iq4_nl_f32_r1_3 0x12a232ef0 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_iq4_nl_f32_r1_4 0x12a233550 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_ext_iq4_nl_f32_r1_5 0x12a233bb0 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_q2_K_f32 0x12a234160 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_q3_K_f32 0x12a234710 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_q4_K_f32 0x12a234cc0 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_q5_K_f32 0x12a235270 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_q6_K_f32 0x12a235820 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_iq2_xxs_f32 0x12a235dd0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_iq2_xs_f32 0x12a236380 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_iq3_xxs_f32 0x12a236930 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_iq3_s_f32 0x12a236ee0 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_iq2_s_f32 0x12a237490 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_iq1_s_f32 0x12a237c90 | th_max = 448 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_iq1_m_f32 0x12a238240 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_iq4_nl_f32 0x12a2387f0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_iq4_xs_f32 0x12a238da0 | th_max = 896 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_f32_f32 0x12a239350 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_f16_f32 0x12a239900 | th_max = 1024 | th_width = 32
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported)
ggml_metal_init: loaded kernel_mul_mv_id_q4_0_f32 0x12a239eb0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_q4_1_f32 0x12a23a460 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_q5_0_f32 0x12a23aa10 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_q5_1_f32 0x12a23afc0 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_q8_0_f32 0x12a23b570 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_q2_K_f32 0x12a23bb20 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_q3_K_f32 0x12a23c0d0 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_q4_K_f32 0x12a23c680 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_q5_K_f32 0x12a23cc30 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_q6_K_f32 0x12a23d1e0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_iq2_xxs_f32 0x12a23d790 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_iq2_xs_f32 0x12a23dd40 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_iq3_xxs_f32 0x12a23e2f0 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_iq3_s_f32 0x12a23e8a0 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_iq2_s_f32 0x12a23ee50 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_iq1_s_f32 0x12a23f400 | th_max = 448 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_iq1_m_f32 0x12a23f9b0 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_iq4_nl_f32 0x12a23ff60 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mv_id_iq4_xs_f32 0x12a240510 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_f32_f32 0x12a240ac0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_f16_f32 0x12a241070 | th_max = 832 | th_width = 32
ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported)
ggml_metal_init: loaded kernel_mul_mm_q4_0_f32 0x12a241620 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_q4_1_f32 0x12a241bd0 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_q5_0_f32 0x12a242180 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_q5_1_f32 0x12a242730 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_q8_0_f32 0x12a242ce0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_q2_K_f32 0x12a243290 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_q3_K_f32 0x12a243840 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_q4_K_f32 0x12a243df0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_q5_K_f32 0x12a2443a0 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_q6_K_f32 0x12a244950 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_iq2_xxs_f32 0x12a244f00 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_iq2_xs_f32 0x12a2454b0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_iq3_xxs_f32 0x12a245a60 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_iq3_s_f32 0x12a246010 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_iq2_s_f32 0x12a2465c0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_iq1_s_f32 0x12a246b70 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_iq1_m_f32 0x12a247120 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_iq4_nl_f32 0x12a2476d0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_iq4_xs_f32 0x12a247c80 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_f32_f32 0x12a248230 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_f16_f32 0x12a2487e0 | th_max = 832 | th_width = 32
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported)
ggml_metal_init: loaded kernel_mul_mm_id_q4_0_f32 0x12a248d90 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_q4_1_f32 0x12a249340 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_q5_0_f32 0x12a2498f0 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_q5_1_f32 0x12a249ea0 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_q8_0_f32 0x12a24a450 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_q2_K_f32 0x12a24aa00 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_q3_K_f32 0x12a24afb0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_q4_K_f32 0x12a24b560 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_q5_K_f32 0x12a24bb10 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_q6_K_f32 0x12a24c0c0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_iq2_xxs_f32 0x12a24c670 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_iq2_xs_f32 0x12a24cc20 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_iq3_xxs_f32 0x12a24d1d0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_iq3_s_f32 0x12a24d780 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_iq2_s_f32 0x12a24dd30 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_iq1_s_f32 0x12a24e2e0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_iq1_m_f32 0x12a24e890 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_iq4_nl_f32 0x12a24ee40 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_mul_mm_id_iq4_xs_f32 0x12a24f3f0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_rope_norm_f32 0x12a24fa50 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_rope_norm_f16 0x12a2500b0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_rope_neox_f32 0x12a250710 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_rope_neox_f16 0x12a250d70 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_im2col_f16 0x12a2512d0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_im2col_f32 0x12a251830 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_im2col_ext_f16 0x12a251d90 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_im2col_ext_f32 0x12a2522f0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_conv_transpose_1d_f32_f32 0x12a252850 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_conv_transpose_1d_f16_f32 0x12a252db0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_upscale_f32 0x12a253310 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_pad_f32 0x12a253870 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_pad_reflect_1d_f32 0x12a253dd0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_timestep_embedding_f32 0x12a254330 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_arange_f32 0x12a254890 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_argsort_f32_i32_asc 0x12a254df0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_argsort_f32_i32_desc 0x12a255350 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_leaky_relu_f32 0x12a255bc0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_f16_h64 0x12a256220 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_f16_h80 0x12a256880 | th_max = 640 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_f16_h96 0x12a256ee0 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_f16_h112 0x12a257540 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_f16_h128 0x12a257ba0 | th_max = 512 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_f16_h256 0x12a258200 | th_max = 512 | th_width = 32
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported)
ggml_metal_init: loaded kernel_flash_attn_ext_q4_0_h64 0x12a258860 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q4_0_h80 0x12a258ec0 | th_max = 896 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q4_0_h96 0x12a259520 | th_max = 896 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q4_0_h112 0x12a259b80 | th_max = 896 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q4_0_h128 0x12a25a1e0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q4_0_h256 0x12a25a840 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q4_1_h64 0x12a25aea0 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q4_1_h80 0x12a25b500 | th_max = 896 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q4_1_h96 0x12a25bb60 | th_max = 896 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q4_1_h112 0x12a25c3a0 | th_max = 896 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q4_1_h128 0x12a25ca00 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q4_1_h256 0x12a25d060 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_0_h64 0x12a25d6c0 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_0_h80 0x12a25dd20 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_0_h96 0x12a25e380 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_0_h112 0x12a25e9e0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_0_h128 0x12a25f040 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_0_h256 0x12a25f6a0 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_1_h64 0x12a25fd00 | th_max = 576 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_1_h80 0x12a260360 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_1_h96 0x12a2609c0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_1_h112 0x12a261020 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_1_h128 0x12a261680 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q5_1_h256 0x12a261ce0 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q8_0_h64 0x12a262340 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q8_0_h80 0x12a2629a0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q8_0_h96 0x12a263000 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q8_0_h112 0x12a263660 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q8_0_h128 0x12a263cc0 | th_max = 896 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_q8_0_h256 0x12a264320 | th_max = 896 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_vec_f16_h128 0x12a264980 | th_max = 1024 | th_width = 32
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported)
ggml_metal_init: loaded kernel_flash_attn_ext_vec_q4_0_h128 0x12a264fe0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_vec_q4_1_h128 0x12a265640 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_vec_q5_0_h128 0x12a265ca0 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_vec_q5_1_h128 0x12a266300 | th_max = 768 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_vec_q8_0_h128 0x12a266960 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_vec_f16_h256 0x12a266fc0 | th_max = 1024 | th_width = 32
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported)
ggml_metal_init: loaded kernel_flash_attn_ext_vec_q4_0_h256 0x12a267620 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_vec_q4_1_h256 0x12a267c80 | th_max = 896 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_vec_q5_0_h256 0x12a2682e0 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_vec_q5_1_h256 0x12a268940 | th_max = 704 | th_width = 32
ggml_metal_init: loaded kernel_flash_attn_ext_vec_q8_0_h256 0x12a268fa0 | th_max = 832 | th_width = 32
ggml_metal_init: loaded kernel_set_f32 0x12a269550 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_set_i32 0x12a269b00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_f32_f32 0x12a26a0b0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_f32_f16 0x12a26a660 | th_max = 1024 | th_width = 32
ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported)
ggml_metal_init: loaded kernel_cpy_f16_f32 0x12a26ac10 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_f16_f16 0x12a26b1c0 | th_max = 1024 | th_width = 32
ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported)
ggml_metal_init: loaded kernel_cpy_f32_q8_0 0x12a26b770 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_f32_q4_0 0x12a26bd20 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_f32_q4_1 0x12a26c2d0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_f32_q5_0 0x12a26c880 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_f32_q5_1 0x12a26ce30 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_f32_iq4_nl 0x12a26d3e0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_concat 0x12a26da40 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sqr 0x12a26e220 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sqrt 0x12a26ea00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sin 0x12a26f1e0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cos 0x12a26f9c0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sum_rows 0x12a26ff20 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_argmax 0x12a270480 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_pool_2d_avg_f32 0x12a2709e0 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_pool_2d_max_f32 0x12a270f40 | th_max = 1024 | th_width = 32
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 22, can_shift = 1
llama_kv_cache_init: layer 0: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 1: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 2: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 3: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 4: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 5: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 6: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 7: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 8: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 9: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 10: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 11: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 12: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 13: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 14: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 15: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 16: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 17: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 18: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 19: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 20: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: layer 21: n_embd_k_gqa = 256, n_embd_v_gqa = 256
llama_kv_cache_init: Metal KV buffer size = 44.00 MiB
llama_init_from_model: KV self size = 44.00 MiB, K (f16): 22.00 MiB, V (f16): 22.00 MiB
llama_init_from_model: CPU output buffer size = 0.12 MiB
llama_init_from_model: Metal compute buffer size = 148.00 MiB
llama_init_from_model: CPU compute buffer size = 8.01 MiB
llama_init_from_model: graph nodes = 710
llama_init_from_model: graph splits = 2
[1] 41777 segmentation fault ./build/bin/llama-run -v tinyllama