Add Intel Advanced Matrix Extensions (AMX) support to ggml
- [x] I have read the contributing guidelines
- Self-reported review complexity:
- [ ] Low
- [x] Medium
- [ ] High
Replacement of https://github.com/ggerganov/llama.cpp/pull/7707, opened to trigger ggml-ci on AMX.
To trigger ggml-ci you need to include the string "ggml-ci" somewhere in the commit message. For example: https://github.com/ggerganov/llama.cpp/commit/5ef07e25ac39e62297a67208c5bcced50835a2dd
@ggerganov could you please take a look at this one? I have moved the AMX init code from ggml.c to ggml-amx/mmq.cpp according to the previous comments.
@slaren just updated the CMake compiler options: -mamx-tile, -mamx-int8, and -mamx-bf16!
@ggerganov could you please tell me why this CI failed? https://github.com/ggml-org/ci/tree/results/llama.cpp/28/cfc0ffdbfec9520a2c190d57025350229d340c/ggml-4-x86-cuda-v100
Exit code 124 means that the run timed out. There is currently a limit of 30 minutes for these runs, and in this case it was exceeded.
It's not related to this PR - the CUDA build time has recently increased, so this run times out from time to time. I just restarted it to see if it passes.
@slaren changing is_host to false for the AMX backend leads to a failure in ggml_backend_sched_backend_id_from_cur (log attached below). Do you have any insight into how to fix it?
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: AMX KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.49 MiB
ggml/src/ggml-backend.c:1204: pre-allocated tensor in a backend that cannot run the operation
[New LWP 2746117]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f7d1a7205a2 in waitpid () from /lib64/libpthread.so.0
#0 0x00007f7d1a7205a2 in waitpid () from /lib64/libpthread.so.0
#1 0x000000000048a648 in ggml_print_backtrace () at ggml/src/ggml.c:282
282 waitpid(pid, &wstatus, 0);
#2 ggml_abort (file=file@entry=0x6788d4 "ggml/src/ggml-backend.c", line=line@entry=1204, fmt=fmt@entry=0x678c50 "pre-allocated tensor in a backend that cannot run the operation") at ggml/src/ggml.c:309
309 ggml_print_backtrace();
#3 0x00000000004cf025 in ggml_backend_sched_backend_id_from_cur (sched=0x172fd20, tensor=0x5305e10) at ggml/src/ggml-backend.c:1204
1204 GGML_ABORT("pre-allocated tensor in a backend that cannot run the operation");
#4 0x00000000004d127c in ggml_backend_sched_split_graph (sched=sched@entry=0x172fd20, graph=graph@entry=0x1ada190) at ggml/src/ggml-backend.c:1337
1337 *leaf_backend_id = ggml_backend_sched_backend_id_from_cur(sched, leaf);
#5 0x00000000004d2dde in ggml_backend_sched_split_graph (graph=0x1ada190, sched=<optimized out>) at ggml/src/ggml-backend.c:1327
1327 if (sched->ctx == NULL) {
#6 ggml_backend_sched_reserve (sched=0x172fd20, measure_graph=0x1ada190) at ggml/src/ggml-backend.c:1992
1992 ggml_backend_sched_split_graph(sched, measure_graph);
#7 0x000000000053204b in llama_new_context_with_model (model=0x1729f30, params=...) at src/llama.cpp:19176
19176 if (!ggml_backend_sched_reserve(ctx->sched, gf)) {
#8 0x000000000060a48d in llama_init_from_gpt_params (params=...) at common/common.cpp:843
843 llama_context * lctx = llama_new_context_with_model(model, cparams);
#9 0x000000000043894d in main (argc=<optimized out>, argv=0x7fffb9a8de18) at examples/main/main.cpp:200
200 llama_init_result llama_init = llama_init_from_gpt_params(params);
[Inferior 1 (process 2746116) detached]
./run_generate_cpu.sh: line 26: 2746116 Aborted (core dumped) $PREFIX $main -m ./models/Meta-Llama-3-8B-Instruct-GGUF/$MODEL -t $CORES -n 5 -p "$prompt" --no-mmap
I think that may be related to the KV operations, which should be fixed with the change I suggested before. Making llama_default_buffer_type_offload return the AMX buffer type causes the KV cache to be allocated on an AMX buffer, which is not good. If that doesn't fix it, please add some prints to show the tensor that is causing the error.
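Something along these lines, placed right before the abort in ggml_backend_sched_backend_id_from_cur, should be enough (just a sketch, adjust as needed):

```cpp
// Hypothetical debug print: show which tensor the scheduler failed to place
// before hitting the abort, so the log identifies the offending op and buffer.
fprintf(stderr, "%s: cannot place pre-allocated tensor '%s' (op = %s, buffer = %s)\n",
        __func__, tensor->name, ggml_op_name(tensor->op),
        tensor->buffer ? ggml_backend_buffer_name(tensor->buffer) : "view");
GGML_ABORT("pre-allocated tensor in a backend that cannot run the operation");
```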
@slaren could you please review this one again? I just changed ggml_backend_buft_is_host to return false for the AMX backend.
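For reference, the change is essentially the buffer type's is_host callback reporting false (a sketch of the idea; the function name below is illustrative, not necessarily the exact one in the diff):

```cpp
// Sketch: the AMX buffer type reports that its buffers are not plain host
// memory, so the scheduler no longer assumes other backends can use them directly.
static bool ggml_backend_amx_buffer_type_is_host(ggml_backend_buffer_type_t buft) {
    GGML_UNUSED(buft);
    return false;
}
```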
Looks good to me, feel free to merge this at any point.
@slaren Thank you for the detailed review.
@mingfeima Remember to squash the commits when merging as explained in the contributing guidelines. Btw, I just restarted the ggml-ci node with AMX instruction set support, so we might want to wait for ggml-ci to run before merging. Will run it on this branch shortly.
Edit: the AMX CI has passed
Would recommend using 4 spaces for indentation for conformance with the rest of the codebase.
@ggerganov changed the indentation to 4 spaces. Also, the branch has been rebased and squashed into one commit.
Nice, great job! Feel free to merge this - you should have access to do so.
@slaren thanks a lot for your review!
@mingfeima
Thank you for your work. I have a question.
qtype_has_amx_kernels() returns true only for Q4_0 and Q4_1, so does that mean AMX acceleration is only enabled for Q4_0 and Q4_1 models?
Are there any plans to support Q4_K or other types?
inline bool qtype_has_amx_kernels(const enum ggml_type type) {
    // TODO: fix padding for vnni format
    return (type == GGML_TYPE_Q4_0) ||
           (type == GGML_TYPE_Q4_1);
    //(type == GGML_TYPE_Q8_0) ||
    //(type == GGML_TYPE_Q4_K) ||
    //(type == GGML_TYPE_Q5_K) ||
    //(type == GGML_TYPE_Q6_K) ||
    //(type == GGML_TYPE_IQ4_XS);
}
@mingfeima I have an additional question. When I tried the latest llama.cpp in my environment, the prompt eval time became 2x faster compared to a commit without AMX support, but the eval time (text generation speed) remained almost unchanged. (I tried it with gemma-2b-q4_0 on an Intel Xeon with AMX.)
In your original PR, it was reported that the eval time also became 2x faster.
Do you have any idea why the eval time did not become faster?
@nai-kon originally I also wrote kernels for
//(type == GGML_TYPE_Q8_0) ||
//(type == GGML_TYPE_Q4_K) ||
//(type == GGML_TYPE_Q5_K) ||
//(type == GGML_TYPE_Q6_K) ||
//(type == GGML_TYPE_IQ4_XS);
but later on these were disabled, because I did not figure out how to do the padding with ggml-backend (these formats require additional padding for the packed format). Anyway, I suppose it can be done; I just don't have spare time for this at the moment.
As for generation performance, I suppose this is because the original CPU implementation in ggml-quants has also been improved (the old code base had a huge thread-sync overhead, using pthreads with atomics; later on, when OpenMP was adopted, the overhead became much smaller). Additionally, the current version of the AMX kernels is actually slower than my first version, since some optimizations were removed during the rebase onto ggml-backend. My estimate is that the current AMX kernels still have a ~1/3 gap to optimal.
The work here is a personal interest of mine (not a company task); I will get back to it once I have spare time (adding back the kernels for the other quant types and improving the performance).
Thank you for your reply. It sounds wonderful that the generation speed can still be improved. I truly appreciate your personal effort and work.
> Additionally, the current version of the AMX kernels is actually slower than my first version, since some optimizations were removed during the rebase onto ggml-backend.
Did you find any issue with the ggml-backend interface that forced you to remove these optimizations? The plan is to reintegrate the AMX backend into the CPU backend in the future (#10359), which may eliminate some overheads and allow using the optimized q8_0 quantization functions again.
@slaren the major problem I had with ggml-backend is that I didn't figure out how to do padding with the AMX backend (when the packed weight for AMX, e.g. the VNNI format, has a different memory size than in the default CPU backend). I ran into a couple of issues when integrating the AMX backend with ggml-backend, so I just left in the dtypes that do not require padding. Again, I think it should be doable elegantly; I just have not had the time to investigate recently.
Performance-wise, the TODOs on my side are: fuse the quantization of A into the GEMM loop; replace the Q8_K quantization method, which is currently a reference implementation (very slow); and so on.
@mingfeima as far as I can tell, you were already doing everything that's necessary to allow padding (returning the padded size in get_alloc_size is enough). I intend to move the AMX code to the CPU backend and enable all the supported types in #10570.
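Roughly, all that is needed is something along these lines (a minimal sketch; ggml_backend_amx_get_vnni_size is a hypothetical helper for computing the packed VNNI size, not an existing function):

```cpp
// Sketch: report the (possibly larger) packed size to the allocator, so the
// buffer reserves enough room for the repacked/padded VNNI weight layout.
static size_t ggml_backend_amx_buffer_type_get_alloc_size(
        ggml_backend_buffer_type_t buft, const struct ggml_tensor * tensor) {
    GGML_UNUSED(buft);
    if (qtype_has_amx_kernels(tensor->type)) {
        // hypothetical helper: size of the weight after packing to the AMX/VNNI format
        return ggml_backend_amx_get_vnni_size(tensor);
    }
    return ggml_nbytes(tensor); // fall back to the regular size for other types
}
```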