Serialize Executables crashing when compiling LLaMa on async-cpu
The following dispatches appear to cause a crash when compiling a LLaMA model. Unrolling/vectorization produces 20K+ lines of generated code, which likely causes the final LLVM compilation to fail completely.
module_prefill_bs4$async_dispatch_1.zip module_decode_bs4$async_dispatch_2.zip
It appears the issue is in LLVMCPUVectorTransferLowering. There is a full unrolling making the dispatch rather unruly.
The unrolling is needed because the LLVM backend wants 1-D vectors. The issue could be in tile size selection, and vector shape optimization could potentially help with it.
Some additional guilty lines:

```mlir
%12 = vector.transfer_read %10[%c0, %c0], %cst_2 {in_bounds = [true, true]} : tensor<32000x3200xf16>, vector<32000x3200xf16>
%13 = arith.extf %12 : vector<32000x3200xf16> to vector<32000x3200xf32>
%14 = vector.transfer_write %13, %11[%c0, %c0] {in_bounds = [true, true]} : vector<32000x3200xf32>, tensor<32000x3200xf32>
```
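For a sense of what that unrolling turns into, here is a scaled-down sketch (hypothetical 4x8 shapes and names, not taken from the dispatch) of the IR shape once everything is flattened to 1-D vectors: one read/extf/write triple per row. At 32000 rows, this is where the 20K+ lines of generated code come from.

```mlir
// Scaled-down illustration of a full unroll to 1-D vectors (hypothetical shapes).
func.func @unrolled_sketch(%src: tensor<4x8xf16>, %dst: tensor<4x8xf32>) -> tensor<4x8xf32> {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  %pad = arith.constant 0.0 : f16
  // Row 0.
  %r0 = vector.transfer_read %src[%c0, %c0], %pad {in_bounds = [true]} : tensor<4x8xf16>, vector<8xf16>
  %e0 = arith.extf %r0 : vector<8xf16> to vector<8xf32>
  %w0 = vector.transfer_write %e0, %dst[%c0, %c0] {in_bounds = [true]} : vector<8xf32>, tensor<4x8xf32>
  // Row 1.
  %r1 = vector.transfer_read %src[%c1, %c0], %pad {in_bounds = [true]} : tensor<4x8xf16>, vector<8xf16>
  %e1 = arith.extf %r1 : vector<8xf16> to vector<8xf32>
  %w1 = vector.transfer_write %e1, %w0[%c1, %c0] {in_bounds = [true]} : vector<8xf32>, tensor<4x8xf32>
  // ... and so on for every remaining row.
  return %w1 : tensor<4x8xf32>
}
```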
If we defer the unrolling of the `vector.transfer_write`, the `arith.extf` instead gets unrolled inside convert-to-llvm. I would imagine generic vectorization should generate an actual loop of vector instructions instead of the whole operation being unrolled during LLVM generation.
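By contrast, here is a minimal sketch of the looped form one might hope for instead (illustrative shapes and names, not actual pass output): an `scf.for` walks the rows and each iteration only touches a single small 1-D vector.

```mlir
func.func @looped_sketch(%src: tensor<32000x3200xf16>,
                         %dst: tensor<32000x3200xf32>) -> tensor<32000x3200xf32> {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  %c32000 = arith.constant 32000 : index
  %pad = arith.constant 0.0 : f16
  // Walk the rows with a loop; each iteration only materializes one 1-D
  // vector (a real lowering would likely use a narrower, target-native width).
  %res = scf.for %row = %c0 to %c32000 step %c1
      iter_args(%acc = %dst) -> (tensor<32000x3200xf32>) {
    %v = vector.transfer_read %src[%row, %c0], %pad {in_bounds = [true]}
           : tensor<32000x3200xf16>, vector<3200xf16>
    %e = arith.extf %v : vector<3200xf16> to vector<3200xf32>
    %w = vector.transfer_write %e, %acc[%row, %c0] {in_bounds = [true]}
           : vector<3200xf32>, tensor<32000x3200xf32>
    scf.yield %w : tensor<32000x3200xf32>
  }
  return %res : tensor<32000x3200xf32>
}
```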
Okay, so this is similar to what I'm seeing in https://github.com/iree-org/iree/issues/17226#issuecomment-2087747095
IMO, we should not fuse these two generic ops. TileAndFuse is basically broken for this case: there is no dependency captured by the operands. I'll talk to Mahesh to see if we can disable such fusion.
@pashu123 please help take a look and check whether there are other issues apart from the fusion issue.
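To make the fusion concern concrete, here is a hypothetical reduction of the kind of pattern being described (not the IR from the actual dispatch): the first `linalg.generic` upcasts the whole table, while the second only reaches the upcast result through `tensor.extract` inside its body, so there is no operand / indexing-map edge for TileAndFuse to propagate tile sizes along.

```mlir
#map2d = affine_map<(d0, d1) -> (d0, d1)>
#map1d = affine_map<(d0, d1) -> (d0)>

func.func @gather_sketch(%table: tensor<32000x3200xf16>,
                         %ids: tensor<128xi64>) -> tensor<128x3200xf32> {
  // Generic #1: upcast the entire 32000x3200 table to f32.
  %table_init = tensor.empty() : tensor<32000x3200xf32>
  %table_f32 = linalg.generic
      {indexing_maps = [#map2d, #map2d],
       iterator_types = ["parallel", "parallel"]}
      ins(%table : tensor<32000x3200xf16>)
      outs(%table_init : tensor<32000x3200xf32>) {
  ^bb0(%in: f16, %out: f32):
    %e = arith.extf %in : f16 to f32
    linalg.yield %e : f32
  } -> tensor<32000x3200xf32>

  // Generic #2: gather rows by id. The upcast table is only reached through
  // tensor.extract inside the body, so no operand ties the two generics together.
  %out_init = tensor.empty() : tensor<128x3200xf32>
  %gathered = linalg.generic
      {indexing_maps = [#map1d, #map2d],
       iterator_types = ["parallel", "parallel"]}
      ins(%ids : tensor<128xi64>)
      outs(%out_init : tensor<128x3200xf32>) {
  ^bb0(%id: i64, %out: f32):
    %row = arith.index_cast %id : i64 to index
    %col = linalg.index 1 : index
    %val = tensor.extract %table_f32[%row, %col] : tensor<32000x3200xf32>
    linalg.yield %val : f32
  } -> tensor<128x3200xf32>
  return %gathered : tensor<128x3200xf32>
}
```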
Do we have a workaround for this or any patches we could try?
I'm also seeing unusably slow behavior after running LLVMCPUVectorTransferLowering on open_llama_3b_v2_f16_gguf from https://github.com/nod-ai/sharktank. Logs and IR here: https://gist.github.com/ScottTodd/17734adbbd570dbfa3d275c8c7a8e9a9
Perhaps you can try https://github.com/llvm/torch-mlir/pull/3277. It should fix the embedding lookup issue at the torch level.
That gets further, yeah :D. Might be enough to call this particular issue fixed?
I do see another error with `iree-compile open_llama_3b_v2_f16.mlir --iree-hal-target-backends=llvm-cpu -o /tmp/open_llama_3b_v2_f16_cpu.vmfb`:

```
failed to legalize operation 'arith.extui'
note: see current operation: %1401 = "arith.extui"(%1398) : (i1) -> i64
```

This happens pretty late in compilation: https://gist.github.com/ScottTodd/6fbe7edd118bbb53c0abc2582459158d
> That gets further, yeah :D. Might be enough to call this particular issue fixed?
There is an action item at the Linalg level: https://github.com/iree-org/iree/issues/17226#issuecomment-2093718610
> I do see another error with `iree-compile open_llama_3b_v2_f16.mlir --iree-hal-target-backends=llvm-cpu -o /tmp/open_llama_3b_v2_f16_cpu.vmfb`
@ScottTodd can you provide the mlir file? @pashu123 please help triage and provide possible solutions
This is the input file I'm working with: https://sharkpublic.blob.core.windows.net/sharkpublic/scotttodd/issue_reports/open_llama_3b_v2_f16.mlir
@ScottTodd I think you should add the `-iree-opt-demote-i64-to-i32` flag. Meanwhile, I'll double-check this.
Verified: adding `-iree-opt-demote-i64-to-i32` generates a .vmfb.
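For reference, the full invocation would then look something like the following (assuming the flag simply composes with the command above):

```shell
iree-compile open_llama_3b_v2_f16.mlir \
  --iree-hal-target-backends=llvm-cpu \
  --iree-opt-demote-i64-to-i32 \
  -o /tmp/open_llama_3b_v2_f16_cpu.vmfb
```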
After thinking about it for a while, I think we can close this issue. The action item I mentioned is tracked in the other issue, and we don't have action items left for this one.
This issue is blocking another model on the onnx front.
Yes, the model is RAFT_vaiq_int8
I added some information to issue #17226.
I think we only need to track it in one of the issues? So either we can close this or the other one.
Yeah, that's why I opted to provide more information there. I can't close this issue because I don't have permissions.
Wherever the issue is tracked, can we follow up and get fixes or patches landed? I've needed to keep patching https://github.com/llvm/torch-mlir/pull/3277 in locally as a workaround for compilation crashes for several weeks now.
Actually, I think we already landed a more robust fix: https://github.com/iree-org/iree/commit/748db3113727de390a4f0a008c9dab3373e33b86. It should create valid IR for the codegen input. @ScottTodd, could you verify whether the commit fixes the issue?
Thanks! I haven't seen issues lately without the torch-mlir commit but I'm still juggling a few other flags (notably --iree-opt-strip-assertions) and patches.
After circling back from https://github.com/iree-org/iree/pull/17341#issuecomment-2121170501, I think we need that torch-mlir patch. I will address the comments there.