
Serialize Executables crashing when compiling LLaMa on async-cpu

Open rsuderman opened this issue 1 year ago • 21 comments

The following dispatches appear to cause a crash when compiling a llama model. Unrolling / vectorization produces 20K+ lines of generated code, which likely causes the final LLVM compilation to fail entirely.

module_prefill_bs4$async_dispatch_1.zip module_decode_bs4$async_dispatch_2.zip

rsuderman avatar Apr 30 '24 22:04 rsuderman

It appears the issue is in LLVMCPUVectorTransferLowering. There is a full unrolling making the dispatch rather unruly.

rsuderman avatar Apr 30 '24 23:04 rsuderman

> It appears the issue is in LLVMCPUVectorTransferLowering. There is a full unrolling making the dispatch rather unruly.

The unrolling is needed because the LLVM backend wants 1-D vectors. The issue could be in tile size selection, and vector shape optimization could potentially help with it.
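To make the blow-up concrete, here is an illustrative sketch (not taken from the actual dispatch; names and shapes are made up) of what unrolling a 2-D transfer into the 1-D vectors the LLVM backend expects looks like:

```mlir
// Illustrative only: a 2-D vector transfer is unrolled into one 1-D
// transfer per row so the LLVM backend only sees 1-D vectors.
%v = vector.transfer_read %t[%c0, %c0], %pad : tensor<4x8xf32>, vector<4x8xf32>
// ... becomes, roughly:
%r0 = vector.transfer_read %t[%c0, %c0], %pad : tensor<4x8xf32>, vector<8xf32>
%r1 = vector.transfer_read %t[%c1, %c0], %pad : tensor<4x8xf32>, vector<8xf32>
// ... and so on for %r2, %r3.
```

For a small `vector<4x8xf32>` this is 4 ops; for a `vector<32000x3200xf16>` it explodes into tens of thousands, which is where the 20K+ lines come from.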

hanhanW avatar Apr 30 '24 23:04 hanhanW

Some additional guilty lines:

```mlir
%12 = vector.transfer_read %10[%c0, %c0], %cst_2 {in_bounds = [true, true]} : tensor<32000x3200xf16>, vector<32000x3200xf16>
%13 = arith.extf %12 : vector<32000x3200xf16> to vector<32000x3200xf32>
%14 = vector.transfer_write %13, %11[%c0, %c0] {in_bounds = [true, true]} : vector<32000x3200xf32>, tensor<32000x3200xf32>
```

If we defer the unrolling of the vector.transfer_write, we end up with the arith.extf unrolling inside convert-to-llvm. I would expect generic vectorization to generate an actual loop of vector instructions instead of having the whole operation unroll during LLVM generation.
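For reference, the "actual loop of vector instructions" might look roughly like the following (a hypothetical sketch only; the SSA names `%10`/`%11` are reused from the snippet above, and the per-row tiling is an assumption, not compiler output):

```mlir
// Hypothetical: iterate over rows carrying the result tensor, reading,
// extending, and writing one 1-D vector<3200xf16> slice per iteration
// instead of materializing a single vector<32000x3200xf16>.
%result = scf.for %i = %c0 to %c32000 step %c1 iter_args(%acc = %11) -> (tensor<32000x3200xf32>) {
  %row = vector.transfer_read %10[%i, %c0], %cst_2 {in_bounds = [true]} : tensor<32000x3200xf16>, vector<3200xf16>
  %ext = arith.extf %row : vector<3200xf16> to vector<3200xf32>
  %out = vector.transfer_write %ext, %acc[%i, %c0] {in_bounds = [true]} : vector<3200xf32>, tensor<32000x3200xf32>
  scf.yield %out : tensor<32000x3200xf32>
}
```

Each iteration then lowers to a bounded number of 1-D vector ops, rather than the full unrolling happening at LLVM conversion time.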

rsuderman avatar May 01 '24 19:05 rsuderman

Okay, so this is similar to what I'm seeing in https://github.com/iree-org/iree/issues/17226#issuecomment-2087747095

IMO, we should not fuse these two generic ops. TileAndFuse is basically broken for this case; there is no dependency captured by the operands. I'll talk to Mahesh to see if we can disable such fusion.

hanhanW avatar May 01 '24 21:05 hanhanW

@pashu123 please help take a look if there are other issues, apart from the fusion issue.

hanhanW avatar May 01 '24 23:05 hanhanW

Do we have a workaround for this or any patches we could try?

I'm also seeing unusably slow behavior after running LLVMCPUVectorTransferLowering on open_llama_3b_v2_f16_gguf from https://github.com/nod-ai/sharktank. Logs and IR here: https://gist.github.com/ScottTodd/17734adbbd570dbfa3d275c8c7a8e9a9

ScottTodd avatar May 07 '24 19:05 ScottTodd

Perhaps you can try https://github.com/llvm/torch-mlir/pull/3277. It should fix the embedding lookup issue at the torch level.

hanhanW avatar May 07 '24 21:05 hanhanW

> Perhaps you can try llvm/torch-mlir#3277. It should fix the embedding lookup issue at the torch level.

That gets further, yeah :D. Might be enough to call this particular issue fixed?

I do see another error with `iree-compile open_llama_3b_v2_f16.mlir --iree-hal-target-backends=llvm-cpu -o /tmp/open_llama_3b_v2_f16_cpu.vmfb`:

```
failed to legalize operation 'arith.extui'
note: see current operation: %1401 = "arith.extui"(%1398) : (i1) -> i64
```

pretty late in compilation: https://gist.github.com/ScottTodd/6fbe7edd118bbb53c0abc2582459158d

ScottTodd avatar May 07 '24 21:05 ScottTodd

> That gets further, yeah :D. Might be enough to call this particular issue fixed?

There is an action item at LinAlg level: https://github.com/iree-org/iree/issues/17226#issuecomment-2093718610

> I do see another error with `iree-compile open_llama_3b_v2_f16.mlir --iree-hal-target-backends=llvm-cpu -o /tmp/open_llama_3b_v2_f16_cpu.vmfb`

@ScottTodd can you provide the mlir file? @pashu123 please help triage and provide possible solutions

hanhanW avatar May 07 '24 22:05 hanhanW

> @ScottTodd can you provide the mlir file? @pashu123 please help triage and provide possible solutions

This is the input file I'm working with: https://sharkpublic.blob.core.windows.net/sharkpublic/scotttodd/issue_reports/open_llama_3b_v2_f16.mlir

ScottTodd avatar May 07 '24 22:05 ScottTodd

@ScottTodd I think you should add the `-iree-opt-demote-i64-to-i32` flag. Meanwhile, I'll double-check this.
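Combined with the invocation from earlier in the thread, that would be something like (paths as in ScottTodd's command; untested here):

```shell
iree-compile open_llama_3b_v2_f16.mlir \
  --iree-hal-target-backends=llvm-cpu \
  --iree-opt-demote-i64-to-i32 \
  -o /tmp/open_llama_3b_v2_f16_cpu.vmfb
```

Demoting i64 to i32 should sidestep the `arith.extui` (i1 -> i64) legalization failure above.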

pashu123 avatar May 08 '24 02:05 pashu123

> `-iree-opt-demote-i64-to-i32`

Verified that adding this generates the .vmfb.

pashu123 avatar May 08 '24 03:05 pashu123

After thinking about it for a while, I think we can close this issue. The action item I mentioned is tracked in the other issue, and we have no remaining action items for this one.

hanhanW avatar May 09 '24 17:05 hanhanW

This issue is blocking another model on the onnx front.

rsuderman avatar May 20 '24 17:05 rsuderman

> This issue is blocking another model on the onnx front.

Yes, the model is RAFT_vaiq_int8

I added some information to issue #17226.

zjgarvey avatar May 20 '24 17:05 zjgarvey

I think we only need to track it in one of the issues? So either we can close this or the other one.

hanhanW avatar May 20 '24 18:05 hanhanW

> I think we only need to track it in one of the issues? So either we can close this or the other one.

Yeah, that's why I opted to provide more information there. I can't close this issue because I don't have permissions.

zjgarvey avatar May 20 '24 18:05 zjgarvey

Wherever the issue is tracked, can we follow up and get fixes or patches landed? I've needed to keep patching https://github.com/llvm/torch-mlir/pull/3277 locally as a workaround for compilation crashes for several weeks now.

ScottTodd avatar May 20 '24 18:05 ScottTodd

Actually, I think we already landed a more robust fix: https://github.com/iree-org/iree/commit/748db3113727de390a4f0a008c9dab3373e33b86 is in. It should create valid IR for codegen input. @ScottTodd could you verify whether the commit fixes the issue?

hanhanW avatar May 20 '24 18:05 hanhanW

> Actually, I think we already landed a more robust fix: 748db31 is in. It should create valid IR for codegen input. @ScottTodd could you verify whether the commit fixes the issue?

Thanks! I haven't seen issues lately without the torch-mlir commit, but I'm still juggling a few other flags (notably `--iree-opt-strip-assertions`) and patches.

ScottTodd avatar May 22 '24 19:05 ScottTodd

After circling back from https://github.com/iree-org/iree/pull/17341#issuecomment-2121170501, I think we still need that torch-mlir patch. I will address the comments there.

pashu123 avatar May 23 '24 06:05 pashu123