
[Codegen] Set alignment attributes over vectorized memory accesses

Open kuhar opened this issue 9 months ago • 6 comments

Hardware like amdgpu generally prefers wide memory accesses (e.g., dwordx4 or b128), but these only get selected by the LLVM backend when the alignment is known and sufficiently large. LLVM can infer the alignment in some cases, but it is not guaranteed to always do the best job possible.

For example, consider this pseudo-IR:

%lds = memref.alloc() : memref<2x10xf16, #gpu.address_space<workgroup>>
%x = vector.load %lds[%a, %b] : memref<2x10xf16, #gpu.address_space<workgroup>>, vector<8xf16>

Ideally, we would like to assume 16-byte alignment (8 * 2 bytes), but this requires knowing:

  1. That the alignment of the allocation itself is at least 16
  2. That the memory access pattern guarantees %b to be a multiple of 8

The accessed type being vector<8xf16> is, by itself, not enough to infer that the alignment is 16.
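To illustrate (reusing %lds from above; the constant indices are just for this sketch):

%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
// The first accessed element sits at byte offset 1 * 2 = 2 from the base,
// so this load is only 2-byte aligned even though the loaded
// vector<8xf16> is 16 bytes wide.
%y = vector.load %lds[%c0, %c1] : memref<2x10xf16, #gpu.address_space<workgroup>>, vector<8xf16>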

We can already accomplish 1. with the alignment attribute supported by the memref.alloc op, but we have no way of expressing known alignment over memory accesses across memref/vector load/store ops. Similarly to the inbounds attributes, the most general representation would be per-dimension, but allowing it only over 1-d vector types makes things simpler.
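For reference, this is how point 1. can be expressed today with memref.alloc:

// Guarantee that the base pointer of the allocation is 16-byte aligned.
%lds = memref.alloc() {alignment = 16 : i64} : memref<2x10xf16, #gpu.address_space<workgroup>>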

I think the following implementation should work:

  • Add an alignment attribute to memref/vector memory access ops. To keep it simple, require it to be a single byte value (rather than a number of elements) that applies to the first accessed element only (see the sketch after this list).
  • Propagate these alignment attributes when converting from memref/vector to llvm/spirv.
  • Flatten the memory accesses in IREE so that we don't have to worry about any n-d cases.
  • Set known alignment values in IREE codegen, e.g., for shared memory.
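As a sketch, the attribute from the first item could look like this (hypothetical syntax; the exact spelling is up to the upstream change):

// Hypothetical: the address of the first accessed element is asserted
// to be 16-byte aligned; the value is in bytes, not elements.
%x = vector.load %lds[%a, %b] {alignment = 16 : i64} : memref<2x10xf16, #gpu.address_space<workgroup>>, vector<8xf16>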

kuhar avatar Mar 16 '25 21:03 kuhar

cc: @krzysz00 @MaheshRavishankar

Feel free to comment / edit if I missed something from what we discussed at the end of Feb.

kuhar avatar Mar 16 '25 21:03 kuhar

Memref flattening is #20226

And I figure that's a reasonable definition of alignment for the base operations, though I'd claim something like transfer_read might want per-dimension alignments that get lowered to the relevant low-level operations, since it already has the higher-level inbounds attribute.

... Might be overkill though
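Purely as a hypothetical illustration of that idea, mirroring how in_bounds is already per-dimension (the alignments attribute below does not exist):

// Hypothetical per-dimension alignments, analogous to in_bounds:
// each entry would constrain the indexing of the corresponding dimension.
%v = vector.transfer_read %src[%i, %j], %pad {in_bounds = [true, true], alignments = [64, 16]} : memref<?x?xf16>, vector<4x8xf16>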

krzysz00 avatar Mar 17 '25 03:03 krzysz00

I've learned that there's a dedicated op for alignment: https://mlir.llvm.org/docs/Dialects/MemRef/#memrefassume_alignment-memrefassumealignmentop
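Usage would look roughly like this (asserting 16-byte alignment of the base pointer of %lds):

// Assume, for the purpose of later optimizations, that the base pointer
// of %lds is 16-byte aligned; behavior is undefined if it is not.
memref.assume_alignment %lds, 16 : memref<2x10xf16, #gpu.address_space<workgroup>>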

So there's probably no need to go and add alignment attributes to all the memory access ops?

kuhar avatar Mar 20 '25 17:03 kuhar

@kuhar That isn't sufficient. That op declares the assumed alignment of the base pointer of the memref. It very much doesn't declare the alignment of individual loads, which can't always be inferred - especially since the lowerings of vector.load and vector.store often pick overly conservative alignment values.
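In other words (a sketch), a well-aligned base says nothing about an access at a dynamic offset:

// The base pointer is known to be 16-byte aligned...
memref.assume_alignment %lds, 16 : memref<2x10xf16, #gpu.address_space<workgroup>>
// ...but unless %b is known to be a multiple of 8 elements, this load
// can still only be assumed 2-byte (element-size) aligned.
%x = vector.load %lds[%a, %b] : memref<2x10xf16, #gpu.address_space<workgroup>>, vector<8xf16>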

krzysz00 avatar Mar 21 '25 04:03 krzysz00

Well, it depends on whether you can infer the alignment from the indexing math. But +1 that all this seems fragile.

kuhar avatar Mar 21 '25 05:03 kuhar

@tyb0807

ftynse avatar Jun 05 '25 12:06 ftynse

First PR in the series:

  • https://github.com/llvm/llvm-project/pull/144344

kuhar avatar Jul 29 '25 17:07 kuhar

Regarding:

Flatten the memory accesses in IREE, so that we don't have to worry about any n-d cases.

The memref flattening capabilities are now upstream.

Just use it at a late stage of the pipeline (before narrow-type emulation; we will have a lot of cleanup to do for that pass).
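For context, flattening rewrites n-d accesses into 1-d ones, roughly like this (a sketch of the output shape, not the exact upstream rewrite):

// Collapse the 2-d memref into a 1-d one and linearize the indices.
%flat = memref.collapse_shape %lds [[0, 1]] : memref<2x10xf16, #gpu.address_space<workgroup>> into memref<20xf16, #gpu.address_space<workgroup>>
%lin = affine.apply affine_map<(d0, d1) -> (d0 * 10 + d1)>(%a, %b)
%x = vector.load %flat[%lin] : memref<20xf16, #gpu.address_space<workgroup>>, vector<8xf16>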

lialan avatar Sep 08 '25 20:09 lialan