[Codegen] Set alignment attributes over vectorized memory accesses
Hardware like AMDGPU generally prefers wide memory accesses (e.g., `dwordx4` or `b128`), but these only get selected by the LLVM backend when the alignment is known and sufficiently large. LLVM is able to infer the alignment in some cases, but it's not guaranteed to always do the best job possible.
For example, consider this pseudo-IR:
```mlir
%lds = memref.alloc() : memref<2x10xf16, #gpu.address_space<workgroup>>
%x = vector.load %lds[%a, %b] : memref<2x10xf16, #gpu.address_space<workgroup>>, vector<8xf16>
```
Ideally, we would like to assume 16-byte alignment (8 * 2 bytes), but this requires knowing:
1. That the alignment of the allocation itself is at least 16
2. That the memory access pattern guarantees `%b` to be a multiple of 8
The accessed type being `vector<8xf16>` alone is not enough to infer that the alignment is 16: with, e.g., `%b = 1`, the access would only be 2-byte aligned.
We can already accomplish 1. with the `alignment` attribute supported by the `memref.alloc` op, but we have no way of expressing known alignment over memory accesses across `memref`/`vector` load/store ops. As with `inbounds` attributes, the most general representation would be per-dimension, but allowing it only over 1-D vector types makes things simpler.
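Point 1 is already expressible today; a minimal sketch reusing the allocation from the example above:

```mlir
// `memref.alloc` accepts an `alignment` attribute guaranteeing the
// allocation's base address is 16-byte aligned.
%lds = memref.alloc() {alignment = 16 : i64} : memref<2x10xf16, #gpu.address_space<workgroup>>
```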
I think the following implementation should work:
- Add an `alignment` attribute to `memref`/`vector` memory access ops. To keep it simple, require this to be a single byte value (instead of a number of elements) wrt the first element accessed only (see the sketch after this list).
- Propagate these `alignment` attributes when converting from `memref`/`vector` to `llvm`/`spirv`.
- Flatten the memory accesses in IREE, so that we don't have to worry about any n-d cases.
- Set known alignment values in IREE codegen, e.g., for shared memory.
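To make the first step concrete, here is a hypothetical sketch of what the attribute could look like on the earlier load (the exact spelling is up for discussion):

```mlir
// Hypothetical: `alignment` states the known byte alignment of the address
// of the first element accessed, independent of the vector type.
%x = vector.load %lds[%a, %b] {alignment = 16 : i64}
  : memref<2x10xf16, #gpu.address_space<workgroup>>, vector<8xf16>
```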
cc: @krzysz00 @MaheshRavishankar
Feel free to comment / edit if I missed something from our discussion at the end of Feb.
Memref flattening is tracked in #20226
And I figure that's a reasonable definition of alignment for the base operations, though I'd claim something like `transfer_read` might want per-dimension alignments that get lowered onto the relevant low-level operations, since it's already got the higher-level `inbounds`.
... Might be overkill though
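For concreteness, a purely hypothetical sketch of that idea, mirroring the per-dimension `in_bounds` attribute (the `alignments` name and semantics are made up here):

```mlir
// Hypothetical: one alignment value per indexed dimension, analogous to
// the existing per-dimension in_bounds flags.
%v = vector.transfer_read %mem[%i, %j], %pad
  {in_bounds = [true, true], alignments = [64, 16]}
  : memref<128x64xf16>, vector<4x8xf16>
```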
I've learned that there's a dedicated op for alignment: https://mlir.llvm.org/docs/Dialects/MemRef/#memrefassume_alignment-memrefassumealignmentop
So there's probably no need to go and add alignment attributes to all the memory access ops?
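For reference, a minimal sketch of that op on the running example (syntax per the linked docs; some MLIR versions also return the aligned memref as a result):

```mlir
// Asserts that the base pointer of %lds is 16-byte aligned; this says
// nothing about the alignment of any individual load/store into it.
memref.assume_alignment %lds, 16 : memref<2x10xf16, #gpu.address_space<workgroup>>
```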
@kuhar That isn't sufficient. That's for declaring the assumed alignment of the base pointer of the memref. It very much doesn't declare the alignment of individual loads, which can't always be inferred, especially since `vector.load` and store are often too conservative with their alignment values.
Well, it depends on whether you can infer the alignment from the indexing math. But +1 that all this seems fragile.
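For example (hypothetical `%flat` and `%i`, after flattening per #20226), the indexing math can make the alignment visible:

```mlir
// %idx is provably a multiple of 8, so the byte offset is a multiple of
// 8 * 2 = 16; combined with a 16-byte-aligned base, this load is 16-byte
// aligned.
%c8  = arith.constant 8 : index
%idx = arith.muli %i, %c8 : index
%x   = vector.load %flat[%idx] : memref<20xf16, #gpu.address_space<workgroup>>, vector<8xf16>
```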
@tyb0807
First PR in the series:
- https://github.com/llvm/llvm-project/pull/144344
Regarding:
> Flatten the memory accesses in IREE, so that we don't have to worry about any n-d cases.
The memref flattening capabilities are in upstream MLIR.
Just use them at a late stage of the pipeline (before narrow-type emulation; we will have a lot of cleanup to do for that pass).