AMDGPU.jl
Support packed FP16 operations
Currently, the intrinsics we expose for Float16 accept scalar inputs; however, there exist many math intrinsics that take Tuple{Float16,Float16}, intended to make the most effective use of packed math instructions. While we certainly can expose these intrinsics directly to the user, we should also consider whether some optimization is available (either through LLVM, or implemented manually here) to fuse non-packed F16 operations into packed 2F16 operations.
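As a concrete illustration of the packed form, here is a minimal sketch (not existing AMDGPU.jl API; the wrapper name `fma_2xf16` is hypothetical) of wrapping a packed 2×Float16 operation via `Base.llvmcall`, using LLVM's generic `llvm.fma` intrinsic instantiated at `<2 x half>`:

```julia
# Hypothetical sketch: a packed 2xFloat16 fused multiply-add.
# NTuple{2,VecElement{Float16}} maps to LLVM's <2 x half> type.
const Vec2F16 = NTuple{2,VecElement{Float16}}

@inline function fma_2xf16(a::Vec2F16, b::Vec2F16, c::Vec2F16)
    Base.llvmcall(("""
        declare <2 x half> @llvm.fma.v2f16(<2 x half>, <2 x half>, <2 x half>)

        define <2 x half> @entry(<2 x half> %a, <2 x half> %b, <2 x half> %c) {
            %r = call <2 x half> @llvm.fma.v2f16(<2 x half> %a, <2 x half> %b, <2 x half> %c)
            ret <2 x half> %r
        }
        """, "entry"),
        Vec2F16, Tuple{Vec2F16,Vec2F16,Vec2F16}, a, b, c)
end
```

On hardware with packed FP16 support this should select a single packed instruction (e.g. `v_pk_fma_f16` on gfx9+); elsewhere the backend falls back to scalar code.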
The vectorization passes should do it, but I don't think we run them in the GPUCompiler pipeline
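For reference, a sketch of what running them manually might look like with LLVM.jl's legacy pass-manager wrappers (whether these wrappers are available depends on your LLVM.jl version; `mod` is assumed to be an `LLVM.Module` produced by the compiler):

```julia
using LLVM

# Sketch: explicitly run the vectorizers over an already-compiled module.
ModulePassManager() do pm
    loop_vectorize!(pm)  # wraps LLVMAddLoopVectorizePass
    slp_vectorizer!(pm)  # wraps LLVMAddSLPVectorizePass
    run!(pm, mod)        # mutates `mod` in place
end
```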
In the past I had success using SIMD.jl for this (packed ops on CUDA).
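Something along these lines (a sketch, assuming a SIMD.jl version that supports Float16 lanes; `Vec{2,Float16}` lowers to `<2 x half>` in the generated IR, which the backend can then map onto packed instructions):

```julia
using SIMD

# Two Float16 lanes processed at once.
a = Vec{2,Float16}((Float16(1), Float16(2)))
b = Vec{2,Float16}((Float16(3), Float16(4)))
c = muladd(a, b, b)  # one fused multiply-add across both lanes
```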
So I was taking a look into why it doesn't like vectorizing some simple loops, and it said:
```
Pass:     loop-vectorize
Name:     CantVersionLoopWithDivergentTarget
DebugLoc: { File: 'REPL[2]', Line: 2, Column: 0 }
Function: _Z5vadd_14ROCDeviceArrayI7Float32Li1ELi1EES_IS0_Li1ELi1EES_IS0_Li1ELi1EE
Args:
  - String: 'loop not vectorized: '
  - String: runtime pointer checks needed. Not enabled for divergent target
```
Which makes me think it assumes the arrays passed in may alias, which stops fun things from happening.
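For context, a hypothetical reconstruction of the kind of kernel behind the remark above (the mangled name corresponds to a `vadd`-style function over three `ROCDeviceArray{Float32,1,1}` arguments):

```julia
using AMDGPU

# Hypothetical reconstruction: an elementwise loop over three device arrays.
# LLVM cannot prove `out`, `a`, and `b` don't alias, so loop-vectorize would
# need runtime pointer checks, which it refuses to emit for a divergent target.
function vadd(out, a, b)
    for i in eachindex(out)
        @inbounds out[i] = a[i] + b[i]
    end
    return
end

# Launched with e.g. `@roc vadd(dout, da, db)` on ROCArray arguments.
```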