GPUArrays.jl
Implement mapreduce
Ported from oneAPI.jl
- [ ] Currently limited to a static workgroup size (see the sketch below)
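For reference, a minimal sketch of what a static workgroup size means in KernelAbstractions terms; the kernel, the hard-coded 256, and the usage lines are illustrative, not the actual code in this PR:

```julia
using KernelAbstractions

# Illustrative kernel only; the real mapreduce kernel is more involved.
@kernel cpu=false function map_kernel!(f, R, @Const(A))
    i = @index(Global, Linear)
    @inbounds R[i] = f(A[i])
end

# Usage sketch (A and R are assumed to be GPU arrays, e.g. MtlArray or CuArray):
#   backend = get_backend(A)
#   kernel  = map_kernel!(backend, 256)       # 256 = static workgroup size, fixed at instantiation
#   kernel(abs2, R, A; ndrange = length(A))   # vs. choosing the workgroup size per launch
```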
Could these things maybe live in https://github.com/anicusan/AcceleratedKernels.jl in the future? It seems like they have put quite a bit of effort into fast, KernelAbstractions-based array operations over there ;)
> Could these things maybe live in https://github.com/anicusan/AcceleratedKernels.jl in the future?
There is a dependency ordering issue: GPUArrays.jl is the common infrastructure, and this would be the fallback for a common implementation. So GPUArrays.jl would need to take a dependency on something like AcceleratedKernels.jl.
Of course JLArrays doesn't work: it uses the CPU backend, and this kernel is cpu=false.
> There is a dependency ordering issue
I was considering it as "leave it to AcceleratedKernels" to implement these. It's still a very young package, but I was wondering whether it could be a path towards the future ;)
Just to write down my current understanding of the JLArray issue:
```julia
while d < items
    @synchronize() # legal since cpu=false
    # ... (rest of the tree-reduction step)
end
```
This pattern is not valid on the CPU in KA right now, because of the synchronization inside a while loop. It should be fine once we have a POCL backend for KA, so the "fix" for JLArrays in this PR is to wait for POCL.
GPU execution on all vendors should still work, and Arrays should have their own implementation somewhere else. It's just that the JLArray tests will fail for a bit here.
If we continue this, see https://github.com/JuliaGPU/CUDA.jl/pull/2778: the reshape in here is problematic and can go. Though it would probably be better to adapt the implementation to avoid multiple kernel calls when possible (e.g. using atomics); a rough sketch follows.
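To illustrate the atomics idea, here is a hedged sketch of a single-pass kernel; it is not this PR's actual implementation, and it assumes a `+` reduction (atomics-friendly), a static workgroup size of 256, `length(A)` being a multiple of 256, and `R` pre-initialized to zero:

```julia
using KernelAbstractions
using Atomix: @atomic   # array-indexing atomics usable inside GPU kernels

# Each workgroup reduces its slice in local memory, then lane 1 atomically folds the
# partial result into R[1], avoiding a second "reduce the partials" kernel launch.
@kernel cpu=false function mapreduce_atomic!(f, R, @Const(A))
    i  = @index(Global, Linear)
    li = @index(Local, Linear)
    items = prod(@groupsize())

    shared = @localmem eltype(R) (256,)   # assumes the static workgroup size is 256
    @inbounds shared[li] = f(A[i])        # assumes ndrange == length(A), a multiple of 256

    # tree reduction within the workgroup
    d = 1
    while d < items
        @synchronize()                    # legal since cpu=false
        index = 2 * d * (li - 1) + 1
        @inbounds if index + d <= items
            shared[index] = shared[index] + shared[index + d]
        end
        d *= 2
    end

    # publish this workgroup's partial result (assumes hardware atomic support for +)
    if li == 1
        @atomic R[1] += shared[1]
    end
end

# Usage sketch (R must start out as zero):
#   kernel = mapreduce_atomic!(get_backend(A), 256)
#   kernel(abs2, R, A; ndrange = length(A))
```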
I thought the idea was to move towards depending on AK.jl for these kernels?
> I thought the idea was to move towards depending on AK.jl for these kernels?
Ideally it would be. I pushed the feedback so it would be easier to benchmark between the existing implementations and the equivalent KA port (for Metal at least; CUDA has some differences, as previously discussed).
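For completeness, a minimal benchmarking sketch on the Metal side could look like this (the array size and the choice of reductions are placeholders, and it assumes Metal.@sync behaves like CUDA.@sync; run it once on master and once on this branch and compare):

```julia
using BenchmarkTools, Metal   # assumption: comparing on the Metal backend

A = MtlArray(rand(Float32, 2^24))

# Existing Metal.jl mapreduce vs. the KernelAbstractions port, depending on the checkout.
@btime Metal.@sync sum($A)
@btime Metal.@sync mapreduce(abs2, +, $A; dims=1)
```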