GPUArrays.jl

Implement mapreduce

Open vchuravy opened this issue 1 year ago • 6 comments

Ported from oneAPI.jl

  • [ ] Currently limited to a static workgroupsize

vchuravy avatar Sep 23 '24 14:09 vchuravy

Could these things maybe live in https://github.com/anicusan/AcceleratedKernels.jl in the future? Seems like they put quite a bit of effort into fast, KernelAbstractions-based array operations in there ;)

SimonDanisch avatar Sep 24 '24 09:09 SimonDanisch

Could these things maybe live in https://github.com/anicusan/AcceleratedKernels.jl in the future?

There is a dependency ordering issue: GPUArrays is the common infrastructure, and this would be the fallback implementation shared by all backends. So GPUArrays would need to take a dependency on something like AcceleratedKernels.jl.
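A toy sketch of that layering (all names below — MyAbstractGPUArray, MyVendorArray, my_mapreduce — are made up for illustration, they are not the GPUArrays API): the base package owns the abstract type and the generic fallback, and each backend package may override it with a tuned method, so the dependency has to point from the backends (and any helper like AcceleratedKernels.jl) towards GPUArrays, not the other way around.

```julia
# Illustrative stand-ins for AbstractGPUArray / a vendor array type.
abstract type MyAbstractGPUArray end
struct MyVendorArray <: MyAbstractGPUArray end

# Generic fallback: conceptually the KernelAbstractions-based implementation in GPUArrays.
my_mapreduce(f, op, A::MyAbstractGPUArray) = "generic KernelAbstractions fallback"

# Vendor-tuned override: conceptually lives in CUDA.jl, Metal.jl, oneAPI.jl, ...
my_mapreduce(f, op, A::MyVendorArray) = "vendor-specific kernel"

my_mapreduce(identity, +, MyVendorArray())  # dispatches to the vendor method
```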

vchuravy avatar Sep 24 '24 11:09 vchuravy

Of course JLArrays doesn't work... that uses the CPU backend, and this kernel is declared cpu=false.
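For reference, a minimal sketch of what a GPU-only kernel declaration looks like in KernelAbstractions (the kernel name `gpu_only_copy!` is made up; `@kernel cpu=false`, `@Const`, and `@index` are the real KA macros). A kernel marked `cpu=false` gets no CPU code generated, so the CPU/JLArray backend can't use it at all:

```julia
using KernelAbstractions

# cpu=false: only GPU backends (CUDABackend(), MetalBackend(), ...) can run this.
@kernel cpu=false function gpu_only_copy!(out, @Const(a))
    i = @index(Global, Linear)
    out[i] = a[i]
end

# Trying to use it with the CPU backend, e.g. gpu_only_copy!(CPU(), 64), fails,
# while the GPU backends work as usual.
```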

vchuravy avatar Sep 24 '24 12:09 vchuravy

There is a dependency ordering issue

I was thinking of it more as "leave it to AcceleratedKernels to implement these". It's still a very young package, but I was wondering if it could be a path towards the future ;)

SimonDanisch avatar Sep 24 '24 17:09 SimonDanisch

Just to write down my current understanding of the JLArray issue:

    while d < items
        @synchronize() # legal since cpu=false

is not valid on the CPU backend in KA right now, because of the synchronization inside a while loop. This should be fine once we have a POCL backend for KA, so the "fix" for JLArrays in this PR is to wait for POCL.

GPU execution on all vendors should still work, and plain Arrays should get their own implementation somewhere else. It's just that the JLArray tests will fail here for a while.
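For concreteness, here is a self-contained sketch of that pattern (not the PR's actual kernel; `block_sum!` is an illustrative name): a per-workgroup tree reduction whose combine loop synchronizes on every iteration, which is why the kernel has to be declared `cpu=false`, with the workgroup size hardcoded to 256 in the spirit of the "static workgroupsize" limitation noted at the top.

```julia
using KernelAbstractions

@kernel cpu=false function block_sum!(out, @Const(a))
    items = 256                       # static workgroup size, hardcoded for the sketch
    lid = @index(Local, Linear)
    gid = @index(Group, Linear)
    i   = @index(Global, Linear)

    shared = @localmem eltype(out) (256,)
    shared[lid] = i <= length(a) ? a[i] : zero(eltype(out))  # pad with the neutral element

    d = 1
    while d < items
        @synchronize()                # legal since cpu=false; the CPU backend rejects this
        if (lid - 1) % (2 * d) == 0 && lid + d <= items
            shared[lid] += shared[lid + d]
        end
        d *= 2
    end

    if lid == 1
        out[gid] = shared[1]          # one partial result per workgroup
    end
end

# Usage on a GPU backend (CUDA.jl shown; Metal/AMDGPU/oneAPI launch the same way).
# The CPU/JLArray backend refuses this kernel, which is exactly the test failure above.
#
#   using CUDA
#   a      = CUDA.rand(Float32, 2^20)
#   groups = cld(length(a), 256)
#   out    = CUDA.zeros(Float32, groups)
#   block_sum!(CUDABackend(), 256)(out, a; ndrange = groups * 256)
#   sum(out) ≈ sum(a)   # second stage done with Base.sum here, for brevity
```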

leios avatar Sep 24 '24 19:09 leios

If we continue this, see https://github.com/JuliaGPU/CUDA.jl/pull/2778: the reshape in here is problematic and can go. Though it would probably be better to adapt the implementation so as to avoid multiple kernel calls when possible (e.g. by using atomics).
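As a rough illustration of the "fewer kernel calls via atomics" idea (not the PR's code; `atomic_sum!` is a made-up name): every work-item folds its value straight into the output with an atomic update, so no second launch over partial results is needed. A real implementation would first reduce within the workgroup and issue only one atomic per group; Atomix.jl is the package KernelAbstractions builds its atomics on.

```julia
using KernelAbstractions, Atomix

@kernel function atomic_sum!(out, @Const(a))
    i = @index(Global, Linear)
    if i <= length(a)
        # One atomic read-modify-write per element: correct but contention-heavy,
        # so purely a sketch of the single-kernel direction.
        Atomix.@atomic out[1] += a[i]
    end
end

# Runs on the CPU backend as-is (and on GPU backends, given their Atomix support):
a   = rand(1:100, 10_000)
out = zeros(Int, 1)
atomic_sum!(CPU(), 64)(out, a; ndrange = length(a))
KernelAbstractions.synchronize(CPU())
out[1] == sum(a)   # true
```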

maleadt avatar May 12 '25 07:05 maleadt

I thought the idea was to move towards depending on AK.jl for these kernels?

maleadt avatar Jun 30 '25 07:06 maleadt

I thought the idea was to move towards depending on AK.jl for these kernels?

Ideally it would be. I addressed the feedback so it's easier to benchmark the existing implementations against the equivalent KA port (for Metal at least; CUDA has some differences, as previously discussed).

christiangnrd avatar Jun 30 '25 17:06 christiangnrd