
Runtime memory policy for multireducers

Open tomstitt opened this issue 4 months ago • 0 comments

Is your feature request related to a problem? Please describe.

The RAJA::hip_multi_reduce_atomic and RAJA::cuda_multi_reduce_atomic multireducers allocate GPU memory even when they are only used in a CPU kernel. Our application's GPU builds support a runtime compute policy so that we can run CPU-only if desired; multireducers break this because in CPU mode we now allocate GPU memory.

Describe the solution you'd like

We would like the GPU multireducers to dynamically choose their allocator based on the kernel they are captured by, similar to the regular GPU RAJA reducers like RAJA::hip_reduce and RAJA::cuda_reduce.

Describe alternatives you've considered

We've considered templating the routines where we use multireducers, with a sequential dispatch for the CPU and a platform-dependent dispatch for the GPU, but that requires additional boilerplate.

Additional context

n/a

tomstitt commented Oct 22 '24 17:10