`alpaka::getWarpSizes` incurs a noticeable overhead
While porting the CMS pixel reconstruction from native CUDA to Alpaka, we noticed that calling the `alpaka::getWarpSizes(device)` function incurs a noticeable overhead.
See https://github.com/cms-sw/cmssw/pull/43064#issuecomment-1817590926 for the discussion.
A possible workaround is to cache the warp size in our code, instead of querying it for every event.
However, it would seem natural to cache this information within the Alpaka device objects, instead of querying the underlying back-end each time.
I think that caching the warp sizes inside the device object would require
- either filling it at construction time
- or using a mutex to avoid setting the cache concurrently
IMO caching makes sense; if we store the value during device creation, there will be no need for a mutex.
Is there a CUDA device with a warpSize other than 32? I am almost in favor of hardcoding it... Otherwise, we could just collect and cache the entire set of device properties (i.e. `cudaDeviceProp`), so we can also serve other values faster.
Not that I know of.
But HIP devices can have a warp size of 32 or 64, depending on the GPU model and potentially on the environment settings.
Partly solved by #2246. Nevertheless, we should cache all other runtime-constant device properties within the device; then there is no need to query the API multiple times.