
`alpaka::getWarpSizes` incurs a noticeable overhead

fwyzard opened this issue 1 year ago · 5 comments

While porting the CMS pixel reconstruction from native CUDA to Alpaka, we noticed that calling the `alpaka::getWarpSizes(device)` function incurs a noticeable overhead.

See https://github.com/cms-sw/cmssw/pull/43064#issuecomment-1817590926 for the discussion.

A possible workaround is to cache the warp size in our code, instead of querying it for every event.

However, it would seem natural to cache this information within the Alpaka device objects, instead of querying the underlying back-end each time.

fwyzard avatar Nov 21 '23 09:11 fwyzard

I think that caching the warp sizes inside the device object would require

  • either filling it at construction time
  • or using a mutex to avoid setting the cache concurrently

fwyzard avatar Nov 21 '23 09:11 fwyzard

IMO caching makes sense; if we store the value during device creation, there will be no need for a mutex.

psychocoderHPC avatar Nov 21 '23 12:11 psychocoderHPC

Is there a CUDA device with a `warpSize` other than 32? I am almost in favor of hardcoding it ... Otherwise, we could just collect and cache the entire device properties (i.e. `cudaDeviceProp`), so we can also serve other values faster.

bernhardmgruber avatar Nov 21 '23 18:11 bernhardmgruber

Not that I know of.

But HIP devices can have a warp size of 32 or 64, depending on the GPU model and potentially on the environment settings.

fwyzard avatar Nov 21 '23 20:11 fwyzard

Partly solved by #2246. Nevertheless, we should cache all other runtime-constant device properties within the device; then there is no need to query the API multiple times.

psychocoderHPC avatar Mar 12 '24 09:03 psychocoderHPC