
Create generic launch and assign kernels

Open bd4 opened this issue 5 years ago • 0 comments

The CUDA/HIP implementations are fragile in that some array sizes can overflow per-dimension launch limits (e.g., the CUDA grid dimension limits). Explore using a linear launch index and mapping back to expression indices. This requires integer divide and modulo, which may hurt performance, although for kernels with enough computational intensity the cost may be negligible, and it could simplify the launch routines considerably. Ideally we would have one generic launch for any number of dimensions.
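
As a rough illustration of the idea, here is a minimal CUDA sketch (not gtensor's actual API; `shape_t`, `unravel`, and `assign_linear` are made-up names) of a single 1-D launch that recovers N-dimensional indices with divide/modulo:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

template <int N>
struct shape_t {
  long dims[N];
};

// Recover N-dimensional indices from a flat index (first dimension fastest),
// using integer divide and modulo.
template <int N>
__device__ void unravel(long i, const shape_t<N>& shape, long (&idx)[N]) {
  for (int d = 0; d < N; d++) {
    idx[d] = i % shape.dims[d];
    i /= shape.dims[d];
  }
}

// One generic assign kernel for any N: the grid is always 1-D, so no single
// array dimension can overflow the 65535 limit on CUDA grid y/z extents.
template <int N>
__global__ void assign_linear(double* out, shape_t<N> shape, long size) {
  long i = blockIdx.x * (long)blockDim.x + threadIdx.x;
  if (i >= size) return;
  long idx[N];
  unravel(i, shape, idx);
  // Stand-in for evaluating an expression at this multi-dimensional index:
  double val = 0;
  for (int d = 0; d < N; d++) val += idx[d];
  out[i] = val;
}

int main() {
  constexpr int N = 4;
  shape_t<N> shape{{70000, 3, 5, 2}};  // first extent alone exceeds 65535
  long size = 1;
  for (int d = 0; d < N; d++) size *= shape.dims[d];

  double* d_out;
  cudaMalloc(&d_out, size * sizeof(double));

  int block = 256;
  int grid = (int)((size + block - 1) / block);
  assign_linear<N><<<grid, block>>>(d_out, shape, size);
  cudaDeviceSynchronize();

  double first;
  cudaMemcpy(&first, d_out, sizeof(double), cudaMemcpyDeviceToHost);
  std::printf("out[0] = %g\n", first);
  cudaFree(d_out);
  return 0;
}
```

The same unflattening logic should carry over to HIP essentially unchanged; whether the extra divide/modulo is measurable would need benchmarking on realistic expressions.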

The WIP SYCL implementation currently only goes up to 3 dimensions, using range instead of nd_range, and similar challenges apply.
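
For comparison, a hedged sketch of what the same linear-index approach might look like in SYCL (illustrative only, not the WIP gtensor code, and assuming a SYCL 2020 implementation with USM support), using a 1-D sycl::range regardless of the expression's dimensionality:

```cpp
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
  constexpr int N = 4;
  const long dims[N] = {70000, 3, 5, 2};
  long size = 1;
  for (int d = 0; d < N; d++) size *= dims[d];

  sycl::queue q;
  double* out = sycl::malloc_device<double>(size, q);

  // A 1-D range covers any N, sidestepping the 3-dimension cap on
  // sycl::range / sycl::nd_range.
  q.parallel_for(sycl::range<1>(size), [=](sycl::id<1> id) {
     long i = id[0];
     long rem = i;
     double val = 0;
     for (int d = 0; d < N; d++) {  // divide/modulo unflattening
       val += rem % dims[d];
       rem /= dims[d];
     }
     out[i] = val;  // stand-in for the expression evaluation
   }).wait();

  double first;
  q.memcpy(&first, out, sizeof(double)).wait();
  std::printf("out[0] = %g\n", first);
  sycl::free(out, q);
  return 0;
}
```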

bd4 · May 15 '20 15:05