René Widera
Boost's `aligned_alloc` is removed with #1094.
Note: As I wrote, this is not possible for all functions in alpaka; we need to evaluate where this functionality makes sense.
@bussmann you forgot to link the [llama documentation](https://llama-doc.readthedocs.io/en/latest/) and the [llama github](https://github.com/alpaka-group/llama).
IMO the check that we are in a parallel region should stay for OpenMP
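For context, a minimal sketch of what such a check could look like, assuming it boils down to querying `omp_in_parallel()` at runtime (the function name below is hypothetical):

```c++
#include <omp.h>

#include <stdexcept>

// Hypothetical guard: fail early if a function that must run inside an
// OpenMP parallel region is called from sequential code.
inline void verifyInsideParallelRegion()
{
    // omp_in_parallel() returns non-zero iff the caller is enclosed in an
    // active parallel region.
    if(!omp_in_parallel())
        throw std::runtime_error{"called outside of an OpenMP parallel region"};
}
```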
> ### Replace `ALPAKA_HOST_ONLY` with a separate preprocessor symbol for each backend
>
> Backend specific code should be seen only by the respective compiler, so `ALPAKA_HOST_ONLY` is probably not enough...
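A minimal sketch of the direction this suggests; the per-backend symbols below (`ALPAKA_HOST_ONLY_CUDA`, `ALPAKA_HOST_ONLY_HIP`) are hypothetical and only illustrate the idea of one symbol per backend compiler:

```c++
// Hypothetical per-backend symbols: each is defined only when the
// corresponding device compiler pass is NOT active, so the guarded code
// is visible to the host side of that backend alone.
#if !defined(__CUDA_ARCH__)
#    define ALPAKA_HOST_ONLY_CUDA
#endif
#if !defined(__HIP_DEVICE_COMPILE__)
#    define ALPAKA_HOST_ONLY_HIP
#endif

#ifdef ALPAKA_HOST_ONLY_CUDA
// Hidden from the CUDA device pass, but still visible to the HIP device
// pass, which a single ALPAKA_HOST_ONLY symbol cannot express.
#endif
```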
@fwyzard What is the build workflow to support CUDA and HIP within one binary? I built an application in the past which supported CUDA, OpenCL, and CPU. For that, I...
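The comment is truncated, but one common pattern for combining several backends in a single binary is to compile each backend into its own translation unit behind a shared host-only interface and select one at runtime. A minimal sketch under that assumption (all names hypothetical):

```c++
#include <memory>
#include <string>

// Host-only interface, visible to every compiler.
struct Backend
{
    virtual ~Backend() = default;
    virtual std::string name() const = 0;
    virtual void run() = 0;
};

// Each factory is defined in its own translation unit, built with the
// matching compiler (nvcc for backend_cuda.cu, hipcc for backend_hip.cpp),
// so device code of different backends never shares a translation unit.
std::unique_ptr<Backend> makeCudaBackend();
std::unique_ptr<Backend> makeHipBackend();

// Runtime selection in the host-only main translation unit.
std::unique_ptr<Backend> selectBackend(bool preferCuda)
{
    return preferCuda ? makeCudaBackend() : makeHipBackend();
}
```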
> Option 1: code duplication

This would revert what we did in the past: https://github.com/alpaka-group/alpaka/pull/928

But this does not mean that we cannot do it; we thought there would be...
> @j-stephan @psychocoderHPC you can find at [fwyzard/split_CUDA_ROCm_types](https://github.com/fwyzard/alpaka/tree/split_CUDA_ROCm_types) a first draft of this approach.

I looked over your prototype and added a comment, because your changes provide the wrong accelerator...
> Remove. Any specific memory layout should remain outside Alpaka. This is done by LLAMA.

I disagree: the padding is set by CUDA/HIP and is device-specific. This does not mean...
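To make the device-specific padding concrete: with the CUDA runtime, `cudaMallocPitch` chooses the row pitch itself and the caller has to honor it, so this part of the layout is dictated by the backend rather than by a library on top. A minimal sketch:

```c++
#include <cuda_runtime.h>

#include <cstddef>
#include <cstdio>

int main()
{
    void* ptr = nullptr;
    std::size_t pitch = 0; // row stride in bytes, chosen by the CUDA runtime

    // Allocate a 2D buffer of 100 rows x 1000 bytes each; the runtime pads
    // every row to a device-specific pitch for aligned, coalesced access.
    if(cudaMallocPitch(&ptr, &pitch, 1000, 100) == cudaSuccess)
    {
        // pitch >= 1000 and varies per device, e.g. 1024 or 1536 bytes.
        std::printf("device-chosen pitch: %zu bytes\n", pitch);
        cudaFree(ptr);
    }
    return 0;
}
```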
> I expect that the list of available devices should not change during a program's execution, so would it make sense to discover this information once and store it in...
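The reply is cut off here, but the usual pattern for "discover once and store" is a lazily initialized cache. A minimal sketch assuming the CUDA runtime as the enumeration source (the function name is hypothetical); a function-local static makes the one-time initialization thread-safe since C++11:

```c++
#include <cuda_runtime.h>

#include <vector>

// Hypothetical cache: enumerate the devices on first use, then reuse
// the stored result for the rest of the program's execution.
std::vector<cudaDeviceProp> const& cachedDevices()
{
    static std::vector<cudaDeviceProp> const devices = []
    {
        std::vector<cudaDeviceProp> result;
        int count = 0;
        if(cudaGetDeviceCount(&count) == cudaSuccess)
        {
            result.resize(count);
            for(int i = 0; i < count; ++i)
                cudaGetDeviceProperties(&result[i], i);
        }
        return result;
    }();
    return devices;
}
```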