KernelAbstractions.jl
Heterogeneous programming in Julia
still fails with illegal memory access in kernel
x-ref: https://github.com/vchuravy/GPUifyLoops.jl/issues/91
https://github.com/JuliaGPU/KernelAbstractions.jl/blob/1497d4109857c239a3d407943a0f2323c2bfc396/src/backends/cuda.jl#L99
x-ref: https://github.com/vchuravy/GPUifyLoops.jl/issues/100
https://github.com/vchuravy/GPUifyLoops.jl/issues/103
https://github.com/vchuravy/GPUifyLoops.jl/issues/104
Since kernel launches are event-based and not stream-based, we need a way to efficiently time the kernels themselves, using the event system.
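For context, here is a minimal sketch of the naive alternative under the pre-0.9 API, where a launch returns an event: wrapping the launch and `wait` in wall-clock timing also measures queueing and launch overhead, which is exactly what timing through the event system would avoid. The `saxpy!` kernel and `naive_time` helper are illustrative, not part of the package.

```julia
using CUDA
using KernelAbstractions
using CUDAKernels  # pre-0.9 backend package providing CUDADevice

# illustrative kernel (not from the package)
@kernel function saxpy!(y, a, @Const(x))
    i = @index(Global)
    @inbounds y[i] = a * x[i] + y[i]
end

# Naive wall-clock timing: the measurement includes launch and queueing
# overhead, not just the kernel's execution time.
function naive_time(dev, y, a, x)
    kernel = saxpy!(dev, 256)                    # instantiate for the device
    t0 = time_ns()
    event = kernel(y, a, x; ndrange=length(y))   # launch returns an event
    wait(event)                                  # block until it completes
    (time_ns() - t0) / 1e9                       # elapsed seconds
end

y = CUDA.rand(Float32, 2^20); x = CUDA.rand(Float32, 2^20)
@show naive_time(CUDADevice(), y, 2f0, x)
```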
```julia
using AMDGPU
using CUDA
using KernelAbstractions
const KA = KernelAbstractions
using CUDAKernels
using ROCKernels

x::KA.GPU = CUDADevice() # works
x::KA.GPU = ROCDevice()  # doesn't work
```

Output:

```julia
julia> ...
```
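If the failure is a type-hierarchy mismatch (an assumption; the truncated output above doesn't show the actual error), a quick way to probe it is to check the subtype relation directly:

```julia
using KernelAbstractions
const KA = KernelAbstractions
using CUDAKernels, ROCKernels

# The typed assignment `x::KA.GPU = ROCDevice()` can only succeed if
# typeof(ROCDevice()) <: KA.GPU; these checks make the hierarchy visible.
@show CUDADevice() isa KA.GPU
@show ROCDevice() isa KA.GPU
@show supertype(typeof(ROCDevice()))
```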
- #317
- Define KernelAbstractionsCore.jl that defines the interface for backends (sketched below)
- Move backends from CUDAKernels to CUDA and so forth

The last two are necessary since I just noticed that...
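As a rough illustration of the second bullet (all names here are hypothetical, not the actual KernelAbstractionsCore.jl API), an interface-only core package would own the abstract backend type and the generic functions, and each backend package would add methods for its own type:

```julia
# hypothetical sketch of an interface-only core package; every name
# here is illustrative, not the real KernelAbstractionsCore.jl API
module KernelAbstractionsCore

abstract type Backend end

"Generic launch entry point; backend packages add methods for their Backend subtype."
function launch! end

"Block until all work submitted to `backend` has finished."
function synchronize end

end # module

# a backend package (e.g. CUDA.jl after the move) would then define:
# struct CUDABackend <: KernelAbstractionsCore.Backend end
# KernelAbstractionsCore.launch!(::CUDABackend, kernel, args...) = ...
```

This mirrors the common Julia pattern of a lightweight interface-only "Core" package, so that CUDA.jl, AMDGPU.jl, and other backends can implement the interface without depending on one another.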
@michel2323's PR, but opening so we can have a place to discuss.