Tom Lin
the attribute spb_default_sections_count that was supposed to be read isn't there. This is what I get from Android Studio Preview:
```
java.lang.AssertionError
    at android.content.res.BridgeResources.getInteger(BridgeResources.java:425)
    at fr.castorflex.android.smoothprogressbar.SmoothProgressDrawable$Builder.initValues(SmoothProgressDrawable.java:651)
    at fr.castorflex.android.smoothprogressbar.SmoothProgressDrawable$Builder.<init>(SmoothProgressDrawable.java:623)
    at fr.castorflex.android.smoothprogressbar.SmoothProgressBar.<init>(SmoothProgressBar.java:38)
    ...
```
Using `@Arg` alone with a call to `Akatsuki.restore` and a non-null instance state causes Akatsuki to look for the `@Retained` retainer class, which does not exist.
There are several instances where the double constant `0.0` is used in a way that promotes everything it touches. For example: https://github.com/UoB-HPC/BabelStream/blob/1d423fc70dd573b528ee43f521401277731b443a/src/std-data/STDDataStream.cpp#L85 In this case, the value is used on...
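As a minimal sketch of the promotion (illustrative code, not the actual BabelStream kernel), assuming a `float` stream: passing the `double` literal `0.0` as the init argument to `std::transform_reduce` makes the whole reduction accumulate, and return, in `double`, because the result type follows the init type; an init of the element type keeps everything in `float`.
```cpp
#include <numeric>
#include <vector>

// Illustrative only: with float data, a 0.0 init promotes the reduction to double,
// since std::transform_reduce's accumulation/result type is the type of `init`.
float dot_promoted(const std::vector<float> &a, const std::vector<float> &b) {
  return std::transform_reduce(a.begin(), a.end(), b.begin(), 0.0);  // accumulates in double
}

// Keeping the init in the element type keeps the whole reduction in float.
float dot_exact(const std::vector<float> &a, const std::vector<float> &b) {
  return std::transform_reduce(a.begin(), a.end(), b.begin(), 0.0f); // accumulates in float
}

int main() {
  std::vector<float> a(1024, 0.1f), b(1024, 0.2f);
  return dot_promoted(a, b) == dot_exact(a, b) ? 0 : 1; // results can differ in the last bits
}
```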
oneTBB works well when used as a CMake FetchContent dependency. Doing this lets TBB and the benchmark be configured and compiled together, which allows TBB to make better decisions...
[Numba](https://developer.nvidia.com/how-to-cuda-python) seems to be the *Nvidia-recognised* way of doing CUDA programming with Python. Numba supports direct kernel programming, similar to how it's done in Julia, where the annotated code/method is...
So it appears that instead of calling `++`, or separate `+` and `=` like libstdc++ and `-stdpar=gpu` do, `-stdpar=multicore` calls `+=`.
```
"/lustre/home/br-wlin/nvhpc_sdk/Linux_x86_64/22.1/compilers/include-stdpar/thrust/system/detail/generic/advance.inl", line 48: error: no operator "+=" matches...
```
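To make the difference concrete, here is a minimal sketch with a hypothetical counting iterator (not BabelStream's actual ranged iterator): advancing via `++`, or via `+` and `=`, only needs those operators, whereas the multicore advance's `i += n` call only resolves if `operator+=` is also defined.
```cpp
// Illustrative counting iterator, not the actual BabelStream one.
struct ranged_it {
  long i;
  long operator*() const { return i; }
  ranged_it &operator++() { ++i; return *this; }          // satisfies advance via repeated ++
  ranged_it operator+(long n) const { return {i + n}; }   // satisfies advance via `it = it + n`
  ranged_it &operator+=(long n) { i += n; return *this; } // the member the multicore advance needs
};

int main() {
  ranged_it it{0};
  ++it;        // fine on every path
  it = it + 2; // what the gpu/libstdc++ path effectively does
  it += 2;     // what the multicore path calls; fails to compile without operator+=
  return static_cast<int>(*it);
}
```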
RAJA 0.14.x contains about a year's worth of (source-incompatible) changes, with additional backend support (SYCL).
So that the implementation can be used from the REPL directly.
This breaks CI runs for RAJA. The ICE is tracked on [bugzilla](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100102); let's revisit 2eca397 once it's fixed. For reference, we also hit a few nearly [identical](https://github.com/NVIDIA/nccl/issues/494) [ones](https://github.com/alpaka-group/alpaka/issues/1297).
I'm trying to port the classic GPU tree reduction to KernelAbstractions.jl. See [this](https://github.com/UoB-HPC/BabelStream/blob/6fe81e19556ac26761a1c7247ae29fa88fb4e0ab/CUDAStream.cu#L233) for the direct CUDA implementation I'm porting from. This is what I have...