Bradley Dice
The build errors here seem to stem from several places in the source code where the extents type is left as its default (`uint32_t`) or defined incorrectly. The previous...
I think this is a good idea. I am first working on reducing Numba dependencies in #1760 / #1761. This would be a good follow-up.
It might be possible to get a buffer using huge pages on the host like this. (Snippet generated by ChatGPT o3-mini-high.)

```python
import sys
import mmap

def allocate_buffer(size):
    """
    Allocate...
```
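For reference, here is a minimal, self-contained sketch of the transparent-huge-pages variant of this idea. It assumes Linux with 2 MiB huge pages and Python 3.8+ (for `mmap.madvise`); `MADV_HUGEPAGE` is only a hint to the kernel, not a guarantee, and the function name here is hypothetical:

```python
import mmap

HUGE_PAGE_SIZE = 2 * 1024 * 1024  # assumed 2 MiB huge page size

def allocate_huge_buffer(size):
    """Anonymous private mapping with a transparent-huge-page hint."""
    # Round the size up to a huge-page boundary so the whole range
    # is eligible for huge-page backing.
    size = -(-size // HUGE_PAGE_SIZE) * HUGE_PAGE_SIZE
    buf = mmap.mmap(-1, size)  # fileno=-1 -> anonymous mapping
    # MADV_HUGEPAGE is Linux-only; guard so the sketch stays portable.
    if hasattr(mmap, "MADV_HUGEPAGE"):
        buf.madvise(mmap.MADV_HUGEPAGE)
    return buf

buf = allocate_huge_buffer(1_000_000)
print(len(buf))  # 2097152 (rounded up to one 2 MiB page)
```

Whether this actually reduces TLB pressure depends on kernel THP settings (`/sys/kernel/mm/transparent_hugepage/enabled`); explicitly reserved huge pages via `MAP_HUGETLB` would be a stricter but less portable alternative.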
NumPy is a library that I feel perfectly comfortable requiring in almost all cases, but its use in RMM is too limited, in my opinion. The single usage is not enough to justify...
I re-assessed this today. I thought the huge pages case was the only usage of NumPy. There is one other usage, where we use `np.dtype` to parse the type string...
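For context, a quick illustration of what that `np.dtype` usage does: it parses a type string (such as a `__cuda_array_interface__`-style typestr) into an item size and kind. The exact call site is not shown here, so this is just a sketch of the behavior, along with one possible stdlib-only route (`struct.calcsize`) that covers simple cases if the goal is dropping the NumPy dependency:

```python
import struct

import numpy as np

# np.dtype accepts typestr-style strings like "<i4" (little-endian int32)
# as well as plain names like "float64".
dt = np.dtype("<i4")
print(dt.itemsize, dt.kind)  # 4 i

# A stdlib-only alternative for simple typestrs; it handles far fewer
# cases than np.dtype (no structured types, no "<i4"-style strings).
print(struct.calcsize("<i"))  # 4
```

This is only meant to show the scope of the dependency, not a drop-in replacement.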
Initial steps are in #1800. I filed a follow-up in #1845 to test examples in CI. Intermediate and advanced examples are still needed.
Another requested example: using the statistics MR adaptor from C++, similar to the profiler guide for Python: https://docs.rapids.ai/api/rmm/stable/guide/#memory-profiler
The `rmm::device_buffer` class has a [set_stream](https://docs.rapids.ai/api/librmm/nightly/classrmm_1_1device__buffer.html#ab271ced85f304e3061a3ff72526dbc37) method. This proposed API would call that method on the underlying buffer. This seems reasonable from what I can tell, but its utility (transferring ownership...
https://github.com/rapidsai/cudf/issues/19118 was opened as a duplicate of this issue. @tgujar Feel free to share more about your use case here.
Yes, this is a good idea. For any enum values that are now present in all supported CUDA versions, we can do this. That would mean CUDA 11.2+ for sure,...