Bradley Dice

608 comments by Bradley Dice

I improved this in #1984 while I was touching some related documentation. I think the changes I made there should be enough to close this issue.

I made some progress here. The patch reverting PR 211 is actually masking a bug in cuSpatial, which I fixed. This patch can be removed from rapids-cmake and cuDF. See...

All three PRs [above](https://github.com/NVIDIA/cccl/issues/1939#issuecomment-2212544416) have merged, which should fix up the RAPIDS CCCL devcontainer builds. The remaining cuDF patches should apply cleanly over the current CCCL. To continue removing those...

I updated the cuDF testing PR for CCCL 2.8.x to remove all patches: https://github.com/rapidsai/cudf/pull/18235#issuecomment-2798095191 That PR includes the ["maybe unroll" backport 4387](https://github.com/NVIDIA/cccl/pull/4387) in CCCL and a `CCCL_AVOID_SORT_UNROLL` definition in cuDF [merged...

Now that https://github.com/rapidsai/rapids-cmake/pull/793 is merged and https://github.com/rapidsai/cudf/pull/18235 is queued to merge, we are finally building RAPIDS without CCCL patches. To get to a totally clean state, we will want to...

@JigaoLuo Thanks for your interest in this! Yes, happy to work with you.

We can discuss here. That proposal seems fine to me, if it can be made non-breaking for existing `rmm::device_scalar` users.

I was thinking we could add an optional `host_resource_ref` to the `device_scalar` constructor without needing to add a global host resource. Adding more global state opens all kinds of new...

@JigaoLuo Sounds good! Please give that a try. I thought a bit more about this but probably need to see the implementation to have a more complete opinion. Thank you...

Just updating this since it's been a few months since the last reply. My understanding of the CCCL memory resource design agrees with Mark's last statement. It seems like it...