Benjamin Zaitlen
Can you try with `-DUSE_GDS=ON` (https://github.com/rapidsai/cudf/blob/branch-22.08/build.sh#L169)?
It might be possible to add CuPy to these lines: https://github.com/rapidsai/cudf/blob/f42d117621cb73d09a9c0e2b7d95d6fe00a92cfb/ci/gpu/build.sh#L109-L115 But maybe it's reasonable to first confirm that things work in your env with CuPy 11. If so, we can...
Tests seem to all be failing with device buffer handling of cupy arrays:

- `cudf.tests.test_buffer.test_buffer_from_cuda_iface_dtype[uint8-data0]`

```
>       assert (ary == buf).all()
E       assert array(False)
E        +  where array(False) = ()...
```
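For context, the failing check is essentially a device-side round-trip comparison. A minimal sketch of that kind of check, using only CuPy and a hypothetical `DeviceView` wrapper rather than cudf's actual `Buffer`:

```python
import cupy as cp

ary = cp.arange(16, dtype="uint8")

class DeviceView:
    """Hypothetical wrapper exposing only the CUDA array interface of an array."""
    def __init__(self, arr):
        self.__cuda_array_interface__ = arr.__cuda_array_interface__

# Re-wrap the same device memory through __cuda_array_interface__ and compare
# element-wise on the GPU; the failing tests report this comparison as False.
buf = cp.asarray(DeviceView(ary))
assert (ary == buf).all()
```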
Sorry for the commit on top @wence-. I pushed a fix for the last error:

```
E   AssertionError: 1 of 4 doctests failed for MultiIndex.values:
E   **********************************************************************
E   File...
```
I don't think we can disable decimal128 support. Instead of building on 11.4, you could build with 11.5 -> 11.7 and then rely on CEC (CUDA Enhanced Compatibility) for CUDA 11.0 -> 11.4 backwards compatibility...
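For reference, a quick way to check which CUDA runtime and driver an environment actually exposes before relying on that compatibility path (assuming CuPy is installed; just an illustrative check):

```python
import cupy as cp

# Report the CUDA runtime version the package was built against and the
# maximum CUDA version supported by the installed driver (e.g. 11050 = 11.5).
print("runtime:", cp.cuda.runtime.runtimeGetVersion())
print("driver :", cp.cuda.runtime.driverGetVersion())
```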
This is super cool to see! Can you also post one of the early benchmark plots comparing performance?
@infzo, do you want to perform this between separate processes, or within the same process with multiple threads? Giving us more info about your use case would be helpful. Assuming...
I think this should be re-targeted to 0.19
@nmatare, thanks for the reproducible report! I just tried with dask>=2.0 -- is it possible for you to upgrade?
That is correct, but I think those PRs handle preloading _after_ the original nanny process has started. Additionally, I think there may be a race between the multiple workers...
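For reference, worker preloads in distributed are plain Python modules with `dask_setup`/`dask_teardown` hooks; a minimal sketch (the module name here is hypothetical):

```python
# preload_example.py
# distributed imports this module and calls dask_setup() when the worker
# starts, and dask_teardown() when it shuts down.

def dask_setup(worker):
    # Runs once per worker at startup.
    print("worker preload ran")

def dask_teardown(worker):
    # Runs when the worker shuts down.
    pass
```

It can be passed on the command line with something like `dask-worker <scheduler-address> --preload preload_example.py`.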