cuml
[BUG] Unable to build cuML with micromamba
Describe the bug
I am unable to install cuML using micromamba. mamba installs the environment successfully, but micromamba fails.
Steps/Code to reproduce bug
Steps to reproduce the error:
micromamba create -n rapids-22.06 -c rapidsai -c nvidia -c conda-forge \
cuml=22.06 python=3.9 cudatoolkit=11.4
The error:
critical libmamba File exists: '/home/tyler/micromamba/pkgs/libraft-distance-22.06.00-cuda11_ged2c529_0/include/rapids/libcxx/include', '/home/path/env/include/rapids/libcxx/include'
Building the same environment with mamba results in no errors. Here is the command that I run to build the environment in mamba:
mamba create -n rapids-22.06 -c rapidsai -c nvidia -c conda-forge \
cuml=22.06 python=3.9 cudatoolkit=11.4
Expected behavior
A new Python environment with cuML is created.
Environment details (please complete the following information):
- Environment location: Bare-metal
- Linux Distro/Architecture: Ubuntu 22.04 amd64
- GPU Model/Driver: RTX 3090 and 470.129.06
- CUDA: 11.4
- Method of cuDF & cuML install: mamba/micromamba
Additional context
This issue has also been reported to mamba, but I am not sure if it's on their end: https://github.com/mamba-org/mamba/issues/1772
cc @bdice, possibly related to how we're packaging libcu++?
We're packaging the libcu++ includes in multiple conda packages, which end up clobbering one another in the include/rapids/ path. I don't know how we should prevent that clobbering. An ideal solution would be to package libcu++ as its own conda package and depend on it, but I think there are some limitations preventing us from doing that currently. @robertmaynard may have more knowledge on that front.
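The clobbering described above can be checked for directly. Below is a hedged sketch that maps each installed path to the packages shipping it; the function and package layout are illustrative (in practice the roots would be extracted package directories under the micromamba pkgs cache, such as ~/micromamba/pkgs/libraft-distance-22.06.00-...), not part of any RAPIDS or mamba tooling:

```python
# Illustrative sketch: find destination paths that more than one extracted
# conda package would install, i.e. the "clobbering" described above.
# find_clobbered_paths is a hypothetical helper, not a real mamba/conda API.
from collections import defaultdict
from pathlib import Path


def find_clobbered_paths(pkg_roots):
    """Return {relative_path: [package names]} for files shipped by >1 package."""
    owners = defaultdict(list)
    for root in map(Path, pkg_roots):
        for f in root.rglob("*"):
            if f.is_file():
                owners[str(f.relative_to(root))].append(root.name)
    return {path: pkgs for path, pkgs in owners.items() if len(pkgs) > 1}
```

Any path reported by such a scan would be a candidate for the "File exists" error above, since two packages claim the same install destination.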
edit: just read this again and saw it says libcxx and not libcudacxx. I believe this is still related to libcu++ but I don't know enough about its packaging to say that with complete certainty.
just read this again and saw it says libcxx and not libcudacxx. I believe this is still related to libcu++ but I don't know enough about its packaging to say that with complete certainty.
You are correct that this is libcudacxx; libcxx is a component of it.
An ideal solution would be to package libcu++ as its own conda package and depend on it, but I think there are some limitations for why we can't currently do that.
We can't do that currently due to how CMake and nvcc interact with system includes and the implicit user includes that nvcc has.
I have done some additional testing, and all other RAPIDS libraries install. I only get errors when cuML is included. cuSpatial, cuDF, cuGraph, cuXfilter, cuSignal, and cuCIM all install without issues.
I am able to install cuml=22.02 with micromamba 0.24/0.25, but 22.04 and 22.06 have the same issue.
Hi, I can confirm this issue.
We use micromamba in a multi-stage Docker build to optimize our build times and image sizes.
This will make it painful for us to include RAPIDS.
+1, running into the same issue with micromamba on CI
As a test, I fully removed conda and mamba from my system and then installed micromamba. I was able to reproduce this error with the 22.06 and 22.08 releases, but was able to successfully solve for and create the environment with the current 22.10 nightly.
Could you let us know if you still see these errors with the current 22.10 nightly?
micromamba create -n micro-rapids-22.10 -c rapidsai-nightly -c nvidia -c conda-forge cuml=22.10 python=3.9 cudatoolkit=11.5
...
nicholasb@nicholasb-HP-Z8-G4-Workstation:~$ micromamba activate micro-rapids-22.10
(micro-rapids-22) nicholasb@nicholasb-HP-Z8-G4-Workstation:~$ python
Python 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21)
[GCC 10.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cuml
>>> cuml.__version__
'22.10.00a+42.g74372f357'
Could you let us know if you still see these errors with the current 22.10 nightly?
I am able to build cuml with the current 22.10 nightly. Thank you for the help!