rules_cuda
Hermetic CUDA Toolkit
This issue tracks the progress of the Hermetic CUDA Toolkit implementation.
In 0.2.x
- [x] #284
- [x] #302
- [x] #303
- [x] #304
- [x] #305
- [x] #306
- [x] #301
- [x] #309
- [x] #310
In 0.3.x
- [x] #300
- [x] #285
- [ ] #286
- [ ] #324
- [ ] Generate version.json for each component
@cloudhan, super excited for this! Thanks for starting the work on this. Roughly when do you expect this to be done?
I am currently on my way of jumping ship, that is, I am joining NVIDIA ;). It may take some time for me to settle down, so this might take a bit longer. I hope to have a working version by the end of next month.
First of all, I wish you all the best in your work! Thank you for your efforts on this. We're eagerly looking forward to seeing progress on this feature, as it’s something we truly need. Please let us know if there’s any way we can assist or contribute.
@cloudhan, I hope your time at NVIDIA is going well! We're really excited about the possibilities of using hermetic CUDA in RBE. We're currently facing a decision about whether to build a temporary non-hermetic solution for RBE or wait for this issue to be resolved.
Could you give us an update on your plan here? Any information you can share would help us make the best decision for our project's roadmap. +1 to @honeway, and we'd be happy to assist in some way too.
Once this effort starts, I can recommend using a rule-based toolchain, which was announced at the last BazelCon as the modern way of writing toolchains.
@udaya2899 The cloudhan/hermetic-ctk-2 branch has actually been working for months. Better to test it and provide some feedback.
@hofbi Seems very interesting, but it appears to be at a very early stage. Better to just wait for now.
For a preview,
https://github.com/cloudhan/cuda-samples/blob/bazel-cuda-components/WORKSPACE.bazel shows what a manually configured repo will look like. Branch cloudhan/hermetic-ctk-2 contains the related feature.
https://github.com/cloudhan/cuda-samples/blob/bazel-cuda-redist-json/WORKSPACE.bazel shows what an automatically configured repo will look like for a WORKSPACE-based project. Branch cloudhan/hermetic-ctk-3 contains the related feature.
Thanks for working on this. It's the holiday season, and I couldn't find time to experiment with your dev branch until now.
Expect to hear from me by the second week of January.
Unfortunately, we don't support WORKSPACE in our setup and only use MODULE.bazel. Which is the most recent branch to try? Is MODULE.bazel considered working in the tmp branch, or is it the hermetic-ctk-2 branch?
Happy New Year 2025! I'm just back from vacation and am trying out your branch locally using git_override or local_path_override to give some early feedback, if any. Which branch has a possible working solution for MODULE.bazel? I see hermetic-ctk-2, hermetic-breaking-changes, as well as tmp. Let me know the best way to try this out on our RBE setup.
@udaya2899 I updated the previous comment. The branches are stacked one on top of another, so blindly picking the last one should be OK.
You can also find a MODULE-based config in the referenced cuda-samples repo.
Auto config with redistrib.json in a MODULE-based project is not implemented at the moment; maybe in future PRs. Another unsolved feature is how we can make switching the CUDA version easier, say via an exported environment variable or a flag to build against different releases of CUDA. A possible solution is to extend the current alias mapping to a versioned mapping with a select in between.
Hi @cloudhan ! Thank you for your efforts on this! Do you have an estimate on when version 0.3 with these changes could be released?
@vdittmer I think once I can confirm there is a non-breaking path toward a multi-version deliverable toolchain, I can proceed to merge those PRs.
It seems a select-ed alias in @local_cuda can solve the problem:
constraint_setting(name = "cuda_version")
constraint_value(name = "cuda_12_2", constraint_setting = ":cuda_version")  # version 12.2
constraint_value(name = "cuda_12_8", constraint_setting = ":cuda_version")  # version 12.8
config_setting(name = "is_cuda_12_2", constraint_values = [":cuda_12_2"])
alias(
    name = "cublas",
    actual = select({
        ":is_cuda_12_2": "@local_cuda_cublas_v12.2.y",
        "//conditions:default": "@local_cuda_cublas_v12.8.x",
    }),
)
should do the trick here.
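Alternatively, a user-settable flag could drive the same select without platform constraints; a minimal sketch using bazel_skylib's string_flag (the flag name and its placement in @local_cuda are hypothetical):

load("@bazel_skylib//rules:common_settings.bzl", "string_flag")

# Hypothetical flag to pick the CUDA release on the command line.
string_flag(
    name = "cuda_version_flag",
    build_setting_default = "12.8",
)

config_setting(
    name = "is_cuda_12_2_flag",
    flag_values = {":cuda_version_flag": "12.2"},
)

A build could then opt into the older release with something like bazel build --@local_cuda//:cuda_version_flag=12.2 //....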
Thanks to your awesome work, I was able to make progress. A small potential bug is the endswith("~") check here, which fails with Bazel 8.x, as Bazel 8 and above use + as the terminator. In general, it's not recommended to depend on this. Maybe we drop the check?
I was able to skip that error by changing to endswith("+") .
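If the check were kept at all, it would have to accept both separators, since Bazel 7 uses "~" and Bazel 8+ uses "+" in canonical repo names; a hypothetical helper sketch (though dropping the check entirely, as suggested above, may be cleaner):

def _ends_with_canonical_separator(name):
    # Bazel 7 canonical repo names use "~" as the separator; Bazel 8+ uses "+".
    # Accepting both avoids tying the check to one Bazel version, but relying
    # on canonical-name internals is discouraged either way.
    return name.endswith("~") or name.endswith("+")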
For now, leaving #324 out and installing clang on the RBE system, and retrying this setup, I get:
(Exit 1): clang failed: error executing CudaCompile command (from target //intrinsic/gpu/gpu_adder:gpu_adder) /usr/bin/clang -x cu '--cuda-path=cuda-not-found' '-frandom-seed=bazel-out/k8-opt/bin/intrinsic/gpu/gpu_adder/_objs/gpu_adder/gpu_adder.cu.pic.o' -iquote . ... (remaining 124 arguments skipped)
clang: error: cannot find libdevice for sm_35; provide path to different CUDA installation via '--cuda-path', or pass '-nocudalib' to build without linking with libdevice
clang: error: cannot find CUDA installation; provide its path via '--cuda-path', or pass '-nocudainc' to build without CUDA includes
I now understand that this is expected with the current setup since the path is scattered and we don't set it explicitly.
This seems okay for the nvcc compiler, since the individual components are already there and it doesn't expect a single cuda-path argument. For clang, which expects a --cuda-path, the official documentation only suggests making everything available in a single place, and I see no way to tell clang that the components are scattered across different repo paths.
How do we proceed here? The CUDA code we have and want to build with Bazel is all clang-based, so a hermetic CTK that only works with nvcc unfortunately isn't enough.
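For context, clang only accepts a --cuda-path directory that looks like a monolithic toolkit install; roughly (simplified from memory of clang's CUDA detection, so treat the exact file names as assumptions):

cuda-root/
├── bin/                 # ptxas, fatbinary, ...
├── include/             # cuda.h, cuda_runtime.h, ...
├── lib64/ (or lib/)
├── nvvm/libdevice/      # libdevice.10.bc
└── version.txt or version.json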
An alternative proposal I have is to gather all the files, either through symlinks or copies, in a synthetic root directory and pass that to cuda-path.
For example (not verified to work):
load("@aspect_bazel_lib//lib:copy_to_directory.bzl", "copy_to_directory")
copy_to_directory(
name = "create_cuda_root",
srcs = [
"@@rules_cuda//toolchain+local_cuda_cccl:cccl_all_files",
"@@rules_cuda//toolchain+local_cuda_cudart:cudart_all_files",
"@@rules_cuda//toolchain+local_cuda_nvcc:nvcc_all_files",
],
out = "cuda_root",
replace_prefixes = {
"@rules_cuda//toolchain+local_cuda_cccl:cccl_all_files": "",
"@rules_cuda//toolchain+local_cuda_cudart:cudart_all_files": "",
"@rules_cuda//toolchain+local_cuda_nvcc:nvcc_all_files": "",
},
hardlink = "off",
)
and then use this rule's output to set the cuda_path in _detect_deliverable_cuda_toolkit's returned struct. Please let me know if you have a better solution in mind.
Thanks in advance!
I think

> gather all the files

is the only reasonable way to go. We don't want to be coupled with their abstraction.
https://github.com/bazel-contrib/rules_cuda/blob/27d7499993bb64e92f44b13a642b6c1def00fa03/cuda/private/repositories.bzl#L89-L94
nvcc, cccl, and cudart are all required for the nvcc toolchain. Generating a special repo for clang with all files colocated seems fine to me.
I kind of made a dirty hack gathering all those files in one place. There were two problems:
- The downloaded repos don't have a version.txt or version.json file. This also keeps clang from considering the path valid, since I understand it tries to read the version.json file to validate the presence of a valid cuda-path.
- clang expects libcurand and throws an error: fatal error: 'curand_mtgp32_kernel.h' file not found
  - I see we are installing libcurand-dev as a separate step in the GitHub Action tests exclusively for clang.
  - I was able to fix this by adding curand as a component in MODULE.bazel and adding @local_cuda//:curand explicitly to my cuda_library target. Can we explicitly add this to compiler_deps or similar, conditionally when the compiler is clang?
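For reference, the consuming side of that workaround looks roughly like this (the target mirrors the gpu_adder target from the error log above; the MODULE.bazel component wiring is omitted since its exact shape is still in flux):

load("@rules_cuda//cuda:defs.bzl", "cuda_library")

cuda_library(
    name = "gpu_adder",
    srcs = ["gpu_adder.cu"],
    deps = [
        # explicit curand dep so clang can find curand_mtgp32_kernel.h
        "@local_cuda//:curand",
    ],
)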
I don't know if this problem is with our setup here (or even related to bazel/rules_cuda), but I get:
error: cannot specify -o when generating multiple output files
I get the same error even when using clang directly to compile the .cu file and passing a --cuda-path with the collected files.
My finding with the error cannot specify -o when generating multiple output files is that by default clang sets the --cuda-compile-host-device flag, which compiles code for both host and device, generating two .o files, while clang.bzl is only configured to expect one.
Is it enough to compile with the --cuda-device-only flag? If not, what's the alternative? Do we declare multiple .o outputs in clang.bzl and link against both?
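To make the two-pass idea concrete, a hedged command-line sketch (arch and paths are placeholders; whether the two objects can then simply be linked together is exactly the open question):

# device pass: compile only the GPU side
clang -x cuda --cuda-path=<merged-root> --cuda-gpu-arch=sm_70 \
    --cuda-device-only -c kernel.cu -o kernel.device.o
# host pass: compile only the CPU side
clang -x cuda --cuda-path=<merged-root> --cuda-host-only \
    -c kernel.cu -o kernel.host.o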
For collecting those component files in a single place to pass to clang as --cuda-path, I haven't been able to make it work reliably. It works in my local build but not on RBE.
What I did as a prototype is copy those files from the corresponding local_cuda_<component>/<component>/{bin, include, lib, nvvm} paths to create a new folder called clang inside @local_cuda directly.
def config_clang(repository_ctx, cuda, clang_path):
    """Generate `@local_cuda//toolchain/clang/BUILD`

    Args:
        repository_ctx: repository_ctx
        cuda: The struct returned from `detect_cuda_toolkit`
        clang_path: Path to clang executable returned from `detect_clang`
    """
    is_local_ctk = None
    if len(repository_ctx.attr.components_mapping) != 0:
        is_local_ctk = False

    # for deliverable ctk, clang needs the toolkit as cuda_path
    if not is_local_ctk:
        nvcc_repo = components_mapping_compat.repo_str(repository_ctx.attr.components_mapping["nvcc"])
        cudart_repo = components_mapping_compat.repo_str(repository_ctx.attr.components_mapping["cudart"])
        cccl_repo = components_mapping_compat.repo_str(repository_ctx.attr.components_mapping["cccl"])
        libpath = "lib"  # any special logic for linux/windows difference?
        generate_version_json(repository_ctx)
        clang_cuda_path = repository_ctx.path("clang_cuda_toolkit")
        repository_ctx.execute(["mkdir", "-p", "clang_cuda_toolkit"])  # non-hermetic mkdir call
        source_paths = [
            repository_ctx.path(Label(nvcc_repo + "//:nvcc/bin")),
            repository_ctx.path(Label(nvcc_repo + "//:nvcc/include")),
            repository_ctx.path(Label(cudart_repo + "//:cudart/include")),
            repository_ctx.path(Label(cccl_repo + "//:cccl/include")),
            repository_ctx.path(Label(nvcc_repo + "//:nvcc/" + libpath)),
            repository_ctx.path(Label(cudart_repo + "//:cudart/" + libpath)),
            repository_ctx.path(Label(cccl_repo + "//:cccl/" + libpath)),
            repository_ctx.path(Label(nvcc_repo + "//:nvcc/nvvm")),
        ]
        for source_path in source_paths:
            repository_ctx.execute(["cp", "-r", str(source_path), clang_cuda_path])  # non-hermetic cp call

    # Generate @local_cuda//toolchain/clang/BUILD
    template_helper.generate_toolchain_clang_build(repository_ctx, cuda, clang_path)
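A possibly more hermetic variant could replace the mkdir/cp subprocesses with repository_ctx.symlink, merging the component directories in Starlark; a rough sketch, assuming the components only collide at the top-level directory names (bin, include, lib, nvvm) and not at individual files:

# merge e.g. nvcc/include, cudart/include, cccl/include into clang_cuda_toolkit/include
for source_path in source_paths:
    for entry in source_path.readdir():
        repository_ctx.symlink(entry, "clang_cuda_toolkit/" + source_path.basename + "/" + entry.basename)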
And in detect_deliverable_cuda_toolkit, before returning the struct, I change path from None to this clang_cuda_toolkit path:
cuda_path = str(Label("@local_cuda//:clang_cuda_toolkit"))
return struct(
    path = str(cuda_path),  # scattered components
    version_major = cuda_version_major,
    version_minor = cuda_version_minor,
    nvcc_version_major = nvcc_version_major,
    nvcc_version_minor = nvcc_version_minor,
    nvcc_label = nvcc,
    nvlink_label = nvlink,
    link_stub_label = link_stub,
    bin2c_label = bin2c,
    fatbinary_label = fatbinary,
)
With this method, the @local_cuda/clang_cuda_toolkit folder has the necessary include, lib, bin, and nvvm paths, and bazel build <some cuda_library target> works locally (with no CUDA installed on the machine) but not on our RBE, where it fails as if it never found the CUDA toolkit:
clang-cpp: error: cannot find libdevice for sm_70; provide path to different CUDA installation via '--cuda-path', or pass '-nocudalib' to build without linking with libdevice
clang-cpp: error: cannot find CUDA installation; provide its path via '--cuda-path', or pass '-nocudainc' to build without CUDA includes
Although I verified that inside the execution_root, the path it mentions does contain the collected CUDA toolkit files. Is there a better "Bazel rule" way of doing this?
Sorry for the multiple comments. I'm posting my findings as I progress through this.
@jsharpe I'd like to move on to 0.3.x and merge the first two changes with a minor fix for the endswith("~") check. I think I can just drop the check, as we no longer rely on the presumed repo name format; we use the explicit mapping now.
What's the status update for this effort? It would be fantastic if the rules worked with fully hermetic toolchains and nothing needed to be installed on the host for compilation.
All features are at the tip of main, just not ready for a release, basically due to clang support and the lack of a redist_json equivalent for MODULE-based projects.
Are the remaining issues documented in other issue tickets?
I'll be sending a PR for fully hermetic clang CUDA compilation support this week :) It's at the top of my list.
Big thank you from my side for your efforts on this! Thanks for being super helpful and fixing bugs/feature requests soon.