hermetic_cc_toolchain
feat: create cc_toolchains for multiple exec platforms
This change aims to allow hermetic_cc_toolchain to create toolchains for multiple exec platforms. It now generates repositories of the format {exec}-zig_sdk and registers each toolchain to be exec_compatible_with only that platform's specific constraints.
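To illustrate, each generated repository ends up registering something along these lines (a rough sketch; the labels and names here are illustrative, not the exact generated output):

```starlark
# Rough sketch of what a generated @{exec}-zig_sdk repository registers;
# the toolchain label below is a hypothetical placeholder.
toolchain(
    name = "linux_amd64_toolchain",
    exec_compatible_with = [
        "@platforms//os:linux",
        "@platforms//cpu:x86_64",
    ],
    target_compatible_with = [
        "@platforms//os:linux",
        "@platforms//cpu:x86_64",
    ],
    toolchain = ":cc_toolchain_definition",  # hypothetical label
    toolchain_type = "@bazel_tools//tools/cpp:toolchain_type",
)
```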
I'd like to keep this PR as a WIP while we discuss how we want this to look and whether you're happy with its general flow.
This is an attempt at fixing https://github.com/uber/hermetic_cc_toolchain/issues/148
So far I have tested that my local client (mac) can build C code using the toolchains registered on an x86_64 remote executor, and have seen success. It will be hard for me to test every combination, but I will be able to run local (x86_64) -> remote (arm64) tests as well.
@linzhp @motiejus Apologies, not sure who is best to ping, but some early eyes on this would be great to understand if you're happy with its direction.
I know that a valid use case for this project is building with --platforms @zig_sdk//platforms:amd64_linux, and that the same mechanism is used to rely on specific libc versions when more than one toolchain is registered.
I suggest that the target platform definitions should either be defined inside hermetic_cc_toolchain itself or in a separate generated repository. For the purpose of toolchain resolution only their constraint_values matter, so they should be decoupled from the toolchain logic that generates exec-platform-aware toolchains.
e.g.
@amd64-linux-zig_sdk//platforms:amd64_linux matches the constraints of
@arm64-linux-zig_sdk//platforms:amd64_linux, as both describe only the target platform.
So it's likely best this PR does some minimal further work to generate these platforms somewhere central.
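Concretely, a centrally generated platforms package might look like this (a sketch only; the repository and package names are placeholders, not a decision):

```starlark
# platforms/BUILD in a single shared repository, generated once rather than
# per exec platform. Only the constraint_values matter for resolution.
platform(
    name = "amd64_linux",
    constraint_values = [
        "@platforms//os:linux",
        "@platforms//cpu:x86_64",
    ],
)

platform(
    name = "arm64_linux",
    constraint_values = [
        "@platforms//os:linux",
        "@platforms//cpu:aarch64",
    ],
)
```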
Thanks for the initial look @linzhp
I have addressed the comments made. I have added a local repository rule for zig_sdk that symlinks to the exec repo matching the host, to try to preserve some backwards compatibility. I am not entirely confident in it; it might cause more problems than it solves.
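The rough shape of the idea is something like this (a minimal sketch with illustrative names; _host_exec_repo_name is a hypothetical stand-in for whatever logic picks the matching {exec}-zig_sdk repository, not the PR's actual code):

```starlark
# Sketch of the compatibility shim: a local repository rule that mirrors
# the exec-platform repository matching the host into @zig_sdk.
def _zig_sdk_alias_impl(rctx):
    exec_repo = _host_exec_repo_name(rctx)  # hypothetical, e.g. "amd64-linux-zig_sdk"
    # Resolve the exec repo's root and symlink its top-level entries here.
    root = rctx.path(Label("@" + exec_repo + "//:BUILD.bazel")).dirname
    for entry in root.readdir():
        rctx.symlink(entry, entry.basename)

zig_sdk_alias = repository_rule(
    implementation = _zig_sdk_alias_impl,
    local = True,
)
```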
I have tested the latest iteration with RE where the host was x86 and the remote was arm64, and got successful compilations. I cannot run the full test suite yet as I'm running into a problem with rules_go that I am looking into.
@linzhp ping/bump: is there anything else outstanding on this PR that you'd like me to address?
I can take a look sometime this week
There are some issues with Windows' platform names though.
Let me investigate.
Hopefully fixed: I wasn't doing the correct substitution for windows/macos in the local repository symlinking logic, as I don't necessarily have a machine to test those on.
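For reference, the fix amounts to prefix-matching the raw OS string rather than expecting exact names (a sketch; the exact mapping here is my assumption):

```starlark
# rctx.os.name is the JVM "os.name" property lower-cased, so macOS reports
# "mac os x" and Windows something like "windows server 2019"; match prefixes.
def _host_os(rctx):
    name = rctx.os.name
    if name.startswith("mac"):
        return "macos"
    if name.startswith("windows"):
        return "windows"
    return "linux"
```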
Pushed a change that hopefully fixes the failing workflows; could we try again please? Is there any way for me to run the majority of it locally, beyond bazel test //..., so that I am not wasting resources testing it in your CI flow?
That's the best bet. I think the main problem is the support for different platforms. I just changed the permissions, so hopefully your updates can automatically trigger CI without my approval. That should make your iteration faster.
Looks like macOS fails for reasons outside my control; see https://github.com/uber/hermetic_cc_toolchain/issues/131
This seems to be because macOS isn't allowed to cross-compile the other toolchains being registered. For macOS I suppose we need to not register the other platforms; I am unsure of the pattern for that.
Fixed all failures apart from the release checks. I have attempted to change the API for the WORKSPACE file, but it fails when trying to use the released version of hermetic_cc_toolchain, as that version does not have the new change.
@linzhp Can you recommend the correct approach to fixing this? I don't see a way of doing it without breaking the API of toolchains() inside defs/toolchains.bzl.
In the meantime I'll attempt a refactor that is fully backwards compatible.
UPDATE: Added backwards compatibility; it was simpler than I thought, since I don't usually interact with WORKSPACE files. Hopefully it's in a good state now. Thanks again for your patience while I fight the multiple-platform issues.
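For reference, the backwards-compatible shape is roughly this (a hypothetical signature only; the real parameters of toolchains() in defs/toolchains.bzl differ):

```starlark
# Hypothetical sketch: the new argument defaults to None, so existing
# WORKSPACE call sites keep the previous single-repo behaviour unchanged.
def toolchains(exec_platforms = None, **kwargs):
    if exec_platforms == None:
        # Old behaviour: one @zig_sdk repository for the host platform.
        _declare_zig_sdk_repos(["host"], **kwargs)  # hypothetical helper
    else:
        _declare_zig_sdk_repos(exec_platforms, **kwargs)
```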
Thanks for the contribution!