Document how to do cross-compilation
The readme mentions how to cross compile for wasm, but doesn't explain cross compilation for any other platforms.
I am using rules_rust in conjunction with rules_docker and would like to compile with macOS as the host, targeting Linux. Currently, without modifications, the Docker image is built for macOS and fails on launch:
standard_init_linux.go:211: exec user process caused "exec format error"
My current attempt adds an entry to the .bazelrc file which specifies the platform for the image target.
build:image --platforms=//server:docker_platform
The platform is defined as such:
platform(
    name = "docker_platform",
    constraint_values = [
        "@platforms//os:linux",
    ],
)
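One thing worth checking (an assumption on my part, since it depends on which toolchains are registered): registered toolchains usually carry both an OS and a CPU constraint, so a platform that only pins the OS may not match any of them during resolution. A platform that also pins the CPU would look like:

```starlark
platform(
    name = "docker_platform",
    constraint_values = [
        "@platforms//os:linux",
        # Assumed target CPU; adjust to the image's actual architecture.
        "@platforms//cpu:x86_64",
    ],
)
```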
The error on build is
ERROR: While resolving toolchains for target //server:image_binary: no matching toolchains found for types @io_bazel_rules_rust//rust:toolchain
ERROR: Analysis of target '//server:image' failed; build aborted: no matching toolchains found for types @io_bazel_rules_rust//rust:toolchain
INFO: Elapsed time: 0.152s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 146 targets configured)
Edit:
I have loaded a custom toolchain in the WORKSPACE which seems like a step in the right direction but I am still hitting the same error.
rust_repository_set(
    name = "rust_darwin_x86_64",
    exec_triple = "x86_64-apple-darwin",
    extra_target_triples = ["x86_64-unknown-linux-gnu"],
    version = "1.39.0",
)
And updated .bazelrc, having tried both @io_bazel_rules_rust//rust/platform:linux and //server:docker_platform for the platforms argument
build:image --extra_toolchains=@rust_darwin_x86_64_toolchains//:toolchain_for_x86_64-unknown-linux-gnu --platforms=...
Hey @arlyon, I am trying this myself. Did you manage to compile Rust for use in Docker in the end?
Unfortunately, no. My main motivation is to use this with skaffold, and I have resorted to maintaining my own Dockerfiles instead. I may take another crack at it when I get the time, so if you make any headway, feel free to include it here.
https://github.com/bazelbuild/rules_rust/issues/770 also seems related to this.
It seems https://github.com/GoogleContainerTools/distroless/pull/462/files is a nice example of what you seem to be asking about (even though it uses quite an old version of rules_rust). Is there anything more to do here?
One thing I think we could do is to document how to cross compile Rust locally (without docker, just locally, so the docs are not distracted by docker).
Volunteers welcomed :)
I have an example here on doing cross-compilation on a macOS host to a Linux musl target.
It leverages the platforms infrastructure and --incompatible_enable_cc_toolchain_resolution. It also uses transitions to unconditionally compile the binary to the musl toolchain.
Not sure if this is the correct approach, but it seems to be working.
The example omits any dependencies, which complicate things a bit. In particular, if we want to compile the target for multiple platforms (e.g., to run unit tests on the host), it may be necessary to sprinkle selects on some raze-generated BUILD files (e.g., ring requires setting some platform-dependent env variables).
The biggest missing piece I think is lack of support for build_settings, especially when we want to add rustc flags like target-cpu to the target compilation across all deps.
I'm trying to cross-compile from macOS intel -> macOS arm64. I had to set up this rust_repository_set in WORKSPACE:
rust_repository_set(
    name = "rust_darwin_aarch64_cross",
    exec_triple = "x86_64-apple-darwin",
    extra_target_triples = ["aarch64-apple-darwin"],
    iso_date = "2021-06-09",
    version = "nightly",
)
And then set this up in the root BUILD
platform(
    name = "apple_m1",
    constraint_values = [
        "@platforms//os:macos",
        "@platforms//cpu:aarch64",
    ],
)
This appeared to work when called with bazel build --platforms //:apple_m1 //some:target -- at least until the third-party packages finished building. Once they were done, the build failed because it couldn't find any third-party dep -- maybe a missing crate 'envconfig' -- even though there is a line like --extern 'envconfig=bazel-out/darwin-fastbuild/bin/external/raze__envconfig__0_10_0/libenvconfig-1665319476.rlib' in the rustc command printed with bazel build -s. Similar errors were seen for other third-party deps as well.
Is there something about deps that still needs to be solved before cross compilation will work?
🤦 This error is caused because I needed to specify edition = "2018" on the rust_repository_set. Otherwise this seems to work. I am still curious why rules_rust does not declare all the main supported platforms as valid cross-compilation targets by default, though.
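For reference, the rust_repository_set that ended up working, with the edition attribute added (everything else as in the snippet above):

```starlark
rust_repository_set(
    name = "rust_darwin_aarch64_cross",
    # Without this, third-party deps failed with "maybe a missing crate" errors.
    edition = "2018",
    exec_triple = "x86_64-apple-darwin",
    extra_target_triples = ["aarch64-apple-darwin"],
    iso_date = "2021-06-09",
    version = "nightly",
)
```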
I found the example from @duarten very useful and got it working for basic crates when cross-compiling from macOS to Linux.
However, when a crate contains custom Rust macros, it throws an error like this:
can't find crate for futures_macro
 --> external/raze__futures_util__0_3_19/src/async_await/stream_select_mod.rs:6:9
  |
6 |         pub use futures_macro::stream_select_internal;
  |                 ^^^^^^^^^^^^^ can't find crate
  |
  = note: extern location for futures_macro is of an unknown type: bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/raze__futures_macro__0_3_19/libfutures_macro-152066848.dylib
  = help: file name should be lib*.rlib or lib*.so
More info about the issue here: https://github.com/duarten/rust-bazel-cross/issues/2
FWIW I started down this path, learning about transitions and whatnot, and then realized, oh, I don't think I need that. I ended up writing the following two entries in my WORKSPACE:
rust_repository_set(
    name = "rust_macos_arm64_linux_tuple",
    edition = "2021",
    exec_triple = "aarch64-apple-darwin",
    extra_target_triples = ["x86_64-unknown-linux-gnu"],
    version = "1.61.0",
)

rust_repository_set(
    name = "rust_macos_x86_64_linux_tuple",
    edition = "2021",
    exec_triple = "x86_64-apple-darwin",
    extra_target_triples = ["x86_64-unknown-linux-gnu"],
    version = "1.61.0",
)
...and everything just started working. (I put both in because we have folks on M1 macs and folks on Intel macs.)
I suspect that this was simple for us because we had already done the work to configure a clang-based toolchain that could target x86_64-unknown-linux-gnu from macOS, which seems to be what most of @duarten's example is doing (albeit with *-linux-musl instead of *-linux-gnu).
But the problems seem to be separable, and this was not obvious to me from the discussion above.
Step 1: Configure a cc toolchain that runs on macOS and targets Linux.
Step 2: Call rust_repository_set to declare a Rust toolchain that does the same.
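A rough sketch of what Step 1 registers (all names here are hypothetical, and the actual cc_toolchain target -- a clang cross-compiler plus a Linux sysroot -- is the hard part and is omitted):

```starlark
# BUILD file sketch: a toolchain() wrapper that tells Bazel this cc toolchain
# executes on macOS and produces x86_64 Linux output. Resolution picks it up
# when building with --incompatible_enable_cc_toolchain_resolution and a
# Linux --platforms value.
toolchain(
    name = "cc_macos_to_linux",  # hypothetical name
    exec_compatible_with = [
        "@platforms//os:macos",
    ],
    target_compatible_with = [
        "@platforms//os:linux",
        "@platforms//cpu:x86_64",
    ],
    # ":linux_cc_toolchain" is an assumed cc_toolchain target configured
    # elsewhere with the cross clang and sysroot.
    toolchain = ":linux_cc_toolchain",
    toolchain_type = "@bazel_tools//tools/cpp:toolchain_type",
)
```

It would then be registered with register_toolchains("//:cc_macos_to_linux") in the WORKSPACE.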
@DeCarabas any config/code you can share on this exact cc toolchain setup?
This is all really unclear. I can kind of see that you use rust_repository_set() or maybe rust_register_toolchains() like this
rust_register_toolchains(extra_target_triples=["riscv32i-unknown-none-elf"])
(Though the docs for rust_register_toolchains() basically say not to do that. I assume they are out of date.)
That will presumably make sure there's a Rust toolchain that can compile to that --target. But how do you actually tell a Rust Bazel target (i.e. a rust_binary) to use that toolchain or --target?
There's no parameter to rust_binary that I can see, and duarten's example doesn't seem to use the result of rust_repository_set() anywhere.
How can I have a workspace that compiles different things for different targets?
Ok it seems like Bazel's model is that there is only one target per compilation. Unfortunate. I guess that's all Google really needs internally.
Apparently you need to use custom transitions to solve it but that looks very complicated.
> Ok it seems like Bazel's model is that there is only one target per compilation. Unfortunate. I guess that's all Google really needs internally.
It's worth noting that --platforms is a plural flag, so you can build for multiple platforms in one invocation, but all targets you specify will be built for those platforms.
i.e. you can do: bazel build --platforms=//some:linux_arm64,//some:linux_amd64 //some/rust:target
FWIW there was quite a bit of discussion at BazelCon this week about folks, including Google, wanting it to be easier to have different targets specify in their BUILD files what platforms they should build for, and properly respecting that, but work needs to be done. I think https://github.com/bazelbuild/bazel/issues/14669 is a reasonable tracking issue here, which links to https://groups.google.com/g/bazel-dev/c/QK7CI__ReDM which concluded: "Needs more design work".
> Apparently you need to use custom transitions to solve it but that looks very complicated.
I agree that this is more complicated than it needs to be.
FWIW I currently have roughly the following code to do this:
In a .bzl file:
load("@bazel_skylib//lib:paths.bzl", "paths")

def _transition_to_impl(ctx):
    # We need to forward the DefaultInfo provider from the underlying rule.
    # Unfortunately, we can't do this directly, because Bazel requires that the
    # executable to run is actually generated by this rule, so we need to
    # symlink to it, and generate a synthetic forwarding DefaultInfo.
    result = []
    binary = ctx.attr.binary[0]
    default_info = binary[DefaultInfo]
    new_executable = None
    files = default_info.files
    original_executable = default_info.files_to_run.executable
    data_runfiles = default_info.data_runfiles
    default_runfiles = default_info.default_runfiles
    if original_executable:
        new_executable_name = ctx.attr.basename if ctx.attr.basename else original_executable.basename

        # In order for the symlink to have the same basename as the original
        # executable (important in the case of proto plugins), put it in a
        # subdirectory named after the label to prevent collisions.
        new_executable = ctx.actions.declare_file(paths.join(ctx.label.name, new_executable_name))
        ctx.actions.symlink(
            output = new_executable,
            target_file = original_executable,
            is_executable = True,
        )
        files = depset(direct = [new_executable])
        data_runfiles = data_runfiles.merge(ctx.runfiles([new_executable]))
        default_runfiles = default_runfiles.merge(ctx.runfiles([new_executable]))
    result.append(
        DefaultInfo(
            files = files,
            data_runfiles = data_runfiles,
            default_runfiles = default_runfiles,
            executable = new_executable,
        ),
    )
    return result

def _transition_to_linux_arm64_transition_impl(settings, attr):
    return {"//command_line_option:platforms": [
        Label("//some:linux_arm64"),
    ]}

_transition_to_linux_arm64_transition = transition(
    implementation = _transition_to_linux_arm64_transition_impl,
    inputs = [],
    outputs = ["//command_line_option:platforms"],
)

linux_arm64_binary = rule(
    implementation = _transition_to_impl,
    attrs = {
        "basename": attr.string(),
        "binary": attr.label(allow_files = True, cfg = _transition_to_linux_arm64_transition),
        "_allowlist_function_transition": attr.label(
            default = "@bazel_tools//tools/allowlists/function_transition_allowlist",
        ),
    },
    executable = True,
)
and then in a BUILD file you can write:
rust_binary(
    name = "platform_generic_binary",
    ...
)

linux_arm64_binary(
    name = "my_binary_for_linux_arm64",
    binary = ":platform_generic_binary",
)
re: https://github.com/bazelbuild/rules_rust/issues/276#issuecomment-1320198911
Doesn't this require you to define custom cc_toolchain targets for all exec and target platforms? Since you'll lose the ability to use the local/auto toolchain.
> Doesn't this require you to define custom cc_toolchain targets for all exec and target platforms? Since you'll lose the ability to use the local/auto toolchain.
Yes, you would need appropriate toolchains set up for both Rust and C++.