Allow using a different linker in deployment
Describe the improvement
Hi! In my Rust projects, I use a different linker like lld or mold for faster compile times, because I use a budget laptop for coding and Rust is notorious for its slow default compile times.
Lately, I have started using https://shuttle.rs for personal projects via their CLI tool cargo-shuttle. The issue I'm facing is that when running `cargo run` or `cargo build`, cargo uses my `.cargo/config.toml` file to select a linker, and that works out just fine with great compile times. But when I use `cargo shuttle run` to run the shuttle project locally, the project gets rebuilt using the slow default linker (gold, I assume) and takes ~15s to rebuild after every change (I use `cargo watch -cx "shuttle run"` to reload on change detection). This makes debug sessions, or just running a dev server while working, very annoying.
The binary built using `cargo build` or `cargo run` is unusable on its own because it requires some options to be set, and it still doesn't behave the same as `cargo shuttle run`.
There might be two reasons for the long compile times that I can think of:
- `cargo shuttle run` does not respect the `$HOME/.cargo/config.toml` or the `project/.cargo/config.toml` file and uses the default linker every time.
- `cargo shuttle run` builds the binary in release mode, and release builds are known for taking a long time to compile.
The improvements that could be made here are:
- Respect the user's linker of choice from the cargo configuration.
- Allow users to choose which build mode (debug or release) to use for locally run binaries.
- Allow users to run the binary as-is, with some default options set, without requiring the CLI to execute it.
Above are the best ideas that I could think of, but I'd really like to hear everyone's opinion and ideas about this.
Duplicate declaration
- [X] I have searched the issues and this improvement has not been requested before.
Since the last release, #922 has been merged, which means that the locally installed cargo will be used instead of a bundled one. I think that might solve the issue you're facing. We can wait until 0.19 and see if it works.
Thanks @jonaro00, I'll wait until 0.19, and if the issue still persists, I'll let you know; otherwise, I'll close this issue.
v0.19.0 should be released on Monday (19 Jun) :crossed_fingers:
Sounds like this will need #928, which I'm addressing in #1050
I tested this on v0.19 and cargo-shuttle actually did read my `.cargo/config.toml`, but the build failed at deployment because the lld binary was not installed in the deployment container.
Maybe we can add a key to `Shuttle.toml` like `required-packages` which would install those packages at deployment. The package names could be standardised against the Alpine or Debian package repositories.
Above is just a suggestion but you may have a better idea.
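As a rough sketch of the suggestion (the `required-packages` key is hypothetical and does not exist in `Shuttle.toml` today; the package names are illustrative and follow the Alpine repository naming):

```toml
# Hypothetical key in Shuttle.toml -- not an existing feature.
# The deployment container would install these before building,
# e.g. by running `apk add lld clang` on startup.
required-packages = ["lld", "clang"]
```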
Installing external dependencies is planned, not sure when it will eventually be released.
Similar to #703
@that-ambuj I just realized: You can get around this issue by keeping your `.cargo/config.toml` locally but adding it to `.gitignore` so that it does not get uploaded. Or is there some other hurdle that will stop this from working?
Yeah, I have a global `.cargo/config.toml` in my home directory. It works perfectly fine for local builds, but it causes deployment builds to crash when I don't `.gitignore` the config.toml, because the container looks for the `lld` linker binary, which does not come pre-installed. Due to this, I have to ignore the `.cargo/config.toml`, and the deployment builds are much slower.
We will consider allowing custom linkers in the upcoming building system. Not sure how possible it is though.
I have two ideas right now:
- We allow users to specify a folder for binaries, and the deployment container would copy them to `/usr/bin` on startup. But there's the issue of version and architecture mismatch, along with the hassle of cross-compilation and copy-pasting binaries.
- Or we can specify a package name and version from the Alpine Package Repo in `Shuttle.toml`, with a separate section for binaries such as `[bins]`. This way the deployment container would run `apk add <package>` on startup. This works provided we use an Alpine-based distro for the deployment container, but it's also possible with the Debian Package Repo.
What do you think about this?
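The second idea could look something like this in `Shuttle.toml` (purely a sketch; the `[bins]` table does not exist today, and the package names are illustrative):

```toml
# Hypothetical [bins] section in Shuttle.toml -- not an existing feature.
# Each entry maps an Alpine package name to a version constraint;
# the deployment container would run `apk add <name>` for each on startup.
[bins]
lld = "*"
mold = "*"
```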
The first bullet is possible now, by gitignoring the config, declaring assets, and using relative paths to the binaries, but as you say, there are version and architecture issues. We are planning something similar to the second bullet.
I should let you know that "hacking" the deployer to run `apt install` before deployment is possible, but not pretty. https://github.com/shuttle-hq/shuttle/issues/703#issuecomment-1515606621
We could probably add something like lld pre-installed to the deployment container, if it's just a simple `apt install`.
Could you provide the cargo config showing how lld is used?
In a `.cargo/config.toml` file inside your project or workspace root, you have to add:

```toml
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=/path/to/ld.lld"]
```
Please note that in one of the recent releases the binary has been renamed from `lld` to `ld.lld`, again highlighting a version mismatch issue.
I also want to mention https://github.com/rui314/mold, which is a Linux-only linker that is even faster than lld.
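For completeness, mold's README suggests a cargo config along these lines, assuming a clang recent enough to accept `-fuse-ld=mold` (clang 12+):

```toml
# ~/.cargo/config.toml -- link with mold via clang (Linux only)
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```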
Cool. Does the path `/path/to/ld.lld` need to be absolute, or can it be `ld.lld` and found via `PATH`? It would be optimal to be able to use the same config file locally and in deployer.
I believe the path needs to be absolute.
In most cases the binaries are located in `/usr/bin`, so I don't think there'll be any exceptions.
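For reference, a quick way to check where (and whether) `ld.lld` is installed locally, assuming it is on `PATH`:

```shell
# Print the absolute path of ld.lld if it is installed,
# otherwise report that it is missing
command -v ld.lld || echo "ld.lld not found on PATH"
```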
@that-ambuj
Ok, then this sounds like a possible feature to implement.
I'll try adding it in the next release, then you can try if it works. It does end up in /usr/bin/ld.lld.
@that-ambuj We're including lld and mold in 0.28.0. I tried compiling a project without and with them, but saw no difference in compile times. You can try with your project though! (soon)
@jonaro00 Actually, there is a slight difference in build times when using lld or mold, but it may be less noticeable because of the overhead introduced by Docker. Bigger gains happen when rebuilding and redeploying a program.