                        Release musl builds
Previously I was able to use --target x86_64-unknown-linux-musl on rusty_v8 and deno (excluding the plugin). https://github.com/denoland/rusty_v8/issues/49
It's unclear to me whether it's best to target only x86_64-unknown-linux-musl or to support it in addition to the glibc target. e.g. sccache seems to only provide binaries for that target.
The one concern is that glibc is required for plugins (but I don't know whether that means plugins can't be loaded by a binary built with musl 🤷‍♂️).
This is useful for platforms without glibc, e.g. Alpine Linux and Amazon Linux (which is occasionally a pain to build on, so it would be helpful if compatible binaries were produced in deno's CI).
xlink: #3243 #1658 #1495 #3356
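For context, the build described in the first paragraph looks something like this (a sketch; the target is as stated above, but the exact invocation in the linked rusty_v8 issue may differ):
# add the musl target and cross-build (plugin excluded, per the above)
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl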
Some progress using rust-musl-builder, but I ran into a glibc issue (maybe?):
https://gist.github.com/hayd/ae5cbe81117863fff98c2f0c877f2b34
/usr/bin/ld: /home/rust/src/deno/target/x86_64-unknown-linux-musl/release/deps/deno-9173e3687c7172b6: hidden symbol `__dso_handle' isn't defined
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
Note: In this context it seems like rusty_v8 builds:
cargo build --package rusty_v8
   Compiling ... #snipped 
   Compiling rusty_v8 v0.1.0
    Finished dev [unoptimized + debuginfo] target(s) in 33m 20s
Not sure what that means...
But it seems like it's the deno(bin) that fails (rather than rusty_v8):
Building [=====================================================> ] 256/257: deno(bin)
# same failure
Not sure if it's helpful but __dso_handle is normally provided by libgcc's (not glibc's) crtbegin.o. It sounds like that's not getting linked in on your system?
On my system:
$ dpkg -S `cc -print-file-name=crtbegin.o`
libgcc-9-dev:amd64: /usr/lib/gcc/x86_64-linux-gnu/9/crtbegin.o
$ nm -o `cc -print-file-name=crtbegin.o` | grep __dso_handle
/usr/lib/gcc/x86_64-linux-gnu/9/crtbegin.o:0000000000000000 D __dso_handle
🤷‍♂️ I think the objective is for musl to statically link, specifically not to link against that particular system... this might already be possible with some flags?
(The environment is in the Dockerfile.)
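For what it's worth, rustc does expose a target feature for forcing the C runtime to be linked statically; whether it helps with the __dso_handle error above is untested (a sketch, and musl targets usually default to static linking anyway):
# crt-static statically links the C runtime into the binary
RUSTFLAGS='-C target-feature=+crt-static' cargo build --target x86_64-unknown-linux-musl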
cc @chrmoritz ?
I think the objective is for musl to statically link
crtbegin.o is linked in statically, not dynamically.
It is in the docker image too, perhaps an env issue? (e.g. RUSTFLAGS or CFLAGS? 😬 )
rust@c3b60eef5da1:~/src/deno/cli$ dpkg -S `cc -print-file-name=crtbegin.o`
libgcc-7-dev:amd64: /usr/lib/gcc/x86_64-linux-gnu/7/crtbegin.o
rust@c3b60eef5da1:~/src/deno/cli$ nm -o `cc -print-file-name=crtbegin.o` | grep __dso_handle
/usr/lib/gcc/x86_64-linux-gnu/7/crtbegin.o:0000000000000000 D __dso_handle
rust@c3b60eef5da1:~/src/deno/cli$ env
LS_COLORS=... #snipped
LESSCLOSE=/usr/bin/lesspipe %s %s
GN_ARGS=clang_use_chrome_plugins=false treat_warnings_as_errors=false use_sysroot=false use_glib=false use_gold=true clang_base_path="/tmp/clang"
HOSTNAME=c3b60eef5da1
RUST_BACKTRACE=full
GN_VERSION=latest
PKG_CONFIG_ALL_STATIC=true
DENO_VERSION=0.31.0
OPENSSL_DIR=/usr/local/musl/
LIBZ_SYS_STATIC=1
OPENSSL_LIB_DIR=/usr/local/musl/lib/
NINJA=/bin/ninja
PWD=/home/rust/src/deno/cli
PKG_CONFIG_ALLOW_CROSS=true
HOME=/home/rust
PG_CONFIG_X86_64_UNKNOWN_LINUX_GNU=/usr/bin/pg_config
CLANG_BASE_PATH=/tmp/clang
DEP_OPENSSL_INCLUDE=/usr/local/musl/include/
TERM=xterm
SHLVL=1
DENO_BUILD_MODE=release
PQ_LIB_STATIC_X86_64_UNKNOWN_LINUX_MUSL=1
TARGET=musl
GN=/bin/gn
PATH=/home/rust/.cargo/bin:/usr/local/musl/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NINJA_VERSION=1.8.2
OPENSSL_INCLUDE_DIR=/usr/local/musl/include/
OPENSSL_STATIC=1
LESSOPEN=| /usr/bin/lesspipe %s
_=/usr/bin/env
@jeromegn Do you have any tips/suggestions here, noting your comment: https://github.com/emk/rust-musl-builder/issues/65#issuecomment-469049466
Did the musl binaries you created for fly work on other linux systems?
I'm not sure I have much to add. I mostly used the linked Dockerfile and it worked. Inspired (read: ripped off) from: https://hub.docker.com/r/alexmasterov/alpine-libv8
That said, there are Alpine images with glibc included; those might help. It's a lot of trial and error to get it to compile. And then the performance can vary a bit between musl and glibc.
If anyone is interested, I have compiled Deno with V8 in an Alpine Linux Docker container: without glibc, entirely against Alpine's musl libc.
https://gist.github.com/kesor/68df53a5d76784a235ca6b0e7efed4d9
Enjoy!
@kesor This is great, good work! Do these (or some of these) options make sense to be incorporated as gn args? That patch looks scary to maintain. :)
I spent hours trying to find GN args to pass via env var or some other way, but couldn't find how. My knowledge of GN is limited; probably someone with more experience can find the way. I didn't mean to provide that patch as "the solution", only as a proof of concept.
PS: Alpine has other llvm versions as well, maybe with llvm8 the binary will be smaller?
Also note that the eventual binary still has a dependency on libgcc at the least, and maybe other dependencies I didn't quite catch; it does run within the docker image resulting from the Dockerfile build.
In llvm10 there is no_inline_line_tables which shrinks the binary. I think the solution is that BUILD.gn is modified to allow arguments that support what your diff does :)
Are both these required? Do you have any idea of the reasons?
-      ldflags += [ "-Wl,--color-diagnostics" ]
+      ldflags += [  ]
-  libs = []
+  libs = ["execinfo"]
Color diagnostics was reported as an invalid option for the linker; the build wouldn't work with it. I also think a different linker was used for parts of the build for some reason. Maybe that is the root cause, and resolving it would solve the arguments issue.
libexecinfo was also required because linking failed without it. I don't really know the reason why.
The most annoying thing was the custom location of the header files, which was not automatically detected; I couldn't find a way to provide an include path via GN args. I would also assume that this path will change for different llvm/alpine versions.
I would also assume that this path will change for different llvm/alpine versions.
Yes, I think we'd want to pass this (via GN_ARGS)...?
@ry do you have any pointers in adding these/or suggest whether this a good/bad way forward?
https://gist.github.com/kesor/68df53a5d76784a235ca6b0e7efed4d9#file-rusty-changed-build-diff
Also note that the eventual binary still has a dependency on libgcc at the least
Is this referring to the libgcc Alpine package? I wanted to avoid this dependency by compiling with musl and the Linux musl headers so that deno would be a fully static binary. This can be achieved with node, but I'm not sure how deep into the Rust toolchain the glibc requirement goes.
@nathanblair Note that libgcc is not the same as glibc; whereas musl is an alternative to glibc, there is no "special" alternative for libgcc on Alpine.
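As a quick check of which of these end up as runtime dependencies, inspecting the produced binary works (a sketch; the binary path is hypothetical):
# 'statically linked' in the output means no runtime glibc/libgcc dependency
file target/x86_64-unknown-linux-musl/release/deno
# or list the dynamic dependencies, if any
ldd target/x86_64-unknown-linux-musl/release/deno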
Are both these required? Do you have any idea of the reasons?
-  libs = []
+  libs = ["execinfo"]
I had a similar problem with execinfo and was able to workaround it with:
https://github.com/nizox/v8/commit/f66447ba9d7e8c901f40552c5409a3b9457e8be3
Also, you should be able to replace the libgcc dependency with libunwind: https://github.com/nizox/chromium_build/commit/39a009c6d3898c64c0d3abc65feb9f82f304dec8
Can deno statically link the C library to resolve issues like this, the way Go does?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I think this one could probably stay open?
closing in favor of #1658
Not the same as #1658
I took a stab at this, but I ultimately lack sufficient knowledge of rust internals (something I unfortunately would need in order to fix this, since cargo/rustc routinely ignore things like LDFLAGS and co. in the environment).
Still, I got rusty_v8 to successfully build and link under a pure musl+llvm environment (abyss) with no patches, so I figured it was worth documenting.
Documenting the process
Rusty_V8
To get rusty_v8 to build, we need to build v8 from source, and not use any of the downloaded tooling. This means we need a system clang, system ninja (or samu), and system gn. We can achieve this with the following exports:
export V8_FROM_SOURCE=YES GN=/usr/bin/gn NINJA=/usr/bin/samu CLANG_BASE_PATH=/usr
At this point, v8 still won't build because of musl internals and attempts to build libcxx.
Libcxx needs a special cmake flag to build under musl, which the gn build system doesn't know about.
As such, we need to use system libcxx, by passing use_custom_libcxx=false to gn, as so: export GN_ARGS='use_custom_libcxx=false'.
We're close now, but this will fail linking, since backtrace tools are unavailable in musl core, and we need to link to libexecinfo to get those.
Unfortunately, the gn build system does not explicitly handle this (nor allow us to override libs or ldflags without patching the build script - something previous commenters have done).
Thankfully, we can also just disable the parts that need libexecinfo (namely, the debugging components).
This turns our GN_ARGS into the following: export GN_ARGS='use_custom_libcxx=false v8_enable_backtrace=false v8_enable_debugging_features=false'.
At this point, rusty_v8 successfully builds! To summarize, it takes having all the needed tools and libraries installed, and then setting:
export V8_FROM_SOURCE=YES
export GN=/usr/bin/gn NINJA=/usr/bin/samu
export CLANG_BASE_PATH=/usr
export GN_ARGS='use_custom_libcxx=false v8_enable_backtrace=false v8_enable_debugging_features=false'
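With those exported, the build itself is the same cargo invocation used earlier in the thread (assuming you're building inside the rusty_v8 checkout):
cargo build --package rusty_v8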
Deno Runtime
At this point, deno runtime will fail to build, complaining about missing definitions for things like operator new.
It does this because the linker is called through cc (rather than cxx), passes -nodefaultlibs, and does not pass -lc++.
My guess is that it expects to be dynamically linking, in which case linking to the standard c++ library explicitly isn't needed (ld.so and the linker handle external symbols automatically).
After trying to edit runtime/build.rs to no avail, I simply invoked cargo with RUSTFLAGS='-C link-arg=-lc++'.
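Spelled out, the workaround was roughly this (a sketch of the invocation described above; the verbose flag is only there to surface the build-script output mentioned next):
RUSTFLAGS='-C link-arg=-lc++' cargo build -vv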
This seemed to get me further, but now the deno_runtime build-script-build is erroring out with a segfault. I can't run it manually because a bunch of magic environment variables (such as "TARGET") aren't set, causing a different segfault. After peppering runtime/build.rs with print statements (since with -vv cargo shows me what was output), I narrowed it down to this statement:
    let js_runtime = JsRuntime::new(RuntimeOptions {
      will_snapshot: true,
      extensions,
      ..Default::default()
    });
I have no idea how to debug this, so it'll have to be done by someone else (or maybe later by me).
Deno
Even if this were to be fixed, all of the above was done against the latest deno tarball (1.18.1), since the latest git HEAD fails for other reasons, namely complaining about ops/./lib.rs, that "there is no argument named" name and i.
Suggestions
Since the rust approach to packaging seems to be "we know better", the abovementioned default settings could probably be added to the rusty_v8 build.rs behind a feature, or a target/env check.
My guess is that the ops errors are caused by my rust being out of date (not significantly so, but sufficiently; I tested all of this on 1.57.0). There should likely be a check for the minimum rust version somewhere.
Edit: I've done the above on 1.60.0 and can confirm that the ops errors were just outdated rust.
As for needing to link to libc++ (and potentially more, as I can't go any further in debugging), some standard way of changing the linking flags should be added somewhere. Having looked around, it looks like this can be done in build.rs (though when I tried this it seemed to have no effect) or in a cargo config (one is already present; I tried this there as well, but it didn't seem to achieve much either). Someone more knowledgeable about rust and cargo internals could probably figure out what exactly is needed there so that the effort to get this running can continue.
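For reference, the cargo-config mechanism mentioned above looks something like this (a sketch; the config keys are standard cargo options, but whether the v8 link step actually honors them is exactly the open question here):
# persist the extra linker flag in .cargo/config.toml instead of the environment
mkdir -p .cargo
cat >> .cargo/config.toml <<'EOF'
[target.x86_64-unknown-linux-musl]
rustflags = ["-C", "link-arg=-lc++"]
EOF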
Finally, ideally, the gn script for v8 itself can grow the ability for an "extra libs" and/or "extra ldflags" mechanism that isn't patching. However, this is obviously out of scope for deno, which is just a consumer of v8.
I have managed to build rusty_v8.a for musl, but deno fails with:
error: /build/source/target/release/deps/libdeno_ops-4e7e641473573b40.so: cannot allocate memory in static TLS block
Works after switching to older zig (version 0.8.1)
/nix/store/zqik2z7dxn57z66kqf0qymjbfb8smmlz-hello-world-0.1.0-aarch64-unknown-linux-musl/bin/lambda-hello-world: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, not stripped
[130] void@voidlinux> ./result/bin/lambda-hello-world   
v8 version: 10.8.168.4
thread 'main' panicked at 'Missing AWS_LAMBDA_FUNCTION_NAME env var: NotPresent', /sources/lambda_runtime-0.7.0/src/lib.rs:62:65
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
qemu: uncaught target signal 6 (Aborted) - core dumped
Attached is librusty_v8-aarch64-unknown-linux-musl.zip, containing librusty_v8.a for aarch64-unknown-linux-musl. It should only work with v0.54.0, so be careful. I'll put my toolchain and build files up somewhere else at some point.
Successfully deployed and working in AWS:
jari@jv-m1-mbp> curl -i -X POST -H 'Content-Type: application/json' -d '{"src": "Deno.core.ops.v8_version()"}' redacted.url
HTTP/2 200
date: Fri, 11 Nov 2022 07:57:05 GMT
content-type: application/json
content-length: 23
x-amzn-requestid: eb28be2d-1d08-468e-a7b0-d98749ba3f2d
x-amz-apigw-id: bbT8iGwbNjMFTPQ=
x-amzn-trace-id: Root=1-636e0050-624e648873eac695624d1e45;Sampled=0
{"result":"10.8.168.4"}
Hey @Cloudef, does this indicate you've gotten a musl build working? I would be much happier to continue with a current pipeline automation project with Deno rather than Golang, but the complexity + size of gcc requirement is currently pushing me toward the latter. Musl builds would make life a lot easier, even if small things sometimes break :)
@Propolisa I've got deno_core to build. I don't use deno directly myself, but I guess deno should be able to build as well, since deno_core is the problematic dep it uses?
I pushed the build files here: https://github.com/Cloudef/nix-zig-stdenv/tree/master/sketchpad
If you aren't familiar with nix, then I suggest reading a bit about it first. The default.nix file in that folder is commented.
One thing to note: building rusty-v8 from source needs qemu-binfmt; this may need to be worked around in the future. I only support aarch64 for now as well. When building a Rust project with deno_core on Linux, you may need to use an older zig via --argstr zig-version 0.8.1. This seems to be a bug in zig's recent Linux linker.
To use the prebuilt static lib, run nix store add-file librusty_v8-aarch64-unknown-linux-musl.a; this adds it to the nix store.
I've tested building on both aarch64-darwin and x86_64-linux and confirmed the builds work. rusty-v8 obviously can't be built from aarch64-darwin due to the qemu-binfmt requirement, but you can use the produced static lib afterwards.
TL;DR
Get nix from https://nix.dev/tutorials/install-nix
I want to use prebuilt librusty-v8
curl -O 'https://github.com/denoland/deno/files/9923527/librusty_v8-aarch64-unknown-linux-musl.zip' 
unzip librusty_v8-aarch64-unknown-linux-musl.zip
nix store add-file librusty_v8-aarch64-unknown-linux-musl.a
git clone https://github.com/Cloudef/nix-zig-stdenv.git
cd nix-zig-stdenv/sketchpad
# you might need '--argstr zig-version 0.8.1' here if you are on linux
nix-build -A deno-core-hello-world --argstr target aarch64-unknown-linux-musl
file result/bin/deno-core-hello-world
# result/bin/deno-core-hello-world: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, stripped
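If you'd rather skip nix for the consuming project, rusty_v8's build script can also be pointed at the prebuilt archive directly via an environment variable (assuming the rusty_v8 version in use supports this override):
# use the prebuilt static lib instead of downloading or building v8
export RUSTY_V8_ARCHIVE=/path/to/librusty_v8-aarch64-unknown-linux-musl.a
cargo build --target aarch64-unknown-linux-musl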
I want to build everything from scratch
You need to be on a linux system for this with qemu-binfmt emulation set for aarch64 binaries.
qemu-binfmt setup depends on the distro, so go read the docs, or you can just be on an aarch64 linux system :)
git clone https://github.com/Cloudef/nix-zig-stdenv.git
cd nix-zig-stdenv/sketchpad
nix-build -A deno-core-hello-world --arg with-rusty-v8 true --argstr target aarch64-unknown-linux-musl
file result/bin/deno-core-hello-world
# result/bin/deno-core-hello-world: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, stripped
I don't care about the hello-world, give me the static lib for rusty-v8
git clone https://github.com/Cloudef/nix-zig-stdenv.git
cd nix-zig-stdenv/sketchpad
nix-build -A rusty-v8 --argstr target aarch64-unknown-linux-musl
ls result/lib/librusty_v8.a
# result/lib/librusty_v8.a
Wait what is this nix? Why do I need it
Simply put, it makes your life easier. Go learn it.
@Cloudef I really appreciate the quality of this reply. I am short on free time to set this environment up, but perhaps someone else can use your writeup to attempt building deno itself sooner than I can. Cheers!
Hmm, it seems like a true Deno Alpine container image might be just around the corner. The current one uses FROM alpine-glibc, and I'm not sure that even belongs in the official Docker project. It's nice, but it's based on an unofficial base image, and IMO the official Deno Docker project should only use official base images.
I have successfully built a musl binary (on Alpine) that passes the WPT tests (although it does not pass cargo test yet). The only source code edit needed was the one I raised in #17739; the rest could be handled with environment variables:
export GN_ARGS='use_custom_libcxx=false v8_enable_backtrace=false v8_enable_debugging_features=false use_lld=false symbol_level=0 v8_builtins_profiling_log_file=""'
export CLANG_BASE_PATH=/usr
export V8_FROM_SOURCE=1
export TARGET=x86_64-alpine-linux-musl
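With those exported, the build itself should just be the usual one (presumably; this assumes an Alpine host, where musl is the native toolchain):
cargo build --release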