Support for running via a docker image?

Open tom-sherman opened this issue 3 years ago • 17 comments

Could this be supported out of the box?

Looks like there's been some teething problems getting it to work here: https://github.com/cloudflare/workerd/issues/20#issuecomment-1259542697

tom-sherman avatar Sep 28 '22 09:09 tom-sherman

This is what I am using:

FROM node:18

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -qy tini libc++1

WORKDIR /app
RUN npm install workerd
COPY config.capnp hello.js ./

CMD ["tini", "./node_modules/.bin/workerd", "serve", "config.capnp"]

frafra avatar Sep 28 '22 11:09 frafra

I have the following Dockerfile, but I get an error running on an M1 macOS:

FROM node:18

RUN apt-get update && apt-get -y install libc++-dev libunwind-dev

WORKDIR /app
RUN npm install workerd
COPY workerd.capnp worker.js health-check.js ./

CMD ["./node_modules/.bin/workerd", "serve", "workerd.capnp"]
/app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd: error while loading shared libraries: libunwind.so.1: cannot open shared object file: No such file or directory
node:child_process:910
    throw err;
    ^

Error: Command failed: /app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd serve workerd.capnp
    at checkExecSyncError (node:child_process:871:11)
    at Object.execFileSync (node:child_process:907:15)
    at Object.<anonymous> (/app/node_modules/workerd/bin/workerd:134:26)
    at Module._compile (node:internal/modules/cjs/loader:1119:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1173:10)
    at Module.load (node:internal/modules/cjs/loader:997:32)
    at Module._load (node:internal/modules/cjs/loader:838:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
    at node:internal/main/run_main_module:18:47 {
  status: 127,
  signal: null,
  output: [ null, null, null ],
  pid: 14,
  stdout: null,
  stderr: null
}

Node.js v18.9.1

Any ideas?

tom-sherman avatar Sep 28 '22 15:09 tom-sherman

You removed libc++1 and added two unnecessary -dev libraries. That is why it fails. libc++1 installs libunwind as a dependency.
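
A quick way to double-check that dependency chain from inside a Debian-based container, assuming apt is available (the versioned package name in the second command is a placeholder; substitute whatever the first command lists):

apt-cache depends libc++1      # shows the versioned libc++1-<N> package the metapackage pulls in
apt-cache depends libc++1-11   # placeholder versioned name; lists its own dependencies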

frafra avatar Sep 29 '22 07:09 frafra

@frafra I tried that, same error:

FROM node:18

RUN apt-get update && apt-get -y install libc++1 libunwind8

WORKDIR /app
RUN npm install workerd
COPY workerd.capnp worker.js health-check.js ./

CMD ["./node_modules/.bin/workerd", "serve", "workerd.capnp"]

tom-sherman avatar Sep 29 '22 09:09 tom-sherman

This is an error with your modified version of the Dockerfile. Please stick with the suggested packages. There is no need to install libunwind explicitly, since it is a dependency of libc++1. You are specifying a libunwind version that is not the one libc++1 currently requires. See the Debian packages website for more information.

I followed the Getting Started section of this repository and made a new repository with the suggested hello world example and a simple Dockerfile, which works flawlessly. I would suggest you start there: https://github.com/frafra/workerd-docker

frafra avatar Sep 29 '22 11:09 frafra

I haven't written out everything I've tried; all of the changes to Dockerfiles would probably blow through the character limit on a GitHub comment.

I have of course tried your Dockerfile, with no luck. Anything I try throws back the same error: error while loading shared libraries: libunwind.so.1: cannot open shared object file: No such file or directory.

tom-sherman avatar Sep 29 '22 12:09 tom-sherman

You said you are using an M1. That could explain why you are having different results. Have you tried just building the Dockerfile from the repository I linked? Do you get the very same error even when using that repository and the hello world files?

libunwind8 does not provide libunwind.so.1. libunwind is a requirement of libc++ only starting from Debian Bookworm (v12, current testing), but node:18 uses Debian Bullseye (v11, current stable). The package that provides libunwind.so.1 is named libunwind-14 in Debian Sid and libunwind-13 in Debian Bullseye: https://packages.debian.org/search?suite=bullseye&section=all&arch=any&searchon=contents&keywords=libunwind.so.1.

Could it be that libunwind.so.1 is an (indirect) dependency of workerd only on M1? Try adding libunwind-13.
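
To see exactly what is missing, it may help to run something like this inside the container (the binary path is taken from the error message above):

ldd /app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd | grep 'not found'
find / -name 'libunwind.so*' 2>/dev/null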

frafra avatar Sep 29 '22 13:09 frafra

I made a branch with the additional dependency: https://github.com/frafra/workerd-docker/tree/fix-missing-libunwind

frafra avatar Sep 29 '22 14:09 frafra

Ah, adding libunwind-13 gives me a different error:

/app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd)
/app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd)
/app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd)

tom-sherman avatar Sep 29 '22 14:09 tom-sherman

I have a basic docker image for amd64 here: https://github.com/Cyb3r-Jak3/docker-workerd. It's smaller than installing workerd via npm. Arm64 support is on the way. Also, I'm happy to merge it into this repo.

Cyb3r-Jak3 avatar Sep 30 '22 01:09 Cyb3r-Jak3

Zooming out a bit, a broader problem here may be that our binary has too many dependencies on shared libraries that lack stable ABIs across distros. We should try to statically link more of these, at least in our npm releases.

When it comes to glibc specifically, though, we do need to dynamically link. Fortunately glibc has strong ABI compatibility. However, we need to make sure to link against an older version of glibc, since generally a binary will only work with the version it was linked against and newer versions, but not older ones.
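
As an aside for anyone debugging this, one way to see which glibc symbol versions a given workerd binary requires is something like the following; it assumes binutils is installed, and the path is the npm-installed binary mentioned earlier in the thread:

# Adjust the path to your platform's package (workerd-linux-64 / workerd-linux-arm64).
objdump -T ./node_modules/@cloudflare/workerd-linux-64/bin/workerd | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 3
# Compare the newest version listed against the target system's glibc (ldd --version);
# the binary only loads if its required version is not newer than that.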

cc @penalosa

kentonv avatar Sep 30 '22 01:09 kentonv

Ah, adding libunwind-13 gives me a different error:

Have you installed it together with libc++1? workerd in my container is a static binary, so I wonder what the result of ldd is on your container running on M1, given the different requirements.
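
One way to capture that output without modifying the image; my-workerd-image is a placeholder for whatever tag you built:

docker run --rm --entrypoint ldd my-workerd-image /app/node_modules/@cloudflare/workerd-linux-arm64/bin/workerd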

frafra avatar Sep 30 '22 11:09 frafra

workerd in my container is a static binary

Hmm are you sure? I don't think any of our binaries are static.

kentonv avatar Sep 30 '22 21:09 kentonv

workerd in my container is a static binary

Hmm are you sure? I don't think any of our binaries are static.

My bad, I was looking at the wrong executable:

node@da718d2232c1:~$ ldd ./node_modules/workerd/bin/workerd
	not a dynamic executable
node@da718d2232c1:~$ ldd ./node_modules/@cloudflare/workerd-linux-64/bin/workerd
	linux-vdso.so.1 (0x00007ffd7bd24000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f5db8eb6000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5db8e94000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5db8d50000)
	libc++.so.1 => /usr/lib/x86_64-linux-gnu/libc++.so.1 (0x00007f5db8c86000)
	libc++abi.so.1 => /usr/lib/x86_64-linux-gnu/libc++abi.so.1 (0x00007f5db8c4e000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f5db8c34000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5db8a5d000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f5dbbd86000)
	librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f5db8a53000)
	libatomic.so.1 => /usr/lib/x86_64-linux-gnu/libatomic.so.1 (0x00007f5db8a49000)

frafra avatar Oct 01 '22 10:10 frafra

I managed to build the workerd binary within Docker using the following Dockerfile:

FROM ubuntu:22.04

RUN apt-get update && apt-get install -y build-essential git clang libc++-dev libc++abi-dev curl gnupg git python3-pip python3-distutils
RUN curl -L "https://github.com/bazelbuild/bazelisk/releases/download/v1.14.0/bazelisk-linux-arm64" -o /bin/bazelisk && chmod 755 /bin/bazelisk

RUN cd /tmp && git clone https://github.com/cloudflare/workerd.git 
RUN cd /tmp/workerd && bazelisk build -c opt //src/workerd/server:workerd

The compiled binary is located under /tmp/workerd/bazel-bin/src/workerd/server/workerd.

I appreciate it's still an early beta, but an official Dockerfile in the repo would be really helpful for those who want to try workerd in a VS Code Dev Containers environment.

A static binary would be very helpful too, because then it could be copied into a VS Code Dev Container as a one-liner:

COPY --from=workerd /bin/workerd /bin/
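
To illustrate, here is a rough multi-stage sketch built on the Ubuntu 22.04 container above; the stage name and paths are assumptions, and since the binary is dynamically linked, the runtime stage still needs libc++:

# Build stage, named "workerd" so it can be referenced by COPY --from
FROM ubuntu:22.04 AS workerd
RUN apt-get update && apt-get install -y build-essential git clang libc++-dev libc++abi-dev curl gnupg python3-pip python3-distutils
# arm64 as in the comment above; use bazelisk-linux-amd64 on x86_64
RUN curl -L "https://github.com/bazelbuild/bazelisk/releases/download/v1.14.0/bazelisk-linux-arm64" -o /bin/bazelisk && chmod 755 /bin/bazelisk
RUN cd /tmp && git clone https://github.com/cloudflare/workerd.git \
 && cd /tmp/workerd && bazelisk build -c opt //src/workerd/server:workerd \
 && cp bazel-bin/src/workerd/server/workerd /bin/workerd

# Runtime stage: just the binary plus its shared-library dependencies;
# COPY or mount your own config.capnp when running it
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y libc++1 && rm -rf /var/lib/apt/lists/*
COPY --from=workerd /bin/workerd /bin/
ENTRYPOINT ["/bin/workerd"]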

vovayartsev avatar Oct 16 '22 21:10 vovayartsev

💯 for static binaries. This is something Deno got sooooo right.

tom-sherman avatar Oct 16 '22 21:10 tom-sherman

Trying to build the project in ubuntu:20.04

root@vm1:/home/ubuntu# git clone https://github.com/cloudflare/workerd.git \
>     && cd workerd \
>     && bazel build -c opt //src/workerd/server:workerd --verbose_failures
Cloning into 'workerd'...
remote: Enumerating objects: 2064, done.
remote: Counting objects: 100% (2064/2064), done.
remote: Compressing objects: 100% (687/687), done.
remote: Total 2064 (delta 1224), reused 1973 (delta 1193), pack-reused 0
Receiving objects: 100% (2064/2064), 1.85 MiB | 2.44 MiB/s, done.
Resolving deltas: 100% (1224/1224), done.
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Analyzed target //src/workerd/server:workerd (201 packages loaded, 16218 targets configured).
INFO: Found 1 target...
ERROR: /home/ubuntu/workerd/BUILD.bazel:7:17: GenCapnp icudata-embed.capnp.h failed: (Exit 1): capnp_tool failed: error executing command 
  (cd /root/.cache/bazel/_bazel_root/b3fbc4211153c5d2f26c97321a65891b/sandbox/linux-sandbox/208/execroot/workerd && \
  exec env - \
  bazel-out/k8-opt-exec-2B5CBBC6/bin/external/capnp-cpp/src/capnp/capnp_tool compile --verbose -obazel-out/k8-opt-exec-2B5CBBC6/bin/external/capnp-cpp/src/capnp/capnpc-c++:bazel-out/k8-opt/bin -I external/capnp-cpp/src icudata-embed.capnp)
# Configuration: d2d1d592403ef6a825f2862044013ce88fb1c177866ebc360e1e17809f6b2a5f
# Execution platform: @local_config_platform//:host

Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
bazel-out/k8-opt-exec-2B5CBBC6/bin/external/capnp-cpp/src/capnp/capnpc-c++: plugin failed: Killed
Target //src/workerd/server:workerd failed to build
INFO: Elapsed time: 1523.757s, Critical Path: 146.27s
INFO: 1751 processes: 1544 internal, 206 linux-sandbox, 1 local.
FAILED: Build did NOT complete successfully

Any ideas?

AnishTiwari avatar Oct 30 '22 13:10 AnishTiwari

I also receive the following error using Docker (arm):

error while loading shared libraries: libc++.so.1: cannot open shared object file

docker compose:

wrangler:
    image: node
    command: >
        sh -cx "yarn install && yarn wrangler dev --experimental-local"

Zerebokep avatar Nov 04 '22 10:11 Zerebokep

I also receive the following error using Docker (arm):

error while loading shared libraries: libc++.so.1: cannot open shared object file

My guess is you need to install libc++-dev and libc++abi-dev in your container.
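
Applied to the compose fragment above, that would look something like this; a sketch, not tested, and note that on arm64 with an older base image you may then hit the glibc errors discussed earlier, in which case a 22.04/Bookworm-based image is the real fix:

wrangler:
    image: node
    command: >
        sh -cx "apt-get update && apt-get install -y libc++-dev libc++abi-dev
        && yarn install && yarn wrangler dev --experimental-local"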

Cyb3r-Jak3 avatar Nov 04 '22 20:11 Cyb3r-Jak3

I'm working in a WSL Ubuntu VM, and installing libc++-dev and libc++abi-dev in addition to libc++1 does not help. I'm still getting the following when running yarn add workerd:

Error: Command failed: /home/<redacted>/.nvm/versions/node/v19.3.0/bin/node /home/<redacted>/node_modules/workerd/bin/workerd --version
/home/<redacted>/node_modules/@cloudflare/workerd-linux-64/bin/workerd: error while loading shared libraries: libunwind.so.1: cannot open shared object file: No such file or directory
node:child_process:924
    throw err;
    ^

Error: Command failed: /home/<redacted>/node_modules/@cloudflare/workerd-linux-64/bin/workerd --version
    at checkExecSyncError (node:child_process:885:11)
    at Object.execFileSync (node:child_process:921:15)
    at Object.<anonymous> (/home<redacted>/node_modules/workerd/bin/workerd:135:26)
    at Module._compile (node:internal/modules/cjs/loader:1218:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1272:10)
    at Module.load (node:internal/modules/cjs/loader:1081:32)
    at Module._load (node:internal/modules/cjs/loader:922:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:82:12)
    at node:internal/main/run_main_module:23:47 {
  status: 127,
  signal: null,
  output: [ null, null, null ],
  pid: 30212,
  stdout: null,
  stderr: null
}

Node.js v19.3.0

fresheneesz avatar Dec 16 '22 20:12 fresheneesz

Which version of Ubuntu are you using in WSL? There have been some issues with Ubuntu 20; trying Ubuntu 22 should work.

penalosa avatar Dec 20 '22 15:12 penalosa

Ah I do have Ubuntu 20. I'll try 22 at some point, thanks!

fresheneesz avatar Dec 20 '22 17:12 fresheneesz

If anyone has this working in Github Codespaces I'd love to know how.

ryan-mars avatar Dec 22 '22 20:12 ryan-mars

It's the same thing on Arch Linux. It seems that libunwind there is version 1.6, and it ships libunwind.so but not libunwind.so.1.

I have all the development packages installed: base-devel, libc++ 15, libunwind 1.6, llvm 15, clang 15.0.7, glibc, etc.

It would definitely be better to reduce the dependencies.

We should try to statically link more of these, at least in our npm releases.

Yep.

tunnckoCore avatar Mar 01 '23 21:03 tunnckoCore

Does anyone have it working on WSL without the dreaded error while loading shared libraries: libc++.so.1: cannot open shared object file?

KeesCBakker avatar May 04 '23 04:05 KeesCBakker

@KeesCBakker you need to upgrade your Ubuntu version to 22.04.

c0b41 avatar May 04 '23 10:05 c0b41

Now that Miniflare v3 (with workerd) is the default dev command in Wrangler, it is more important to solve this issue robustly.

To summarise the above: the two packages most commonly out of date in this thread are glibc and libunwind. Unfortunately, glibc can't be upgraded without upgrading the OS version that your Docker image is based on. For Ubuntu, the minimum supported version is 22.04, and for Debian it's Debian 12 (Bookworm), which is still a few weeks away from a stable release. I can't speak to Alpine because it has a different libc setup that might require some extra configuration.

The workaround is to upgrade your Docker image—or really, to upgrade whatever base images you’re relying on—to one of those OS versions. In my case, the stack of images I was relying on was:

  • debian:bullseye (Bookworm available)
  • buildpack-deps:bullseye (Bookworm available)
  • node (Bookworm proposed)
  • mcr.microsoft.com/devcontainers/javascript-node
  • mcr.microsoft.com/devcontainers/typescript-node

(The last two images are commonly used by GitHub Codespaces)

I was able to adapt the images pretty easily by combining them into one Dockerfile and adapting the deepest dependency from buildpack-deps:bullseye to buildpack-deps:bookworm.

My Dockerfile is below, but don’t copy-paste this. Your setup is going to be different to mine depending on your container/OS.

#
# Adapted from Node's image (with `corepack enable` instead of installing Yarn)
#

FROM buildpack-deps:bookworm

RUN groupadd --gid 1000 node \
  && useradd --uid 1000 --gid node --shell /bin/bash --create-home node

ENV NODE_VERSION 18.6.0

RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" \
  && case "${dpkgArch##*-}" in \
    amd64) ARCH='x64';; \
    ppc64el) ARCH='ppc64le';; \
    s390x) ARCH='s390x';; \
    arm64) ARCH='arm64';; \
    armhf) ARCH='armv7l';; \
    i386) ARCH='x86';; \
    *) echo "unsupported architecture"; exit 1 ;; \
  esac \
  # gpg keys listed at https://github.com/nodejs/node#release-keys
  && set -ex \
  && for key in \
    4ED778F539E3634C779C87C6D7062848A1AB005C \
    141F07595B7B3FFE74309A937405533BE57C7D57 \
    74F12602B6F1C4E913FAA37AD3A89613643B6201 \
    DD792F5973C6DE52C432CBDAC77ABFA00DDBF2B7 \
    61FC681DFB92A079F1685E77973F295594EC4689 \
    8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 \
    C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
    890C08DB8579162FEE0DF9DB8BEAB4DFCF555EF4 \
    C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C \
    108F52B48DB57BB0CC439B2997B01419BD92F80A \
  ; do \
      gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || \
      gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; \
  done \
  && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH.tar.xz" \
  && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
  && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
  && grep " node-v$NODE_VERSION-linux-$ARCH.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
  && tar -xJf "node-v$NODE_VERSION-linux-$ARCH.tar.xz" -C /usr/local --strip-components=1 --no-same-owner \
  && rm "node-v$NODE_VERSION-linux-$ARCH.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
  && ln -s /usr/local/bin/node /usr/local/bin/nodejs \
  && corepack enable \
  # smoke tests
  && node --version \
  && npm --version

#
# Adapted from Microsoft's `javascript-node` repo
#

ARG USERNAME=node
ARG NPM_GLOBAL=/usr/local/share/npm-global

# Add NPM global to PATH.
ENV PATH=${NPM_GLOBAL}/bin:${PATH}

RUN \
    # Configure global npm install location, use group to adapt to UID/GID changes
    if ! cat /etc/group | grep -e "^npm:" > /dev/null 2>&1; then groupadd -r npm; fi \
    && usermod -a -G npm ${USERNAME} \
    && umask 0002 \
    && mkdir -p ${NPM_GLOBAL} \
    && touch /usr/local/etc/npmrc \
    && chown ${USERNAME}:npm ${NPM_GLOBAL} /usr/local/etc/npmrc \
    && chmod g+s ${NPM_GLOBAL} \
    && npm config -g set prefix ${NPM_GLOBAL} \
    && su ${USERNAME} -c "npm config -g set prefix ${NPM_GLOBAL}" \
    # Install eslint
    && su ${USERNAME} -c "umask 0002 && npm install -g eslint" \
    && npm cache clean --force > /dev/null 2>&1
    
#
# Adapted from Microsoft's `typescript-node` repo
#

ARG NODE_MODULES="tslint-to-eslint-config typescript"
RUN su node -c "umask 0002 && npm install -g ${NODE_MODULES}" \
    && npm cache clean --force > /dev/null 2>&1

# Install `libc++-dev` for workerd to work
RUN apt-get update && apt-get -y install libc++-dev

This is too much to expect a normal user to do, however, and at least being clearer about minimum OS versions in Wrangler would be a good move.
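
For setups that don't need the whole devcontainer stack, the short version is just basing your image on one of those newer OS releases and adding libc++; a minimal sketch, assuming a Bookworm-based Node image is available to you (they were still only proposed at the time of writing):

FROM node:18-bookworm
RUN apt-get update && apt-get install -y libc++1 \
 && rm -rf /var/lib/apt/lists/*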

huw avatar May 18 '23 00:05 huw

I was able to get this working with the proposed Node Bookworm Docker images. I pulled the proposed repo with git, switched to the Bookworm branch, and built the image I needed locally.

HeyITGuyFixIt avatar May 18 '23 19:05 HeyITGuyFixIt

It's the same thing on Arch Linux. It seems that libunwind there is version 1.6, and it ships libunwind.so but not libunwind.so.1.

For Arch users, it seems we can create a symlink libunwind.so.1 -> libunwind.so and it just works. I know it is bad practice, but I have no better workaround for now.
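
In practice that is a one-liner like the following; the library path is an assumption (check where your distro actually puts libunwind.so), and note the ABI caveat in the comment below:

# Path is an assumption; verify it first with: find /usr/lib -name 'libunwind.so*'
sudo ln -s /usr/lib/libunwind.so /usr/lib/libunwind.so.1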

mnixry avatar May 18 '23 20:05 mnixry

A (temporary) solution for Fedora 38 is to install llvm-libunwind via DNF. For example:

$ ./node_modules/workerd/bin/workerd --version
./node_modules/workerd/bin/workerd: error while loading shared libraries: libunwind.so.1: cannot open shared object file: No such file or directory
$ dnf install -y llvm-libunwind
$ ./node_modules/workerd/bin/workerd --version
workerd 2023-05-12

The library has a different soname (libunwind.so.1), so it can live alongside GCC's libunwind (libunwind.so.8).

For distros that don't distribute LLVM's libunwind, one could attempt to symlink libunwind.so.8 (or the unversioned one) to libunwind.so.1. However, I cannot recommend this, since GCC's libunwind does not have exactly the same ABI as the one provided by LLVM.

kleisauke avatar May 19 '23 09:05 kleisauke