docker-node
Node 20.3.0 images give error `/usr/bin/env: 'node': Text file busy`
Environment
- Platform: macOS 13.4, Apple M1
- Docker Version: 24.0.2
- Node.js Version: 20.3.0
- Image Tag: node:20-alpine3.17
Expected Behavior
yarn tsc should run correctly
Current Behavior
> [service 5/5] RUN yarn build:
#0 0.298 yarn run v1.22.19
#0 0.314 $ tsc
#0 0.325 /usr/bin/env: 'node': Text file busy
#0 0.334 error Command failed with exit code 126.
Possible Solution
downgrade to node:20-alpine3.16 (or node:20.2.0-bullseye-slim)
Steps to Reproduce
dockerfile:
FROM node:20-alpine3.17 as install
WORKDIR /service
COPY ./service ./
RUN yarn install --frozen-lockfile
RUN yarn build
yarn build creates a directory, compiles some protobuf, then execs tsc (bin from typescript dependency)
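For context, a minimal package.json sketch of such a build script (the script names and the `buf` protobuf tool are hypothetical stand-ins; the report only states that the script creates a directory, compiles some protobuf, then runs tsc):

```json
{
  "scripts": {
    "build": "mkdir -p gen && yarn build:proto && tsc",
    "build:proto": "buf generate"
  },
  "devDependencies": {
    "typescript": "^5.1.0"
  }
}
```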
Additional Information
Things work correctly on 3.16, but give the text file busy error with 3.17 and 3.18. Also tried bullseye-slim and node:20.3.0-bullseye-slim gives the same error, but 20.2.0 works correctly.
Maybe this is an architecture mismatch or file permissions problem?
I can provide a more complete repro node project if that is needed.
+1, hitting this issue as well. I had to switch from node:latest to node:18 to get builds to run.
A colleague of mine running a Mac M1 has the same issue, but it works for me on PC (Ubuntu). Might be an issue for Mac M1 only.
Can (temporarily) be fixed by using OrbStack instead of Docker for Mac
Same here with node:20.3-slim running in Docker Desktop 4.20.1 on macOS 13.4 Apple Silicon.
Reverting to node:20.2-slim worked.
Only affected shell/bash scripts trying to start node (e.g. yarn start in a NextJS project), when I ran it manually it worked.
Only saw this on local dev and not pro container (but prod != ARM, so… 🤷🏼 )
> A colleague of mine running a Mac M1 has the same issue, but it works for me on PC (Ubuntu). Might be an issue for Mac M1 only.
Can confirm I got this issue on M1 mac and Pop!_os
This is breaking in docker on x86 as well as M1 for us, and we get the same error on containers that start yarn.
This issue was also observed in the following environments:
- Windows 10 22H2 (Build 19045.2965)
- CPU: AMD Ryzen 9 5900X (x86_64)
- Docker Desktop 4.19.0 (106363)
- Image tag: node:20-alpine
- Package manager: Yarn
It is not limited to a specific Node.js project, but occurs in all projects that use node:20-alpine. I have confirmed that replacing it with node:18-alpine works.
However, this problem does not occur in the following environments:
- Ubuntu 22.04.2 LTS
- CPU: Intel Core i7-6700 (x86_64)
- Docker 24.0.2
This can be recreated in my project at commit b76bc9b with the frontend Dockerfile, although I have since pushed a version pinned to node:18-alpine.
Seemingly related: a thread on the Docker forums. My related issue with more details can be found here: https://github.com/jerlendds/osintbuddy/issues/49
FYI there is another case of people hitting this problem here: https://github.com/evanw/esbuild/issues/3156.
Sharing an additional data point here as well (from https://github.com/evanw/esbuild/issues/3156#issuecomment-1587445800): the issue only reproduces for me using Docker Engine 24, whereas the image works fine using Docker Engine 20. Tested on M1 Max.
This was also happening with me and all my coworkers that use mac for development.
The "solution" we found for now was to use OrbStack instead, it's a drop-in replacement for docker desktop, and seems to solve this problem. While not ideal, it did unblock us while we wait for the official solution.
I started looking more closely at the libuv v1.45.0 upgrade after @JoostK mentioned it in the esbuild thread, particularly some of the changes around io_uring support. There is an environment variable (undocumented and unstable, intended for debugging purposes only) to disable libuv's use of io_uring, and with UV_USE_IO_URING=0 set in my container's environment I can no longer reproduce the issue during esbuild installs with any of the Node.js v20.3.0 image variants I've tested (M1 Max, Docker Desktop v4.20.1, Docker Engine v24.0.2).
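For anyone who wants to try the same experiment, setting the variable in the image is a one-line change. A sketch only, reusing the reporter's Dockerfile: the variable is undocumented and unstable, so it may change or disappear in a future libuv release.

```dockerfile
FROM node:20-alpine3.17 as install
# Undocumented, debugging-only libuv switch: disable io_uring so file
# system operations fall back to the thread pool (see comment above).
ENV UV_USE_IO_URING=0
WORKDIR /service
COPY ./service ./
RUN yarn install --frozen-lockfile
RUN yarn build
```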
I also have an Arch Linux ARM virtual machine on my Mac separate from Docker Desktop, and tried building and installing libuv v1.45.0 there. In that case, I couldn't reproduce the same issue during an esbuild install, even though I could confirm with strace that Node.js was dispatching io_uring operations. That makes me wonder if the kernel version is partly at play (6.3.7 on my Arch VM vs. 5.15.49-linuxkit-pr in the Docker VM). (edit: I set up a separate system with a separate build of Linux 5.15.49, and couldn't reproduce either on the system or with the Docker images. Looking at kernel versions alone may not be a useful lead.)
~~Given @evanw's note in the original thread about possible changes to fs.renameSync, it might be notable that rename is apparently one of the operations now backed by io_uring, per libuv/libuv#4012 (my guess is that the *Sync functions in Node.js still use libuv under the hood, but I'm not certain). If that really is where this comes from, the discussion may ultimately belong with libuv, or even with Node.js if it's somehow particular to Node's usage of libuv.~~ (edit: This entire part is incorrect, please see below. Thank you @santigimeno for the correction, I apologize for the error.)
Experiencing same issue here, but downgrading to node:20-alpine3.16 seems to have fixed it. Seems the issue reappears for us in 20-alpine3.17 (which includes Node 20.3.0) - perhaps (guessing here) it's related to a line in the changelog for Node v20.3.0 which says:
[bfcb3d1d9a] - deps: upgrade to libuv 1.45.0, including significant performance improvements to file system operations on Linux (Santiago Gimeno) https://github.com/nodejs/node/pull/48078
(which does back up what @ahamlinman mentioned above)
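To confirm which libuv release a given image actually bundles (and thus whether it includes the io_uring changes mentioned in that changelog entry), Node exposes it via `process.versions`:

```javascript
// Print the libuv version bundled with this Node.js build.
// Per the changelog above, Node v20.3.0 ships libuv 1.45.0 (io_uring
// enabled on Linux), while v20.2.0 ships an earlier libuv without it.
const uv = process.versions.uv;
console.log(`libuv ${uv}`);
```

Run it inside the container, e.g. `docker run --rm node:20-alpine3.17 node -p 'process.versions.uv'`.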
Same here for a pipeline that runs semantic-release (GitLab, node:alpine):
- yarn global add handpick
- handpick --target=devDependencies --manager=yarn
- yarn semantic-release
Error:
$ yarn global add handpick
yarn global v1.22.19
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Installed "[email protected]" with binaries:
- handpick
Done in 0.96s.
$ handpick --target=devDependencies --manager=yarn
Picking DIRTY devDependencies via YARN
Done 854 packages in 71.30 seconds
$ yarn semantic-release
yarn run v1.22.19
$ /builds/project/node_modules/.bin/semantic-release
env: can't execute 'node': Text file busy
error Command failed with exit code 126.
Retrying several times, it seems to work after a while... I solved the problem temporarily by downgrading to an older Node image.
Not sure what causes this, but this is what I have found:
- problem happens on version 20.3.0 on ubuntu 22.04 (manually installed), Debian 10, 11, 12 (node images from docker hub)
- problem does NOT happen with node 20.2.0 or node 19
So I would guess this is some bug in node itself?
> Given @evanw's note in the original thread about possible changes to fs.renameSync, it might be notable that rename is apparently one of the operations now backed by io_uring, per libuv/libuv#4012 (my guess is that the *Sync functions in Node.js still use libuv under the hood, but I'm not certain). If that really is where this comes from, the discussion may ultimately belong with libuv, or even with Node.js if it's somehow particular to Node's usage of libuv.
FWIW, that specific libuv commit is not yet in any release so it didn't get to Node.js. Also, the fs sync operations aren't io_uring backed.
> - problem happens on version 20.3.0 on ubuntu 22.04 (manually installed),
@jkuchar can you elaborate exactly what steps to follow? I'd like to have a reproducer in that environment. Thanks!
This was my test scenario environment:
FROM ubuntu:22.04
# NodeJS & Chromium for tests
# (curl is not present in the base ubuntu:22.04 image, so install it first)
RUN apt-get update && apt-get install -y curl ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_20.x | bash -
RUN apt-get install -y nodejs
I'm seeing this within a CircleCI linux environment when attempting to bump node from 20.2.0-alpine3.17 to 20.3.1-alpine3.17.
This is the docker server info from CircleCI:
Server Engine Details:
Version: 20.10.18
API version: 1.41 (minimum version 1.12)
Go version: go1.18.6
Git commit: e42327a
Built: 2022-09-08T23:09:30.000000000+00:00
OS/Arch: linux/amd64
Experimental: false
The error happened during `docker build`, when yarn tried running `react-scripts build`:
Step 15/23 : RUN yarn build && yarn test -- --watchAll=false
---> Running in e344e5cf9846
yarn run v1.22.19
$ react-scripts build
env: can't execute 'node': Text file busy
error Command failed with exit code 126.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
The command '/bin/sh -c yarn build && yarn test -- --watchAll=false' returned a non-zero code: 126
> Not sure what causes this, but this is what I have found:
> - problem happens on version 20.3.0 on ubuntu 22.04 (manually installed), Debian 10, 11, 12 (node images from docker hub)
> - problem does NOT happen with node 20.2.0 or node 19
>
> So I would guess this is some bug in node itself?
I could not reproduce on my system, an Ubuntu 22.04 that was upgraded automatically from Ubuntu 20.04.
I notice a difference between my installed packages and my colleagues packages:
@ docker-buildx-plugin : 0.10.5-1~ubuntu.20.04~focal -> 0.10.5-1~ubuntu.22.04~jammy
@ docker-ce : 5:24.0.2-1~ubuntu.20.04~focal -> 5:24.0.2-1~ubuntu.22.04~jammy
@ docker-ce-cli : 5:24.0.2-1~ubuntu.20.04~focal -> 5:24.0.2-1~ubuntu.22.04~jammy
@ docker-ce-rootless-extras : 5:24.0.2-1~ubuntu.20.04~focal -> 5:24.0.2-1~ubuntu.22.04~jammy
@ docker-compose-plugin : 2.18.1-1~ubuntu.20.04~focal -> 2.18.1-1~ubuntu.22.04~jammy
My Docker packages are the 20.04~focal builds; although I have the same Docker version as he does, the OS package variant is different.
We have the same versions for Docker: 24.0.2, build cb74dfc and the uname -a show pretty much the same kernel and configs: 5.19.0-45-generic #46~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Jun 7 15:06:04 UTC 20 x86_64 x86_64 x86_64 GNU/Linux.
The Docker Compose versions were different: I was on v2.2.3 and he was on v2.15.1. I upgraded to the latest v2.19.0, but still couldn't reproduce.
~~We will try to see if the environment variable that @ahamlinman mentioned (UV_USE_IO_URING=0) can allow him to build the image.~~ Just read @ahamlinman comment again, this won't work anyway by now.
The production builds at our company didn't fail with the upgrade to node:20-alpine, but it is using another version of Docker to build the images.
I had the same issue with node:slim and node:20-slim.
Solution: Changing to node:18-slim or node:16-slim worked.
> I had the same issue with node:slim and node:20-slim.
>
> Solution: Changing to node:18-slim or node:16-slim worked.
@matoruru Node 16 will enter EOL on 2023-09-11, so I would not recommend that version at this point.
Also, could you share your setup? I simply cannot reproduce the issue whatsoever. What specs, platform, etc.
@FernandoKGA
> Node 16 will enter EOL on 2023-09-11, so I would not recommend that version at this point.

👍
I'm on Mac M2.
❯ docker --version
Docker version 24.0.2, build cb74dfc
Hi! I faced this issue too. In my case, it was an upgrade of Node from 20.0-bullseye-slim to 20.3.0-bullseye-slim.
Mac M2, Version 13.4.1 (22F82)
docker --version
Docker version 23.0.4, build f480fb1e37
I also see this on an Intel Mac, in addition to my previous Linux comment. (Looks like every previous Mac callout in this thread is Apple Silicon...)
Docker Desktop 4.20.1
Client:
Cloud integration: v1.0.33
Version: 24.0.2
API version: 1.43
Go version: go1.20.4
Git commit: cb74dfc
Built: Thu May 25 21:51:16 2023
OS/Arch: darwin/amd64
Context: desktop-linux
Server: Docker Desktop 4.20.1 (110738)
Engine:
Version: 24.0.2
API version: 1.43 (minimum version 1.12)
Go version: go1.20.4
Git commit: 659604f
Built: Thu May 25 21:52:17 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.21
GitCommit: 3dce8eb055cbb6872793272b4f20ed16117344f8
runc:
Version: 1.1.7
GitCommit: v1.1.7-0-g860f061
docker-init:
Version: 0.19.0
GitCommit: de40ad0
We saw that on x86 AWS instances running Ubuntu.
> We saw that on x86 AWS instances running Ubuntu.
Node hasn't supported x86 for quite some time (https://github.com/nodejs/build/issues/885). I believe there may still be an Alpine x86 image, but I'm not sure why.
Sorry meant amd64
Seeing the same issue.