Worker fails to start on Docker for Mac with M1
Summary
I am trying to run Concourse on Docker for Mac with an M1 processor, and it fails with the following:
{"timestamp":"2022-04-13T12:19:49.021023920Z","level":"info","source":"baggageclaim","message":"baggageclaim.using-driver","data":{"driver":"overlay"}}
{"timestamp":"2022-04-13T12:19:49.041516378Z","level":"info","source":"baggageclaim","message":"baggageclaim.listening","data":{"addr":"127.0.0.1:7788"}}
{"timestamp":"2022-04-13T12:19:49.181814128Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"4.1.4"}}
time="2022-04-13T12:19:49.354600295Z" level=info msg="starting containerd" revision=de8046a5501db9e0e478e1c10cbcfb21af4c6b2d version=v1.6.2
time="2022-04-13T12:19:49.450579003Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2022-04-13T12:19:49.451645670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2022-04-13T12:19:49.452032212Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2022-04-13T12:19:49.452118795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2022-04-13T12:19:49.453981503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2022-04-13T12:19:49.456074753Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2022-04-13T12:19:49.456580628Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2022-04-13T12:19:49.456803837Z" level=info msg="metadata content store policy set" policy=shared
time="2022-04-13T12:19:49.460575295Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2022-04-13T12:19:49.460786253Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
time="2022-04-13T12:19:49.460870337Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2022-04-13T12:19:49.461949712Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
time="2022-04-13T12:19:49.462096295Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2022-04-13T12:19:49.462428170Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2022-04-13T12:19:49.462772045Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2022-04-13T12:19:49.463014087Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2022-04-13T12:19:49.463340962Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
time="2022-04-13T12:19:49.463534962Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2022-04-13T12:19:49.463786045Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2022-04-13T12:19:49.464331253Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2022-04-13T12:19:49.464768337Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2022-04-13T12:19:49.465828587Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2022-04-13T12:19:49.467781670Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2022-04-13T12:19:49.468444003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.468649087Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2022-04-13T12:19:49.470030628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.470394920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.471128087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.471465045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.471782212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.471904628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.472022003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.472444545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.472600295Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2022-04-13T12:19:49.474332503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.474494378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.474620420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2022-04-13T12:19:49.474700795Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
time="2022-04-13T12:19:49.474919753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
time="2022-04-13T12:19:49.475010545Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
time="2022-04-13T12:19:49.476593837Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
time="2022-04-13T12:19:49.480993503Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
time="2022-04-13T12:19:49.481496170Z" level=info msg=serving... address=/run/containerd/containerd.sock
time="2022-04-13T12:19:49.484670545Z" level=info msg="containerd successfully booted in 0.139909s"
{"timestamp":"2022-04-13T12:19:49.587140628Z","level":"error","source":"worker","message":"worker.garden-runner.logging-runner-exited","data":{"error":"Exit trace for group:\ncontainerd-garden-backend exited with error: setup host network failed: create chain or flush if exists failed: running [/sbin/iptables -t filter -N CONCOURSE-OPERATOR --wait]: exit status 3: iptables v1.6.1: can't initialize iptables table `filter': iptables who? (do you need to insmod?)\nPerhaps iptables or your kernel needs to be upgraded.\n\ncontainerd exited with nil\n","session":"8"}}
{"timestamp":"2022-04-13T12:19:49.588884503Z","level":"info","source":"worker","message":"worker.debug-runner.logging-runner-exited","data":{"session":"10"}}
{"timestamp":"2022-04-13T12:19:49.589299712Z","level":"info","source":"worker","message":"worker.healthcheck-runner.logging-runner-exited","data":{"session":"11"}}
{"timestamp":"2022-04-13T12:19:49.589722128Z","level":"info","source":"worker","message":"worker.container-sweeper.sweep-cancelled-by-signal","data":{"session":"6","signal":2}}
{"timestamp":"2022-04-13T12:19:49.589902962Z","level":"info","source":"worker","message":"worker.container-sweeper.logging-runner-exited","data":{"session":"13"}}
{"timestamp":"2022-04-13T12:19:49.590042128Z","level":"info","source":"worker","message":"worker.volume-sweeper.sweep-cancelled-by-signal","data":{"session":"7","signal":2}}
{"timestamp":"2022-04-13T12:19:49.590645212Z","level":"info","source":"worker","message":"worker.baggageclaim-runner.logging-runner-exited","data":{"session":"9"}}
{"timestamp":"2022-04-13T12:19:49.590711712Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.cancelled","data":{"addr":"127.0.0.1:7777","network":"tcp","session":"4.1.4"}}
{"timestamp":"2022-04-13T12:19:49.590845712Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.run.context-done","data":{"command":"forward-worker","context-error":{},"session":"4.1.3"}}
{"timestamp":"2022-04-13T12:19:49.591065837Z","level":"info","source":"worker","message":"worker.volume-sweeper.logging-runner-exited","data":{"session":"14"}}
{"timestamp":"2022-04-13T12:19:49.592804962Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.run.signal-sent","data":{"command":"forward-worker","session":"4.1.3"}}
{"timestamp":"2022-04-13T12:19:49.594491462Z","level":"error","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7788","error":"dial tcp 127.0.0.1:7788: connect: connection refused","network":"tcp","session":"4.1.5"}}
{"timestamp":"2022-04-13T12:19:49.594700795Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.forward-conn.cancelled","data":{"addr":"127.0.0.1:7788","network":"tcp","session":"4.1.5"}}
{"timestamp":"2022-04-13T12:19:49.599277045Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.signal.signalled","data":{"session":"4.1.6"}}
{"timestamp":"2022-04-13T12:19:49.599969087Z","level":"info","source":"worker","message":"worker.beacon-runner.logging-runner-exited","data":{"session":"12"}}
error: Exit trace for group:
garden exited with error: Exit trace for group:
containerd-garden-backend exited with error: setup host network failed: create chain or flush if exists failed: running [/sbin/iptables -t filter -N CONCOURSE-OPERATOR --wait]: exit status 3: iptables v1.6.1: can't initialize iptables table `filter': iptables who? (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
containerd exited with nil
debug exited with nil
healthcheck exited with nil
container-sweeper exited with nil
baggageclaim exited with nil
volume-sweeper exited with nil
beacon exited with nil
This is very similar to this issue https://github.com/concourse/concourse/issues/7985, but I couldn't apply the provided solution on the Docker Linux VM.
Here is the relevant part of my docker-compose.yml file:
```yaml
concourse-worker:
  image: concourse/concourse:7.7.1
  restart: always
  privileged: true
  links: [concourse-web, registry]
  depends_on: [concourse-web]
  command: worker
  volumes: ["./keys/worker:/concourse-keys"]
  stop_signal: SIGUSR2
  environment:
    CONCOURSE_TSA_HOST: concourse-web:2222
    CONCOURSE_RUNTIME: containerd
    CONCOURSE_CONTAINERD_DNS_PROXY_ENABLE: "true"
```
Steps to reproduce
- Run Docker for Mac on M1
- Run `docker compose up` (the compose file includes the Concourse worker and web containers)
- Observe the worker container crash with `docker ps` (see the command sketch below)
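For reference, the reproduction boils down to the following commands (a sketch, assuming the compose file above is saved as docker-compose.yml):

```sh
# Bring up the stack (web + worker):
docker compose up -d

# The worker container shows up as restarting/exited instead of healthy:
docker ps

# Its logs end with the iptables error quoted in the summary:
docker compose logs concourse-worker
```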
Triaging info
- Concourse version: 7.7.1
- Docker Desktop for Mac (M1): 4.7.0
- Did this used to work? No
Good thing I actually have an M1 to test this out on now!
This is the key error causing the worker to stop. The worker can't create the iptables rules:
garden exited with error: Exit trace for group:
containerd-garden-backend exited with error: setup host network failed: create chain or flush if exists failed: running [/sbin/iptables -t filter -N CONCOURSE-OPERATOR --wait]: exit status 3: iptables v1.6.1: can't initialize iptables table `filter': iptables who? (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
So it appears that the Docker VM doesn't have the iptables filter module installed :(
After some searching, I found the following command to create a container that gives you access to the host VM that Docker runs the containers on:
docker run --rm -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
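For anyone unfamiliar with the trick, here is the same command with its flags annotated (annotations are mine):

```sh
docker run --rm -it --privileged --pid=host debian \
  nsenter -t 1 -m -u -n -i sh
# --pid=host    share the VM's PID namespace, so PID 1 is the VM's init process
# nsenter -t 1  enter the namespaces of target PID 1, i.e. the VM itself
# -m -u -n -i   specifically its mount, UTS, network, and IPC namespaces
```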
Then I checked what modules were available:
# ls -lh /lib/modules/5.10.104-linuxkit/kernel/net/ipv4
total 640K
-rw-r--r-- 1 root root 96.2K Mar 17 17:17 ah4.ko
-rw-r--r-- 1 root root 143.4K Mar 17 17:17 esp4.ko
-rw-r--r-- 1 root root 166.3K Mar 17 17:17 ip_gre.ko
-rw-r--r-- 1 root root 110.7K Mar 17 17:17 ip_vti.ko
-rw-r--r-- 1 root root 63.0K Mar 17 17:17 ipcomp.ko
-rw-r--r-- 1 root root 49.4K Mar 17 17:17 xfrm4_tunnel.ko
No iptable_filter.ko 😞
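Worth noting: a module missing from /lib/modules doesn't always mean the feature is absent, since it may be compiled directly into the kernel. One way to check (a sketch, assuming the linuxkit kernel exposes /proc/config.gz, which not every build does):

```sh
# Run inside the nsenter shell from above:
zcat /proc/config.gz | grep -E 'CONFIG_IP_NF_IPTABLES|CONFIG_IP_NF_FILTER'
# "=y" means built into the kernel, "=m" means built as a module,
# and no match means the kernel doesn't support it at all
```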
Maybe this is something that can be asked for over here https://github.com/docker/for-mac? I'm not sure how the Docker ecosystem is set up on the dev side, so maybe that's the wrong repo to ask in as well 😕
Update
Realized I should try running the same iptables command Concourse is trying to run, inside the container:
┌─[~]
└─▪ docker run --rm -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
/ # iptables -t filter -N OPS --wait
/ #
It works. So yeah, we need to build an ARM image.
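The underlying mismatch is easy to confirm from the Mac side (a sketch; the image tag is the one from the compose file above):

```sh
# The published image at this version is amd64-only...
docker image inspect concourse/concourse:7.7.1 \
  --format '{{.Os}}/{{.Architecture}}'   # expected: linux/amd64

# ...while the M1 host is arm64, so the container runs under qemu emulation:
uname -m   # expected: arm64
```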
@theghost5800 As an alternative, I was using this image: rdclda/concourse:7.7.1 (see the compose sketch below)
Original reference: https://github.com/docker/for-mac/issues/5547#issuecomment-815283327
This won't work if you're trying to develop though. Would be nice to get it running on M1 eventually.
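For docker-compose users, swapping in that image amounts to changing the worker service definition (a sketch based on the compose fragment earlier in the thread; rdclda/concourse is a community build, not an official image):

```yaml
concourse-worker:
  image: rdclda/concourse:7.7.1   # community linux/arm64 build
  # ...rest of the service definition unchanged
```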
@AnilPothula Thank you for the provided solution, it works. I hope we can expect an official Docker image for linux/arm64 from the Concourse team, and a fly CLI build for mac/arm64, in the near future.
I am currently having the same issue; I'm unable to keep it running locally for more than 15 seconds.
error: Exit trace for group:
worker exited with error: Exit trace for group:
garden exited with error: Exit trace for group:
containerd-garden-backend exited with error: setup host network failed: create chain or flush if exists failed: running [/sbin/iptables -t filter -N CONCOURSE-OPERATOR --wait]: exit status 3: iptables v1.6.1: can't initialize iptables table 'filter': iptables who? (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
containerd exited with nil
volume-sweeper exited with nil
container-sweeper exited with nil
debug exited with nil
healthcheck exited with nil
baggageclaim exited with nil
beacon exited with nil
web exited with nil
{"timestamp":"2022-08-15T18:05:37.528948838Z","level":"error","source":"tsa","message":"tsa.connection.channel.command.register.failed-to-list-volumes","data":{"command":"forward-worker","error":"Get \"http://127.0.0.1:37961/volumes\": EOF","remote":"127.0.0.1:54346","session":"1.4.1.2"}}
{"timestamp":"2022-08-15T18:05:37.529260088Z","level":"info","source":"tsa","message":"tsa.connection.channel.command.register.failed-to-reach-worker","data":{"baggageclaim-took":"3.031125ms","command":"forward-worker","garden-took":"429.205292ms","remote":"127.0.0.1:54346","session":"1.4.1.2"}}
{"timestamp":"2022-08-15T18:05:37.529380963Z","level":"info","source":"tsa","message":"tsa.connection.channel.command.register.done","data":{"command":"forward-worker","remote":"127.0.0.1:54346","session":"1.4.1.2","worker-address":"127.0.0.1:41549","worker-platform":"linux","worker-tags":""}}
{"timestamp":"2022-08-15T18:05:37.529506713Z","level":"info","source":"tsa","message":"tsa.connection.channel.command.done","data":{"command":"forward-worker","remote":"127.0.0.1:54346","session":"1.4.1"}}
{"timestamp":"2022-08-15T18:05:37.530002505Z","level":"info","source":"tsa","message":"tsa.connection.channel.command.draining-forwarded-connections","data":{"command":"forward-worker","remote":"127.0.0.1:54346","session":"1.4.1"}}
{"timestamp":"2022-08-15T18:05:37.530894380Z","level":"info","source":"tsa","message":"tsa.connection.channel.command.forward-process-exited","data":{"bind-addr":"0.0.0.0:7777","bound-port":41549,"command":"forward-worker","remote":"127.0.0.1:54346","session":"1.4.1"}}
{"timestamp":"2022-08-15T18:05:37.531039380Z","level":"info","source":"tsa","message":"tsa.connection.channel.command.forward-process-exited","data":{"bind-addr":"0.0.0.0:7788","bound-port":37961,"command":"forward-worker","remote":"127.0.0.1:54346","session":"1.4.1"}}
{"timestamp":"2022-08-15T18:05:37.533546380Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.signal.signalled","data":{"session":"4.1.6"}}
{"timestamp":"2022-08-15T18:05:37.534062671Z","level":"info","source":"worker","message":"worker.beacon-runner.logging-runner-exited","data":{"session":"12"}}
{"timestamp":"2022-08-15T18:05:37.534361171Z","level":"error","source":"quickstart","message":"quickstart.worker-runner.logging-runner-exited","data":{"error":"Exit trace for group:\ngarden exited with error: Exit trace for group:\ncontainerd-garden-backend exited with error: setup host network failed: create chain or flush if exists failed: running [/sbin/iptables -t filter -N CONCOURSE-OPERATOR --wait]: exit status 3: iptables v1.6.1: can't initialize iptables table `filter': iptables who? (do you need to insmod?)\nPerhaps iptables or your kernel needs to be upgraded.\n\ncontainerd exited with nil\n\nvolume-sweeper exited with nil\ncontainer-sweeper exited with nil\ndebug exited with nil\nhealthcheck exited with nil\nbaggageclaim exited with nil\nbeacon exited with nil\n","session":"2"}}
{"timestamp":"2022-08-15T18:05:37.535963296Z","level":"info","source":"atc","message":"atc.tracker.drain.start","data":{"session":"26.1"}}
{"timestamp":"2022-08-15T18:05:37.536149921Z","level":"info","source":"atc","message":"atc.tracker.drain.waiting","data":{"session":"26.1"}}
{"timestamp":"2022-08-15T18:05:37.536468380Z","level":"info","source":"web","message":"web.tsa-runner.logging-runner-exited","data":{"session":"2"}}
{"timestamp":"2022-08-15T18:05:37.536560963Z","level":"info","source":"atc","message":"atc.tracker.drain.done","data":{"session":"26.1"}}
{"timestamp":"2022-08-15T18:05:37.545301880Z","level":"info","source":"web","message":"web.atc-runner.logging-runner-exited","data":{"session":"1"}}
{"timestamp":"2022-08-15T18:05:37.546068338Z","level":"info","source":"quickstart","message":"quickstart.web-runner.logging-runner-exited","data":{"session":"1"}}
Resolved by applying this commit: https://github.com/concourse/concourse/commit/eed48c6f4f2ff27a477925f3a3c9da506336921a
Change `CONCOURSE_WORKER_RUNTIME: "containerd"` to `CONCOURSE_WORKER_RUNTIME: "houdini"`.
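In compose terms, the change is just the runtime environment variable (a sketch; the quickstart command reads CONCOURSE_WORKER_RUNTIME, while a standalone `command: worker` service reads CONCOURSE_RUNTIME, as in the compose file quoted at the top of this issue):

```yaml
environment:
  # houdini runs build steps directly on the worker without container
  # isolation, so it never touches the iptables setup that fails here
  CONCOURSE_WORKER_RUNTIME: "houdini"
```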
> Resolved by applying this commit eed48c6: change `CONCOURSE_WORKER_RUNTIME: "containerd"` to `CONCOURSE_WORKER_RUNTIME: "houdini"`.

Thanks, it worked for me!
This is related to the Concourse ARM release, so I will close it to reduce duplication. Please refer to https://github.com/concourse/concourse/issues/1379 for details.
> Resolved by applying this commit eed48c6: change `CONCOURSE_WORKER_RUNTIME: "containerd"` to `CONCOURSE_WORKER_RUNTIME: "houdini"`.

Thanks, it's working!