wdpksrc
Docker 20.10.5 on armhf platforms
Platform: My Cloud EX2 Ultra
Application: Docker
Describe the bug: I installed the dependencies on Ubuntu, cloned the repository, and ran build.sh inside the docker folder. This produced the compiled package docker_20.10.5_EX2Ultra.bin in packages/docker/OS5 (along with the other platforms and sources). I installed it through the web interface; the installation succeeded and reported version 20.10.5. However, logging in over SSH and running "docker -v" reports version 19.03.8. If I uninstall Docker from the web interface, "docker -v" gives "command not found".
I need version 20.10.0+ in order to use the Docker option "--add-host host.docker.internal:host-gateway" and access the MariaDB database on the host from the Docker container.
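For context, the flag being requested works roughly like this (the image name and environment variables here are placeholders, not from this thread); host.docker.internal then resolves to the host's gateway address from inside the container:

```shell
# Map "host.docker.internal" to the host-gateway address so that a service
# running on the NAS itself (e.g. MariaDB on its default port 3306) is
# reachable from inside the container. Image name is a placeholder.
docker run --add-host host.docker.internal:host-gateway \
    -e DB_HOST=host.docker.internal -e DB_PORT=3306 \
    my-app-image
```

The --add-host host-gateway value was introduced in Docker 20.10.0, which is what motivates the upgrade.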
Attached is the source: docker_20.10.5_src.tar.gz
I found why the mismatch happens: it is caused by this if condition in the install.sh script:
if [ ${ARCH} != "x86_64" ]; then
    # Update the "ARCH" to "armhf" so it matches the docker download site
    # Versions above "19.03.8" do not have a working "dockerd" binary on WD EX4100
    ARCH="armhf"
    VERSION="19.03.8"
fi
Do you know why later versions are not working? How can I help?
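One way to reproduce the failure quickly before pinning a version (a sketch, not part of install.sh; `check_dockerd` is a hypothetical helper) is to test whether a candidate dockerd binary survives a trivial invocation:

```shell
# Hypothetical sanity check: run the candidate binary with a harmless flag
# and report whether it executes at all (on WD armhf devices, official
# builds newer than 19.03.8 die with SIGSEGV here).
check_dockerd() {
    if "$1" -v >/dev/null 2>&1; then
        echo "ok"
    else
        echo "broken"
    fi
}

check_dockerd /bin/true    # stand-in for a working binary  -> ok
check_dockerd /bin/false   # stand-in for a crashing binary -> broken
```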
The dockerd process throws a segmentation fault on every release newer than 19.03.8, whenever it is invoked ("dockerd -version", for example). This happens both with a locally compiled build and with the one from the Docker download site. I only have an EX4100, but it may work fine on other ARM devices.
Thanks for your answer.
I did some investigating too. I can confirm that downloading docker-20.10.5.tgz manually on the My Cloud EX2 Ultra and running ./dockerd -v also gives a Segmentation Fault.
However, I tried something more: I did the same on my Raspberry Pi 3 (which is also armhf) and get the same Segmentation Fault. Moreover, I also tried docker-20.10.2.tgz, which segfaults as well.
I did that last test because Docker 20.10.2 is the version available from the official repositories (via the get-docker.sh script), and that one works fine! You can see the difference:
root@raspy:/home/gabrielitos/test/docker# dockerd -v
Docker version 20.10.2, build 8891c58
root@raspy:/home/gabrielitos/test/docker# ./dockerd -v
Segmentation fault
After some Google digging, I found that this strace output may help with debugging, but it is above my current level of understanding.
root@raspy:/home/gabrielitos/test/docker# strace -f ./dockerd -v
execve("./dockerd", ["./dockerd", "-v"], 0xffc17a88 /* 17 vars */) = 0
syscall_0x101cc(0x37b0d6, 0x158031, 0x1, 0x18ef8f0, 0x101c4, 0x18c461d) = -1 ENOSYS (Function not implemented)
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x4f00000} ---
+++ killed by SIGSEGV +++
Segmentation fault
Any guess?
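The ENOSYS right before the crash hints that the binary expects a newer kernel interface than the device provides. One way to check what a binary was built for (a suggestion on our part, not something from this thread) is to read its ELF ABI note:

```shell
# The NT_GNU_ABI_TAG note records the minimum Linux kernel version the
# toolchain targeted; "file" prints the same information more compactly.
# Run these next to the extracted dockerd binary.
readelf -n ./dockerd | grep -A2 NT_GNU_ABI_TAG
file ./dockerd
```

Comparing that minimum kernel version against `uname -r` on the NAS would show whether the official builds simply target a kernel newer than the device ships with.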
I created a bug report in the Docker repository: https://github.com/docker/for-linux/issues/1226. @JediNite, can you confirm whether this also happens on your platform?
@gabrielitos87,
I got as far as you did diagnosing it with strace. I also tried "gdb", but I am not an expert with it and am not sure it works well for troubleshooting issues in "golang" compiled applications.
Cheers,
JediNite
Maybe this discussion, and especially the solution at the end, might be the way to go: https://github.com/moby/moby/issues/40733. It suggests using the Raspbian sources instead of the Debian ones...
@stefaang
Thanks for your answer. I checked that post too, but it seems that was a repository mistake: the Debian package was used instead of the Raspbian one (and those are packages with dynamically linked libraries, so we cannot use them here, right?).
I couldn't find how the Raspbian package was compiled: since it is hosted in the official Docker repository, I guess they used the standard source code...
@stefaang
I tried to compile the whole of Docker on my (poor old) Raspberry Pi 3, following the standard procedure (https://oyvindsk.com/writing/docker-build-from-source). After some hours, I obtained binaries which still fail with Illegal Instruction!
I couldn't find how the Raspberry Pi build is supposed to differ from the normal one, and the correct from-scratch procedure is not working either. So I opened an issue on the moby tracker: https://github.com/moby/moby/issues/42212
Did you try compiling Docker with this toolchain: https://github.com/tttapa/RPi-Cpp-Toolchain? It claims to build working packages for ARMv6. FYI, I don't think the EX2 Ultra has hard-float support enabled in its kernel.
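One quick check for the hard-float question: on ARM, the "Features" line of /proc/cpuinfo lists "vfp" when the kernel reports VFP (hard-float) support. A small sketch (the `has_vfp` helper is ours, not from the thread; it reads cpuinfo text from stdin so it can be tried against samples):

```shell
# Report whether a cpuinfo "Features" line advertises VFP (hard-float).
has_vfp() {
    if grep -qi 'vfp'; then
        echo "hardfloat"
    else
        echo "softfloat-only"
    fi
}

echo "Features : half thumb fastmult vfp edsp neon vfpv3" | has_vfp  # -> hardfloat
echo "Features : swp half thumb fastmult edsp java" | has_vfp        # -> softfloat-only
# On a real device: has_vfp < /proc/cpuinfo
```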
I am currently trying to build the static binaries of dockerd and the rest on my Rpi3 using a manual procedure similar to this: https://gist.github.com/cwgem/c913c80dcb8eeef38abc30ff3abf1750
The main difference here is that I am also compiling the go compiler: I hope this makes a difference.
As I wrote on the Moby tracker, I compiled Docker using the automatic procedure (make binaries) from their git repository, but the resulting dockerd again gives "Illegal instruction". I also managed to cross-compile forcing armv5 and armv6 builds, but hit the same problem.
I managed to have working binaries!
After quite a number of tests, I finally found the winning recipe:
- I used my Raspberry Pi 3 with Raspbian Buster to compile armhf natively
- I cloned the https://github.com/moby/moby repository
- Checked out the 20.10 branch
root@raspy:~/moby# git branch -a
* 20.10
master
remotes/origin/1.12.x
remotes/origin/1.13.x
remotes/origin/17.03.x
remotes/origin/17.04.x
remotes/origin/17.05.x
remotes/origin/19.03
remotes/origin/20.10
remotes/origin/HEAD -> origin/master
remotes/origin/docs
remotes/origin/master
remotes/origin/revert-39415-master
- Thinking about @JediNite's comment, I figured that the broken official builds started appearing at roughly the time Debian Buster became stable. So I modified the Dockerfile to use Debian Stretch as the base image, plus the following changes that make it work with the selected Moby branch.
diff --git a/Dockerfile b/Dockerfile
index f5ec77836b..4ac9fb7ebd 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -8,7 +8,7 @@ ARG DEBIAN_FRONTEND=noninteractive
ARG VPNKIT_VERSION=0.5.0
ARG DOCKER_BUILDTAGS="apparmor seccomp"
-ARG BASE_DEBIAN_DISTRO="buster"
+ARG BASE_DEBIAN_DISTRO="stretch"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
FROM ${GOLANG_IMAGE} AS base
@@ -23,6 +23,7 @@ ARG DEBIAN_FRONTEND
# Install dependency packages specific to criu
RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \
+ rm -rf /var/lib/apt/lists/ && \
apt-get update && apt-get install -y --no-install-recommends \
libcap-dev \
libnet-dev \
@@ -85,6 +86,7 @@ FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-frozen-images-aptcache,target=/var/cache/apt \
+ rm -rf /var/lib/apt/lists/ && \
apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
curl \
@@ -110,6 +112,7 @@ RUN dpkg --add-architecture armel
RUN dpkg --add-architecture armhf
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
+ rm -rf /var/lib/apt/lists/ && \
apt-get update && apt-get install -y --no-install-recommends \
crossbuild-essential-arm64 \
crossbuild-essential-armel \
@@ -122,6 +125,7 @@ ARG DEBIAN_FRONTEND
RUN echo 'deb http://deb.debian.org/debian buster-backports main' > /etc/apt/sources.list.d/backports.list
RUN --mount=type=cache,sharing=locked,id=moby-cross-false-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-false-aptcache,target=/var/cache/apt \
+ rm -rf /var/lib/apt/lists/ && \
apt-get update && apt-get install -y --no-install-recommends \
binutils-mingw-w64 \
g++-mingw-w64-x86-64 \
@@ -141,6 +145,7 @@ ARG DEBIAN_FRONTEND
RUN echo 'deb http://deb.debian.org/debian buster-backports main' > /etc/apt/sources.list.d/backports.list
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
+ rm -rf /var/lib/apt/lists/ && \
apt-get update && apt-get install -y --no-install-recommends \
libapparmor-dev:arm64 \
libapparmor-dev:armel \
@@ -167,8 +172,10 @@ RUN --mount=type=cache,target=/root/.cache/go-build \
FROM dev-base AS containerd
ARG DEBIAN_FRONTEND
+RUN echo 'deb http://deb.debian.org/debian stretch-backports main' > /etc/apt/sources.list.d/backports.list
RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \
+ rm -rf /var/lib/apt/lists/ && \
apt-get update && apt-get install -y --no-install-recommends \
libbtrfs-dev
ARG CONTAINERD_COMMIT
@@ -226,6 +233,7 @@ ARG DEBIAN_FRONTEND
ARG TINI_COMMIT
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
+ rm -rf /var/lib/apt/lists/ && \
apt-get update && apt-get install -y --no-install-recommends \
cmake \
vim-common
@@ -268,6 +276,7 @@ RUN ldconfig
# Do you really need to add another package here? Can it be done in a different build stage?
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
+ rm -rf /var/lib/apt/lists/ && \
apt-get update && apt-get install -y --no-install-recommends \
apparmor \
aufs-tools \
@@ -330,6 +339,7 @@ ENTRYPOINT ["hack/dind"]
FROM dev-systemd-false AS dev-systemd-true
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
+ rm -rf /var/lib/apt/lists/ && \
apt-get update && apt-get install -y --no-install-recommends \
dbus \
dbus-user-session \
- make build followed by make binary, waiting for the binaries to be created.
- Move the binaries to the WD NAS and tadaa:
root@MyCloudEX2Ultra test # ./dockerd -v
Docker version dev, build 88bd96d6e5
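Condensed, the recipe above amounts to something like this sketch (run natively on a Raspbian Buster armhf box; the patch file name is an assumption, its content being the diff shown above):

```shell
# Native armhf build of the 20.10 branch with a Stretch base image.
git clone https://github.com/moby/moby.git
cd moby
git checkout 20.10
# Apply the Dockerfile changes from the diff above
# (buster -> stretch base, apt-cache tweaks, stretch-backports):
patch -p1 < ../stretch-base.patch
make build
make binary
# The resulting static binaries land under bundles/
```

This takes hours on a Pi 3, so plan accordingly.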
The files I generated are quite large and are available from my Dropbox. Can we package them in the WD format, @stefaang?
Just for completeness, this is what I also tried but didn't work:
- Compiling the moby/moby binaries with the original "buster" setting, straight from a git clone or from the 20.10 branch.
- Compiling moby/moby forcing GOARCH=arm with GOARM=6 or GOARM=5 (so theoretically armv6/armv5).
- Compiling manually without Docker, following e.g. https://gist.github.com/cwgem/c913c80dcb8eeef38abc30ff3abf1750
@gabrielitos87
Good work tracking that down. I copied the binaries over to my EX4100 from your Dropbox, ran "dockerd -v", and it works. I used to build binaries for the EX4100 on OS3 because "seccomp" was not enabled in the kernel, so I can probably go back and have a go with the changes you have provided as well.
Have you updated the issue that you created on moby/moby as well ?
Cheers,
Jedinite
@JediNite
Yes, I updated the issue in moby/moby, but there was no reaction. Maybe it doesn't depend on them, but on the golang buster Docker image.
How can the Docker binaries be included in the WD package? Shall I just put them in the folder before calling build.sh?
@gabrielitos87,
If you go back and have a look at https://github.com/WDCommunity/wdpksrc/blob/f6ef810c78a75fbb94541e5df4395a104ecf8655/wdpk/docker/install.sh, I used to host the bundled binaries I built for the EX4100 on another GitHub repo. You could probably do something similar.
Cheers,
JediNite
@gabrielitos87,
I've updated the old Github repo I had with details based on your findings and a new build procedure and script along with a copy of the binaries. Check it out at https://github.com/JediNite/docker-ce-WDEX4100-binaries.
Cheers,
JediNite
Hi all,
Commit https://github.com/WDCommunity/wdpksrc/commit/9726307cad2599292e61b37836634929355b4540 should now address this issue.
Cheers,
JediNite
@JediNite ,
It works! I checked out the commit, created the package with build.sh, and installed it on my EX2 Ultra. The flag --add-host host.docker.internal:host-gateway is accepted, so the version is confirmed!
Thanks!
> The flag --add-host host.docker.internal:host-gateway is accepted, so the version is confirmed!
It should also be possible to run "docker version", "docker --version" and "dockerd -v" and all of these should show version 20.10.5 as well.
@JediNite
Yes, that shows the correct version too! But I thought that number was hardcoded by this:
make VERSION=${GEN_STATIC_VER} binary
What is the procedure now for this issue? Should a pull request be made?
Great work @gabrielitos87! Don't worry about the binary size. It might be possible, though, to wrap the compile process in a single Docker/make file so the CI can build it. I'm working on the website revamp in the coming weeks in my very spare time; it should be live by the end of the month.
> What is the procedure now for this issue? Should a pull request be made?
@gabrielitos87
Check out the "quick and dirty" build script I made in https://github.com/JediNite/docker-ce-WDEX4100-binaries/blob/master/build.sh. One of the variables it takes in is the target build version to get from the docker-cli and moby/moby GitHub projects.
More than happy for someone to take this and try to automate it further.
Cheers,
JediNite
I have a PR2100. Will you build and release a new Docker version? The current Docker package is quite old, so I want to update, but I don't know what to do.
I hope you release a new version.
The current Docker has a libseccomp problem with Transmission: https://forum.openmediavault.org/index.php?thread/38387-tranmission-using-stacks-unable-to-download/ Does a new release fix this issue?
@htogether,
Thanks to the work from @gabrielitos87, we were able to get Docker working on the armhf platforms, and part of getting this to work was to change the Debian release used to build Docker from buster to stretch. I see per https://docs.linuxserver.io/faq#libseccomp that the fixes involve installing a version of libseccomp2 which appears to have been sourced from buster. We would need to check whether a similar package is available in stretch, update the "Dockerfile.patch" (located at https://github.com/JediNite/docker-ce-WDEX4100-binaries) to include libseccomp2 in the "apt-get install" statements, and do some testing to see whether this then allows, firstly, a successful Docker compile and, secondly, a working Transmission container.
Anyone else have thoughts on a different approach?
Cheers,
JediNite
@htogether
I've just put some binaries for 20.10.6 on my binaries repo (https://github.com/JediNite/docker-ce-WDEX4100-binaries/releases/tag/v20.10.6). Do you want to try these and see if the same issue exists in 20.10.6? I did get an error in the patching process for the Dockerfile, as some of the lines have changed between 20.10.5 and 20.10.6, and I need to check these out further.
Cheers,
JediNite
I have a Mirror Gen2.
@htogether,
The packages I release should work on any WD armhf platform, so if your Gen2 is also ARM-based, it is worth a shot.
Reading through this a bit more, the suggested fixes seem to involve installing "libseccomp2" within the host OS itself. WD does not really allow access to the firmware files in order to add the package. They do occasionally release the source code packages and provide tools to build your own firmware file, but this can be a bit hit and miss and has the potential to brick your NAS if done incorrectly.
A workaround for your issue, if it is seccomp related, might be to see if you can start the container with seccomp disabled.
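For reference, disabling the seccomp profile for a single container looks like this (the image name is a placeholder; note this removes a sandboxing layer, so treat it as a diagnostic step rather than a permanent fix):

```shell
# Start a container with seccomp filtering disabled.
# "transmission-image" is a placeholder for your actual image name.
docker run --security-opt seccomp=unconfined transmission-image
```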
Cheers,
JediNite
Any tips for the installation? I am not familiar with OS5. I used to use OMV.
I used SSH to extract and copy, but got a write error:
root@mcmg2 tmp # tar xzvf docker-20.10.6.tgz
docker/
docker/docker
tar: write error: No space left on device
Hi,
You can't extract this into /tmp, as it is by design a small filesystem. It has to be extracted onto one of the data drives. If you have installed a previous version of the Docker package, you can update the contents in /mnt/HD/HD_a2/Nas_Prog/docker/docker, for example.
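For example, the extraction step from the earlier error could target the data volume directly (paths here follow the example above and are placeholders; adjust to wherever you uploaded the tarball and your own share layout):

```shell
# Extract onto the data volume instead of the small /tmp filesystem.
cd /mnt/HD/HD_a2/Nas_Prog/docker
tar xzvf /mnt/HD/HD_a2/Public/docker-20.10.6.tgz
```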
JediNite