docker arm64 image is needed
Hello,
please build a Docker arm64 image as well and push it to Docker Hub. Thanks!
I think this is a great idea.
I found this post about doing it with Docker Desktop and will try to find some time to give it a try: https://www.docker.com/blog/multi-arch-images/. But I don't have any experience building arm64 images, so if anyone wants to give me some tips to help me get started, I would appreciate it.
Hi @micahsnyder, I'm happy to help. Quick question: what is the pipeline by which x86-64 images are published to Docker Hub today? I looked for a relevant GitHub Action in this repo, but didn't find anything.
Thanks @otterley
Right now we're building the docker images through Jenkins on a Debian 11 worker VM that has Docker installed.
The script it runs looks like this:
clamav_docker_user="${DOCKER_USERNAME}"
docker_registry="registry.hub.docker.com"
# Make sure we have the latest alpine image.
docker pull alpine:latest
# Build the base image
docker build --tag "${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base" .
# Login to docker hub
echo "${_passwd:-${DOCKER_PASSWD}}" | \
docker login --password-stdin \
--username "${clamav_docker_user}" \
"${docker_registry}"
# Make a tag with the registry name in it so we can push wherever
docker image tag ${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base
# Push the image/tag
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base
# Give it some time to add the new ${CLAMAV_FULL_PATCH_VERSION}_base image.
# In past jobs, it didn't detect the new image until we re-ran this job. I suspect it needed a little delay after pushing before pulling.
sleep 20
# Create extra tags of the base image.
docker image tag ${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}_base
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}_base
docker image tag ${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:stable_base
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:stable_base
docker image tag ${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:latest_base
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:latest_base
# Generate and push an image without the "_base" suffix that contains the databases.
#
# TODO: There's a bug where this actually updates _all_ containers and not just the tag we specify.
# See https://jira-eng-sjc1.cisco.com/jira/browse/CLAM-1552?filter=16896
CLAMAV_DOCKER_USER="${clamav_docker_user}" \
CLAMAV_DOCKER_PASSWD="$DOCKER_PASSWD" \
DOCKER_REGISTRY="${docker_registry}" \
CLAMAV_DOCKER_IMAGE="${CLAMAV_DOCKER_IMAGE_NAME}" \
CLAMAV_DOCKER_TAG="${CLAMAV_FULL_PATCH_VERSION}" \
./dockerfiles/update_db_image.sh -t ${CLAMAV_FULL_PATCH_VERSION}
# Login to docker hub (again, because the update_db_image.sh script removed our creds in its cleanup stage)
echo "${_passwd:-${DOCKER_PASSWD}}" | \
docker login --password-stdin \
--username "${clamav_docker_user}" \
"${docker_registry}"
# Create extra tags of the main (database loaded) image.
docker image tag ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION} ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FEATURE_RELEASE}
docker image tag ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION} ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:stable
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:stable
docker image tag ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION} ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:latest
docker image push ${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:latest
# log-out (again)
docker logout "${docker_registry:-}"
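(Side note on the sleep 20 above: a more robust alternative would be to poll the registry until the new tag is actually visible. A rough sketch, reusing the same variables from the script:)
# Sketch: poll until the freshly pushed tag is visible, instead of a fixed sleep.
for i in $(seq 1 30); do
    docker manifest inspect \
        "${docker_registry}/${CLAMAV_DOCKER_IMAGE_NAME}:${CLAMAV_FULL_PATCH_VERSION}_base" \
        >/dev/null 2>&1 && break
    sleep 2
done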
I could switch this over to run on a 2020 Mac Mini M1 we have racked in a server room to make use of Docker Desktop on Mac.
Moving to docker buildx is the easiest way to handle this.
Usually the arm64 build will run under QEMU on the same machine as the amd64 build, but the emulated arm64 build takes about 10x as long.
There are two ways to work around this:
- Set up buildx to use a different machine for the arm64 platform (reaching the remote Docker API through SSH or TLS): https://github.com/docker/buildx/discussions/683
- Run the builder image natively, but cross-compile to the other architecture if your toolchain supports it: https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/
There is another guide here https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
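For the single-machine QEMU route, the core of it looks something like this (a sketch; the image name, tag, and SSH endpoint are placeholders):
# One-time setup: register QEMU emulators and create a container-driver builder.
docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx create --name multiarch --driver docker-container --use
docker buildx inspect --bootstrap

# Build for both platforms and push a single multi-arch tag.
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    --tag myorg/clamav:test \
    --push .

# Variant for the remote-machine approach: append a native arm64 node
# to the builder instead of relying on QEMU (endpoint is a placeholder).
# docker buildx create --append --name multiarch ssh://user@arm64-host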
On Linux you may have to install buildx; if you switch to building on the M1 Mac, it should already be installed with Docker Desktop. Really keen to see the official arm image if you have the time to implement it. I've tested building an image from the official Dockerfile in this repo on the M1 Mac and it works great, as I mentioned in #536.
If you search for "docker multi-arch github actions" you will find a lot of resources on how it can easily be built with GitHub Actions.
I have a pretty simple script to build multi-arch, but on gitlab:
build.sh
#!/usr/bin/env sh
architecture=$(arch)
image=<fully-qualified-image-name-with-tag>

if [ "$architecture" = "arm64" ] || [ "$CI" = true ]; then
    docker buildx build \
        --platform linux/arm64/v8,linux/amd64 \
        --no-cache --pull \
        -t "${image}" \
        --push .
else
    docker build --no-cache --pull -t "${image}" .
    docker push "${image}"
fi
Excerpt of my gitlab-ci.yml
.docker-multi-arch: # supports regular docker build as well as docker buildx build!
  stage: build
  image: jonoh/docker-buildx-qemu
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker buildx create --name "${BUILDER_NAME}" --driver docker-container --use
    - docker buildx inspect --bootstrap
    - update-binfmts --enable
    - cd $SCRIPT_DIR
  after_script:
    - docker buildx rm "${BUILDER_NAME}"
  script:
    - sh ./build.sh
  variables:
    SCRIPT_DIR: '.docker'
    DOCKER_DRIVER: overlay2
    BUILDER_NAME: multiarch-$CI_JOB_ID
Referenced build image: https://hub.docker.com/r/jonoh/docker-buildx-qemu
Maybe someone can adapt this to GitHub Actions/Jenkins?
We use AWS CodePipeline to publish our own clamav image for the ARM and AMD architectures. The same image is published from ARM and AMD machines and stitched together with docker manifest. Reference: https://aws.amazon.com/blogs/devops/creating-multi-architecture-docker-images-to-support-graviton2-using-aws-codebuild-and-aws-codepipeline/
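The manifest step is roughly the following (a sketch with placeholder image names; each architecture's image is pushed under its own tag first, from its own build machine):
# Sketch with placeholder tags: each arch pushes its own image first,
# then one job stitches them into a single multi-arch tag.
docker manifest create myrepo/clamav:1.2.0 \
    myrepo/clamav:1.2.0-amd64 \
    myrepo/clamav:1.2.0-arm64
docker manifest annotate myrepo/clamav:1.2.0 \
    myrepo/clamav:1.2.0-arm64 --arch arm64
docker manifest push myrepo/clamav:1.2.0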
I am working on the ClamAV Docker support in the new clamav-docker repo: https://github.com/Cisco-Talos/clamav-docker
For the multi-arch effort, I'm looking at using Debian slim images, as discussed in #673.
See: https://github.com/Cisco-Talos/clamav-docker/tree/main/clamav/unstable/debian and https://github.com/Cisco-Talos/clamav-docker/tree/main/clamav/0.105/debian
I have to pause to focus on fixes for the 1.0.0 release and will resume when I get a chance.
My plan is to publish the Debian-based multi-arch image tags to Docker Hub under clamav/clamav-debian. The original Alpine-based images will continue under clamav/clamav, for now. If all goes well, then we can consider setting a date to deprecate the Alpine-based images and switch to Debian for the main images.
Guys, am I just wasting my time trying to get ClamAV working on my M1 Mac at the moment? I am continuously running into the "No clamscan/clamdscan binaries found" after installation with HomeBrew. My NodeJS app won't pick it up at all. Is this a wasted effort or am I just not configuring it correctly?
I am running on a MB Pro M1 Max.
@leafyshark this is really not the right place for your question; it is off topic. Since this is a Homebrew install problem and not an issue with ClamAV, please move your question to the users mailing list, Discord, or else an issue in the Homebrew issue tracker.
@micahsnyder anything I can help with? I see the Dockerfile definition but no image has been published. Happy to help, let me know.
@a7i @peschee we were having issues with the multiarch builder for docker buildx on our 2020 M1 Mac Mini that is in our (internal) Jenkins pipeline. In fact, Docker on that machine was entirely broken for a while for testing linux arm64 builds (we don't have linux running on arm64 metal to test). But we did just get Docker working again on that device.
I am afraid that trying to create a buildx builder on the Mac Mini will break Docker again. I am tempted to find a different dedicated linux device to do this on.
Hetzner is offering cheap ARM VMs now. Maybe you can consider using that for your builds?
@micahsnyder We'd be willing to sponsor this if it helps you move the multi-arch builds forward.
AWS can provide the build infrastructure free of charge. If you reach out to me at fiscmi at a-m-a-z-o-n dot c-o-m, we can get the process started.
Thank you @otterley and @peschee I will bring it up with my management and team.
@micahsnyder do you have any updates on this?
My management was going to discuss it late last week, but some high-priority items came up, so they were unable to get to it before my manager went on PTO. He will be back in just over 2 weeks, so I will bring it up again then.
Any updates on the discussion?
Is there a draft PR for an arm-based image that I can help get over the line? Happy to help. If it's a matter of CI automation, any chance we can try and publish a one-off version for now, or at least get the Dockerfiles available to build an arm-based image locally?
I managed to build an ARM based image locally using the Jenkinsfile as a guide on how to build the clamav image from scratch.
It seems to work. We can host an unofficial version if folks are interested or provide the steps to do this locally. It was fairly easy.
@nishils Could you kindly provide instructions for performing this task on a local setup? Additionally, hosting it would be highly beneficial and likely assist a lot of individuals.
I did the following steps:
- Clone the clamav and clamav-docker repos.
- In clamav, remove the conflicting files by running rm -rf ./Dockerfile ./dockerfiles.
- cd into the 1.2.0/alpine directory in clamav-docker and copy the files over into the root of the clamav folder. Ex. cp -r Dockerfile scripts/ ~/Documents/clamav
- To build, go back into the clamav repo and run docker build --tag clamav/clamav:1.2.0_base .
I am sure the same steps can be used for the debian or other versions, but I haven't tested that.
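Put together as one shell session, it's roughly this (a sketch; the local paths and the clamav-docker subdirectory layout are what I used above, adjust to taste):
# Sketch: consolidate the steps above. Paths are illustrative.
git clone https://github.com/Cisco-Talos/clamav.git ~/Documents/clamav
git clone https://github.com/Cisco-Talos/clamav-docker.git ~/Documents/clamav-docker

# Remove the conflicting files from the clamav repo.
cd ~/Documents/clamav
rm -rf ./Dockerfile ./dockerfiles

# Copy the 1.2.0 alpine Dockerfile and scripts into the clamav repo root.
cp -r ~/Documents/clamav-docker/clamav/1.2.0/alpine/Dockerfile \
      ~/Documents/clamav-docker/clamav/1.2.0/alpine/scripts \
      ~/Documents/clamav

# Build the base image natively on the arm64 machine.
docker build --tag clamav/clamav:1.2.0_base .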
I can push an image in the next day or two
We're actively working on it. Ref: https://github.com/Cisco-Talos/clamav-docker/pull/26
@micahsnyder Is this available in a docker registry?
Will be here soon: https://hub.docker.com/r/clamav/clamav-debian
Small update: our docker build scripts currently have an issue where we can publish images with multiple architectures but cannot update the databases for all of the architectures. We're planning to work on that in a couple weeks - presently working on some other things so can't do it all right now.
Anyways, that's why we haven't announced general availability for these images.
Hi, any update on this?