ngsi-timeseries-api
Amend Dockerfile to support alternative base image
Proposed changes
This PR updates the Dockerfile so it is flexible enough to use alternative base images should you wish. The base image still defaults to the Alpine distro, but other base images can be injected using `--build-arg` parameters on the command line. For example, to create a container based on Red Hat UBI (Universal Base Image) 8, add `BUILDER`, `DISTRO` and `PACKAGE_MANAGER` parameters as shown:
```
sudo docker build -t iot-agent \
  --build-arg BUILDER=registry.access.redhat.com/ubi8/python-38 \
  --build-arg DISTRO=registry.access.redhat.com/ubi8/python-38 \
  --build-arg PACKAGE_MANAGER=yum .
```
To create a container based on Debian Linux, add `BUILDER`, `DISTRO` and `PACKAGE_MANAGER` parameters as shown:
```
docker build -t iot-agent \
  --build-arg BUILDER=python:3.8 \
  --build-arg DISTRO=python:3.8-slim \
  --build-arg PACKAGE_MANAGER=apt . --no-cache
```
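For reference, here is a minimal sketch of how a Dockerfile can consume these build arguments; the default values, package names and paths below are illustrative assumptions rather than a verbatim copy of the Dockerfile in this PR:

```
# Illustrative sketch only - defaults mirror the Alpine-based build.
ARG BUILDER=python:3.8-alpine
ARG DISTRO=python:3.8-alpine

FROM ${BUILDER} AS builder
ARG PACKAGE_MANAGER=apk
COPY requirements.txt /tmp/requirements.txt
# Install build tools with whichever package manager the injected base image
# ships, then build the Python dependencies once in this builder stage.
RUN if [ "${PACKAGE_MANAGER}" = "apk" ]; then \
      apk add --no-cache build-base; \
    else \
      ${PACKAGE_MANAGER} install -y gcc; \
    fi \
 && pip install --no-cache-dir -r /tmp/requirements.txt

FROM ${DISTRO}
# Reuse the site-packages built above so the runtime image stays small.
COPY --from=builder /usr/local /usr/local
COPY . /src
WORKDIR /src
```

An image based on a different distro then only needs different `--build-arg` values, as in the two commands above.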
Types of changes
What types of changes does your code introduce to the project? Put an `x` in the boxes that apply.
- [ ] Bugfix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Checklist
Put an x in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to
ask. We're here to help! This is simply a reminder of what we are going
to look for before merging your code.
- [x] I have read the CONTRIBUTING doc
- [x] I have signed the CLA
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] I have updated the [RELEASE_NOTES](RELEASE_NOTES.md)
- [x] I have added necessary documentation (if appropriate)
- [ ] Any dependent changes have been merged and published in downstream modules
Further Comments
The default build does not change, it just gives users the chance to try out different things. I've seen articles like "Don't use Alpine" or "Switch to Python 3.10" - this is giving the end-user the option to try things out for themselves.
Image sizes vary wildly, but of course if you want to standardize your base images this isn't an issue.
```
ql-ubi      latest   d1276ef2ac96   27 minutes ago   1.05GB
ql-alpine   latest   d830ffcc4985   34 minutes ago   136MB
ql-slim     latest   d8ecb41f06df   48 minutes ago   335MB
```
There are minor updates to the Dockerfile, like setting a non-privileged USER to follow best practice. Hadolint still complains, but these are false positives, like not setting the POSIX shell (which is actually recommended to be an ignored rule for Alpine).
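For what it's worth, the kind of tweak being described looks roughly like this; the UID and the hadolint rule code are illustrative assumptions, not quoted from the PR:

```
# Illustrative only: run as a non-privileged user. A numeric UID works
# regardless of which base distro is injected via --build-arg.
USER 1001

# Individual hadolint rules can also be silenced inline when they do not
# apply, e.g. the SHELL/pipefail recommendation:
# hadolint ignore=DL4006
RUN python --version | tee /tmp/python-version.txt
```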
CLA Assistant Lite bot: All contributors have signed the CLA ✍️
Hi @jason-fox, I really like the flexibility this PR adds, that's great. But it also introduces quite a bit of complexity, is this a FIWARE requirement?
@jason-fox
It looks like the tests are broken b/c in our test code we assume the Docker file is in the root dir. Hopefully it shouldn't be a train smash though. Famous last words. I should actually grep all the docker build commands we have in the repo...
here's a start. there may be more places though
- https://github.com/orchestracities/ngsi-timeseries-api/search?q=docker+build
if you could please fix the paths and push again, hopefully we get a clean test run :-)
On a side note, I was wondering how you guys feel about moving away from Docker files and using Nix instead? Some advantages I can think of off the top of my head:
- Truly reproducible builds---Docker can't do that, see e.g. #273 about it.
- Tiny Docker images and also easily customisable to include whatever extras you need if you'd like to tweak the official QL image.
- Reproducible dev envs. You can reuse the same Nix expression you use to build the image to start a shell with an isolated development environment with all the libs and tools you need to develop QL. Say good-bye to hours spent setting up and maintaining broken dev envs, "but it works on my box..." situations, and all that. (Python sandbox tools like pipenv can only get you so far b/c they can't manage native deps.)
Just an idea...
> here's a start. there may be more places though
> - https://github.com/orchestracities/ngsi-timeseries-api/search?q=docker+build
>
> if you could please fix the paths and push again, hopefully we get a clean test run :-)
Fixed in b1f7073. Because you are extensively using Docker for integration testing (of potentially failing code), it makes more sense to maintain and update the existing Dockerfile in root. There are now two Dockerfiles in this PR.
- `Dockerfile` is used for testing. It retrieves sources using `COPY .`
- `docker/Dockerfile` is used for production builds - it retrieves sources from GitHub
The advantage of the former is that it applies the .dockerignore rules. The advantage of the latter is that the build will always retrieve code that has been committed and pushed to GitHub and has an indelible record.
Which one makes sense for the orchestracities dev team to push to your container registry depends on this line here in .github/workflows/docker.yml - the PR's context is looking for docker/Dockerfile - maybe it shouldn't.
Even if you choose to revert this context back to the Dockerfile in root, docker/Dockerfile is still useful as I (as an external dev) can then build an image passing in the GitHash or ask for latest or stable in the --build-args should I wish. Building from the Dockerfile in root uses COPY . and will get whatever bleeding-edge changes I make locally which would not be reproducible elsewhere.
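As a rough sketch, the GitHub-sourced variant could look something like the following; the ARG name, default ref and URL layout are assumptions for illustration, not necessarily what docker/Dockerfile in this PR contains:

```
# Illustrative sketch: build from sources pushed to GitHub rather than
# from the local checkout.
FROM python:3.8-alpine
# A branch, release tag or commit hash - e.g. master, a stable tag, or a specific GitHash.
ARG SOURCE_REF=master
WORKDIR /src
RUN apk add --no-cache curl tar && \
    curl -sL https://github.com/orchestracities/ngsi-timeseries-api/archive/${SOURCE_REF}.tar.gz \
      | tar xzf - --strip-components=1 && \
    pip install --no-cache-dir -r requirements.txt
```

Something like `docker build -f docker/Dockerfile --build-arg SOURCE_REF=<commit-hash> .` would then pin the image to an exact pushed commit.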
> Hi @jason-fox, I really like the flexibility this PR adds, that's great. But it also introduces quite a bit of complexity, is this a FIWARE requirement?
Lots of ongoing discussions around Log4Shell and CVEs. The aim of the PR is to reduce the burden of maintaining and testing a safe and current base image elsewhere. It shouldn't be the OrchestraCities dev team building and testing every possible variant of an image. Given our relationship with RedHat, I could see the FIWARE Foundation testing and then pushing UBI images of selected FIWARE components to quay.io for example, where the open CVEs on the image would be publicly readable.
> On a side note, I was wondering how you guys feel about moving away from Docker files and using Nix instead?
Having a Dockerized container image is already a FIWARE Contributor requirement:
> ⓕ At least one `Dockerfile` (hereby named as 'reference `Dockerfile`'), intended to FIWARE Generic Enabler users, MUST be provided. The base image (Ubuntu, CentOS, etc.) for such a `Dockerfile` might depend on each FIWARE Generic Enabler, although some recommendations (see below) are provided. The `Dockerfile` can be at the root folder of the FIWARE Generic Enabler's Repository or under a folder named `docker`.
Nothing stopping you from doing so, but there is no push from the FIWARE Foundation to move to Nix.
> Nothing stopping you from doing so, but there is no push from the FIWARE Foundation to move to Nix.
Cool, I actually missed that having a Docker file is a requirement. I'd rather go w/ Docker then, ignore my suggestion about Nix :-)
i am a bit lost :)
@jason-fox, let me try to recap what I understood:
- you edited the docker file to use different base images and to use the code in the repo.
- for the local testing, you retained the image, but this time it sources from the code base.
correct?
if that's the case:
- having two different docker files brings maintenance complexity and code duplication, is there a way to have a base "docker" file with the common parts and two files with the specific parts?
- it would be better that all docker files stay in the same directory and that we simply name them differently, assuming there is a way to "import": `Dockerfile.base`, `Dockerfile.distro`, and `Dockerfile.dev`
actually, reading the docker docs, we can simply take advantage of the `--target` option:
- we have a single docker file with two stages, one for prod and one for dev, that both build on the same base
- for dev we specify the dev stage as target, for prod the prod stage
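If it helps picture it, here is a minimal sketch of that single-file, two-stage idea; the stage names, base image and source URL are placeholders rather than an agreed design:

```
# Illustrative sketch only: one Dockerfile, two selectable stages.
FROM python:3.8-alpine AS base
WORKDIR /src
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# dev stage: sources come from the local checkout (honours .dockerignore)
FROM base AS dev
COPY . .

# prod stage: sources fetched from GitHub at a given ref, as in the sketch further up
FROM base AS prod
ARG SOURCE_REF=master
RUN apk add --no-cache curl tar && \
    curl -sL https://github.com/orchestracities/ngsi-timeseries-api/archive/${SOURCE_REF}.tar.gz \
      | tar xzf - --strip-components=1
```

CI could then run something like `docker build --target dev -t ql-dev .` for test builds and `docker build --target prod -t ql .` for releases.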
feelings?
hi @jason-fox did you have any chance to look at my comment?