improve tagging & deployment for install-contour/provisioner-working
Currently, make install-contour-working and make install-provisioner-working use a pseudo-random tag based on the running process ID when building an image to load into the local development kind cluster.
The benefit of that approach is that every time make install-contour-working is run, the tag changes, so the Contour Deployment is updated, which triggers a rollout and deploys the new code into the development cluster.
However, this also leaves a large number of uniquely tagged images in the local Docker system over time, which can fill up the disk.
The goal is to come up with a tagging and deployment scheme that enables rapid re-deployment of updated code for both the static Contour and the provisioner use cases, without filling up the dev environment with randomly tagged images.
At least for make install-contour-working, one option would be to use the current git commit SHA for the tag, and to trigger a rollout either by adding a unique label to the pod template spec, or explicitly using kubectl rollout restart.
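As a rough sketch, that could look something like the following (the kind cluster name `contour`, the `projectcontour` namespace, and the container name `contour` are assumptions here, not necessarily what the scripts use):

```sh
# Sketch: tag the working image with the current commit SHA instead of the PID.
TAG="$(git rev-parse --short HEAD)"
docker build -t "ghcr.io/projectcontour/contour:${TAG}" .
kind load docker-image "ghcr.io/projectcontour/contour:${TAG}" --name contour

# Point the Deployment at the new tag; if the SHA hasn't changed since the last
# run, the spec is identical, so force a rollout explicitly.
kubectl --namespace projectcontour set image deployment/contour \
  "contour=ghcr.io/projectcontour/contour:${TAG}"
kubectl --namespace projectcontour rollout restart deployment/contour
```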
Is this issue still up for grabs?
Nobody has had time to work on this so far. The test scripts still generate a random image tag, which makes the pod restart (as a side effect) when re-executed:
https://github.com/projectcontour/contour/blob/c37e4b71ab234fed61c3213fd5ac63924c3505a8/test/scripts/install-contour-working.sh#L47-L49
https://github.com/projectcontour/contour/blob/c37e4b71ab234fed61c3213fd5ac63924c3505a8/test/scripts/install-provisioner-working.sh#L31-L33
What could be a possible solution to this problem? I would like to contribute.
Sounds great @harshil1973!
There are a couple of ideas in the description:
At least for make install-contour-working, one option would be to use the current git commit SHA for the tag, and to trigger a rollout either by adding a unique label to the pod template spec, or explicitly using kubectl rollout restart.
Meaning, any change in the Deployment's pod template would trigger re-creation of the pods - but it would need to be something other than the image tag change we have now, say a custom label with a timestamp. Or just use kubectl to restart the deployment.
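For illustration, the timestamp variant could be a one-line patch against the pod template. This sketch uses an annotation rather than a label, since label values can't contain the colons in an RFC 3339 timestamp, and the `projectcontour.io/restartedAt` key is made up for the example (`kubectl rollout restart` effectively does the same thing under the hood):

```sh
# Sketch: bump a timestamp annotation on the pod template so the Deployment
# controller rolls the pods, without touching the image tag.
kubectl --namespace projectcontour patch deployment contour --type merge --patch \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"projectcontour.io/restartedAt\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}}}}}"
```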
I was thinking of deleting all the images built with the name ghcr.io/projectcontour/contour, but I think we shouldn't delete Docker images from the environment without user consent. What could be the solution to the disk filling up with newly built Docker images?
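For example, something along these lines (just a sketch; `docker image prune` already asks for confirmation unless `-f` is passed, and `xargs -r` is the GNU flag for skipping empty input):

```sh
# Sketch: clean up only the locally built working images, with user confirmation.
# Dangling layers left behind by rebuilds:
docker image prune
# Or remove every local tag of the working image explicitly:
docker images ghcr.io/projectcontour/contour --format '{{.Repository}}:{{.Tag}}' \
  | xargs -r docker rmi
```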
Re-using a single image tag between builds would avoid creating tons of new images (one per build) that get left behind.
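Roughly like this (the fixed `dev` tag is arbitrary, and this assumes the Deployment's imagePullPolicy lets the kubelet use the image loaded into the kind node, e.g. IfNotPresent, rather than always pulling):

```sh
# Sketch: overwrite a single fixed tag on every build instead of minting new ones.
TAG=dev
docker build -t "ghcr.io/projectcontour/contour:${TAG}" .
kind load docker-image "ghcr.io/projectcontour/contour:${TAG}" --name contour

# The image reference in the Deployment never changes, so trigger the rollout
# explicitly; the restarted pods pick up the freshly loaded image.
kubectl --namespace projectcontour rollout restart deployment/contour
```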