Cage testing and building overhaul
In our internal code base, we've noticed a pattern where we have the following files:

- `Dockerfile.in`: This contains two build arguments that are manually set using `sed`, and which are used in `FROM` lines. This can finally be replaced by `ARG` in the latest version of `docker` (see the sketch after this list).
- `docker-compose.yml`: Many individual service repos wind up recreating the info in `placeholders.yml` in a `docker-compose.yml` file for testing purposes.
- Standard CI scripts: Build the image, run the tests (which also needs `docker-compose.yml`), publish the image.
- Standard dev scripts: An isolated `./test.sh` setup script that uses `docker-compose.yml` to run tests without `cage`. I'm not sure this is actually necessary, at least in most cases.
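To make the first point concrete, here's a minimal sketch of the `ARG`-before-`FROM` pattern (the argument names and default values are illustrative, not the ones from our internal `Dockerfile.in`):

```dockerfile
# ARG declared before FROM can be overridden at build time with
# `docker build --build-arg BASE_IMAGE=... --build-arg BASE_TAG=...`,
# so no sed preprocessing step is needed.
ARG BASE_IMAGE=example.com/base
ARG BASE_TAG=latest
FROM ${BASE_IMAGE}:${BASE_TAG}
```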
Some further issues have come up with Rust projects:

- Passing limited Vault credentials to builds. Some builds may need to access either (1) internal package repositories or (2) private git repositories, which requires passing credentials as build args in some cases. We can't pass credentials directly, because `docker` leaks them into the built image, but this doesn't necessarily prevent us from passing short-lived magic cookies that allow requesting credentials from elsewhere (first sketch below).
- Caching compilation artifacts using named Docker volumes (second sketch below).
- Multi-stage builds, where development, building and testing require a large image, but the final build would actually work quite nicely in an Alpine image (third sketch below).
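One way the magic-cookie approach could look (the token-minting step and the `VAULT_TOKEN` name are assumptions, not an existing cage feature): the CI system mints a limited, short-TTL Vault token outside the build, and only that token is passed as a build arg, so the long-lived credentials never enter the image.

```sh
# Hypothetical flow: $SHORT_LIVED_TOKEN is a limited, short-TTL Vault token
# minted by CI. It still shows up in `docker history`, but it expires
# shortly after the build, so leaking it is low-risk.
docker build --build-arg VAULT_TOKEN="$SHORT_LIVED_TOKEN" -t myapp .
```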
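For artifact caching, a `docker-compose.yml` sketch (the volume names and paths are assumptions based on the official `rust` image layout):

```yaml
# Named volumes persist the cargo registry and the target/ directory
# across container runs, keeping incremental rebuilds fast.
services:
  app:
    build: .
    volumes:
      - cargo-registry:/usr/local/cargo/registry
      - target-cache:/src/target
volumes:
  cargo-registry:
  target-cache:
```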
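And a minimal multi-stage sketch, assuming a musl-based Rust toolchain image and a binary named `myapp` (both placeholders):

```dockerfile
# Stage 1: build with the full toolchain; an Alpine-based builder
# produces a musl binary that runs on plain Alpine.
FROM rust:alpine AS builder
WORKDIR /src
COPY . .
RUN cargo build --release

# Stage 2: ship only the compiled binary in a small Alpine image.
FROM alpine
COPY --from=builder /src/target/release/myapp /usr/local/bin/myapp
CMD ["myapp"]
```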
It might be nice to be able to generate `Dockerfile`, `docker-compose.yml`, and maybe CI and dev scripts.

For instance, if you have a global placeholder `postgres` database, you might want to generate a `docker-compose.yml` that includes that placeholder for your tests. Running:

```sh
cage generate tests --placeholder db --placeholder redis
```

...would generate:
```yaml
# docker-compose.yml
services:
  app:
    build: .
    image: <name of app>
    links:
      - db
      - redis
  db:
    image: postgres # imported from pods/placeholders.yml
  redis:
    image: redis # imported from pods/placeholders.yml
```
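For context, the `pods/placeholders.yml` those services are imported from might look something like this (hypothetical contents, assuming placeholder pods use the same docker-compose format as the generated file):

```yaml
# pods/placeholders.yml
services:
  db:
    image: postgres
  redis:
    image: redis
```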
Stuff that goes in CI scripts and a shortcut `test.sh` is going to be very dependent on the CI system, the need for db initialization, test scripts, etc. So that may remain a user task.
This would reduce some duplication while allowing individual apps to be independently testable. It's common for microservices to manage their own datastore schemas, and it's a common desire (in the microservice world) to be able to develop on a service in complete isolation. `cage test` goes against this grain in some ways.
In other words, you could think of the `pods/*` specification that cage manages as a sort of "schema" for your entire architecture: one team may be interested in running services A, B, C, and D together, while another team is interested in using cage to orchestrate services B, D, E, and F.
This is definitely still an interesting issue. Certainly, `--build-arg` can replace `Dockerfile.in`, but the rest of the issues are still of interest.