terraform-provider-docker
Support for `progress` option for docker build
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Description
Currently there is no way to get logs from the image build; you only get the exit code and the command that failed, which is not very helpful for debugging. Adding a `progress` option to the build block would allow setting it to `plain` (the default is `auto`) to get the output of `docker build`.
New or Affected Resource(s)
- docker_image
Potential Terraform Configuration
```hcl
resource "docker_image" "zoo" {
  name = "zoo"

  build {
    path     = "."
    progress = "plain"
    tag      = ["zoo:develop"]

    build_arg = {
      foo : "zoo"
    }

    label = {
      author : "zoo"
    }
  }
}
```
This is actually tricky to implement for two reasons:
1. The https://github.com/docker/cli codebase (where we often look at implementation details) has two different ways of implementing this: one for Docker servers without BuildKit (https://github.com/docker/cli/blob/v20.10.17/cli/command/image/build.go) and one for servers with BuildKit (https://github.com/docker/cli/blob/v20.10.17/cli/command/image/build_buildkit.go). We currently support both types of servers, so we would also have to implement both variants of `progress`. Interestingly, the `progress` flag is actually a BuildKit feature; the "old" Docker server has no such flag (but since we don't output the build progress anyway, this is not really a blocker).
2. The Terraform provider framework does not support writing/logging output to the terminal directly: https://www.terraform.io/plugin/log/writing (see the warning at the top). That means we would also have to work around that. Possible options are:
   - output the build progress as `[INFO]` logs, but then you would also get other log output from the provider and Terraform itself
   - add some kind of `progress_output_file` attribute and write the output to that file?
   - ???
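As a rough sketch of the second option: a hypothetical `progress_output_file` attribute (this does not exist in the provider today; the name and placement are illustrative) might look like this in the build block:

```hcl
resource "docker_image" "zoo" {
  name = "zoo"

  build {
    path     = "."
    progress = "plain"

    # Hypothetical attribute: instead of writing to the terminal
    # (which the plugin framework forbids), the provider would
    # stream the raw build output to this file.
    progress_output_file = "${path.module}/build.log"
  }
}
```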
I am leaning towards the first option. @m4t22, you probably want the `progress` option as a debugging aid, to check whether everything worked correctly. You wouldn't use it in an automated way and parse the output programmatically, would you?
Hello @Junkern, thank you for looking into this! My main pain point is that the user cannot see what exactly went wrong once an image build fails, which is exactly when logs matter most. I don't need to parse them or anything, just to check them. If you can provide any way to check the build logs, that would be awesome.
It doesn't even have to be printed to stdout; if it were possible to view the logs with `docker logs <build container_id>`, that would be cool. Having the build logs in the terminal output would be really cool, though.
`docker logs` is only for running Docker containers; it does not contain build logs for failed builds...
One thing I could look into is passing the build error all the way through to the provider's error output.
If that does not work, providing a build log file might be another way...
I have just tested it with a really simple Dockerfile:

```dockerfile
FROM busybox
RUN ./build.sh
```
and the corresponding Terraform configuration:
```hcl
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

resource "docker_image" "this" {
  name = "terraform-docker-playground"

  build {
    path = "${path.module}"
  }
}
```
The error message was:

```
│ Error: [DEBUG] failed to display jsonmessages stream &jsonmessage.JSONError{Code:127, Message:"The command '/bin/sh -c ./build.sh' returned a non-zero code: 127"}
```
When running `docker build -t fail --progress plain .` I would get (only showing the last step):

```
#5 [2/2] RUN ./build.sh
#5 sha256:8672b5dba5e2604503c47d0e18fe97f0f0f08f445611141571b653e8336ea4d6
#5 0.133 /bin/sh: ./build.sh: not found
#5 ERROR: executor failed running [/bin/sh -c ./build.sh]: exit code: 127
```
Even with `--progress plain`, `docker build` does not show any more output than the provider's error message. What would you expect to be different for debugging purposes?
@m4t22
Hello @Junkern,
The issue I had was with a Node server image build. We run an `npm run compile` script in our Dockerfile, and the only information in Terraform was that this command failed, while in the `docker compose build` output we could identify the actual reason for the failure. It extends the time we need to spend on debugging, because to debug we had to run `docker compose` either way.
> `docker logs` is only for running Docker containers; it does not contain build logs for failed builds... One thing I could look into is passing the build error all the way through to the provider's error output. If that does not work, providing a build log file might be another way...
That's not true: if a container is in the `exited` state, you can still read its logs, and that is the case when a build fails. If an image build fails, the build container is left in the `exited` state. Executing `docker logs` just reads from a log file located in the filesystem.
Reference: https://stackoverflow.com/questions/36666246/docker-look-at-the-log-of-an-exited-container
Build progress would be great, but also upload progress.
@Vicar-of-AI, as already mentioned, it is sadly not possible to write build/upload progress directly to stdout (related Terraform SDK issue: https://github.com/hashicorp/terraform-plugin-sdk/issues/145).
The only possible way would be to output this as `INFO` or `DEBUG` logs, and the user would then have to set the log level accordingly when running `terraform`, but this is not very intuitive from my point of view.
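To illustrate the log-level workaround: assuming (hypothetically) that the provider emitted the build progress as `INFO` logs, the user would have to raise Terraform's log level to see them, roughly like this:

```shell
# TF_LOG raises Terraform's log verbosity; INFO would surface the
# provider's log lines, but also Terraform's own internal logs,
# which is why this is not a very intuitive user experience.
TF_LOG=INFO terraform apply
```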
@m4t22 You are right about the build containers. Still, we are relying on the `docker` CLI here, and with `docker build -t fail --progress plain .` we don't get any more output than the error message of this provider...
In general I am leaning towards not supporting a build/progress output, because it is really hard to implement. I am open to any suggestions, though!
> The only possible way would be to output this as `INFO` or `DEBUG` logs and then the user would have to set the log levels accordingly while running `terraform`, but this is not very intuitive from my point of view..
@Junkern it may not be intuitive, but the end result would be tremendously valuable. As it is now, I have to tail the Docker daemon logs to get any context at all (and the context the daemon logs provide is minimal).
Would it not be sufficient for Docker builds to print the exact build command to stdout, so that the build can be reproduced in case of errors?
In practice, one only looks into logs when something went wrong or when testing something.
For testing, it is better anyway to use the `docker` binary directly first and make sure everything works before automating things with this provider.
Therefore, given the constraints on writing to stdout explained by @Junkern in the first comment, I think it would be sufficient (and much better than nothing) to have the build logs written to a configured file.
> ... For testing, it is better anyway to use the `docker` binary directly first and make sure everything works before automating things with this provider. ...

Taking into account the complexity of Terraform builds, with their variables and expressions, it is not always obvious which build arguments are actually passed to `docker build` and whether they match expectations. And since the build logs do not seem easy to collect, printing the exact `docker build` command in case of build errors is a dead simple way to save time finding the root cause. Even if you have the build logs, you still need to know the exact parameters that were used in order to reproduce the build.
@pspot2 We have a Docker image that builds locally with `docker build` but fails in CI when built through this provider from Terraform.
The fact that we have no visibility into the error makes it really hard to troubleshoot and disqualifies this provider for us.
It seems we have no choice but to run `docker build` via `local-exec`. It's a shame, because this provider would be so much nicer, as we're using a high-level Terraform module that also takes care of creating an ECR repo, etc.
Fully agreed with @cristian-zebracat here: exactly the same problem, and I will have to resort to `local-exec` as well.
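For reference, a minimal sketch of the `local-exec` workaround discussed in this thread (the resource name, image tag, and trigger are illustrative, not from the provider):

```hcl
resource "null_resource" "image_build" {
  triggers = {
    # Rebuild whenever the Dockerfile changes.
    dockerfile_hash = filemd5("${path.module}/Dockerfile")
  }

  provisioner "local-exec" {
    # Unlike the provider's build, the output of docker build
    # (including --progress plain) streams straight to the terminal,
    # so build failures are fully visible.
    command = "docker build --progress plain -t my-image:latest ${path.module}"
  }
}
```

The trade-off is losing the provider's image lifecycle management (and integration with modules that create ECR repos etc.), which is exactly why people in this thread would prefer native log output instead.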