
Support for Docker buildx

yuriy-yarosh opened this issue 1 year ago • 8 comments

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

It would be great if this Docker provider supported docker buildx.

I'm primarily interested in the kubernetes driver for buildx: it would allow distributed builds across multiple BuildKit builders launched as Kubernetes pods, which speeds up builds drastically.

Alternatively, it would make sense to run BuildKit inside a Docker container itself, with the common docker-container driver.

The general idea is to provision workers with buildx create and terminate them after some idle timeout.

This is a very important feature: it would allow multi-arch builds using this Docker provider, leveraging QEMU-based in-container emulation if necessary.

New or Affected Resource(s)

  • docker_image - will get a new buildx block for docker buildx support
  • docker provider settings will need a buildx section

Potential Terraform Configuration

Added buildx settings:

provider "docker" {
  host = "unix:///var/run/docker.sock"
  
  buildx = {
    buildkitd = {
      flags = []
      config = "./buildkitd.toml"
    }
    
    kubernetes = {
      image = "buildkitd:latest"
      namespace = "default"
      replicas = 10
      
      requests = {
        cpu = "2"
        memory = "2G"
      }
      
      limits = {
        cpu = "4"
        memory = "4G"
      }

      nodeselector = "kubernetes.io/arch=arm64" # buildkit can handle multiple arch'es automatically, just an example
      tolerations = "nvidia_gpu=exists" # just an example ...
      rootless = true # obviously
      loadbalance = "sticky" # to stick layer hashes to specific nodes, so layer cache could be used
      
      qemu = {
        install = true
        image = "qemu:latest"
      }
    }
  }
}
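
For the docker-container fallback mentioned in the description, a minimal provider-level sketch might look like this. The buildx block and its driver attribute are illustrative, mirroring the proposal above, not an existing provider schema:

```hcl
provider "docker" {
  host = "unix:///var/run/docker.sock"

  buildx = {
    driver = "docker-container" # hypothetical: run BuildKit in a local container instead of Kubernetes pods

    buildkitd = {
      flags  = []
      config = "./buildkitd.toml"
    }
  }
}
```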

Nothing really changes in the image-build department:

resource "docker_image" "this" {
  name = "zoo"
  buildx { 
    ## everything as usual
    
    path = "."
    tag  = ["zoo:develop"]
    
    build_arg = {
      foo : "zoo"
    }
    
    label = {
      author : "zoo"
    }
  }
}
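
A multi-arch variant of the same resource might then look like the sketch below. The platforms attribute is hypothetical, mirroring docker buildx build --platform:

```hcl
resource "docker_image" "multiarch" {
  name = "zoo"

  buildx {
    path      = "."
    tag       = ["zoo:develop"]
    platforms = ["linux/amd64", "linux/arm64"] # QEMU emulation fills in where no native node exists
  }
}
```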

References

Kubernetes driver for Docker BuildX (Medium article).

yuriy-yarosh avatar Jul 25 '22 23:07 yuriy-yarosh

Docker contexts will no longer support kubernetes, but it should not affect the buildx kubernetes driver... at least NTT uses it, AFAIK.

yuriy-yarosh avatar Jul 26 '22 01:07 yuriy-yarosh

Thanks for submitting this issue! At first glance, this seems like a feature that needs thorough analysis and implementation; I might be wrong, though. I currently do not have the capacity to implement this, especially since there are many other upvoted issues that have been open for a while and also need attention.

Junkern avatar Jul 27 '22 15:07 Junkern

@Junkern thank you, Martin. Attention appreciated.

+/- there's not much to implement, as it's just a CLI plugin, and the respective check should be performed prior to initiating the build. If no buildx CLI plugin is present although its usage has been configured in the provider settings, buildx should be skipped... and that's pretty much it.

I can contribute this if/when I get some free time.

yuriy-yarosh avatar Jul 27 '22 16:07 yuriy-yarosh

It looks like this was already implemented in https://github.com/kreuzwerker/terraform-provider-docker/commit/78c42d7657cd9e1ab6216fce2c798d9b09980550

See https://github.com/kreuzwerker/terraform-provider-docker/blame/3b904484f4c8fce8234c451f8927c8b14260b4f7/internal/provider/resource_docker_image_funcs.go#L359

MFAshby avatar Dec 21 '22 10:12 MFAshby

@MFAshby to a certain degree, yes. My goal is to get proper multi-arch concurrent builds with this provider, and with the new 3.0.1 release and the introduction of the platform build arg, it should do the trick.

Although maybe we'll also need platforms? Not sure right now; it needs investigation - I'll drop a line or two in here if something goes wrong.

yuriy-yarosh avatar Jan 30 '23 18:01 yuriy-yarosh
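
For reference, a sketch of how the 3.x build block's platform argument can be used for a single-architecture cross-build today. Attribute names follow the provider's documented build block; treat the exact schema as an assumption:

```hcl
resource "docker_image" "arm64" {
  name = "zoo:develop-arm64"

  build {
    context  = "."
    platform = "linux/arm64" # cross-builds one target arch; concurrent multi-arch would still need buildx
  }
}
```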

This would also allow support for pulling from private repos using the Dockerfile RUN command like so:

RUN --mount=type=ssh

JohnKoss avatar Aug 23 '23 17:08 JohnKoss
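
A hypothetical way the provider could surface this: the ssh attribute below is illustrative only, mirroring docker buildx build --ssh default:

```hcl
resource "docker_image" "private_deps" {
  name = "app:latest"

  buildx {
    path = "."
    ssh  = ["default"] # hypothetical: would forward the host SSH agent to RUN --mount=type=ssh steps
  }
}
```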

I would also like to use multi-stage builds and to define the output of docker buildx build, for example to export some files from an image to the local filesystem for use as input in another resource.

sprat avatar Nov 15 '23 21:11 sprat
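
A sketch of what that export use case might look like: the target and output attributes are hypothetical, mirroring docker buildx build --target ... --output type=local,dest=...:

```hcl
resource "docker_image" "exporter" {
  name = "builder:latest"

  buildx {
    path   = "."
    target = "artifacts" # hypothetical: multi-stage build stage whose filesystem gets exported

    output = {
      type = "local"
      dest = "./artifacts" # files would land here for use as input to other resources
    }
  }
}
```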

+1 for buildx support. I cannot build an arm64 image for AWS Lambda inside an x86 GitLab CI runner due to this issue.

svdimchenko avatar Apr 29 '24 19:04 svdimchenko