
Support provisioning using `docker exec`

Open clofresh opened this issue 10 years ago • 14 comments

Instead of requiring an SSH connection to run a provisioner on a Docker container, it would be nice to just run `docker exec`, so that we don't need to set up an SSH daemon in the container.

(I know, I know: I'm not supposed to run a provisioner on a Docker image; configuration should be done at build time. But I'm trying to mirror my prod installation, which isn't Docker.)
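A common workaround for this today is to shell out to `docker exec` from a `local-exec` provisioner. This is only a sketch: it assumes the Docker CLI is installed on the machine running Terraform, and the image and container names are illustrative, not taken from this thread.

```hcl
# Workaround sketch, not an official pattern: run `docker exec` via
# local-exec instead of connecting to the container over SSH.
# Assumes the Docker CLI is available where Terraform runs.
resource "docker_container" "foo" {
  name  = "foo"
  image = "ubuntu:latest" # illustrative image

  provisioner "local-exec" {
    command = "docker exec ${self.name} sh -c 'echo hello world'"
  }
}
```

Because `local-exec` runs on the Terraform host rather than over a connection, no SSH daemon is needed inside the container.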

clofresh avatar Jan 15 '16 03:01 clofresh

Would a hypothetical new docker-exec provisioner suit your use-case?

resource "docker_container" "foo" {
    // ...

    provisioner "docker-exec" {
        inline = [
            "echo hello world"
        ]
    }
}

apparentlymart avatar Jan 17 '16 01:01 apparentlymart

Yep that'd work!

clofresh avatar Jan 17 '16 01:01 clofresh

I think it would be a terrific feature!

But wouldn't a hypothetical new docker connection type be a better solution? It could be used by both the remote-exec and file provisioners, allowing files to be uploaded to the container as well.

I had a look at the Docker provider code, and it uses https://github.com/fsouza/go-dockerclient as its Docker client. This library seems to support both file uploads and command execution.

What do you think about this? Is this in line with the concept of connection?
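For illustration, a hypothetical `docker` connection type (none of this exists; the `type` value and `container` argument are made up) might let the existing remote-exec and file provisioners work against a container like this:

```hcl
resource "docker_container" "foo" {
  # ...

  # Hypothetical: a "docker" connection type is not implemented today.
  connection {
    type      = "docker"
    container = self.name
  }

  # Existing provisioners would then reuse the connection.
  provisioner "remote-exec" {
    inline = ["echo hello world"]
  }

  provisioner "file" {
    source      = "conf/app.conf"
    destination = "/etc/app/app.conf"
  }
}
```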

loicalbertin avatar Mar 16 '16 16:03 loicalbertin

HashiCorp folks working on Terraform (@phinze, @mitchellh, @catsby, @jen20, ...), what do you think about this idea of a new docker connection type?

Thanks in advance for your feedback. Loïc

loicalbertin avatar Mar 24 '16 11:03 loicalbertin

@jen20 I'm interested in contributing such a feature, but I'd like to discuss it a bit first, and especially to check whether the docker connection is actually the right way to implement this.

loicalbertin avatar Mar 31 '16 20:03 loicalbertin

What happened to this? Is it just easier to use Docker Machine and Docker Swarm, with no need for HashiCorp? Hmmm? Get your fingers out.

richard-senior avatar Apr 13 '19 15:04 richard-senior

> Would a hypothetical new docker-exec provisioner suit your use-case?
>
>     resource "docker_container" "foo" {
>         // ...
>
>         provisioner "docker-exec" {
>             inline = [
>                 "echo hello world"
>             ]
>         }
>     }

Is this going to be implemented? It's a great idea.

mesea-mms avatar Sep 23 '20 09:09 mesea-mms

:+1: for a docker connection type. Either running on existing containers, or being able to create new containers based off of an image, would be appreciated. Same with Kubernetes pods.

mjsir911 avatar Jun 21 '24 16:06 mjsir911

The concept of provisioners has since emerged as largely a mistake: they don't really do anything that a managed resource type can't do, and Terraform can't track them well because they are not stateful, so Terraform ends up having to make worst-case assumptions, such as treating the failure of any provisioner as meaning the entire resource object is damaged ("tainted") and therefore needs replacing. The current provisioners remain largely for backward compatibility, and because they have only minimal dependencies in the Terraform codebase they don't cause too many maintenance headaches.

While I don't intend this comment as a "no, absolutely not, never", I find it unlikely that what this issue suggested would be implemented exactly as described. Instead, this is something I would suggest to implement as a new managed resource type in a provider, which allows specifying something to execute in Docker both during its create and during its delete actions. It could also potentially allow executing something on update, but that's typically harder to design because it's unclear what "update" means for an object representing arbitrary imperative actions.

That means that the docker dependencies only need to be downloaded for those who choose to use that particular provider, and that Terraform can track (in its usual way) whether the action has already been taken, propose to replace it when needed, etc.
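As a sketch of that suggestion: the `docker_exec` resource type below and all of its arguments are entirely hypothetical, not part of any existing provider, and are shown only to make the managed-resource approach concrete.

```hcl
# Hypothetical resource type; "docker_exec" does not exist in the
# Docker provider. It models imperative commands as a managed
# resource, with separate commands for the create and delete actions.
resource "docker_exec" "setup" {
  container_id = docker_container.foo.id

  create_command  = "sh -c 'echo hello world'"
  destroy_command = "sh -c 'echo goodbye'"
}
```

Because this is an ordinary resource, Terraform would record in state whether the commands had run, and could propose replacement when the configured commands change.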

apparentlymart avatar Jun 24 '24 16:06 apparentlymart

Docker, Kubernetes, Linux containers, and many hypervisors support APIs to manipulate files and execute commands inside a container or VM. I think the best way is to extend the provisioner connection block with additional connection types: a provider could export a supported connection type for provisioners and provide connectivity to the container. For example:

  • The Docker provider could export connection type `docker`
  • The Kubernetes provider could export connection type `kube`
  • The LXD provider could export connection type `lxd`
  • The Incus provider could export connection type `incus`
  • etc.

tregubovav-dev avatar Jun 25 '24 22:06 tregubovav-dev

The "communicator" abstraction (which is what connection blocks are configuring) is poorly specified and already very strained from a design standpoint. It was originally designed only for SSH, and had WinRM retrofitted in a clumsy way where the connection content gets decoded by different code depending on the type but is nonetheless expected to follow the same schema in both cases. I don't think that abstraction has any future; it is preserved primarily for backward compatibility.

For a system that has a reasonable API for writing files into it, our current best practice is to have the provider for that system offer a managed resource type representing a file in that system, such as `local_file` for the local filesystem, `aws_s3_object` for Amazon S3, and so forth.

This allows each system to tailor the resource type schema to suit the capabilities of the remote system, rather than trying to place everything behind an unnecessary abstraction that is a poor fit for some systems. It's an especially poor fit for systems that require additional configuration beyond a hostname to connect to and SSH-like credentials, because those details are a fixed part of the connection block schema that all communicators must use.
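Both of the resource types mentioned above are real and follow this pattern, each with a schema tailored to its system (the bucket name below is an assumed placeholder):

```hcl
# A file on the local filesystem, managed as an ordinary resource.
resource "local_file" "motd" {
  filename = "${path.module}/motd.txt"
  content  = "hello world\n"
}

# The same idea for Amazon S3: the schema carries S3-specific
# arguments (bucket, key) rather than a generic connection block.
resource "aws_s3_object" "motd" {
  bucket  = "my-example-bucket" # assumed bucket name
  key     = "motd.txt"
  content = "hello world\n"
}
```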

There isn't yet an SSH provider for writing files over SSH or SFTP, but that's only because we already have the legacy provisioner/communicator mechanism and so there's not been any strong need for it. We are intending to build an SSH provider for https://github.com/hashicorp/terraform/issues/8367 and once that exists it would be a good home for an ssh_scp_file and/or ssh_sftp_file resource type that would be the new recommended way to represent a file written over SSH.
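To illustrate, a future `ssh_scp_file` resource type might look something like the following; since no SSH provider exists yet, the resource name and every argument here are hypothetical, and the host, user, and key values are placeholders.

```hcl
# Hypothetical: no SSH provider or ssh_scp_file resource exists yet.
resource "ssh_scp_file" "motd" {
  host        = "203.0.113.10"           # placeholder address
  user        = "admin"                   # placeholder user
  private_key = file("~/.ssh/id_ed25519") # placeholder key path

  destination = "/etc/motd"
  content     = "hello world\n"
}
```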

There is already a Kubernetes provider, but I don't know if it exposes the ability to write files into a container. If it doesn't then that seems like a reasonable feature request for that provider.

I'm not seeing any significant benefit to adding "file writer" or "command runner" as first-class concepts for providers, since resources are already a broad enough abstraction to encompass both, and already have considerable design investment to integrate them well into Terraform's plan/apply workflow, whereas the provisioner features have been intentionally neglected for many years because the design of that concept is a poor fit for everything else Terraform does. If we do add something new in this area, I expect that it will be a better-designed replacement for provisioners/communicators, rather than an evolution of that design.

apparentlymart avatar Jun 25 '24 23:06 apparentlymart

Hello Martin, nice to see HashiCorp's vision for provisioner functionality. Please declare in the Terraform language documentation that provisioner functionality is obsolete and should be used only for compatibility purposes. Based on your comment, it appears that we, as customers, need to take the initiative and ask providers' maintainers/developers to implement the corresponding functionality, correct?

tregubovav-dev avatar Jun 26 '24 00:06 tregubovav-dev

The current recommendation is that provisioners are a last resort, but they cannot be removed during the 1.x series because they are protected by compatibility promises. The Terraform team intends to preserve the current behaviors but to not change them, as described in the Provisioners section of the Terraform v1.x compatibility promises. There is no reason you cannot continue using the functionality that's already present if it already meets your needs.

If you want any new functionality that is related to an external system that is integrated with Terraform using a Terraform provider (which includes both Docker and Kubernetes) then yes, the appropriate place to record a feature request for any new functionality related to that external system is in the GitHub repository for that system's provider. If there is currently no such provider (as is the case for SSH) then you could open a feature request for such a provider to exist in this repository, but as I mentioned we are already intending to introduce an SSH provider as part of another project so there is no need to open a separate feature request for that one.

We do not intend to add any new target-platform-specific functionality to Terraform Core, because Terraform Core is supposed to be a target-agnostic runtime engine that integrates with other systems using providers. (Existing integrations are retained for backward compatibility but are likely to be deprecated in favor of provider-defined functionality at some point.)

apparentlymart avatar Jun 26 '24 01:06 apparentlymart

Thank you, Martin, for the clear explanation!

tregubovav-dev avatar Jun 26 '24 02:06 tregubovav-dev