terraform-provider-lxd
How to perform "remote-exec" inside an LXD container
Terraform version: Terraform v0.9.6; terraform-provider-lxd: 0.9.5 beta1
As part of container creation, I push some config files and a service startup script (Upstart). These cannot be baked into the LXD image.
It would be quite useful if I could target command execution directly inside the LXD container.
For now, I am making do with a "remote-exec" provisioner on the LXD host, where I run:

```
lxc exec mylxd1 -- telinit 2   # makes the Upstart script start
```
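A minimal sketch of that workaround in Terraform, assuming SSH access to the LXD host (the hostname, user, and container name here are illustrative placeholders):

```hcl
# Run a command on the LXD host over SSH, then hop into the
# container with `lxc exec`. Connection details are hypothetical.
resource "null_resource" "container_init" {
  connection {
    type = "ssh"
    host = "lxd-host.example.com" # placeholder LXD host
    user = "ubuntu"
  }

  provisioner "remote-exec" {
    inline = [
      "lxc exec mylxd1 -- telinit 2",
    ]
  }
}
```

This keeps the hop to the container explicit, but it still requires SSH access to the host, which is exactly what a native in-container exec would avoid.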
Regards, Shantanu
@jtopjian and I have discussed this a little in the past.
To get the native remote-exec working via the LXD websocket exec channel, we'd need to add some code to Terraform core. That project may or may not be willing to accept it, but it's extremely unlikely while this provider is a separate project.
@jtopjian had floated the idea of an lxd_remote_exec pseudo-resource as a workaround in the meantime; however, I don't believe he has started on it yet. The plan is to eventually contribute this provider's code to the main Terraform project and then tackle the websocket exec channel.
+1 for lxd_remote_exec from my side.
I already have a setup where I could test this.
I work around this by doing something like:
```hcl
provisioner "local-exec" {
  command = <<EXEC
lxc exec ${var.instance_name} -- bash -xe -c '
echo "this runs inside the container!"
'
EXEC
}
```
```hcl
locals {
  cloud-init-config = <<EOF
#cloud-config
disable_root: 0
ssh_authorized_keys:
- ${file("conf/terraform.pub")}
EOF
}
```
```hcl
resource "lxd_cached_image" "ubuntu1804" {
  source_remote = "ubuntu"
  source_image  = "18.04"
}
```
```hcl
resource "lxd_profile" "terraform_default" {
  config = {
    "user.user-data" = local.cloud-init-config
  }

  description = "Default LXD profile created by terraform"
  name        = "terraform_default"

  device {
    name = "root"
    properties = {
      "path" = "/"
      "pool" = "default"
    }
    type = "disk"
  }

  device {
    name = "eth0"
    properties = {
      "nictype" = "bridged"
      "parent"  = "br1"
    }
    type = "nic"
  }
}
```
```hcl
resource "lxd_container" "test1" {
  name      = "test1"
  image     = lxd_cached_image.ubuntu1804.fingerprint
  ephemeral = false

  config = {
    "boot.autostart" = true
  }

  limits = {
    cpu = 2
  }

  profiles = ["terraform_default"]
  # ...
}
```
This only works for cloud-init-enabled images, though, which is very inconvenient.
@jtopjian I know this issue is old, but I still see great benefit in adding an lxd_remote_exec resource, and at first look it doesn't seem like a big problem, since this is something the LXD API supports. Have you tried it before? What stopped you from doing it?
Yeah, it's an idea that has been floated around a lot before. We've looked into it in two different ways:

1. The idempotency of a remote exec resource becomes difficult to manage. Each resource block would need a conditional property/parameter to ensure the block isn't executed on every run. If you're familiar with Puppet, this is similar to effectively managing an exec block. It's not impossible to do, but it soon becomes apparent that using a provisioner would be easier and more manageable.
2. I created this a long time ago and kept it around for reference: https://github.com/terraform-lxd/terraform-provider-lxd/pull/115

When I spoke to some Terraform devs about this years ago, the idea itself made sense, but there wasn't a cleaner way of implementing it (having two of the same binaries exist with different names). I've been away from Terraform for a long time now, so I don't know if there is a better way to implement something like this. If there is, then I'd be in favour of it.
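For context on the idempotency concern, the usual workaround today is the stock null_resource with a triggers map, so the command re-runs only when its inputs change. A rough sketch (the container reference and command are illustrative, not from this thread):

```hcl
# Re-run the exec only when the container or the command changes.
# `triggers` is Terraform's built-in change-detection mechanism for
# null_resource; everything else here is an illustrative placeholder.
resource "null_resource" "exec_once" {
  triggers = {
    container = lxd_container.test1.name
    command   = "telinit 2"
  }

  provisioner "local-exec" {
    command = "lxc exec ${lxd_container.test1.name} -- telinit 2"
  }
}
```

This gets close to what an lxd_remote_exec resource would do, but the conditional re-run logic lives in user configuration rather than the provider, which is the manageability problem described in point 1.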
Recently, LXD introduced a new API extension for /1.0/containers/:

Source: https://documentation.ubuntu.com/lxd/en/latest/api-extensions/#container-exec-recording

> `container_exec_recording`
> Introduces a new Boolean `record-output` parameter to /1.0/containers/
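Assuming a server with this extension, an exec request with output recording might look roughly like the following via `lxc query` (the container name and command are illustrative; check the REST API docs for the exact payload your server version expects):

```
# Ask LXD to run a command and record its output rather than
# streaming it over a websocket. Payload fields are a sketch of
# the LXD REST API exec request; adjust for your server version.
lxc query -X POST -d '{
  "command": ["uname", "-a"],
  "record-output": true,
  "wait-for-websocket": false,
  "interactive": false
}' /1.0/containers/test1/exec
```

If this works as documented, it could give a future lxd_remote_exec resource a way to capture command output without Terraform core needing websocket support.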