Bastion key and host key must be the same for provisioning to work
As per my question on Terraform's Google group (https://groups.google.com/forum/#!topic/terraform-tool/4Pmq5W8AXDo), I am unable to provision an AWS instance via a bastion host unless I use the same key for both instances.
I was unable to locate a similar issue. If one exists, feel free to reference it.
Terraform Version
Terraform v0.6.10
Affected Resource(s)
- aws_instance
- provisioner connection
- provisioner "file"
Terraform Configuration Files
resource "aws_instance" "locust" {
  instance_type          = "${var.instance_type}"
  ami                    = "${var.ami_id}"
  subnet_id              = "${element(split(",", terraform_remote_state.eng.output.private_subnet_ids), 2)}"
  vpc_security_group_ids = ["${aws_security_group.locust.id}"]
  key_name               = "${var.key}"
  iam_instance_profile   = "${aws_iam_instance_profile.locust.id}"

  connection {
    user         = "ec2-user"
    key_file     = "${var.key}.pem"
    bastion_host = "${terraform_remote_state.eng.output.nat_eip_public_ips}"
    bastion_key  = "${var.bastion_key}.pem"
    agent        = false
  }

  provisioner "file" {
    source      = "config/locust_config.json"
    destination = "/tmp/config.json"
  }

  provisioner "file" {
    source      = "scripts/${var.locust_script}"
    destination = "/tmp/install.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/install.sh",
      "/tmp/install.sh ${var.environment} ${var.consul_ip}"
    ]
  }
}
Expected Behavior
Host should have been provisioned. This occurs if both the bastion host and aws_instance.locust use the same key.
Actual Behavior
aws_instance.locust: Provisioning with 'file'...
Error applying plan:
1 error(s) occurred:
* Error connecting to bastion: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Steps to Reproduce
- terraform apply -var-file=var-files/environment_vars
Important Factoids
None I know of.
References
https://groups.google.com/forum/#!topic/terraform-tool/4Pmq5W8AXDo
@tyrostone — It appears you're setting bastion_key to the path of the key, but the documentation indicates that should be the actual contents of the key via the file() interpolation function. It's possible the docs were updated since you ran into this problem.
Does using the key contents there fix it for you?
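For reference, a minimal sketch of what that would look like in the connection block above (here var.bastion_key_path is a hypothetical variable holding the path to the bastion key, since the original config builds the path as "${var.bastion_key}.pem"):

connection {
  user         = "ec2-user"
  key_file     = "${var.key}.pem"
  bastion_host = "${terraform_remote_state.eng.output.nat_eip_public_ips}"
  # Per the docs referenced above, pass the key contents via file(), not the path itself
  bastion_key  = "${file(var.bastion_key_path)}"
  agent        = false
}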
@erydo - I believe I attempted this and it did not work. That being said, when I set bastion_key to the same variable as host_key, this works. If your theory were correct, that should fail as well, since both are strings, not the file() interpolation function.
@tyrostone The host key here is set with key_file, which does refer to a file path. I'm not sure if terraform falls back to the host key if the bastion key isn't accepted, but if it does then that would cause success if the bastion were accepting the ${var.key}.pem. According to the docs the bastion_key default value is that of the host key.
Anyway, I'm not sure—this issue just caught my eye while doing research and I noticed the interpolation difference so wanted to point that out.
I'm not sure if this is true in this particular case, but a while ago we switched a bunch of arguments that previously took filenames to instead take literal file contents, with a backwards-compatibility hack to still accept the explicit filename. Possibly that's what's going on here.
+1
I've just spent ages trying to work this out using 0.7.0 and 0.7.4.
We try to use separate variables to provide the SSH key files. Even if I point both variables at the same key, the bastion host connection fails; both settings have to use the same variable and the same key. I think it also needs agent = false.
The working config looks like this:
provisioner "file" {
  source      = "${path.module}/provision.sh"
  destination = "/tmp/provision.sh"

  connection {
    agent               = false
    bastion_host        = "${var.ras_server}"
    bastion_user        = "${var.ras_server_user}"
    bastion_port        = 222
    bastion_private_key = "${file(var.ec2_user_ssh_key_path)}"
    user                = "root"
    host                = "${self.private_ip}"
    private_key         = "${file(var.ec2_user_ssh_key_path)}"
    timeout             = "2m"
  }
}
We would like:
provisioner "file" {
  source      = "${path.module}/provision.sh"
  destination = "/tmp/provision.sh"

  connection {
    bastion_host        = "${var.ras_server}"
    bastion_user        = "${var.ras_server_user}"
    bastion_port        = 222
    bastion_private_key = "${file(var.personal_ssh_key_path)}"
    user                = "root"
    host                = "${self.private_ip}"
    private_key         = "${file(var.ec2_user_ssh_key_path)}"
    timeout             = "2m"
  }
}
+1
I have the same problem. I would like to reach a bunch of machines through the bastion server but using different private keys. I think it's more secure this way.
Thank you!
Just rename bastion_key to bastion_private_key
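A minimal sketch of that suggestion, assuming separate keys for the bastion and the target host (the variable names here are hypothetical):

connection {
  user                = "ec2-user"
  host                = "${self.private_ip}"
  private_key         = "${file(var.host_key_path)}"
  bastion_host        = "${var.bastion_host}"
  bastion_user        = "ec2-user"
  # bastion_private_key replaces the older bastion_key argument
  bastion_private_key = "${file(var.bastion_key_path)}"
  agent               = false
}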
If using bastion_private_key, where does the private key file sit? Can it sit on disk alongside the Terraform files?
Any updates on this? Looks like I face the same issue. Terraform version 0.9.4.
This worked well for me on v0.9.5.
I couldn't get ssh-agent integration to work on OS X, so I had to remove the passphrase from the bastion_private_key ;-)
Hello guys,
I have a scenario that provisions two AWS EC2 instances in the same VPC:
- 1 EC2 instance in a public subnet
- 1 EC2 instance in a private subnet
- Connect to the private subnet instance through the public subnet instance in order to provision Chef on it
The bastion host is provisioned in the public subnet and the application server in the private subnet, but the connection from the bastion host to the private subnet instance fails.
provisioner "file" {
  source      = "test.txt"
  destination = "/home/ec2-user/test.txt"

  connection {
    agent               = false
    bastion_host        = "${var.bastion_host}"
    bastion_user        = "ec2-user"
    bastion_port        = 22
    bastion_private_key = "${file("${path.module}/../keys/${var.serverinfo["bastion_private_key"]}")}"
    user                = "${var.serverinfo["user"]}"
    private_key         = "${file("${path.module}/../keys/${var.serverinfo["private_key"]}")}"
    host                = "192.168.2.10"
    timeout             = "2m"
  }
}
It connects to the public subnet instance without problems, but fails to connect to the private subnet instance.
Error applying plan:
1 error(s) occurred:
* module.xyz.aws_instance.instance: 1 error(s) occurred:
* Error connecting to bastion: dial tcp: lookup 74D93920-ED26-11E3-AC10-0800200C9A66 on 10.195.99.194:53: no such host
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
The lookup on 10.195.99.194:53 fails with "no such host", but my private subnet instance IPs start with 192.168.2.x and the public subnet instance IPs start with 192.168.0.x.
Hi guys,
I have exactly the same scenario described by @ankitkl, but instead of passing a fixed private IP I used ${self.private_ip}. Any news about it?
There's pretty much no point in connecting through a bastion node if both keys have to be the same.
Bump.
Also, to add to this: while you can use bastion_private_key with a decrypted key, it seems there's no way to pass a key to the host and let the bastion use the agent (it's either key/key or agent/agent), due to the fallback to private_key when bastion_private_key isn't specified.
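To illustrate, a sketch of the mixed setup being described, with hypothetical variable names; per the comment above, Terraform reportedly falls back to private_key for the bastion instead of using the agent:

connection {
  user        = "ec2-user"
  host        = "${self.private_ip}"
  # Key for the target host
  private_key = "${file(var.host_key_path)}"

  bastion_host = "${var.bastion_host}"
  bastion_user = "ec2-user"
  # No bastion_private_key: the intent is to authenticate to the bastion via ssh-agent,
  # but the fallback means private_key gets used for the bastion as well
  agent = true
}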
I can still reproduce this issue on:
Terraform v0.12.24
+ provider.vsphere v1.16.2
.
.
2020-06-17T19:27:28.218+0100 [DEBUG] plugin.terraform: remote-exec-provisioner (internal) 2020/06/17 19:27:28 [DEBUG] Connecting to 192.168.4.230:22 for SSH
2020-06-17T19:27:28.218+0100 [DEBUG] plugin.terraform: remote-exec-provisioner (internal) 2020/06/17 19:27:28 [DEBUG] Connecting to bastion: 81.143.215.2:9999
vsphere_virtual_machine.BluePrintDev (remote-exec): Connecting to remote host via SSH...
vsphere_virtual_machine.BluePrintDev (remote-exec): Host: 192.168.4.230
vsphere_virtual_machine.BluePrintDev (remote-exec): User: packman
vsphere_virtual_machine.BluePrintDev (remote-exec): Password: false
vsphere_virtual_machine.BluePrintDev (remote-exec): Private key: true
vsphere_virtual_machine.BluePrintDev (remote-exec): Certificate: true
vsphere_virtual_machine.BluePrintDev (remote-exec): SSH Agent: true
vsphere_virtual_machine.BluePrintDev (remote-exec): Checking Host Key: true
vsphere_virtual_machine.BluePrintDev (remote-exec): Using configured bastion host...
vsphere_virtual_machine.BluePrintDev (remote-exec): Host: 81.143.215.2
vsphere_virtual_machine.BluePrintDev (remote-exec): User: iac4me
vsphere_virtual_machine.BluePrintDev (remote-exec): Password: false
vsphere_virtual_machine.BluePrintDev (remote-exec): Private key: true
vsphere_virtual_machine.BluePrintDev (remote-exec): Certificate: true
vsphere_virtual_machine.BluePrintDev (remote-exec): SSH Agent: true
vsphere_virtual_machine.BluePrintDev (remote-exec): Checking Host Key: true
2020/06/17 19:27:28 [TRACE] dag/walk: vertex "root" is waiting for "provisioner.remote-exec (close)"
2020/06/17 19:27:28 [TRACE] dag/walk: vertex "meta.count-boundary (EachMode fixup)" is waiting for "vsphere_virtual_machine.BluePrintDev"
2020/06/17 19:27:28 [TRACE] dag/walk: vertex "provisioner.remote-exec (close)" is waiting for "vsphere_virtual_machine.BluePrintDev"
2020/06/17 19:27:28 [TRACE] dag/walk: vertex "provider.vsphere (close)" is waiting for "vsphere_virtual_machine.BluePrintDev"
2020/06/17 19:27:28 [TRACE] dag/walk: vertex "provisioner.file (close)" is waiting for "vsphere_virtual_machine.BluePrintDev"
2020-06-17T19:27:28.525+0100 [DEBUG] plugin.terraform: remote-exec-provisioner (internal) 2020/06/17 19:27:28 [ERROR] connection error: Error connecting to bastion: ssh: handshake failed: ssh: no authorities for hostname: 81.143.215.2:9999
2020-06-17T19:27:28.525+0100 [DEBUG] plugin.terraform: remote-exec-provisioner (internal) 2020/06/17 19:27:28 [WARN] retryable error: Error connecting to bastion: ssh: handshake failed: ssh: no authorities for hostname: 81.143.215.2:9999
And as soon as the same keys are configured on both the bastion and the target host, everything springs into life:
vsphere_virtual_machine.BluePrintDev: Provisioning with 'remote-exec'...
vsphere_virtual_machine.BluePrintDev (remote-exec): Connecting to remote host via SSH...
vsphere_virtual_machine.BluePrintDev (remote-exec): Host: 192.168.4.230
vsphere_virtual_machine.BluePrintDev (remote-exec): User: packman
vsphere_virtual_machine.BluePrintDev (remote-exec): Password: false
vsphere_virtual_machine.BluePrintDev (remote-exec): Private key: true
vsphere_virtual_machine.BluePrintDev (remote-exec): Certificate: true
vsphere_virtual_machine.BluePrintDev (remote-exec): SSH Agent: true
vsphere_virtual_machine.BluePrintDev (remote-exec): Checking Host Key: true
vsphere_virtual_machine.BluePrintDev (remote-exec): Using configured bastion host...
vsphere_virtual_machine.BluePrintDev (remote-exec): Host: 81.143.215.2
vsphere_virtual_machine.BluePrintDev (remote-exec): User: packman
vsphere_virtual_machine.BluePrintDev (remote-exec): Password: false
vsphere_virtual_machine.BluePrintDev (remote-exec): Private key: true
vsphere_virtual_machine.BluePrintDev (remote-exec): Certificate: true
vsphere_virtual_machine.BluePrintDev (remote-exec): SSH Agent: true
vsphere_virtual_machine.BluePrintDev (remote-exec): Checking Host Key: true
vsphere_virtual_machine.BluePrintDev (remote-exec): Connected!