packer-builder-vsphere
Power on VM without requiring communicator
Issue
When communicator is set to none, the VM won't boot even if boot commands are specified in the configuration file, as seen below:
Input
...
"communicator": "none",
"boot_wait": "5s",
"boot_command": [
"<enter><wait20>",
"vyos<enter><wait>",
"vyos<enter><wait>",
"install system<enter><wait>",
"<enter><wait>",
"<enter><wait>",
"<enter><wait>",
"Yes<enter><wait>",
"<enter><wait30>",
"<enter><wait>",
"vyos<enter><wait>",
"vyos<enter><wait>",
"<enter><wait5>",
"reboot<enter><wait>",
"Yes<enter><wait30>",
"vyos<enter><wait>",
"vyos<enter><wait>"
]
...
Output
packer build vyos-1.1.8/vyos-1.1.8.json
vsphere-iso output will be in this color.
==> vsphere-iso: Creating VM...
==> vsphere-iso: Customizing hardware...
Build 'vsphere-iso' finished.
==> Builds finished. The artifacts of successful builds are:
--> vsphere-iso: packer-test-vyos
Use case
The use case for running boot commands without a communicator is to provision machines via console that:
- May not support SSH or WinRM
- Are on an isolated network that the provisioning machine doesn't have access to
Suggestion
I'd like to suggest separating the vm.PowerOn functionality from the communicator and instead tying it to either boot_command or boot_wait, or making boot a separate key in the configuration that accepts a boolean value. If there's already a way around this, please let me know. Thanks to everyone who has contributed to this plugin. It's incredibly helpful!
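For illustration only, the suggested behavior might look something like the configuration sketch below. The boot key shown here does not exist in the plugin today and is purely hypothetical; the rest mirrors the example above:

...
"communicator": "none",
"boot": true,
"boot_wait": "5s",
"boot_command": [
  "<enter><wait20>",
  "vyos<enter><wait>"
]
...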
The none option was intentionally added to support a use case where we need to clone a template and customize hardware, but never boot the VM even once, so the VM content is byte-for-byte identical.
Changing the logic would break current behavior, so here we are really talking about a 4th option. Implementing it is possible, but we need to design it carefully, so I'm really interested in the details: which OSes exactly do not support SSH? How are you going to manage such instances later? Why is an isolated network a build-time requirement? Maybe the connection can be removed after provisioning.
Ah, I see why none was implemented now. Good idea.
A 4th option would be a lifesaver. It's not that the OS doesn't directly support SSH, but if the OS isn't fully supported by VMware Tools or open-vm-tools (VyOS, different flavors of BSD), then configuration via console becomes a viable, out-of-band solution. Neither VyOS nor FreeBSD with VMware Tools installed will correctly report a routable IP to vSphere that Packer or Terraform can then use for provisioning via SSH. Instead, both tools wait until the timeout and then fail.
While this is an issue with VMware Tools, and not the vSphere builder for Packer, having the ability to perform configuration via console provides more options, such as building VMs in an isolated network, in multi-tiered networks, or behind firewalls that disallow SSH access.
In my specific situation, we create isolated, multi-tier environments for red/blue team exercises. Teams are responsible for defending their infrastructure while attacking another team's infrastructure. Because of the nature of the activity happening in these environments, we choose to isolate it.
As far as managing the instances, they are provisioned for a particular exercise, then destroyed, so management after provisioning isn't required.
I know this is an edge case, but giving users the ability to perform out-of-band configuration via console would provide an incredible amount of flexibility, especially with OSes that don't play nice with VMware Tools.
At the moment we wait for the reported IP address as a signal that the OS has been installed successfully and that provisioners can start. Then a successfully finished SSH/WinRM connection confirms that provisioning has been performed correctly, and we can shut down the instance and finish the build.
In this case, how can we detect such an event?
Much in the same way users rely on timing for boot_command, users would rely on timing in this situation. It would require manual testing, much as users determine the proper boot commands for a particular OS when a documented example doesn't exist. Once the user had tested the necessary commands on a few machines, they would be able to scale out.
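To make the timing idea concrete, here is a minimal, hypothetical govmomi sketch, not the plugin's actual code, that powers a VM on and then simply waits a user-chosen duration instead of waiting for a reported IP or an SSH/WinRM connection. The vCenter URL, credentials, inventory path, and the 10-minute delay are all placeholder assumptions:

// Hypothetical sketch: power on a VM with govmomi and treat a fixed,
// user-configured delay as the completion signal instead of an
// SSH/WinRM connection.
package main

import (
	"context"
	"fmt"
	"net/url"
	"time"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/find"
)

func main() {
	ctx := context.Background()

	// Placeholder vCenter endpoint and credentials.
	u, err := url.Parse("https://user:pass@vcenter.example.com/sdk")
	if err != nil {
		panic(err)
	}
	c, err := govmomi.NewClient(ctx, u, true) // true = skip TLS verification
	if err != nil {
		panic(err)
	}

	// Placeholder datacenter ("DC"); the VM name matches the build output above.
	finder := find.NewFinder(c.Client, true)
	vm, err := finder.VirtualMachine(ctx, "/DC/vm/packer-test-vyos")
	if err != nil {
		panic(err)
	}

	// Power on and wait for the power-on task itself to complete.
	task, err := vm.PowerOn(ctx)
	if err != nil {
		panic(err)
	}
	if err := task.Wait(ctx); err != nil {
		panic(err)
	}

	// Timing-based "detection": the user decides how long the boot_command
	// sequence plus installation needs, exactly as they tune <waitN> today.
	time.Sleep(10 * time.Minute)

	fmt.Println("assuming installation finished; continue with shutdown/template steps")
}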
Digging this up since it would benefit me greatly.
Where I'm working (essentially a co-location/datacenter/hosting provider), we will be using your builder within Ansible to build base images, with further use of Ansible to clone out machines or build more specialized templates. Ansible uses pyvmomi to communicate with VMware, and when sending commands through VMware Tools/open-vm-tools (the vmware_vm_shell module) you can tell it to wait for the process to complete before moving on, so it wouldn't surprise me if govmomi has/enables that functionality as well.
The main reason for us is security: we have a lot of clients/customers, many of them have their own net/VLAN within our infrastructure, and cross-VLAN/net communication is kept to a minimum, even from our own nets/VLANs. So, at least during the cloning stage, any provisioning commands we have to run will go through VMware Tools/open-vm-tools. Building the base images has a similar security requirement, and since we have to talk to vCenter in the first place, being able to do provisioning through it as well would minimize the ports and IPs that are "open". It also leaves one less point of administration.
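For what it's worth, govmomi does expose guest operations comparable to what the vmware_vm_shell module uses via pyvmomi. The sketch below only illustrates that API surface, not anything the plugin does today; the runInGuest helper name, credentials, and command are made up, and it assumes VMware Tools is running in the guest:

package sketch

import (
	"context"
	"fmt"

	"github.com/vmware/govmomi/guest"
	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25"
	"github.com/vmware/govmomi/vim25/types"
)

// runInGuest starts a command inside the guest through VMware Tools using
// govmomi's guest operations. Credentials and the command are placeholders.
func runInGuest(ctx context.Context, c *vim25.Client, vm *object.VirtualMachine) error {
	opman := guest.NewOperationsManager(c, vm.Reference())
	pm, err := opman.ProcessManager(ctx)
	if err != nil {
		return err
	}

	auth := &types.NamePasswordAuthentication{
		Username: "root",
		Password: "secret",
	}
	spec := types.GuestProgramSpec{
		ProgramPath: "/bin/sh",
		Arguments:   "-c 'echo provisioned > /tmp/marker'",
	}

	// StartProgram returns the PID of the process inside the guest; completion
	// could then be tracked by polling the guest process list for that PID.
	pid, err := pm.StartProgram(ctx, auth, &spec)
	if err != nil {
		return err
	}
	fmt.Println("started guest process", pid)
	return nil
}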
Would love this feature! I just wanted to add my use case if it might help to provide a better picture of why this could be useful.
We use this plugin to build Windows-based templates in vSphere. The actual provisioning of the VM that forms the base of our templates is done using a Microsoft System Center Configuration Manager (SCCM) task sequence. Rather than writing a wrapper script around the whole SCCM build process, we use Packer to handle the orchestration bits and the interaction with vSphere.
So essentially, Packer spins everything up in vSphere, SCCM applies the OS, mods it, and shuts it down, then Packer turns it into a template.
Since Packer needs to be able to connect via WinRM, during our custom SCCM provisioning process we enable WinRM (just so Packer can connect), then disable it and shut down the machine. Ideally, we'd love for Packer to not have to connect over WinRM at all and instead just wait for the VM to power off, or for a given timeout, or something along those lines. Thanks for the work on this plugin though, it's been invaluable!
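A "wait for power-off or a timeout" completion check like the one described here could in principle be built on govmomi's power-state query. The sketch below is only a rough illustration, not an existing plugin feature; the 2-hour timeout, 10-second poll interval, and waitForPowerOff helper name are assumptions:

package sketch

import (
	"context"
	"fmt"
	"time"

	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25/types"
)

// waitForPowerOff treats a guest-initiated power-off as the completion signal,
// with an overall timeout as a safety net.
func waitForPowerOff(ctx context.Context, vm *object.VirtualMachine) error {
	deadline := time.After(2 * time.Hour)
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-deadline:
			return fmt.Errorf("timed out waiting for VM to power off")
		case <-ticker.C:
			state, err := vm.PowerState(ctx)
			if err != nil {
				return err
			}
			if state == types.VirtualMachinePowerStatePoweredOff {
				// The guest (e.g. the SCCM task sequence) shut itself down;
				// the build could now proceed to convert the VM to a template.
				return nil
			}
		}
	}
}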
I also would benefit from this. Since Packer doesn't provide a way to use the IP from a Packer-provided variable, I would like to power the VM on and then run a script that parses the logs and returns the IP address, then run some commands with it locally, which will return the machine's password.