vm-bhyve
Network recommendations
Hi. Cool project!
I've got a couple of VMs running on a switch attached to the interface the host uses for its own connection. When I start a VM, the host becomes unreachable for about 10 seconds, then everything returns to normal. I'm wondering if this is expected. Since I'm only testing this project out right now it's not an issue, but I'm pretty excited about moving things out of jails and into VMs, so I want to take this to my bigger server that has way more running on it. Ten seconds of unreachability while a VM starts is less than ideal.
I've also tried running two switches, so VMs can sit on two different networks: one on the interface the host uses, the other on a completely separate interface that had been running fine for weeks. Once I added VMs to the host interface, the VM running on the non-host interface stopped working on the network. It still boots and I can console in and such, but it has no network connectivity.
Should I plan to dedicate an interface to the VMs? Should I expect the same 10 seconds of non-response when VMs are on their own interface? Should I expect multiple vswitches to work properly? As I look to make this tool more central in my prod environments, any guidance here would be appreciated.
I notice a similar issue on my dev machine when the first guest is added to a virtual switch that is bridged to my primary network adapter (the one I'm connecting to the host over). Further guests are fine. As I'm constantly starting and stopping guests I get around this by bridging a dummy tap device to the bridge when I first boot up so there's always at least one tap device in the bridge. Not exactly a production solution of course.
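For reference, the dummy-tap workaround described above can be sketched roughly as follows. The interface names `bridge0` and `tap0` are assumptions; substitute whatever bridge vm-bhyve created for your virtual switch:

```shell
# Create a spare tap device and leave it in the bridge permanently,
# so the bridge is never empty when the last guest stops (assumed names).
ifconfig tap0 create
ifconfig tap0 up
ifconfig bridge0 addm tap0
```

As noted, this is a convenience for a dev box that constantly starts and stops guests, not a production fix.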
We did have a setting to change the priority on the bridge which was submitted to fix this, although I never saw it make a difference on my machine.
It's not obvious why guests using separate virtual switches would interfere with each other. I only have a single test machine, though, and haven't done extensive testing with multiple interfaces. If it's reproducible, it would be useful to see the ifconfig output while the guests aren't working.
A dedicated interface for guests is obviously the ideal, as it completely separates them from your management network and means that any changes going on while guests start/stop should have no effect on your ability to access the host.
Do you still experience the issue after moving the IP address from the ethernet interface to the bridge? That helped us solve a few strange issues.
Good point. I'm pretty sure my dev machine just has an IP on the LAN interface, as it's a machine I was already using and just started running guests on...
Even so, I'm wary of letting vm-bhyve create a bridge and assign a management IP to it on boot... If vm-bhyve fails to load correctly for any reason, you could be unable to access the machine.
In that situation I think I'd rather use a manual bridge configured via rc.conf, or ideally have a management interface fully handled by the OS and a second interface used for guest traffic.
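A minimal sketch of that manual rc.conf bridge, with the management IP on the bridge rather than the NIC. The interface name `em0` and the addresses are assumptions; adjust to your hardware and network:

```shell
# /etc/rc.conf -- bridge built by the OS at boot, independent of vm-bhyve.
# The management address lives on bridge0, not on the physical NIC.
cloned_interfaces="bridge0"
ifconfig_bridge0="inet 192.168.0.10/24 addm em0 up"
ifconfig_em0="up"
defaultrouter="192.168.0.1"
```

Since the bridge exists before vm-bhyve starts, the host stays reachable even if vm-bhyve fails to load.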
Thank you for the ideas. I should be able to get back to this shortly, and more seriously, since I want to move some workloads out of jails and into VMs here and will rely on the separation between the management and service networks.