Improve IPv6 assignment to VMs
As it stands, nodes connected to IPv6 enabled routers are generally able to assign "public" IPv6 addresses to VMs that they host. This is a great feature for nodes that are reachable from the public internet and for VM users who want a free publicly reachable address, requiring no configuration from farmers beyond their network setup.
For nodes that are behind a firewall (hidden), IPv6 assignments carry the following caveats:
- The addresses can't fulfill their primary purpose of providing a public entry point to the VM. Additionally, it is not possible to know with certainty before creating the VM if a node can provide publicly reachable IPv6 addresses
- Assuming the farmer has not subnetted and appropriately firewalled their farm (most farmers can't or won't do this), the virtual interface created for the assigned IPv6 address provides a route into the farmer's LAN. While this can be a benefit for some farmers who run workloads on their own nodes, it is generally undesirable
To improve this situation, I propose limiting the ability of nodes to assign IPv6 addresses to VMs, in some combination of the following ways:
- Nodes with a public config can issue public IPv6 addresses
- Nodes with working dual NICs can issue public IPv6 addresses
- To cover the case of single-interface nodes without a public config that still have publicly reachable IPv6, allow farmers to optionally enable this for their nodes through some mechanism. While it could be through IPv6-only public configs (not sure if this is already supported), I think a simpler way, such as a per-farm flag on TF Chain, is worth considering
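The combined policy above could be sketched roughly like this in Go (zos's implementation language). To be clear, the struct, its fields, and the farm-flag lookup are hypothetical illustrations of the proposal, not existing zos or TF Chain APIs:

```go
package main

import "fmt"

// NodeNetInfo is a hypothetical summary of a node's network situation;
// none of these fields correspond to actual zos types.
type NodeNetInfo struct {
	HasPublicConfig bool // node has a public config on TF Chain
	HasDualNIC      bool // node has a working second NIC
	FarmIPv6Flag    bool // hypothetical per-farm opt-in flag on TF Chain
}

// CanAssignPublicIPv6 sketches the proposed policy: only nodes that are
// likely publicly reachable may hand out IPv6 addresses to VMs.
func CanAssignPublicIPv6(n NodeNetInfo) bool {
	return n.HasPublicConfig || n.HasDualNIC || n.FarmIPv6Flag
}

func main() {
	hidden := NodeNetInfo{} // hidden node behind NAT, no opt-in
	fmt.Println(CanAssignPublicIPv6(hidden)) // false
	optedIn := NodeNetInfo{FarmIPv6Flag: true}
	fmt.Println(CanAssignPublicIPv6(optedIn)) // true
}
```

The point of the sketch is only that the check is cheap and decidable before the VM is created, which addresses the "can't know with certainty" caveat above.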
Such a change can improve the odds that the IPv6 addresses that nodes hand out are actually publicly reachable and also improve security for farmers who don't actually want to be distributing addresses from their LAN.
@delandtj please provide your thoughts on this.
This won't be feasible in 3.13, moved to 3.14 until that's settled
@delandtj please check
mjuh... For public nodes, it's public, so no issue there. For nodes behind a NAT firewall that receive an IPv6 delegation (/48, /56, /64), it all depends on the provider. Mostly, the provider allows only outgoing IPv6 traffic. To keep VMs from reaching inside the network they're attached to, we have (need to check) firewall rules in place so that the VM can only send and receive packets from the default gateway. Filtering is based on MAC address. That should cover it. But we need to add the mycelium port to the firewall so that mycelium nodes can discover each other in the same network, for both link-local discovery and the TCP port. @muhamadazmy is this true (I remember we had that discussion to implement it)?
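A MAC-based restriction like the one described could look roughly like the following nftables bridge ruleset. This is only a sketch of the mechanism, not zos's actual rules; the interface name and gateway MAC are placeholders:

```nft
# Hypothetical nftables bridge-filter ruleset restricting a VM's tap
# interface so it only exchanges frames with the default gateway's MAC.
# "tap-vm1" and aa:bb:cc:dd:ee:ff are illustrative placeholders.
table bridge vmfilter {
    chain forward {
        type filter hook forward priority 0; policy accept;

        # frames from the VM must be addressed to the gateway's MAC
        iifname "tap-vm1" ether daddr != aa:bb:cc:dd:ee:ff drop
        # frames toward the VM must come from the gateway's MAC
        oifname "tap-vm1" ether saddr != aa:bb:cc:dd:ee:ff drop
    }
}
```

One caveat worth flagging: IPv6 neighbor discovery uses multicast destinations (MACs derived from 33:33::/16 mapping), so a real ruleset would need accept rules for that traffic ahead of the drops, or NDP itself would break.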
> To keep VMs from reaching inside the network they're attached to, we have (need to check) firewall rules in place so that the VM can only send and receive packets from the default gateway. Filtering is based on MAC address.
But as I understood it, this was only for the NATed (WireGuard) virtual interfaces. When a user reserves an IPv6 address for their VM, it gets delivered on a separate virtual interface.
Even if we apply the same rule of only allowing traffic to the default gateway, I think we have the issue as follows:
- User deploys a couple VMs to a node that has a publicly reachable IPv6 range (for example FreeFarm or a GE farm) and reserves IPv6 addresses
- The user wants these VMs to talk to each other on those IPv6 addresses
- But the VMs can't communicate, because they are trying to reach a MAC in the same subnet that isn't the default gateway
This seems pretty bad because it's placing a significant limitation on the functionality in a case where no protection is needed—the network is public!
On the other hand, even if this does solve the security issue for the average farmer at home with IPv6, it doesn't address the fact that from a user's perspective everything is called a "public IPv6" regardless of whether there's an intervening firewall or not.
In the case of hidden nodes there seems to be no use case for or value in handing out these IPv6 addresses. The VMs already have IPv6 connectivity via the WireGuard NIC (not sure how this works, since they only appear to have a link-local and a ULA address, but it works).
@delandtj there was some work on preventing the VMs from reaching the local farm network, and you were doing some research on it a while back (nothing we tried back then worked correctly; we always had some issue or another).