DHCP bridge driver fails when DHCP server and client are on the same interface
On my setup, the DHCP server and the DHCP client (the netavark dhcp proxy) are on the same bridge interface. The kernel eventually drops the UDP packets because they are sent from the interface to itself.
I need a new way to specify the interface that the netavark proxy will use to send its UDP packets.
My current patch adds a new field, `options.dhcp_interface="veth_bind_to_bridge_interface"`, similar to `options.mode=unmanaged`. But I'd like to get ideas from your side.
/cc @Luap99
Adding an option for this seems backwards, as it just complicates things for no real reason. The point is that this should work out of the box. I haven't really looked deeply into how we do the DHCP handshakes, but most likely the issue is that we send a broadcast packet, which of course would not be sent back to the same interface it was sent from.
Maybe we should have the proxy join the netns of each container and send the packets from the container interface side, like a regular DHCP client would do.
> I haven't really looked deeply into how we do the DHCP handshakes, but most likely the issue is that we send a broadcast packet, which of course would not be sent back to the same interface it was sent from.
Yes, that is the key. Whatever stack you use, SOCK_RAW or SOCK_DGRAM, broadcasting between a bridge and its child interfaces just won't work.
> Maybe we should have the proxy join the netns of each container and send the packets from the container interface side, like a regular DHCP client would do.
That is ideal! Though I am not sure how the netavark proxy can join every container simultaneously...
> That is ideal! Though I am not sure how the netavark proxy can join every container simultaneously...
You do not really have to. I have not looked into how the dhcp lib opens the sockets, but namespace-wise all that needs to be done is `setns(container)`, `socket(SOCK_RAW)`, `setns(host)`.
The socket then always stays associated with the original namespace it was opened in, AFAIK. I never tried it with SOCK_RAW, but we use this trick to open the AF_NETLINK socket that configures the interfaces in the namespace, so I would assume it works for all socket types.
So we then keep one open socket fd per container, which is manageable.
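A minimal sketch of that setns/socket/setns sequence, assuming the nix crate (0.27 or later) and an illustrative netns path; this is not netavark's actual code, just the trick described above:

```rust
// Sketch only: open a raw socket inside a container's network namespace,
// then return the calling thread to the host namespace. The socket keeps
// its association with the netns it was created in.
use std::fs::File;
use std::os::fd::OwnedFd;

use nix::sched::{setns, CloneFlags};
use nix::sys::socket::{socket, AddressFamily, SockFlag, SockType};

fn open_socket_in_container_netns(
    netns_path: &str, // e.g. "/run/netns/<container>", illustrative only
) -> Result<OwnedFd, Box<dyn std::error::Error>> {
    // Keep a handle to the host netns so the thread can switch back.
    let host_ns = File::open("/proc/self/ns/net")?;
    let container_ns = File::open(netns_path)?;

    // setns(container): move this thread into the container's netns.
    setns(&container_ns, CloneFlags::CLONE_NEWNET)?;
    // socket(SOCK_RAW): created here, so it stays bound to the container netns.
    // (Protocol selection and DHCP binding are omitted in this sketch.)
    let sock = socket(AddressFamily::Packet, SockType::Raw, SockFlag::empty(), None)?;
    // setns(host): switch the thread back; the socket fd remains usable.
    setns(&host_ns, CloneFlags::CLONE_NEWNET)?;

    Ok(sock)
}
```

Note that setns only moves the calling thread, so a multi-threaded proxy would have to confine this sequence to one thread; the resulting fd (one per container) can then be used from anywhere in the host process.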
> So we then keep one open socket fd per container, which is manageable.
https://github.com/nispor/mozim/blob/main/src/dhcpv6/client.rs#L296-L317
I've read the source code a bit. While this is theoretically possible, I see a problem.
New sockets are opened during the renew events for both v4 and v6, and we cannot know when that happens in order to setns in time. The fd is managed internally by the lib, so there is not much we can do.
I found that I can set up two bridges connected by a veth pair, one for the DHCP server and one for netavark. This works and eliminates the need to change netavark!
Closing this.