nim-chronos
Add ServerFlag to control dualstack behaviour
This adds two new server flags, `ServerFlag.Dualstack` and `ServerFlag.NoDualstack`. When used with `::` as the bind address, this allows choosing whether the server should be dualstack or IPv6-only. If neither is set, the OS default behaviour will be used.
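For reference, the proposed flag semantics boil down to a single `IPV6_V6ONLY` setsockopt before `bind`. A minimal sketch in Python (chronos itself is Nim; the constants below are hypothetical stand-ins for `ServerFlag.Dualstack`/`ServerFlag.NoDualstack`, not the actual API):

```python
import socket

# Hypothetical stand-ins for the PR's ServerFlag.Dualstack / ServerFlag.NoDualstack.
DUALSTACK, NO_DUALSTACK = "Dualstack", "NoDualstack"

def bind_any_v6(port, flags=()):
    """Bind an IPv6 wildcard socket, applying the proposed flag semantics."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    if DUALSTACK in flags:
        # Dualstack: also accept IPv4 traffic (as IPv4-mapped addresses),
        # raising if the platform refuses.
        sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    elif NO_DUALSTACK in flags:
        # NoDualstack: restrict the socket to IPv6 traffic only.
        sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
    # Neither flag set: leave IPV6_V6ONLY at the OS default.
    sock.bind(("::", port))
    return sock
```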
Should I add tests for this? It depends on IPv6 support, so I'm not sure whether you want them failing when IPv6 is not available.
> If neither is set, the OS default behaviour will be used.
So if no flags are specified and no special options are given, what happens? I imagine the ideal behavior would be that it tries to create an ipv6 dual stack socket and if that fails, falls back on ipv4 - ie that the cheapest and easiest thing that happens is that ipv6 is enabled by default with no further action from the developer.
> So if no flags are specified and no special options are given, what happens?
In Linux, for example, the value will be read from /proc/sys/net/ipv6/bindv6only.
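That Linux default can be checked directly; a small sketch (the sysctl path is Linux-specific, so the helper returns `None` elsewhere):

```python
from pathlib import Path

def linux_bindv6only_default():
    """Read the system-wide IPV6_V6ONLY default on Linux, or return None
    elsewhere. The kernel seeds new AF_INET6 sockets from this sysctl;
    its stock value is 0, i.e. dualstack behaviour."""
    path = Path("/proc/sys/net/ipv6/bindv6only")
    return int(path.read_text()) if path.exists() else None
```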
> I imagine the ideal behavior would be that it tries to create an ipv6 dual stack socket and if that fails, falls back on ipv4 - ie that the cheapest and easiest thing that happens is that ipv6 is enabled by default with no further action from the developer.
I'm not sure if that's a good idea. Giving an IPv6 bind address to `newDatagramTransport6` and receiving an IPv4 socket would be pretty obscure behaviour. An IPv6-only socket seems more reasonable in that case.
Then it would work like this:

- `Dualstack` and `NoDualstack` set -> error
- `Dualstack` set -> create dualstack socket or error
- `NoDualstack` set -> create IPv6-only socket or error
- nothing set -> create dualstack or fail silently with IPv6 only
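These four cases can be sketched as a small policy function (Python for illustration; the real implementation would be Nim inside chronos):

```python
import socket

def apply_policy(sock, dualstack, no_dualstack):
    """Apply the four proposed cases to an AF_INET6 socket before bind()."""
    if dualstack and no_dualstack:
        raise ValueError("Dualstack and NoDualstack are mutually exclusive")
    if dualstack:
        # Must succeed; otherwise the OSError propagates to the caller.
        sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    elif no_dualstack:
        sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
    else:
        try:
            sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
        except OSError:
            pass  # fail silently: the socket stays IPv6-only
```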
If you bind `::` without the `Dualstack` flag set, it could represent the value of the `NoDualstack` flag.
Sure, that would be possible.
Currently, there are three possibilities:
- Dualstack: set IPV6_V6ONLY to false or raise an error
- NoDualstack: set IPV6_V6ONLY to true or raise an error
- none set: Leave it at OS default
With @arnetheduck's suggestion there would also be three possibilities:
- Dualstack: set IPV6_V6ONLY to false or raise an error
- NoDualstack: set IPV6_V6ONLY to true or raise an error
- none set: set IPV6_V6ONLY to false but ignore any errors
Or, as you suggested:
- Dualstack: set IPV6_V6ONLY to false or raise an error
- none set: set IPV6_V6ONLY to true or raise an error
Then we wouldn't need the `NoDualstack` server flag, but the user would have to implement error handling on their own. I personally like @arnetheduck's suggestion as it sets a reasonable default but gives the possibility to override. I guess it's just a matter of how opinionated this library should be. I would be fine with implementing either, just let me know which one you prefer.
First of all, it's impossible to ignore any errors; all errors should be reported. `DualStack` is an optional feature which could be enabled or not, but it exactly should not be set by default, just because we could break legacy code which already uses chronos.
> DualStack is an optional feature which could be enabled or not, but it exactly should not be set by default.
well, it depends a bit on how you think about it - by default, when you create a listening socket, you want it to "bind to anything" - it doesn't really matter if it's ipv4 or ipv6 or whatever, it's just "make it as easy as possible to connect". If I'm writing an application using chronos, I don't care at all whether it's using ipv4 or ipv6, I just want it to accept incoming connections. It's a little bit of a breaking change, because previously you'd get only ipv4 connections and now suddenly you get ipv6 on top, but that's not a "significant" difference for end users at the end of the day - ie as a user I don't really care, and chronos would do well to "promote" this user-centric view by having a default that "just works as well as possible" without special flags.
regarding backwards compat, this "new mode" of accept-anything could get a new function and we could deprecate the old one, so that the easiest thing when writing an application using chronos is to get dual-stack mode, so that ipv6 is supported by default.
It's a breaking change because IPv6 is not widely adopted, and there are many OSes around which have IPv6 disabled by default. I have seen a lot of problems while running different IPv6 tests in CI. Also, this option only works for one particular case: binding to `ANY_ADDRESS`. So even if the user does not care, the developer MUST care about what they are creating. For me it would be very surprising if, from some chronos version on, it automatically started to bind IPv4 addresses even though I have not asked it to do so.
`ANY_ADDRESS` - this is what most applications use as the default, and the aim would be to use whatever the user has set up on their system as "default" - ie if the user has dual stack enabled, we use dual stack; if the user has a single-stack ipv6 or ipv4 setup, we use that - picking whatever works "automatically". The point is to be as flexible and open as possible in the default "accept" API - then we should of course have flags and so on for applications that precisely need to manage things. If this is a breaking change, we add it as a new API and no existing applications break.
This is a table of all possible variants. I don't see any reason why 12% should become the default.
| Protocol | Address | Option | Decision |
|---|---|---|---|
| IPv4 | ANY_ADDRESS | NO DUALSTACK | fine |
| IPv4 | ANY_ADDRESS | DUALSTACK | error |
| IPv4 | SPECIFIC ADDRESS | NO DUALSTACK | fine |
| IPv4 | SPECIFIC ADDRESS | DUALSTACK | error |
| IPv6 | ANY_ADDRESS | NO DUALSTACK | fine |
| IPv6 | ANY_ADDRESS | DUALSTACK | fine (+) |
| IPv6 | SPECIFIC ADDRESS | NO DUALSTACK | fine |
| IPv6 | SPECIFIC ADDRESS | DUALSTACK | error |
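The table reduces to a simple rule: DUALSTACK is only meaningful for an IPv6 wildcard bind. A sketch of that validation logic (illustrative only, not the chronos code):

```python
def decision(protocol, address, option):
    """Mirror of the table above: NO DUALSTACK is always fine, while
    DUALSTACK is only valid for an IPv6 ANY_ADDRESS bind."""
    if option == "NO DUALSTACK":
        return "fine"
    if protocol == "IPv6" and address == "ANY_ADDRESS":
        return "fine (+)"
    return "error"
```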
`chronos` is a low-level library which should allow any type of configuration to work without any discrimination. I hate all the "automatic" things at such a level, because we do not know and cannot predict the usage of the chronos API. That's why "weird" defaults (which I have never seen around) look ugly.
And this PR does not handle errors properly: if you want to ignore errors for the `::` bind process, you would have to check every OS and find out which errors exactly mean that IPv6 is not supported. Because if you start ignoring errors you can easily ignore busy-port errors too, and because of that behaviour you would start working on an IPv4 port while the user expects you to work with IPv6, and IPv4 will be routed to some different location...
> I dont see any reason why 12% should become default.
I guess because not all options are equally useful - ie if I'm listening on "any address", it really should mean "any address" regardless of whether it's ipv4 or ipv6 - that's the whole point of dual-stack mode, and why it has become the lowest-friction way of supporting the ipv6 transition. But whatever, it's a detail - let's first make sure it's possible to create a dual-stack application using chronos; we can debate how to make it convenient afterwards.
> This is table of all possible variants.
the question in the PR is what to do when no mode is selected - ie no `Dualstack` flag, and no `NoDualstack` flag.
`NoDualstack` is absolutely redundant.
First of all, these flags do not have any meaning for IPv4, so the flag should be renamed to `IP6DualStack` or something like that, because this `Dualstack` flag has no meaning for IPv4 addresses. And let's not start a dispute on this case too, just because if people use `0.0.0.0` they actually do not want to have `::` too.
So if `DualStack` is not specified, chronos binds the socket with the `IPV6_V6ONLY` flag set to `1`.
If `DualStack` is specified, chronos binds the socket with the `IPV6_V6ONLY` flag set to `0`.
Also I propose to rename the `Dualstack` option to `IPv6Only`, because it has no meaning in the IPv4 world and it correlates with the OS option, so it can easily be found by developers.
One more thing is to check at least macOS for defaults, and I want that documented in this PR too.
Windows: https://docs.microsoft.com/en-us/windows/win32/winsock/ipproto-ipv6-socket-options

> IPV6_V6ONLY - Indicates if a socket created for the AF_INET6 address family is restricted to IPv6 communications only. Sockets created for the AF_INET6 address family may be used for both IPv6 and IPv4 communications. Some applications may want to restrict their use of a socket created for the AF_INET6 address family to IPv6 communications only. When this value is nonzero (the default on Windows), a socket created for the AF_INET6 address family can be used to send and receive IPv6 packets only.
Linux: https://man7.org/linux/man-pages/man7/ipv6.7.html

> IPV6_V6ONLY (since Linux 2.4.21 and 2.6)
> If this flag is set to true (nonzero), then the socket is restricted to sending and receiving IPv6 packets only. In this case, an IPv4 and an IPv6 application can bind to a single port at the same time.
> If this flag is set to false (zero), then the socket can be used to send and receive packets to and from an IPv6 address or an IPv4-mapped IPv6 address.
> The argument is a pointer to a boolean value in an integer.
> The default value for this flag is defined by the contents of the file /proc/sys/net/ipv6/bindv6only. The default value for that file is 0 (false).
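The per-platform default can also be observed directly by probing a freshly created socket; a small sketch (Python, for illustration):

```python
import socket

def platform_v6only_default():
    """Return the IPV6_V6ONLY value of a freshly created AF_INET6 socket.
    Per the quoted docs: nonzero by default on Windows, 0 on stock Linux."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        return sock.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY)
    finally:
        sock.close()
```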
As you can see, the defaults on different operating systems are different, so this is exactly a breaking change. So let's not use this option as the default.
> As you can see defaults for different operation systems are different, so this is exactly breaking change. So lets not use this option as default.
yes - there was a point earlier in the discussion that if we want more platform-independent behavior, we should explicitly set `IPV6_V6ONLY` to `0` unless the user/developer asks for something else - this way, all chronos applications behave the same regardless of platform.
> One more thing is to check at least macOS for defaults, and I want that documented in this PR too.
I'm not sure how I would find that out, I don't have a Mac. Apparently some BSD variants don't support this at all, maybe macOS is one of them?
Let me try to summarise what you two said: the current behaviour when binding `::` is inconsistent and depends on OS settings. This should be changed to a consistent behaviour on all platforms. However, old code must not break as a result.
If we were to set this to either 0 or 1 by default, that would break currently working code on systems that don't support it (an error would occur where it currently doesn't).
So... how should I continue? Keep this as something that needs to be explicitly set? Create a new proc `newDatagramSocketAny*` or something like that?
Yep, it's impossible to control the `IPV6_V6ONLY` flag on macOS and all BSDs (including FreeBSD, OpenBSD and NetBSD), and DragonFlyBSD lacks the `IPV6_V6ONLY` flag entirely...
OpenBSD: https://man.openbsd.org/ip6#IPV6_V6ONLY

> Get or set whether only IPv6 connections can be made to this socket. For wildcard sockets, this can restrict connections to IPv6 only. With OpenBSD IPv6 sockets are always IPv6-only, so the socket option is read-only (not modifiable).
And according to https://stackoverflow.com/questions/5587935/cant-turn-off-socket-option-ipv6-v6only it's impossible to change this option on macOS and FreeBSD (but I have not checked it).
Looks like it's impossible to achieve an acceptable level of cross-platform support for the proposed feature.
> And according to https://stackoverflow.com/questions/5587935/cant-turn-off-socket-option-ipv6-v6only its impossible to change this option on MacOS and FreeBSD (but i'm not checked it).
On macOS 10.15.7 I have just tested `setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &val, sizeof(val))`. It is possible to change the option. The default starts as 0 (dual-stack enabled); it can be set to either 0 or 1.
The value can be changed repeatedly before `bind`. It cannot be changed after `bind`; attempts are rejected with `EINVAL`. Tested with both TCP and UDP sockets binding to `in6addr_any` - not that I was expecting any difference, but just to be sure.
This does not say anything about the BSDs of course, only macOS.
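The same experiment is easy to reproduce from Python, which wraps the identical setsockopt call; a sketch (Linux shows the same pre-bind/post-bind behaviour as the macOS report above):

```python
import socket

def v6only_mutable_after_bind():
    """Check whether IPV6_V6ONLY can still be flipped once the socket is
    bound. Per the report above, on macOS 10.15.7 (and on Linux) the
    post-bind setsockopt is rejected with EINVAL."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        # Before bind() the option is freely writable.
        sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
        sock.bind(("::", 0))
        try:
            sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
            return True
        except OSError:
            return False  # e.g. EINVAL after bind
    finally:
        sock.close()
```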
> Looks like its impossible to achieve acceptable level of cross-platform support for proposed feature.
It has long been my understanding that the cross-platform way to produce "universally listening" sockets is to recognise that some platforms need two sockets to be opened at the same time (they don't support dual-stack), and others need to use one socket (`::` always listens on IPv4 as well), so build services that will happily use multiple listening sockets as if they were one (perhaps hidden in a library like Chronos, hiding the distinction).

Multiple sockets are the proper way to service UDP anyway. With UDP servers on multi-homed systems (anything with multiple IP addresses), you need to send replies out using the source IP that matches the destination IP of the request that came in. Otherwise the replies may be routed incorrectly and not seen, although it usually works, hiding the issue. Some OSes have ways of doing that with one socket (`IP_PKTINFO`, `IPV6_PKTINFO`) that are better to use if available, but the general way is to listen on multiple UDP sockets as if they are one, and remember which one to use for the reply. (Unfortunately, getting a list of local IPs to listen on (and updating it in real time) isn't portable.)

Once you have taken the leap to handle UDP with synthetic "multi-socket" sockets, using that approach to offer a cross-platform API with consistent behaviour for IPv4+IPv6 `::` fits alongside it well.
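The multi-socket approach described above can be sketched briefly; Python stands in for the Nim implementation, and the helper names are made up for illustration:

```python
import selectors
import socket

def open_udp_listeners(port=0):
    """Open an IPv6-only and an IPv4 UDP socket, skipping address families
    the host does not support. Together they emulate a dualstack listener
    on platforms where IPV6_V6ONLY cannot be turned off."""
    socks = []
    for family, wildcard in ((socket.AF_INET6, "::"), (socket.AF_INET, "0.0.0.0")):
        try:
            s = socket.socket(family, socket.SOCK_DGRAM)
            if family == socket.AF_INET6:
                s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
            s.bind((wildcard, port))
            socks.append(s)
        except OSError:
            pass  # this address family is unavailable on this host
    return socks

def echo_once(socks):
    """Wait for one datagram and echo it back on the socket it arrived on,
    so the reply's source address matches the request's destination."""
    sel = selectors.DefaultSelector()
    for s in socks:
        sel.register(s, selectors.EVENT_READ)
    key, _ = sel.select()[0]
    data, peer = key.fileobj.recvfrom(2048)
    key.fileobj.sendto(data, peer)
    return data
```

Remembering which socket a datagram arrived on (here implicit in `key.fileobj`) is exactly the bookkeeping the comment above describes for multi-homed UDP replies.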
The scope of this PR was just to add dualstack capability to nimbus-eth2.
Extensive discussions here and in status-im/nim-eth#367 have shown different viewpoints on the multiple sockets vs. dualstack socket question.
While the points raised against this approach are correct, I don't think they are relevant enough to block the feature. If BSD doesn't support dualstack sockets (or does it?), then nimbus-eth2 will continue to be single-stack there, as it is now. For all other OSes the situation would improve. Issues on systems with multiple IP addresses will not be solved by this PR, but that's also not in the scope of this work.
I do think the current implementation is generic enough that the implementation could be changed to a multi-socket based approach for dualstack later on, without breaking the API.
Please let me know if we can find a consensus on which approach should be implemented.
Dualstack has been introduced in #456