Velocity
Enable tcp fast open by default if available
The tcpFastopenMode method is taken from netty: https://github.com/netty/netty-incubator-transport-io_uring/blob/dfc64bbe4716c70f252bb68d8cec039e0d3a7adc/transport-classes-io_uring/src/main/java/io/netty/incubator/channel/uring/Native.java#L349-L382
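For context, Netty's detection boils down to reading the Linux sysctl `/proc/sys/net/ipv4/tcp_fastopen` and treating it as a bitmask (bit 0x1 = client support, 0x2 = server support). A minimal pure-JDK sketch of that idea, with made-up names (this is not Velocity's or Netty's actual code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TfoProbe {
  // Bit flags of the Linux tcp_fastopen sysctl: 1 = client, 2 = server.
  static final int TFO_CLIENT = 1;
  static final int TFO_SERVER = 2;

  /** Parses the raw sysctl text, falling back to 0 (nothing enabled) on garbage. */
  static int parseMode(String raw) {
    try {
      return Integer.parseInt(raw.trim());
    } catch (NumberFormatException e) {
      return 0;
    }
  }

  /** Reads the sysctl; returns 0 when not on Linux or the file is unreadable. */
  static int readMode() {
    try {
      return parseMode(Files.readString(Path.of("/proc/sys/net/ipv4/tcp_fastopen")));
    } catch (IOException e) {
      return 0;
    }
  }

  public static void main(String[] args) {
    int mode = readMode();
    System.out.println("client TFO available: " + ((mode & TFO_CLIENT) != 0));
    System.out.println("server TFO available: " + ((mode & TFO_SERVER) != 0));
  }
}
```

The conservative fallback to 0 mirrors the linked Netty code's behaviour of treating any read failure as "not available".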
The AccessController is going bye-bye, so idk what we want to do on that front. But given how marginal a feature TCP fast open seems to be, I'm not sure there's much benefit to expanding this. We also have no mechanism for updating existing config files with new defaults, etc.
Well AccessController is still used in two places in Velocity: https://github.com/PaperMC/Velocity/blob/2cf18b0a6d845bbc28eb64774bed371e410f6469/proxy/src/main/java/com/velocitypowered/proxy/VelocityServer.java#L555 and I believe netty also uses it in a few places. You could also get this option using natives like epoll does: https://github.com/netty/netty/blob/e8176601344b1c22e0f6c1dfc2861d7a92f54030/transport-native-epoll/src/main/c/netty_epoll_native.c#L606-L610 I don't think changing existing installations to enable fast-open is necessary. If you wanted to, you could create a new config option with it enabled by default and then adoption of this would be a lot higher.
could this use the existing Epoll#isTcpFastOpenClientSideAvailable + Epoll#isTcpFastOpenServerSideAvailable API?
It could, and I've taken a bit of inspiration from that code, but I've opted not to because the contract of those methods is that they also check whether Epoll is available. If Velocity uses different transports in the future that also support TFO (such as IOUring, which also has IOUring#isTcpFastOpenClientSideAvailable + IOUring#isTcpFastOpenServerSideAvailable), that would be problematic. I've done it like this to future-proof the detection code.
does this check work for kqueue too? kqueue is also supported in Velocity and it's capable of doing fast open as well
I think it'd be better to make the checks in TransportType per transport with the proper Netty API methods
Alright done, it's now checking it on transport level.
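For illustration, a per-transport check might be shaped roughly like the enum below. The names and supplier wiring here are hypothetical; the real code would presumably delegate to the matching Netty methods per transport (e.g. Epoll#isTcpFastOpenClientSideAvailable / #isTcpFastOpenServerSideAvailable), which are stubbed out here so the sketch stays self-contained:

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch of transport-level TFO detection; not Velocity's actual
// TransportType. Wire the suppliers to the Netty availability checks.
public enum TransportTfo {
  EPOLL(() -> false, () -> false),  // stand-in for the Epoll#... checks
  KQUEUE(() -> false, () -> false), // stand-in for the kqueue equivalents
  NIO(() -> false, () -> false);    // plain NIO has no TFO channel options

  private final BooleanSupplier clientTfo;
  private final BooleanSupplier serverTfo;

  TransportTfo(BooleanSupplier clientTfo, BooleanSupplier serverTfo) {
    this.clientTfo = clientTfo;
    this.serverTfo = serverTfo;
  }

  public boolean hasClientTfo() {
    return clientTfo.getAsBoolean();
  }

  public boolean hasServerTfo() {
    return serverTfo.getAsBoolean();
  }
}
```

Keeping the check per transport avoids the problem described above, where a single Epoll-flavoured check silently reports "unavailable" under kqueue or io_uring.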
One way this could be enabled by default is splitting the tcp-fast-open config key into tcp-fast-open-client and tcp-fast-open-server for finer control, removing the old tcp-fast-open key (which has always defaulted to false) in the process.
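Under that proposal, the relevant part of the config might look something like this (key names are taken from the comment above; the values shown are the proposed new defaults, not anything Velocity currently ships):

```toml
# old key, removed under this proposal:
# tcp-fast-open = false

# proposed replacements, enabled by default where the transport supports TFO:
tcp-fast-open-client = true
tcp-fast-open-server = true
```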
We have 0 mechanism for modifying people's existing config files
Yeah, I guess that's not ideal. My idea above is a solution anyway. I believe TFO was disabled by default because enabling it on a transport that doesn't support it would log a "Channel option unknown" warning. Switching to tcp-fast-open-client and tcp-fast-open-server would then enable it even for setups where it was previously off by default, which I believe is unlikely to break anything. Maybe just note in the release notes that the old config option was removed and the two new ones default to true?
We don't really have release notes, so somebody would need to write a migration, and making both sides configurable just seems bleh. I forgot we had the migration stuff; the config cruddery is a mess. But if we want to enable this by default, I'd much rather we just wrote a migration to flip it, under the assertion that it shouldn't blow up for people. That said, idk what to expect here, as I can imagine somebody will find an environment that this blows up in.
I'm honestly just apprehensive about such a feature given most people's servers aren't even set up to utilise it
I've now added a migration for flipping the value. I'm not sure if Paper supports TFO, but if it doesn't then I'll look into making a PR to Paper to enable TFO by default.
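At its core, a default-flipping migration could be as simple as rewriting the one key when it still holds the shipped default. The sketch below is a hypothetical string-level illustration, not Velocity's actual migration machinery:

```java
public class TfoMigration {

  /**
   * Flips the shipped default ("tcp-fast-open = false") to true.
   * Purely illustrative: a real migration would need a config-version
   * marker to distinguish the shipped default from a deliberate false.
   */
  static String migrateLine(String line) {
    if (line.strip().equals("tcp-fast-open = false")) {
      return "tcp-fast-open = true";
    }
    return line;
  }

  public static void main(String[] args) {
    System.out.println(migrateLine("tcp-fast-open = false"));
    System.out.println(migrateLine("bind = \"0.0.0.0:25577\""));
  }
}
```

The caveat in the comment is the concern raised above: without versioning, a user who explicitly set the option to false looks identical to one who never touched the default.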
TFO is unnecessary if the backend doesn't support it. It's also widely misunderstood: there are no "levels" as such. Rather, the server-side value is the number of pending fast-open connections the mechanism itself will accept (a queue length).