tokio-core
Accepting connections in a real-world server
While `tokio_core::net::Incoming` provides a simple `Stream` interface for accepting connections, there are a few important issues in real-world scenarios:
1. `Incoming` returns every accept error as a stream error, which means most combinators shut the stream down on the first error. Moreover, some errors must simply be ignored, while others (resource limit exhaustion) should incur a timeout and a retry of the accept.
2. A limit on the number of simultaneous connections is usually desired (so the naive solution of calling `spawn()` for every connection doesn't work in practice; see the sketch after this list). Using the `buffer_unordered()` combinator was recommended for that, but it also requires suppressing errors and a `for_each()`, which is not trivial for newcomers.
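To make issue (1) concrete, here is a minimal sketch of the naive accept loop (assuming tokio-core 0.1 and futures 0.1; the connection handler is a placeholder of my own):

```rust
extern crate futures;
extern crate tokio_core;

use futures::{future, Future, Stream};
use tokio_core::net::TcpListener;
use tokio_core::reactor::Core;

fn main() {
    let mut lp = Core::new().unwrap();
    let handle = lp.handle();
    let addr = "127.0.0.1:8080".parse().unwrap();
    let listener = TcpListener::bind(&addr, &handle).unwrap();

    let server = listener.incoming().for_each(move |(socket, _addr)| {
        // placeholder handler: spawn a task that just drops the socket;
        // note there is also no limit on how many tasks pile up here
        handle.spawn(future::ok::<(), ()>(()).map(move |()| drop(socket)));
        Ok(())
    });

    // The first accept error (e.g. EMFILE, "Too many open files") errors
    // the Incoming stream and shuts the whole server down.
    lp.run(server).unwrap();
}
```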
My solution, implemented in tk-listen, is to provide two combinators:
- `sleep_on_error`, which handles (1) from the list above and returns a stream of only the successfully accepted connections
- a `listen` function, which handles (2) and returns a future that runs `BufferUnordered` internally and exposes a `Future` interface that can be spawned, selected on, etc.
This keeps the code simple and lets other combinators add power (e.g. graceful shutdown, ...):
```rust
lp.run(
    listener.incoming()
        .sleep_on_error(TIME_TO_WAIT_ON_ERROR, &h2)
        .map(move |(mut socket, _addr)| {
            // protocol handler goes here, returning a future whose
            // errors are already handled (e.g. mapped to logging)
        })
        .listen(MAX_SIMULTANEOUS_CONNECTIONS)
)
```
The question here is where this fits in the tokio ecosystem. Should it be a separate crate?
Thanks for your crate! But I prefer to handle the error myself (so that, e.g., I have a chance to log or report it). My use case is handling temporary "Too many open files" errors (caused by misbehaving clients, request bursts, etc.).
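For illustration, here is a hand-rolled sketch of that kind of handling (futures 0.1 / tokio-core 0.1; `log_errors` is a hypothetical helper, not tk-listen's API): every accept error is logged and turned into a skipped item, so the stream survives. Note it does not sleep before retrying on resource exhaustion, which is exactly the part `sleep_on_error` adds.

```rust
extern crate futures;
extern crate tokio_core;

use std::net::SocketAddr;

use futures::Stream;
use tokio_core::net::{TcpListener, TcpStream};

// Hypothetical helper: convert each accept error into a skipped item so
// the stream keeps going, logging the error first.
fn log_errors(listener: TcpListener)
    -> Box<Stream<Item = (TcpStream, SocketAddr), Error = ()>>
{
    Box::new(
        listener.incoming()
            .then(|res| match res {
                Ok(conn) => Ok::<_, ()>(Some(conn)),
                Err(e) => {
                    // your chance to log or report the error
                    eprintln!("accept error: {}", e);
                    Ok(None)
                }
            })
            .filter_map(|conn| conn),
    )
}
```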
Thanks for the report!
I definitely agree that just using the raw `Incoming` probably isn't the best strategy for all servers, and I wouldn't say buffer_unordered() is recommended per se, just a preferable implementation strategy to the spawn_many in https://github.com/tokio-rs/tokio-core/pull/60 :).
The tk-listen crate looks great to me and it may also be able to incorporate strategies such as registering new connections on different event loops as well!
In terms of where this belongs I'm not entirely certain. If it's useful enough then we should definitely consider pulling it into tokio-core, but as-is I think it's useful for tokio-uds as well, right?
It's also possible that -proto should include some of the functionality.
That said, it can stay in a crate for now, which would allow for the greatest flexibility for evolving too.
> But I prefer to handle the error myself (so that, e.g., I have a chance to log or report it). My use case is handling temporary "Too many open files" errors (caused by misbehaving clients, request bursts, etc.).
Sure. I'm logging it at debug level by default because there might be too many such messages. Personally, I'd want to improve on this too, i.e. log it at warn level the first time it's triggered, and again only after a successful connection, probably. We can discuss it on the crate's GitHub.
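A hypothetical sketch of that escalation policy (the type and method names are mine, not part of tk-listen): warn on the first accept error, drop to debug while errors keep coming, and reset once a connection is accepted successfully.

```rust
#[macro_use]
extern crate log;

use std::io;

// Hypothetical logging-escalation state, assuming the `log` crate.
struct AcceptLogger {
    seen_error: bool,
}

impl AcceptLogger {
    fn on_error(&mut self, e: &io::Error) {
        if self.seen_error {
            // repeated errors stay quiet at debug level
            debug!("accept error (still failing): {}", e);
        } else {
            warn!("accept error: {}", e);
            self.seen_error = true;
        }
    }

    fn on_success(&mut self) {
        // the next error will be logged at warn level again
        self.seen_error = false;
    }
}
```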
The point is that the implementation is not trivial, and many servers get it wrong (i.e. you need to differentiate between per-connection errors, which must be ignored, and important ones, which must be slept on). And it's easier to keep this code in one place.
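As a sketch of that distinction (the exact set of error kinds below is my assumption, not necessarily what tk-listen checks): transient per-connection errors are safe to ignore, while anything else, notably EMFILE/ENFILE ("Too many open files"), signals resource exhaustion and should trigger a sleep before the next accept.

```rust
use std::io;

// Assumed classification: errors reported for the accepted socket rather
// than the listener can be ignored; everything else should be slept on.
fn is_per_connection_error(e: &io::Error) -> bool {
    match e.kind() {
        io::ErrorKind::ConnectionRefused
        | io::ErrorKind::ConnectionAborted
        | io::ErrorKind::ConnectionReset => true,
        _ => false,
    }
}
```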
> I wouldn't say buffer_unordered() is recommended per se
So what is the recommended way?
Aside: I'm thinking of adding a spawn_many-like interface as an option.
> The tk-listen crate looks great to me and it may also be able to incorporate strategies such as registering new connections on different event loops as well!
Yes, this is a part of the plan.
> That said, it can stay in a crate for now, which would allow for the greatest flexibility for evolving too.
Sure. But the major point is that every example we currently have anywhere is mostly wrong. Technically adding another dev-dependency is not a big deal, but...
Anyway, I'm going to support it as-is for the near future.
Oh, I wouldn't really say there's a 'blessed recommended way' right now. Or maybe for now it's going to be "use tk-listen" :)
We're definitely still developing best practices over time so I'd expect this to evolve regardless.