tarpc
Clients should attempt to automatically reconnect.
If we do this, does that mean Client::connect will never return connection_refused? Does it mean we'll need to set timeouts on all requests, since it could loop endlessly trying to connect?
Any progress on this? :)
Nope, I've been pretty busy with work/life the last few months and haven't been actively developing new features. I'd be happy to mentor on this or any other feature request.
With #199, this can be implemented in the transport layer. However, I think this could also be done at a higher level of abstraction. I'm imagining some kind of load balancer that is represented by a Stream<Item = Transport>, and the client can automatically manage multiple transports, maybe round-robin style.
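Just to sketch the shape of that idea (this is not a tarpc API; the names here are hypothetical, and a plain TcpStream stands in for the transport):

use futures::stream::{self, Stream};
use tokio::net::TcpStream;

// Hypothetical: an endless Stream of freshly connected transports to one
// backend. A multi-backend load balancer could interleave several of these
// and hand the client a new transport, round-robin, whenever one dies.
fn transports(addr: std::net::SocketAddr) -> impl Stream<Item = std::io::Result<TcpStream>> {
    stream::unfold(addr, |addr| async move {
        Some((TcpStream::connect(addr).await, addr))
    })
}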
With something like stubborn-io this could mostly be a drop-in replacement for TCP.
Yes, it seems stubborn-io is a good reconnect implementation; are we considering integrating the two?
I'm not sure we need any specific integration in tarpc itself, since as @tikue said, we can already plug in our own transport layer. For instance, one can create a client that tries to reconnect every second using stubborn-io:
use std::{io, iter, time::Duration};
use stubborn_io::{ReconnectOptions, StubbornTcpStream};
// Json is the codec re-exported by tarpc's serde-transport feature.
use tarpc::{client::Config, serde_transport::Transport, tokio_serde::formats::Json};
use tokio::net::ToSocketAddrs;

// `Client` is the client type generated by the #[tarpc::service] macro for
// your service trait.
pub async fn client_connect<A: ToSocketAddrs + Unpin + Clone + Send + Sync + 'static>(
    addr: A,
) -> io::Result<Client> {
    // Never give up: retry the initial connect too, once per second, forever.
    let reconnect_opts = ReconnectOptions::new()
        .with_exit_if_first_connect_fails(false)
        .with_retries_generator(|| iter::repeat(Duration::from_secs(1)));
    let tcp_stream = StubbornTcpStream::connect_with_options(addr, reconnect_opts).await?;
    let transport = Transport::from((tcp_stream, Json::default()));
    Client::new(Config::default(), transport).spawn()
}
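For completeness, a hypothetical call site: this assumes the service is defined with the #[tarpc::service] macro, that Client above is its generated client type, and that hello is one of its methods (substitute your own RPCs):

let client = client_connect("127.0.0.1:5000").await?;
// `hello` is a stand-in for whatever methods your service trait defines.
let reply = client.hello(tarpc::context::current(), "world".to_string()).await?;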
I'm just not sure how this example works in practice. I assume the Sink gets full at some point and requests will start failing. Will the client be able to recover from that? Also, will requests time out?
The field client::Config.pending_request_buffer controls how large the Sink can grow. When it reaches capacity, the response Future will remain in a pending state until there's room to add the request to the outbound request buffer. So it applies backpressure without erroring out the call.
Currently, the client-side timeout that enforces the request deadline is only applied to the response future after the request is successfully buffered. This arguably should change, though, so that a large request backlog also times out requests that haven't yet been buffered.
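Until then, one workaround sketch: bound the whole call, buffering included, with an external timeout (tokio::time::timeout here; client.hello is the same hypothetical method as above):

use tokio::time::{timeout, Duration};

// Errs with Elapsed if the request can't be buffered and served in time,
// regardless of whether it ever reached the outbound request buffer.
let reply = timeout(
    Duration::from_secs(5),
    client.hello(tarpc::context::current(), "world".to_string()),
)
.await;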