rippled
Allow peer bandwidth management
I would like to run a rippled server from my home, but I can't leave it running because the upload demands from peer servers sometimes spike very high. This can interfere with other users of my home's network bandwidth (e.g. my roommate streaming himself on Twitch.tv).
My server does not need to use all of this upstream bandwidth to keep in sync with the network. However, because I serve a bunch of history shards, peers tend to download a lot of data from my server. My home internet doesn't have a cap on monthly bandwidth usage, so I'm perfectly happy to serve that data with my spare bandwidth, but I don't want it to interfere with other users in my house.
I would like to be able to control how much upstream bandwidth my server uses at a time. I could use a bandwidth shaper like Trickle, but that can only limit usage for the server as a whole. Ideally I'd like more fine-grained control over different uses of bandwidth, such as partitioning the bandwidth for serving peers' historical data requests separately from the bandwidth needed to keep up with the network state. (It would also be informative to track such usage separately. I imagine those who run public client handlers would also be curious how much of their bandwidth is spent serving WebSocket and JSON-RPC requests, although I suppose that much is already possible with external tracking, since those usually aren't on the same port as the peer protocol.)
It would also be neat, although not strictly necessary for my purposes, to be able to disconnect or block specific peers, for example "insane" peers following a testnet fork, or peers who make excessive / abusive use of the peer protocol. I'm thinking of something along the lines of rbh, but built into the server and a little more flexible.
I am supportive. This kind of rate limiting isn't always possible for the peer protocol (if the network stream takes 2 Mbps, it takes 2 Mbps...), but improvements can be made and some soft limits can be added.
Generally, limits can be implemented using the new rate_policy support in Boost.Beast: see https://www.boost.org/doc/libs/1_71_0/libs/beast/doc/html/beast/using_io/rate_limiting.html.
Upstream can be shaped more easily than downstream at least. Having a limit for serving history vs. other protocol messages would already be a nice improvement.