core/muxing: Use total number of alive inbound streams for backpressure
Description
A PoC implementation for what is described in https://github.com/libp2p/rust-libp2p/issues/2865.
Links to any relevant issues
- https://github.com/libp2p/rust-libp2p/issues/2863
Open tasks
- [x] Merge https://github.com/libp2p/rust-libp2p/pull/2861
- [x] Remove `SubstreamBox::new` from public API of `libp2p-core`
- [ ] Bump version of `libp2p-core` to 0.37.0
- [ ] Bump version of `libp2p-swarm` to 0.40.0
Open Questions
Change checklist
- [x] I have performed a self-review of my own code
- [x] I have made corresponding changes to the documentation
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] A changelog entry has been made in the appropriate crates
Thanks for writing this POC!
From what I can see here, I am in favor of this proposal.
I would like to only merge here once we have at least one user of this API. I would guess that the best user for this would be `libp2p-metrics`.
Yes, makes sense. I see the first user in `swarm::Connection`, but for that, #2861 needs to be merged first. Together with #2861, this will give us actual backpressure on the number of open streams and not just the ones that are currently negotiating.
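For illustration, here is a rough sketch of what that backpressure could look like in the connection task. The names (`alive_inbound_streams`, `max_inbound_streams`, `StreamCounterGuard`, `poll_next_inbound`) are simplifications invented for this sketch, not the actual rust-libp2p internals:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll};

/// Guard that keeps the count of alive inbound streams accurate: it is
/// created when a stream is accepted and decrements the count on drop,
/// i.e. when the stream is closed.
struct StreamCounterGuard(Arc<AtomicUsize>);

impl Drop for StreamCounterGuard {
    fn drop(&mut self) {
        self.0.fetch_sub(1, Ordering::SeqCst);
    }
}

struct Connection {
    alive_inbound_streams: Arc<AtomicUsize>,
    max_inbound_streams: usize,
    // muxer, handler, ... omitted in this sketch
}

impl Connection {
    fn poll_next_inbound(&mut self, _cx: &mut Context<'_>) -> Poll<StreamCounterGuard> {
        // Backpressure: while the limit is reached, don't poll the muxer for
        // new inbound streams at all, so the remote's stream stays pending.
        // (A real implementation would also make sure the task is woken
        // again once a stream is dropped.)
        if self.alive_inbound_streams.load(Ordering::SeqCst) >= self.max_inbound_streams {
            return Poll::Pending;
        }

        // Here the real implementation would poll the muxer for a new stream
        // and hand it to the `ConnectionHandler` together with the guard.
        self.alive_inbound_streams.fetch_add(1, Ordering::SeqCst);
        Poll::Ready(StreamCounterGuard(self.alive_inbound_streams.clone()))
    }
}
```

The key difference to the status quo is that the count only drops once the stream is actually closed, not once negotiation finishes.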
While we could emit a new `SwarmEvent` on each new stream, I wonder how we could inform `libp2p-metrics` once a stream closed. Next to the number of substreams, it would be wonderful to expose the protocol that is being negotiated on the substream as well. That would enable us to expose the following metric:
`libp2p_swarm_stream { direction = "inbound", protocol = "/libp2p/kad/1.0.0" } 10`
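For reference, a minimal sketch of how such a metric could be declared with the `prometheus-client` crate. The metric and label names are taken from the example above; the exact registration/encoding API differs between crate versions, and the function names here are made up for the sketch:

```rust
use prometheus_client::metrics::family::Family;
use prometheus_client::metrics::gauge::Gauge;
use prometheus_client::registry::Registry;

/// Labels as (name, value) pairs; `prometheus-client` can encode these
/// directly as a label set.
type StreamLabels = Vec<(String, String)>;

fn register_stream_gauge(registry: &mut Registry) -> Family<StreamLabels, Gauge> {
    let streams = Family::<StreamLabels, Gauge>::default();
    registry.register(
        "libp2p_swarm_stream",
        "Number of alive streams per direction and protocol",
        streams.clone(),
    );
    streams
}

fn on_inbound_stream_opened(streams: &Family<StreamLabels, Gauge>, protocol: &str) {
    streams
        .get_or_create(&vec![
            ("direction".to_string(), "inbound".to_string()),
            ("protocol".to_string(), protocol.to_string()),
        ])
        .inc();
    // A matching `.dec()` would be needed once the stream closes, which is
    // exactly the open question above.
}
```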
I did something like this in the past. If we make `swarm::Connection` aware of metrics, it is fairly easy to do.
@mxinden Updated the PR description with an open question.
@mxinden In https://github.com/libp2p/rust-libp2p/discussions/2957, a use case for tracking the number of substreams in metrics came up.
I can't think of a way of recording this though if we don't want to take on a dependency on `prometheus-client`. If we had a metric instance inside `Connection`, I could update it on every loop iteration. We could emit events every time we open or accept a new substream?
@AgeManning I'd be curious what you think of this PR in regards to what it changes in gossipsub (534c3f6 (#2878)).
The gist is that we can now enforce how many substreams to accept from the remote. Given that GossipSub should only ever have one inbound stream, I set this limit to one.
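To illustrate the shape of that, here is a reduced stand-in (not the real `ConnectionHandler` trait, which has many more items) showing how a handler like gossipsub's could advertise its limit:

```rust
/// Reduced stand-in for the `ConnectionHandler` trait, showing only the
/// per-connection inbound stream limit discussed in this PR.
trait InboundStreamLimit {
    /// Maximum number of inbound streams this handler is willing to keep alive.
    fn max_inbound_streams(&self) -> usize {
        // Arbitrary default for this sketch.
        32
    }
}

struct GossipsubHandler;

impl InboundStreamLimit for GossipsubHandler {
    fn max_inbound_streams(&self) -> usize {
        // Gossipsub only ever uses a single inbound stream per peer, so any
        // additional inbound stream can be refused.
        1
    }
}
```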
Yeah, the changes look fine to me. We only have one inbound substream as you've pointed out, so I don't see any issue here.
I can't reproduce this error on my machine. Has anyone else had any luck?
Also, can you have another look at this @mxinden? I implemented a per-connection configurable limit now, as requested in https://github.com/libp2p/rust-libp2p/pull/2878#discussion_r983655065 :)
I think long term this is something we should be doing. I am not yet sure whether the current implementation allows collaborative usage of these limits across `ConnectionHandler` implementations.
Do you think the proposed implementation is an improvement?
I think it is but I agree with you that it can be even better.
In my eyes, before we do this, we should tackle https://github.com/libp2p/rust-libp2p/issues/2863 first, thus simplifying the sets (pending-multistream, pending-upgrade, negotiated) of inbound streams.
Funny, I see this as a step towards that solution. With this PR, a substream counts towards a limit as soon as it exists and not just during the upgrade phase.
The next idea (but it could be done in parallel) for #2863 is to make less use of upgrades in other protocols and instead build a `FuturesUnordered`-like primitive that implementations can use to do the upgrade themselves.
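As an illustration of that idea (the type and method names here are placeholders, not the real rust-libp2p ones), a handler could own the set of in-flight upgrades itself:

```rust
use futures::future::BoxFuture;
use futures::stream::{FuturesUnordered, StreamExt};
use std::task::{Context, Poll};

// Placeholder types standing in for a raw muxer stream and a fully
// negotiated, protocol-specific stream.
struct RawStream;
struct NegotiatedStream;

#[derive(Default)]
struct Handler {
    // In-flight protocol negotiations, driven by the handler itself rather
    // than by a separate upgrade phase inside the connection.
    inbound_upgrades: FuturesUnordered<BoxFuture<'static, NegotiatedStream>>,
}

impl Handler {
    fn on_new_inbound(&mut self, stream: RawStream) {
        self.inbound_upgrades.push(Box::pin(async move {
            // Run protocol negotiation (e.g. multistream-select) here.
            let _raw = stream;
            NegotiatedStream
        }));
    }

    fn poll(&mut self, cx: &mut Context<'_>) -> Poll<NegotiatedStream> {
        match self.inbound_upgrades.poll_next_unpin(cx) {
            Poll::Ready(Some(negotiated)) => Poll::Ready(negotiated),
            _ => Poll::Pending,
        }
    }
}
```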
Another thought would be to redesign `max_inbound_streams` to be closer to `Sink::poll_ready`: `fn poll_new_inbound_stream_ready(&self, cx) -> Poll<()>`. A `ConnectionHandler` could thus signal in an async way whether it is willing to accept new streams.
This is an interesting idea! Should a `ConnectionHandler` perhaps have multiple poll functions, as sketched below?

- `poll_ready_inbound`
- `poll_new_outbound`
- `poll_event`
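For the sake of discussion, the trait could then roughly take the following shape. All of these signatures are hypothetical and merely mirror `Sink::poll_ready`; none of them exist in `libp2p-swarm` as-is:

```rust
use std::task::{Context, Poll};

// Placeholder types for this sketch.
struct OutboundStreamRequest;
struct Event;

/// Hypothetical split of `ConnectionHandler::poll` into dedicated methods,
/// in the spirit of `Sink::poll_ready`.
trait ConnectionHandlerSketch {
    /// Readiness to accept another inbound stream (the backpressure signal).
    fn poll_ready_inbound(&mut self, cx: &mut Context<'_>) -> Poll<()>;

    /// Ask the handler whether it wants a new outbound stream to be opened.
    fn poll_new_outbound(&mut self, cx: &mut Context<'_>) -> Poll<OutboundStreamRequest>;

    /// Events destined for the `NetworkBehaviour` / the swarm.
    fn poll_event(&mut self, cx: &mut Context<'_>) -> Poll<Event>;
}
```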
Also adding this to the agenda for the next rust-libp2p community call.
👍
Setting to draft until we reach a decision.
Closing because stale.