go-libp2p-pubsub
Option to use variable length queue to avoid dropping message on burst
This package has a bunch of pressure-release mechanisms to avoid breaking or using too much memory when things get clogged under load. The downside is that pubsub can drop some messages under load, even if a message did find its way across the network. This makes sense most of the time.
There are at least three spots where that can happen:
- validation queue (tunable with pubsub.WithValidateQueueSize)
- outbound queue (tunable with pubsub.WithPeerOutboundQueueSize)
- topic subscribe output queue (soon to be tunable as well)
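For reference, here is a minimal sketch of how the two existing knobs can be raised today. The option names come from this package; the sizes are arbitrary examples and the import paths may differ between versions:

```go
package main

import (
	"context"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/host"
)

// newBurstTolerantPubSub raises the bounded queues listed above. Bigger
// buffers only reduce the chance of drops under bursts; they do not remove
// the bound, which is what this issue is asking for.
func newBurstTolerantPubSub(ctx context.Context, h host.Host) (*pubsub.PubSub, error) {
	return pubsub.NewGossipSub(ctx, h,
		pubsub.WithValidateQueueSize(4096),
		pubsub.WithPeerOutboundQueueSize(1024),
	)
}
```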
I believe some applications (like mine) would benefit from being able to tell pubsub to use as much memory as necessary and not drop messages in a heavy-load scenario. Of course, that means the application exposes itself to being OOMed, but that can be easier to predict and handle than messages semi-randomly disappearing.
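Roughly, by "variable length queue" I mean something that buffers in memory instead of dropping when the consumer falls behind. A minimal, hypothetical sketch (the package and function names are mine, nothing here is part of pubsub):

```go
// Package pubsubutil is a hypothetical home for this sketch.
package pubsubutil

// unboundedBridge forwards values from in to out without ever dropping,
// buffering in memory whenever the consumer falls behind. Memory use is
// unbounded by design, which is exactly the trade-off described above.
func unboundedBridge[T any](in <-chan T, out chan<- T) {
	var buf []T
	for {
		if len(buf) == 0 {
			// Nothing buffered: block until the producer sends something.
			v, ok := <-in
			if !ok {
				close(out)
				return
			}
			buf = append(buf, v)
		}
		select {
		case v, ok := <-in:
			if !ok {
				// Producer is done: drain what is left, then close.
				for _, q := range buf {
					out <- q
				}
				close(out)
				return
			}
			buf = append(buf, v)
		case out <- buf[0]:
			buf = buf[1:]
		}
	}
}
```

A production version would reuse or shrink the backing slice (or use a ring buffer) to keep allocations and retained memory in check, but the behaviour is the point: nothing ever gets dropped.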
@vyzo does this seem like a reasonable idea if someone wanted to implement it?
yeah, for subscriptions definitely; and it shouldn't be too hard.
Hey @vyzo,
Has there been any effort recently regarding this specific issue?
no, do you want to take it on? Shouldn't be too hard for subscriptions.
also note that validation of published messages is now synchronous and cannot be dropped.
@vyzo Could this be extended to include the outbound/publishing queue as well? I have an application with similar requirements to @MichaelMure's which is also expected to publish in rather large bursts.
Separately, would it be reasonable to define a Queue interface and allow users to pass in their own implementation? This would make it possible to tune the runtime performance of unbounded queues. For example, one might use an implementation based on VList to reduce allocations in an unbounded queue.
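Something like the following, as a rough sketch (the interface and its methods are hypothetical, not an existing pubsub API):

```go
package pubsubutil

import (
	"context"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// Queue is a hypothetical interface that pubsub's internal bounded channels
// could be swapped for. A bounded implementation may reject or drop when
// full; an unbounded one (slice-, ring-, or VList-backed) never does.
type Queue interface {
	// Push enqueues a message; a bounded queue may return an error when full.
	Push(m *pubsub.Message) error
	// Pop blocks until a message is available or ctx is done.
	Pop(ctx context.Context) (*pubsub.Message, error)
	// Len reports how many messages are currently buffered.
	Len() int
}
```

An option along the lines of pubsub.WithSubscriptionQueue(q Queue) (again hypothetical) could then let applications pick whatever trade-off between memory and drops suits them.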
Probably, yeah. All very reasonable propositions.