webrtc: close data channels cleanly
WebRTC data channel close is a synchronous procedure: we close our outgoing stream, and in response the peer is expected to close its outgoing stream. If the peer doesn't close its side of the stream, we end up with a memory leak, because the SCTP transport keeps a reference to the stream. So we count invalid data channel closures, and when the count goes over a threshold we close the connection.
For our custom purposes we could fork SCTP and implement a unilateral stream reset, which is feasible because we already have a state machine on top of the data channels. But for an RFC-compliant SCTP implementation, this is how the spec is supposed to work. SCTP stream numbers are limited (uint16), so we do need to reuse stream IDs, which forces us to use a synchronous close mechanism.
Was there some scenario that we've run into that hits this case? Is it expected to see a remote peer not ack the close? Or is this preventative?
It is preventative: a peer can create 32k streams, and this is not configurable in pion :( So the total memory for the streams = 32k * 200B/stream = 6.4MB. Add 1MB for the receive window in the connection, and we are roughly at 10MB.
This would have been fine, but pion/sctp has a receive window calculation that only counts the size of the userData in chunks, not the whole chunk size. This leads to a 100x amplification of memory used versus memory accounted for in the receive window, so a peer can use 100MB of memory per connection by creating 10k data channels and sending 100 1-byte chunks on every channel: 10k * 100 * 100 = 100MB.
profile.pb.gz — created by the benchmark in branch: https://github.com/libp2p/go-libp2p/tree/webrtc-bm
An alternative is to limit our receive window to 100kB and reserve 10MB of memory for every WebRTC connection (thanks @marten-seemann for the idea). This would slow things down but is much simpler.
> This would have been fine, but pion/sctp has a receive window calculation that only counts the size of the userData in chunks, not the whole chunk size. This leads to a 100x amplification of memory used versus memory accounted for in the receive window, so a peer can use 100MB of memory per connection by creating 10k data channels and sending 100 1-byte chunks on every channel: 10k * 100 * 100 = 100MB.
I'm missing where the second 100 comes from: 10k (data channels) * 100 (1 byte chunks) * 100 (?) = 100 MB
> profile.pb.gz — created by the benchmark in branch: https://github.com/libp2p/go-libp2p/tree/webrtc-bm
This 404s for me.