webrtc: investigate resource usage
@ckousik reports that running WebRTC is expensive, and a significant amount of CPU can be spent on handling a comparatively small number of WebRTC connections.
We should investigate how expensive it actually is. Our current DoS mitigation assumes that handling connections doesn't come with a large overhead in terms of CPU, and that the main cost is imposed by handling streams, in particular by the memory consumed on streams. This might not hold true for WebRTC.
If that's the case, we'll need to impose strict limits on the number of concurrent WebRTC connections that a node is willing to handle.
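If we do end up needing such a cap, a minimal sketch of one way to enforce it is below, using go-libp2p's ConnectionGater hook. The `webrtcGater` type, the `maxWebRTCConns` value, and the multiaddr check are illustrative assumptions, not existing go-libp2p APIs; a production version would also need to decrement the counter when connections close (e.g. via a `network.Notifiee`).

```go
package webrtclimit

import (
	"sync/atomic"

	"github.com/libp2p/go-libp2p/core/control"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	ma "github.com/multiformats/go-multiaddr"
)

const maxWebRTCConns = 16 // illustrative cap, to be tuned by benchmarks

// webrtcGater rejects inbound WebRTC connections beyond a fixed cap.
type webrtcGater struct {
	conns atomic.Int64
}

func isWebRTC(addr ma.Multiaddr) bool {
	// Treat any address with a /webrtc component as a WebRTC connection.
	_, err := addr.ValueForProtocol(ma.P_WEBRTC)
	return err == nil
}

func (g *webrtcGater) InterceptAccept(addrs network.ConnMultiaddrs) bool {
	if !isWebRTC(addrs.RemoteMultiaddr()) {
		return true // only WebRTC connections are capped
	}
	if g.conns.Add(1) > maxWebRTCConns {
		g.conns.Add(-1)
		return false
	}
	return true
}

// The remaining ConnectionGater methods allow everything.
func (g *webrtcGater) InterceptPeerDial(peer.ID) bool               { return true }
func (g *webrtcGater) InterceptAddrDial(peer.ID, ma.Multiaddr) bool { return true }
func (g *webrtcGater) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) bool {
	return true
}
func (g *webrtcGater) InterceptUpgraded(network.Conn) (bool, control.DisconnectReason) {
	return true, 0
}
```

It could then be wired in with `libp2p.New(libp2p.ConnectionGater(&webrtcGater{}))`.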
@ckousik : can we link to the benchmarks that were run?
@ckousik is picking this up today
@ckousik built out the test for this. He was blocked while waiting for hosted resources, but was given the green light from @p-shahi to acquire our own instances over the break, so will pick this work back up this week.
Just curious what the status is here.
I don't think it's all that different from @ddimaria's update.
@ckousik, can you provide an update here? @marten-seemann: in order to have the output match up with other transport metrics you've collected in the past, can you point @ckousik to that output?
What is the done criteria here?
@ckousik : what did you have in mind? @marten-seemann : what are you expecting?
@ckousik : thanks for sharing. We're ultimately going to need @marten-seemann to weigh in.
A couple of things:
- Are there any next steps you'd recommend of these findings?
- Was there anything that surprised you?
- It would be great to attach the notebook so someone can review the setup and confirm/verify the experiment methodology. (I personally don't have a Datadog account.)
- As for next steps, Glen is working on optimisation passes over the current PR.
- Couple of things that stood out to me:
- Pion has issues with datachannel ID reuse. We have a workaround for this, but are holding off on investigating the issue in Pion. The corresponding issue in Pion can be found here: https://github.com/pion/webrtc/issues/2258
- Pion rate limits the creation of streams: at most 16 SCTP streams can be pending acceptance at any given time. This value is hardcoded and not configurable, so we have to ramp up the number of connections and streams gradually.
- @BigLep Glen is also going to be running the tests and verifying them. Is there anything you would prefer in place of datadog?
Thanks @ckousik.
For 3, the key thing is to enable reproducibility. This is twofold:
- Easily enable others to spot-check the methodology. It can be useful to make sure that the test parameters, configuration, etc. are what we expect.
- If someone looks at this issue six months from now, and assuming we're all gone, they should be able to understand how we arrived at these results. At a minimum, let's have a way to see the config/code that was used. Attaching an .ipynb notebook is fine, or a gist, etc. We just want to avoid the case of future folks not having access to your Datadog and then not being able to verify / understand what was executed.
Sorry, I had misunderstood the notebook as a DataDog notebook. The test code is present here: https://github.com/little-bear-labs/libp2p-webrtc-bench
> The test code is present here: https://github.com/little-bear-labs/libp2p-webrtc-bench
Looks like this is now included in the PR itself, right? Maybe you can archive this repo?
> We have a workaround for this, but are holding off on investigating the issue in Pion.
Was this workaround in the go-libp2p PR (I couldn't find a reference to https://github.com/pion/webrtc/issues/2258 in a comment) or elsewhere? Can you link to it?
@p-shahi We manually assign stream IDs here: https://github.com/libp2p/go-libp2p/pull/1999/files#diff-f3e8c67f01e1cd4597f5d58558db1e0e28f21be14b640d8e31282eb9580476aaR310-R320. I'll add a comment linking to pion/webrtc#2258.
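For context, the general shape of that workaround in pion terms is sketched below (illustrative, not the exact PR code): pre-negotiated data channels let the caller pick the stream ID explicitly, so pion never assigns or reuses an ID on its own.

```go
package webrtcids

import "github.com/pion/webrtc/v3"

// openChannelWithID creates a pre-negotiated data channel with an explicit
// stream ID. Because Negotiated is set, pion skips the in-band DCEP
// negotiation and uses exactly the ID we pass, so the caller controls reuse.
func openChannelWithID(pc *webrtc.PeerConnection, id uint16) (*webrtc.DataChannel, error) {
	negotiated := true
	return pc.CreateDataChannel("", &webrtc.DataChannelInit{
		Negotiated: &negotiated,
		ID:         &id,
	})
}
```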
Resource usage investigation is being done as part of this PR: https://github.com/libp2p/go-libp2p/pull/1999
Status update: @glendc to provide Pion bottlenecks report sometime in the next two weeks
We have 1 fix in: https://github.com/pion/mdns/pull/172
We still need to:
1. Remove the closed datachannels from the `pion/webrtc.PeerConnection` object.
2. Fix a memory leak in pion/sctp.
Tracking issue for 1: https://github.com/pion/webrtc/issues/2672
For
> Remove the closed datachannels from the `pion/webrtc.PeerConnection` object

the PR in webrtc decreases memory usage from 31MB to 17MB when running 10 connections and echoing 1MB over 100 different streams (1GB total data transferred). The rest of the memory use is fixed-size allocations that would take more effort to reduce (1MB buffers in sctp and 1MB buffers for reads from ICE connections).
The benchmarks are in branch `webrtc-bm-2`.
Setup: two EC2 instances, c5.xlarge (4 cores, 8GB RAM), in us-east-1 and us-west-2. Ping time is 60ms. BDP is 40MB assuming a 5Gbps link.
- All bandwidth numbers are in megabits per second (Mb/s). All buffer sizes are in bytes.
- `maxBufferedAmount` here is the amount of data we write on the channel before waiting for an ACK (see the sketch after this list).
- Scenario: ping-pong 10MB repeatedly on multiple streams. The total number of streams is conns × streams in the tables below.
- CPU usage does not depend on the number of connections.
- Per percent of CPU used we get roughly 6-8 Mb/s of throughput, so 3% CPU usage gives us ~25 Mb/s.
- Single-stream throughput is limited by `maxBufferedAmount`: for a 100kB buffered amount we get 12 Mb/s, and for 32kB we get 3 Mb/s. Throughput is lower at higher latencies because we write the same amount on the channel but have to wait longer for the ACK.
- The maximum throughput we reach is 550-600 Mb/s.
- Roughly, 100kB of buffer on either side translates to 12 Mb/s of throughput: a 100kB recv buf caps a connection at 12 Mb/s while a 1MB recv buf allows 130 Mb/s, and a 100kB send buffer gives a stream 12 Mb/s while a 32kB send buffer gives 3 Mb/s.
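To make the `maxBufferedAmount` mechanics referenced in the list above concrete, here is a minimal sketch of the write-side backpressure, assuming pion/webrtc v3's data channel API (the helper and package names are ours):

```go
package webrtcbp

import "github.com/pion/webrtc/v3"

const maxBufferedAmount = 100_000 // 100kB, one of the benchmarked values

// newBackpressureWriter returns a write function that blocks once more than
// maxBufferedAmount bytes are queued on the data channel, and resumes when
// the queue drains below half that.
func newBackpressureWriter(dc *webrtc.DataChannel) func([]byte) error {
	resume := make(chan struct{}, 1)
	dc.SetBufferedAmountLowThreshold(maxBufferedAmount / 2)
	dc.OnBufferedAmountLow(func() {
		select {
		case resume <- struct{}{}:
		default:
		}
	})
	return func(data []byte) error {
		if err := dc.Send(data); err != nil {
			return err
		}
		if dc.BufferedAmount() > maxBufferedAmount {
			<-resume // wait for the ACKs that drain the send queue
		}
		return nil
	}
}
```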
- recv buf: 1MB, maxBufferedAmount: 32kB
  - On a single stream we get 3 Mb/s throughput
  - On a single connection we can get a maximum throughput of 130 Mb/s
conns | streams | throughput (Mb/s) | CPU (%) |
---|---|---|---|
1 | 1 | 3 | 1 |
1 | 10 | 40 | 6 |
1 | 40 | 130 | 19 |
1 | 100 | 130 | 20 |
2 | 1 | 6 | 1 |
2 | 40 | 265 | 38 |
10 | 1 | 30 | 4.5 |
10 | 10 | 280 | 50 |
20 | 10 | 350 | 55 |
50 | 1 | 155 | 20 |
100 | 5 | 500 | 70 |
- recv buf: 100kB, maxBufferedAmount: 32kB
  - On a single stream we get 3 Mb/s throughput
  - On a single connection we get a maximum throughput of 12 Mb/s
conns | streams | throughput (Mb/s) | CPU (%) |
---|---|---|---|
1 | 1 | 3 | 1 |
1 | 20 | 12 | 3 |
10 | 10 | 120 | 13 |
20 | 10 | 240 | 27 |
20 | 20 | 240 | 26 |
40 | 10 | 450 | 50 |
60 | 10 | 600 | 65 |
70 | 10 | 600 | 75 |
- recv buf: 1MB, maxBufferedAmount: 100kB
  - On a single stream we get 12 Mb/s throughput
  - On a single connection we get a maximum throughput of 130 Mb/s
conns | streams | throughput (Mb/s) | CPU (%) |
---|---|---|---|
1 | 1 | 12 | 2 |
1 | 10 | 120 | 18 |
1 | 50 | 140 | 19 |
2 | 10 | 260 | 33 |
4 | 10 | 450 | 70 |
5 | 2 | 130 | 17 |
5 | 50 | 580 | 77 |
10 | 10 | 450 | 75 |
30 | 1 | 400 | 45 |
40 | 1 | 520 | 57 |
50 | 1 | 520 | 62 |
I quite like the idea of a 100kB receive buf. The performance seems acceptable for now, and the peer can enqueue 10x less data on the SCTP layer.
> Per percent of CPU used we get roughly 6-8 Mb/s of throughput, so 3% CPU usage gives us ~25 Mb/s.

> Single-stream throughput is limited by maxBufferedAmount: for a 100kB buffered amount we get 12 Mb/s, and for 32kB we get 3 Mb/s. Throughput is lower at higher latencies because we write the same amount on the channel but have to wait longer for the ACK.
Does this mean we can limit CPU usage by using `maxBufferedAmount`, or is there a better way to accomplish that?
Applications can limit their throughput if they want to limit CPU usage. For now we aren't going to expose `maxBufferedAmount` to the go-libp2p user.
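To illustrate what that looks like for an application, here is a sketch of pacing writes with a token bucket from golang.org/x/time/rate; the function, package name, and the 12 Mb/s figure (taken from the 100kB recv buf tables above) are illustrative, not a go-libp2p API:

```go
package pacing

import (
	"context"

	"github.com/libp2p/go-libp2p/core/network"
	"golang.org/x/time/rate"
)

const chunkSize = 64 * 1024

// limiter allows ~12 Mb/s, expressed in bytes per second.
var limiter = rate.NewLimiter(rate.Limit(12_000_000/8), chunkSize)

// pacedWrite writes data to the stream in chunks, waiting on the token
// bucket before each chunk so overall throughput (and hence CPU) is bounded.
func pacedWrite(ctx context.Context, s network.Stream, data []byte) error {
	for len(data) > 0 {
		chunk := data
		if len(chunk) > chunkSize {
			chunk = chunk[:chunkSize]
		}
		if err := limiter.WaitN(ctx, len(chunk)); err != nil {
			return err
		}
		if _, err := s.Write(chunk); err != nil {
			return err
		}
		data = data[len(chunk):]
	}
	return nil
}
```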
The source code for @sukunrt's test is here: https://github.com/libp2p/go-libp2p/tree/webrtc-echobm/examples/echobm
> recv buf: 100kB, maxBufferedAmount: 32kB
I agree that this one seems like a good option. It is fast enough while using little CPU. If use cases appear that need to optimize past 12 Mb/s on WebRTC connections, we can expose an option to tune the recv buffer and the maxBufferedAmount. But defaulting to conservative resource usage seems better to me.
I think we can close this issue, as @sukunrt's report and code are enough.
@SgtPooki the better way to do that is to limit the SCTP receive buffer to 100kB, as this affects all streams on a connection.
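For reference, pion exposes that receive-buffer knob on its SettingEngine (assuming pion/webrtc v3); go-libp2p would set this internally rather than exposing it to users. A sketch:

```go
package webrtccfg

import "github.com/pion/webrtc/v3"

// newAPI returns a pion API whose PeerConnections use a ~100kB SCTP receive
// buffer, the value discussed above; this buffer is shared by all streams
// on a connection.
func newAPI() *webrtc.API {
	se := webrtc.SettingEngine{}
	se.SetSCTPMaxReceiveBufferSize(100_000)
	return webrtc.NewAPI(webrtc.WithSettingEngine(se))
}
```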
Apologies for the poor naming; I used the term we use in code. The `maxBufferedAmount` is the send buffer we have per stream. So increasing `maxBufferedAmount` increases the throughput per stream, and also the CPU used, since higher throughput needs more CPU. Changing this number alone still won't increase the per-connection (sum of all streams) CPU used, because that is limited by the receive buffer, which is shared across all streams. The receive buffer is shared across all streams because SCTP doesn't have per-stream flow control.
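For intuition, these numbers line up with the usual window/RTT bound: with the 60ms RTT from the setup above, a 100kB per-stream send buffer allows at most 100kB / 60ms ≈ 13 Mb/s per stream (measured: 12 Mb/s), a 32kB buffer allows ≈ 4 Mb/s (measured: 3 Mb/s), and a 1MB shared receive buffer caps the whole connection at ≈ 133 Mb/s (measured: 130 Mb/s).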
This should be done through the resource manager; don't hardcode values, please.
That's a much larger change, since the underlying SCTP connection's receive buffer is a fixed-size buffer and doesn't support window updates.