
Starts stalling under load

Open AudriusButkevicius opened this issue 9 years ago • 43 comments

Using this benchmark:

package utp_test

import (
    "net"
    "testing"

    "github.com/anacrolix/utp"
)

func getTCPConnectionPair() (net.Conn, net.Conn, error) {
    lst, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        return nil, nil, err
    }

    var conn0 net.Conn
    var err0 error
    done := make(chan struct{})
    go func() {
        conn0, err0 = lst.Accept()
        close(done)
    }()

    conn1, err := net.Dial("tcp", lst.Addr().String())
    if err != nil {
        return nil, nil, err
    }

    <-done
    if err0 != nil {
        return nil, nil, err0
    }
    return conn0, conn1, nil
}

func getUTPConnectionPair() (net.Conn, net.Conn, error) {
    lst, err := utp.NewSocket("udp", "127.0.0.1:0")
    if err != nil {
        return nil, nil, err
    }

    var conn0 net.Conn
    var err0 error
    done := make(chan struct{})
    go func() {
        conn0, err0 = lst.Accept()
        close(done)
    }()

    conn1, err := utp.Dial(lst.Addr().String())
    if err != nil {
        return nil, nil, err
    }

    <-done
    if err0 != nil {
        return nil, nil, err0
    }

    return conn0, conn1, nil
}


func benchConnPair(b *testing.B, c0, c1 net.Conn) {
    b.ReportAllocs()
    b.SetBytes(128 << 10)
    b.ResetTimer()

    request := make([]byte, 52)
    response := make([]byte, (128<<10)+8)

    pair := []net.Conn{c0, c1}
    for i := 0; i < b.N; i++ {
        if i%2 == 0 {
            pair[0] = c0
            pair[1] = c1
        } else {
            pair[0] = c1
            pair[1] = c0
        }

        if _, err := pair[0].Write(request); err != nil {
            b.Fatal(err)
        }

        if _, err := pair[1].Read(request[:8]); err != nil {
            b.Fatal(err)
        }
        if _, err := pair[1].Read(request[8:]); err != nil {
            b.Fatal(err)
        }
        if _, err := pair[1].Write(response); err != nil {
            b.Fatal(err)
        }
        if _, err := pair[0].Read(response[:8]); err != nil {
            b.Fatal(err)
        }
        if _, err := pair[0].Read(response[8:]); err != nil {
            b.Fatal(err)
        }
    }
}

func BenchmarkTCP(b *testing.B) {
    conn0, conn1, err := getTCPConnectionPair()
    if err != nil {
        b.Fatal(err)
    }

    defer conn0.Close()
    defer conn1.Close()

    benchConnPair(b, conn0, conn1)
}

func BenchmarkUTP(b *testing.B) {
    conn0, conn1, err := getUTPConnectionPair()
    if err != nil {
        b.Fatal(err)
    }

    defer conn0.Close()
    defer conn1.Close()

    benchConnPair(b, conn0, conn1)
}


I get the following:

PASS
BenchmarkTCP-4      5000        263246 ns/op     497.91 MB/s          27 B/op          0 allocs/op
BenchmarkUTP-4  2016/05/26 23:21:51 error resending packet: write udp [::]:56345->127.0.0.1:56344: use of closed network connection
2016/05/26 23:21:51 error resending packet: write udp [::]:56345->127.0.0.1:56344: use of closed network connection
2016/05/26 23:21:51 error resending packet: write udp [::]:56345->127.0.0.1:56344: use of closed network connection
     100     134251556 ns/op       0.98 MB/s     1195857 B/op       4016 allocs/op

If you tweak the numbers a bit, you can get it down to 0.2 MB/s.

I did manage to get it to a point where it mostly never completes and fails with "connection closed", but that was using Syncthing's protocol wrappers, which do async reads/writes. In the cases where it did complete, it was around 0.3 MB/s.

AudriusButkevicius avatar May 26 '16 22:05 AudriusButkevicius

Is this a new trend, or has this benchmark been run against earlier revisions of uTP? This looks like a useful benchmark; I'll do some work with it.

anacrolix avatar May 27 '16 03:05 anacrolix

I've done some investigating. I've found similar issues with these kinds of tests before. Essentially, people forget that UDP is not guaranteed delivery, even over localhost. After reaching a certain level of UDP packet throughput (roughly 10k packets/s on a Mac?), packets start getting dropped for long periods of time. I've added the "inproc" test, which bypasses UDP and gives you a more theoretical limit on performance. I recommend you create more realistic, real-world tests, for example transfers over links with 1-2 ms latency, which would be a more representative lower bound for latency between machines.

If you want to continue chasing performance on localhost, I believe there are a few options:

  1. Improve the latency calculation, and retransmission code so that retries occur sooner.
  2. Rate limit UDP packets, so that they're slowed down before the system starts dropping them. I don't currently do this, except externally, in a few tests.
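
For illustration, here's a minimal sketch of option 2, assuming the socket's underlying net.PacketConn can be wrapped; the wrapper type, its names, and the use of golang.org/x/time/rate are hypothetical, not something this package does today:

package paced

import (
    "context"
    "net"

    "golang.org/x/time/rate"
)

// pacedPacketConn wraps a net.PacketConn and paces outgoing datagrams with a
// token bucket, smoothing bursts before the OS starts dropping packets.
type pacedPacketConn struct {
    net.PacketConn
    limiter *rate.Limiter
}

// WriteTo blocks until the limiter allows one more packet, then sends.
func (c *pacedPacketConn) WriteTo(b []byte, addr net.Addr) (int, error) {
    if err := c.limiter.Wait(context.Background()); err != nil {
        return 0, err
    }
    return c.PacketConn.WriteTo(b, addr)
}

// NewPacedPacketConn caps sends at packetsPerSecond, with a burst of one.
func NewPacedPacketConn(pc net.PacketConn, packetsPerSecond float64) net.PacketConn {
    return &pacedPacketConn{
        PacketConn: pc,
        limiter:    rate.NewLimiter(rate.Limit(packetsPerSecond), 1),
    }
}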

Let me know your thoughts, I'll do some more experimenting with this in the meanwhile.

anacrolix avatar May 27 '16 04:05 anacrolix

So under normal circumstances and a real-world scenario it seems to do just fine. I added this synthetic benchmark because we have one for TCP. I haven't tested previous revisions, just head, so I can't claim that it's a regression. I guess I'll add a sleep to the benchmarks and see if I can get it to show more realistic numbers, but I wonder if the package should handle situations like these, given TCP works well even under synthetic conditions?

AudriusButkevicius avatar May 27 '16 06:05 AudriusButkevicius

This is what I get on my machine:

$ go test -bench Syncthing -run @ github.com/anacrolix/utp
BenchmarkSyncthingTCP-4            10000        130547 ns/op    1004.01 MB/s          13 B/op          0 allocs/op
BenchmarkSyncthingUDPUTP-4           100      61124441 ns/op       2.14 MB/s     1350979 B/op       4229 allocs/op
BenchmarkSyncthingInprocUTP-4        500       2304482 ns/op      56.88 MB/s     1706268 B/op       4040 allocs/op

anacrolix avatar May 27 '16 06:05 anacrolix

So even if I add a 10 ms delay between each iteration of the benchmark, I still get a slew of "2016/05/27 19:18:27 error resending packet: write udp [::]:60754->127.0.0.1:60753: use of closed network connection" messages, and the rate is still fairly horrid, nowhere close to the 56 MB/s you get over a pipe pair.

AudriusButkevicius avatar May 27 '16 18:05 AudriusButkevicius

I agree that the implementation is capable of achieving 56 MB/s under ideal lossless conditions, but it seems that it really struggles when there is some packet loss?

AudriusButkevicius avatar May 27 '16 18:05 AudriusButkevicius

Open to improvements!

anacrolix avatar May 28 '16 04:05 anacrolix

I concur, it should be faster. What command are you running when you get those log messages?

anacrolix avatar May 28 '16 06:05 anacrolix

I am running the same command as you are in your example, yet I am on Windows, so perhaps the semantics are different.

To be honest, I am struggling to repro this now.

I've dug deeper into this, and usually the stalling happens when we need to receive the 128 KB buffer, but it properly dies when we get "timed out waiting for ack", which is not always, but fairly often, even on a local connection.

AudriusButkevicius avatar May 28 '16 10:05 AudriusButkevicius

Hm, that's interesting. The network stack does perform very differently between platforms when you push it. If you get "timed out waiting for ack", that's utp.Conn giving up completely. You shouldn't see this in tests unless your system isn't coping. I suspect that in your Syncthing tests, this happens when so many retries have occurred without getting through that the connection times out. The other log message, "use of closed network connection", is one I should be suppressing.

anacrolix avatar May 28 '16 11:05 anacrolix

So I think that "pushing it" is an overstatement, given it barely reaches 1 MB/s. Also, the number of Write() calls we produce is in the hundreds over a 10 s benchmark, so I don't think we're hitting a 10k packets per second problem here either (unless it's mostly uTP overhead).

I guess the other thing I could do is "record" the payloads I am sending and, at the point it hits the problem, try replaying them to see if I can repro it?

Is there any meaningful logging I could enable which would tell me about the state of the uTP connection (window sizes, etc.)?

AudriusButkevicius avatar May 28 '16 11:05 AudriusButkevicius

utp breaks up your Conn.Write calls into chunks of no more than 1202 bytes each. Those have to be retransmitted on failure, and acknowledged at intervals by the receiver.
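
As a rough worked example of why that matters here (my arithmetic, using only the 1202-byte figure above and the benchmark's response size):

package main

import "fmt"

func main() {
    const chunkSize = 1202         // max bytes per uTP data packet, per the comment above
    responseLen := (128 << 10) + 8 // bytes written by each benchmark iteration
    packets := (responseLen + chunkSize - 1) / chunkSize
    // Prints: 131080 bytes -> 110 data packets per iteration
    fmt.Printf("%d bytes -> %d data packets per iteration\n", responseLen, packets)
}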

There are several very important variables in a const block at the top of utp.go. There are also testing timeouts, set by setDefaultTestingDurations, for going over localhost.

envpprof exposes status information over HTTP on localhost for some tests, although it doesn't go down to the individual Conn currently; that would be very easy to add, see status.go. It's enabled by passing GOPPROF=http as an environment variable. The status is printed by the index handler at /; check the log while running tests for the port. Also see /debug/vars, where all the expvar variables are exported. Those are also listed at the top of utp.go.

anacrolix avatar May 28 '16 12:05 anacrolix

Well, it turns out the test was bad. I went to implement a purely UDP version to see if the UDP overhead is what was killing it. Then I discovered the test's reads and writes are synchronous, and mutually exclusive. I've fixed this, and here are the new benchmarks on my machine:

$ GOPPROF=http go test -bench Syncthing -run @ github.com/anacrolix/utp
2016/05/29 14:30:03 envpprof serving http://127.0.0.1:6061
PASS
BenchmarkSyncthingTCP-4            20000         89890 ns/op    1458.80 MB/s         112 B/op         13 allocs/op
BenchmarkSyncthingUDPUTP-4           500       2419213 ns/op      54.20 MB/s     1032166 B/op       2027 allocs/op
BenchmarkSyncthingInprocUTP-4       1000       1185226 ns/op     110.64 MB/s     1247499 B/op       1938 allocs/op

Much nicer. This would also explain why all my attempted performance tweaks seemed to have no impact: the first ~64 KB was written in each test, and the rest would have been struggled out through incessant resends, as the uTP window isn't as high as the TCP one on my system. I would guess the TCP window is lower on your Windows machine too, which would explain your occasional timeouts. If the Write stalls because the window is full, it never gets to the read, when you do a 128 KB synchronous read like the test did.
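
The actual change lives in the repo's benchmarks, but as a sketch of the shape it takes (names from the benchmark above; assumes "io" is imported), the large write moves into a goroutine so the reader can drain data while the send window fills:

errc := make(chan error, 1)
go func() {
    _, err := pair[1].Write(response)
    errc <- err
}()
// Read into a separate buffer, since the goroutine is still reading response.
got := make([]byte, len(response))
if _, err := io.ReadFull(pair[0], got); err != nil {
    b.Fatal(err)
}
if err := <-errc; err != nil {
    b.Fatal(err)
}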

anacrolix avatar May 29 '16 04:05 anacrolix

I am not sure I understand why synchronous, mutually exclusive reads and writes would cause problems.

So I understand that you changed the test to fix this, but it's a bit of a mystery to me why the same code works with TCP without the fix.

It makes me think that I have to write the application in a specific way in order to use uTP and avoid stalling.

AudriusButkevicius avatar May 29 '16 09:05 AudriusButkevicius

Change your original test to use io.ReadFull in place of your Read calls. Read can return early, without all the bytes. The fix doesn't even need concurrent reads and writes: I've tried this on your originally provided test, and it also corrects the issue. I won't add it to my standard suite of tests, as it only coincidentally works, thanks to sufficient window sizes.

import (
    "io"
    "testing"

    "github.com/stretchr/testify/require"
)

func requireReadFull(tb testing.TB, r io.Reader, b []byte) {
    n, err := io.ReadFull(r, b)
    require.NoError(tb, err)
    require.EqualValues(tb, len(b), n)
}
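
Applied to the original benchmark, each short Read would become something like:

// Illustrative: replaces the pair[1].Read(request[:8]) / pair[1].Read(request[8:])
// calls; io.ReadFull loops until the slice is completely filled.
requireReadFull(b, pair[1], request[:8])
requireReadFull(b, pair[1], request[8:])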

The windows are relevant, but aren't the issue in this case. See https://en.wikipedia.org/wiki/TCP_tuning#Window_size, and read the documented contracts of io.Writer and io.Reader for more.

anacrolix avatar May 30 '16 05:05 anacrolix

Here's the patch that fixes the synchronous version.

14.txt

anacrolix avatar May 30 '16 05:05 anacrolix

Glad to see this got reported and fixed!! I've been noticing the same issue when stressing utp connections with ipfs

whyrusleeping avatar Jun 01 '16 20:06 whyrusleeping

It was a fix to the test, not the code. Sadly, when actually trying to use it, it still stalls/disconnects with "timed out waiting for ack".

AudriusButkevicius avatar Jun 01 '16 20:06 AudriusButkevicius

@AudriusButkevicius ah, dang.

whyrusleeping avatar Jun 01 '16 21:06 whyrusleeping

I know this library is used in the torrent library, so I'd assume it has had some exposure and usage?

AudriusButkevicius avatar Jun 01 '16 21:06 AudriusButkevicius

Yeah, it's pretty stable overall. In our usage, though, there are some occasional misbehaviours that I haven't been able to track down too closely yet. I'll update our code to use the latest version of this lib soon and stress things again.

whyrusleeping avatar Jun 01 '16 21:06 whyrusleeping

So for me, on a fairly good line it's not bad, but over ADSL in some rural area, it disconnects quite fast.

AudriusButkevicius avatar Jun 01 '16 21:06 AudriusButkevicius

@AudriusButkevicius if you have some free time, do you think you could try reproducing those failures in a test? For IPFS I've done some work with Docker nodes and tc to simulate slow and lossy networks. Most of my work towards that is here and here. Having some semi-automated Docker test to repro the problems would be a great start.

whyrusleeping avatar Jun 01 '16 21:06 whyrusleeping

This package is used in production 24/7 on linux/amd64, and gets a lot of testing on OS X. There's another issue that's come to my attention when running utp on Windows. If you're getting these stalls on Windows, it's probably related, if not the actual problem: https://github.com/anacrolix/torrent/issues/83. I've pushed some workarounds for that in the meantime. d7ad5aff2b8a5fa415d1c1ed00b71cfd8b4c69e0

anacrolix avatar Jun 02 '16 04:06 anacrolix

This issue is about the stalls, not the broken test; I shouldn't have closed it.

anacrolix avatar Jun 02 '16 04:06 anacrolix

So, I'm still getting stalls on Linux machines. Nothing is hanging; messages still appear to be coming in and being processed by the utp read loop (no "select 15 minutes" type messages in the stack dumps). But my code that is waiting to read on utp sockets is hanging for hours.

Here is an example (large) stack trace: https://ipfs.io/ipfs/QmSLTNdSNWSus9a83ToR2WRho9HeCMSTT7KTbEqx4vyqDo

The yamux reads that are stuck for two or more hours are waiting on data from utp

whyrusleeping avatar Jun 15 '16 19:06 whyrusleeping

If the remote end of the conn has closed without a FIN getting through, and you never attempt to send to them, you will never Read anything again. There's a timeout for writes, but no read keepalive. The easiest solution would be to ensure you have an application-level keepalive in the protocol sitting on top of utp. This is how BitTorrent does it (every 2 minutes), which would be why I've never encountered this. If you're aware of a keepalive (probably a state packet every so often) in the uTP spec, I'll implement that.
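
For illustration, here's a minimal sketch of such a keepalive, assuming a BitTorrent-style four-zero-byte no-op message; any idle message your protocol already defines would do just as well:

package keepalive

import (
    "net"
    "time"
)

// keepAlive periodically writes a protocol-level no-op so the peer's read
// side sees traffic even when the application is idle, and so our own write
// timeout can detect a dead peer.
func keepAlive(conn net.Conn, interval time.Duration, stop <-chan struct{}) {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            // BitTorrent's keep-alive: a four-byte zero length prefix.
            if _, err := conn.Write([]byte{0, 0, 0, 0}); err != nil {
                return // write failed; let the normal error path tear down the conn
            }
        case <-stop:
            return
        }
    }
}

Started as go keepAlive(conn, 2*time.Minute, stop), this guarantees a write happens periodically, so the existing write timeout can fire even on an otherwise idle conn.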

anacrolix avatar Jun 16 '16 02:06 anacrolix

I can confirm that UDP connections stall: within 3-5 minutes of launching a torrent, it keeps downloading only over TCP. Almost nothing gets downloaded over UDP after some time. New UDP connections keep getting created, and in total only a small amount of traffic (300-400 KB) is received per UDP connection before the torrent finishes. TCP connections, over the same period, receive 10-50 MB each. When the torrent is started, UDP connections receive traffic at the same speed as the TCP ones, but at some point all UDP connections stall. I suspect it only happens when no other LAN computers are downloading at the same time over the same internet connection, i.e. when the internet connection is monopolized by anacrolix/torrent.

stunndard avatar Jun 25 '16 13:06 stunndard

@stunndard Are you behind a NAT router for ADSL or some such?

anacrolix avatar Jun 26 '16 01:06 anacrolix

Why does ADSL matter?

AudriusButkevicius avatar Jun 28 '16 08:06 AudriusButkevicius