shoop slower than scp in optimal bandwidth case
Hi. I tested shoop in my environment and found that it is about 5x slower than scp.
My environment:
Client:
- Core i5 with 8 GB RAM
- Debian 8 with latest updates
- Rust from rustup 0.5.0 (4be1012 2016-07-30)
- libsodium 1.0.0-1 from debian repos
- shoop 0.0.1-prealpha.4
Server:
- Digital Ocean 512 MB droplet
- Debian 8 with latest updates
- Rust from rustup 0.5.0 (4be1012 2016-07-30)
- libsodium 1.0.0-1 from debian repos
- shoop 0.0.1-prealpha.4
Network and test file:
- Broadband connection, around 40 Mbit/s
- Ping and speed to the USA: http://www.speedtest.net/my-result/5546858786
- Test file is a 1GB sample, downloaded from http://www.thinkbroadband.com/download.html
SCP:
strangeman@strangebook:~/tmp$ time scp [email protected]:/root/1GB.zip .
1GB.zip 100% 1024MB 4.1MB/s 04:08
real 4m12.082s
user 0m7.052s
sys 0m10.004s
SHOOP:
strangeman@strangebook:~/tmp$ time shoop [email protected]:/root/1GB.zip
downloading ./1GB.zip (1024.0MB)
1024.0M / 1024.0M (100%) [ avg 0.8 MB/s ]
shooped it all up in 20m48s
real 20m51.508s
user 13m48.492s
sys 7m9.124s
Why? Is shoop optimized only for slow and unstable connections?
Hey @strangeman! Thanks so much for testing.
If possible, can you build the latest HEAD (remember to do cargo build --release to optimize and get rid of debug info) and let me know if it's still that slow?
There was a memory regression but the fix is in a forked library, so I can't publish a more recent release on crates until the dependency merges the fix.
Even without that, though, we still need to add multithreading to make the high-speed case actually as high-speed as it wants to be :).
So basically, yeah, it's not currently optimized for the high-speed, high-reliability case, since I'm stuck in a high-ish-speed, low-reliability case :P. You've inspired me to get on the threading business though... I just replicated your test on two VPS's, and it's making me sad.
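For concreteness, the threading I have in mind looks roughly like this. It's a standalone sketch using only std, not code from the shoop repo, and the path, worker count, and buffer size are placeholders: each worker gets its own file handle and byte range, so reads (and eventually sends) can overlap.

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};
use std::thread;

const WORKERS: u64 = 4;
const BUF_SIZE: usize = 256 * 1024;

fn main() {
    // Placeholder path; in shoop this would be the file being served.
    let path = "1GB.zip";
    let len = File::open(path).unwrap().metadata().unwrap().len();
    let chunk = (len + WORKERS - 1) / WORKERS;

    let handles: Vec<_> = (0..WORKERS)
        .map(|i| {
            let path = path.to_string();
            thread::spawn(move || {
                // Each worker owns its own file handle and byte range,
                // so its reads don't serialize behind the other workers.
                let start = std::cmp::min(i * chunk, len);
                let end = std::cmp::min(start + chunk, len);
                let mut f = File::open(&path).unwrap();
                f.seek(SeekFrom::Start(start)).unwrap();
                let mut remaining = (end - start) as usize;
                let mut buf = vec![0u8; BUF_SIZE];
                while remaining > 0 {
                    let want = std::cmp::min(buf.len(), remaining);
                    let n = f.read(&mut buf[..want]).unwrap();
                    if n == 0 {
                        break;
                    }
                    // Here the chunk would be handed to the network layer.
                    remaining -= n;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}
```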
Tested with the HEAD version. It's faster (~1.5-2 MB/s) than the old version (~0.8 MB/s), but still slower than scp (~4 MB/s). The HEAD version also has another annoying bug: sometimes the transfer freezes for 20-30 seconds (it looks like the server process is crashing).
Anyway, this is a great tool! Hope you'll fix these problems in the future. :)
Cool, I think with threading and limiting the progressbar stdout insanity we should be closer to parity on stable connections, which should also be our goal (never regress from TCP).
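To make "limiting the progressbar stdout insanity" concrete, here's the kind of throttling I mean, as a standalone sketch (not shoop's actual progress code; the struct name and the 100 ms interval are arbitrary): only redraw the line when a minimum interval has passed or the transfer finishes, instead of printing on every received chunk.

```rust
use std::io::{self, Write};
use std::time::{Duration, Instant};

/// Accumulates progress but only redraws when `min_interval` has
/// elapsed since the last draw (or on completion).
struct Progress {
    total: u64,
    done: u64,
    last_draw: Instant,
    min_interval: Duration,
}

impl Progress {
    fn new(total: u64) -> Progress {
        Progress {
            total,
            done: 0,
            last_draw: Instant::now(),
            min_interval: Duration::from_millis(100),
        }
    }

    fn add(&mut self, bytes: u64) {
        self.done += bytes;
        if self.last_draw.elapsed() >= self.min_interval || self.done >= self.total {
            let mb = 1024.0 * 1024.0;
            print!(
                "\r{:.1}M / {:.1}M ({:.0}%)",
                self.done as f64 / mb,
                self.total as f64 / mb,
                100.0 * self.done as f64 / self.total as f64
            );
            io::stdout().flush().unwrap();
            self.last_draw = Instant::now();
        }
    }
}

fn main() {
    // Fake a 1 GB transfer arriving in 64 KB chunks.
    let total = 1024u64 * 1024 * 1024;
    let mut progress = Progress::new(total);
    let mut left = total;
    while left > 0 {
        let chunk = std::cmp::min(64 * 1024, left);
        progress.add(chunk);
        left -= chunk;
    }
    println!();
}
```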
Added basic asynchronous file I/O (client side only right now) and de-crazied stdout, and I'm still only seeing ~40 Mbps between two 100 Mbps VPSs, so there's something more to it. I haven't profiled the server side yet, so that's next :).
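For anyone curious, the client-side change is roughly this shape (a sketch using std::sync::mpsc, not the code in the repo; the output file name is a placeholder): received buffers go into a channel and a dedicated writer thread drains them to disk, so the receive loop never blocks on file I/O.

```rust
use std::fs::File;
use std::io::Write;
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<Vec<u8>>();

    // Writer thread: drains buffers from the channel to disk.
    let writer = thread::spawn(move || {
        let mut out = File::create("output.bin").unwrap(); // placeholder name
        for buf in rx {
            out.write_all(&buf).unwrap();
        }
        out.flush().unwrap();
    });

    // Stand-in for the receive loop: in shoop these buffers would be
    // coming off the UDT socket.
    for _ in 0..16 {
        let chunk = vec![0u8; 256 * 1024];
        tx.send(chunk).unwrap();
    }
    drop(tx); // closes the channel so the writer thread exits

    writer.join().unwrap();
}
```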
I'm really excited about https://aturon.github.io/blog/2016/08/11/futures/.
Given time, I'm also really interested in a better benching framework for shoop, at the very least to refactor enough where we can have a "virtual" perfect UDT connection and have a solid measure on our real max bandwidth.
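Rough shape of what I mean by a "virtual" perfect UDT connection (sketch only; Transport and PerfectPipe are names I'm inventing here, not anything in shoop): hide the wire behind a small trait, then bench against a loss-free in-memory pipe to get an upper bound on what the rest of the pipeline can do.

```rust
use std::collections::VecDeque;
use std::io;
use std::time::Instant;

/// Minimal transport abstraction so benchmarks can swap the real
/// socket for an in-memory stand-in.
trait Transport {
    fn send(&mut self, buf: &[u8]) -> io::Result<usize>;
    fn recv(&mut self, buf: &mut [u8]) -> io::Result<usize>;
}

/// "Perfect" connection: no loss, no latency, no bandwidth cap.
struct PerfectPipe {
    queue: VecDeque<u8>,
}

impl Transport for PerfectPipe {
    fn send(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.queue.extend(buf.iter().cloned());
        Ok(buf.len())
    }

    fn recv(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = std::cmp::min(buf.len(), self.queue.len());
        for (dst, src) in buf.iter_mut().zip(self.queue.drain(..n)) {
            *dst = src;
        }
        Ok(n)
    }
}

fn main() {
    let mut pipe = PerfectPipe { queue: VecDeque::new() };
    let payload = vec![0u8; 1024 * 1024];
    let mut sink = vec![0u8; 1024 * 1024];
    let megabytes = 256;

    let start = Instant::now();
    for _ in 0..megabytes {
        pipe.send(&payload).unwrap();
        while pipe.recv(&mut sink).unwrap() > 0 {}
    }
    let elapsed = start.elapsed();
    let secs = elapsed.as_secs() as f64 + elapsed.subsec_nanos() as f64 * 1e-9;
    println!("~{:.0} MB/s through the in-memory transport", megabytes as f64 / secs);
}
```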
Been making some gradual speed improvements over the last few alpha releases, but there's still a big gap in the case of VPS-to-VPS transfers where we have low-latency 100 Mbps-1 Gbps connections. The weird thing is, on occasion I'm seeing the same transfer speed, but often it will inexplicably drop from 11 MB/s to ~5 MB/s, making me wonder if this is perhaps a UDT congestion control thing.
Still more digging to be done.