packets: implemented compression protocol
Description
Implemented the MySQL compression protocol. This new feature is enabled by setting "compress=1" in the DSN.
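For illustration, a minimal sketch of enabling the feature from application code, assuming the usual github.com/go-sql-driver/mysql import path and placeholder credentials:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// compress=1 enables the compression protocol added by this PR.
	// Host, credentials, and database name are placeholders.
	db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/gotest?compress=1")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Ping forces a real handshake, so it verifies that the server
	// also agrees to use the compressed protocol.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```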
Checklist
- [X] Code compiles correctly
- [X] Created tests which fail without the change (if possible)
- [X] All tests passing
- [X] Extended the README / documentation, if necessary
- [X] Added myself / the copyright holder to the AUTHORS file
Could you show benchmark results for compressed, uncompressed (new), and uncompressed (current)?
For reference, here is the documentation I consulted.
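For background, that documentation describes how the compressed protocol prepends a 7-byte header to each payload. The sketch below illustrates that layout; it is based on the protocol documentation, not on this PR's code:

```go
// writeCompressedHeader fills buf[0:7] with the header that precedes a
// compressed payload: a 3-byte little-endian compressed length, a 1-byte
// sequence number, and a 3-byte little-endian uncompressed length
// (set to 0 when the payload is sent uncompressed).
func writeCompressedHeader(buf []byte, compLen, uncompLen int, seq byte) {
	buf[0] = byte(compLen)
	buf[1] = byte(compLen >> 8)
	buf[2] = byte(compLen >> 16)
	buf[3] = seq
	buf[4] = byte(uncompLen)
	buf[5] = byte(uncompLen >> 8)
	buf[6] = byte(uncompLen >> 16)
}
```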
Benchmarks: Here are the old benchmark stats:
BenchmarkQueryContext/1-4 20000 98673 ns/op 729 B/op 22 allocs/op
BenchmarkQueryContext/2-4 20000 89651 ns/op 728 B/op 22 allocs/op
BenchmarkQueryContext/3-4 20000 91343 ns/op 729 B/op 22 allocs/op
BenchmarkQueryContext/4-4 20000 88788 ns/op 729 B/op 22 allocs/op
BenchmarkExecContext/1-4 20000 97542 ns/op 728 B/op 22 allocs/op
BenchmarkExecContext/2-4 20000 91997 ns/op 728 B/op 22 allocs/op
BenchmarkExecContext/3-4 20000 89143 ns/op 728 B/op 22 allocs/op
BenchmarkExecContext/4-4 10000 104725 ns/op 728 B/op 22 allocs/op
BenchmarkQuery-4 20000 89338 ns/op 748 B/op 23 allocs/op
BenchmarkExec-4 20000 81922 ns/op 67 B/op 3 allocs/op
BenchmarkRoundtripTxt-4 10000 198159 ns/op 15938 B/op 16 allocs/op
BenchmarkRoundtripBin-4 10000 162701 ns/op 695 B/op 18 allocs/op
BenchmarkInterpolation-4 2000000 820 ns/op 176 B/op 1 allocs/op
BenchmarkParseDSN-4 200000 9042 ns/op 6896 B/op 63 allocs/op
PASS
ok github.com/mysql 53.699s
Here are the new benchmark stats. BenchmarkQueryCompression does the same thing as BenchmarkQuery, but with compression turned on (a sketch of such a benchmark follows the stats below).
BenchmarkQueryContext/1-4 10000 100547 ns/op 732 B/op 22 allocs/op
BenchmarkQueryContext/2-4 20000 91023 ns/op 728 B/op 22 allocs/op
BenchmarkQueryContext/3-4 20000 90626 ns/op 729 B/op 22 allocs/op
BenchmarkQueryContext/4-4 10000 100878 ns/op 729 B/op 22 allocs/op
BenchmarkExecContext/1-4 10000 157982 ns/op 728 B/op 22 allocs/op
BenchmarkExecContext/2-4 10000 104514 ns/op 728 B/op 22 allocs/op
BenchmarkExecContext/3-4 10000 101186 ns/op 728 B/op 22 allocs/op
BenchmarkExecContext/4-4 20000 99318 ns/op 728 B/op 22 allocs/op
BenchmarkQuery-4 10000 105718 ns/op 752 B/op 23 allocs/op
BenchmarkQueryCompression-4 10000 104518 ns/op 972 B/op 27 allocs/op
BenchmarkExec-4 20000 82831 ns/op 67 B/op 3 allocs/op
BenchmarkRoundtripTxt-4 10000 247710 ns/op 15938 B/op 16 allocs/op
BenchmarkRoundtripBin-4 10000 163471 ns/op 695 B/op 18 allocs/op
BenchmarkInterpolation-4 1000000 1303 ns/op 176 B/op 1 allocs/op
BenchmarkParseDSN-4 100000 13798 ns/op 6896 B/op 63 allocs/op
PASS
ok github.com/mysql 46.113s
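For anyone who wants to reproduce the comparison, here is a hypothetical sketch of what BenchmarkQueryCompression could look like. The dsn constant, the query, and the package layout are placeholders, not the PR's actual benchmark code:

```go
package mysql_test

import (
	"database/sql"
	"testing"

	_ "github.com/go-sql-driver/mysql"
)

// dsn is a placeholder test DSN; the driver's real benchmarks build
// theirs differently.
const dsn = "user:password@tcp(127.0.0.1:3306)/gotest"

// BenchmarkQueryCompression is a hypothetical sketch: it mirrors a plain
// query benchmark but appends compress=1 to the DSN.
func BenchmarkQueryCompression(b *testing.B) {
	db, err := sql.Open("mysql", dsn+"?compress=1")
	if err != nil {
		b.Fatal(err)
	}
	defer db.Close()

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var n int
		if err := db.QueryRow("SELECT 1").Scan(&n); err != nil {
			b.Fatal(err)
		}
	}
}
```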
Thank you for your comments on my pull request!
FYI, the current benchmark uses data that is too small to show zlib's performance.
ping
This pull request needs some changes (and a rebase to resolve the conflicts)
I've made all the changes suggested above except for changing the benchmark tests, and I wanted to ask how you would recommend I best do this. Generally, the utility of compression becomes apparent only when several rows of information are received at once. However, I'm not sure how I would do this with the go-sql-driver, as it appears that rows are obtained one by one using a cursor. Would you suggest a test that looks at only one row, but where that row contains a very large string?
Please also run go fmt before pushing code. The CI builds are currently failing because some code is not properly formatted.
I will be getting to these changes after my university's midterms. Thank you for the good suggestions!
However, I'm not sure how I would do this with the go-sql-driver, as it appears that rows are obtained one by one using a cursor.
rows.Next() is not a server-side cursor; it is just a client-side iterator. So you can receive many rows via compressed packets.
You can use this data for benchmarking. http://downloads.mysql.com/docs/world.sql.gz
SELECT ID, Name, CountryCode, District, Population FROM city will be large enough for the compressed protocol.
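To make the iterator point concrete, here is a minimal sketch of reading that result set with database/sql; the db handle and the surrounding package are assumed:

```go
import "database/sql"

// dumpCity is a sketch: db is an open *sql.DB (e.g. connected with
// compress=1). rows.Next() iterates on the client side; with compression
// enabled, one compressed packet on the wire can carry many row packets.
func dumpCity(db *sql.DB) error {
	rows, err := db.Query("SELECT ID, Name, CountryCode, District, Population FROM city")
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var (
			id, population       int
			name, code, district string
		)
		if err := rows.Scan(&id, &name, &code, &district, &population); err != nil {
			return err
		}
		// process the row here
	}
	return rows.Err()
}
```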
@methane I am curious how to add world.sql into the testing strategy. Should all of this script run before the tests, when mysql -e 'create database gotest;' occurs? If so, how would I add it there? Or should I inline the commands of world.sql into the compression test?
I benchmarked this branch (with master merged in):
# master
$ go test -bench .
goos: darwin
goarch: amd64
BenchmarkSmall-4 20000 86824 ns/op 482 B/op 14 allocs/op
BenchmarkPlain-4 300 4157599 ns/op 653566 B/op 20413 allocs/op
--- FAIL: BenchmarkCompressed
world_test.go:18: compression not implemented yet
FAIL
exit status 1
FAIL _/Users/inada-n/Downloads/world 4.368s
# pr649
$ go test -bench .
goos: darwin
goarch: amd64
BenchmarkSmall-4 20000 99420 ns/op 482 B/op 14 allocs/op
BenchmarkPlain-4 300 4306360 ns/op 653560 B/op 20413 allocs/op
BenchmarkCompressed-4 100 11169376 ns/op 856138 B/op 20784 allocs/op
PASS
ok _/Users/inada-n/Downloads/world 5.842s
I confirmed that the impact on the uncompressed protocol is negligible.
(bench code is here: https://gist.github.com/methane/f92bd9fe83ef97b9c00829fe2fb3ee43)
@methane: BenchmarkSmall-4: 86824 ns/op vs. 99420 ns/op. That is (99420 - 86824) / 86824, roughly +15%. I wouldn't call that negligible.
Please rebase your own master branch on top of origin/master instead of merging origin/master into yours:
git remote update
git rebase origin/master
git push -f
No need to rebase, since we use "Squash and merge". Merging master helps with reviewing the changes made since the last review.
I ran the benchmark again on a more stable Linux machine. I can't see a significant difference:
benchmark old ns/op new ns/op delta
BenchmarkSmall-6 56154 54070 -3.71%
BenchmarkPlain-6 2502622 2464086 -1.54%
How is it going now?
Any updates on this? What's left to do, and is there anything I can help with?
Any updates on this PR?
Thanks for all your great efforts. Just want to know: is anyone still working on this important feature?
Hi! Any updates on this PR?
Am also interested in this PR 😄
Hello everyone -- I will likely not be picking this up again. Things have gotten hectic in my life and I can't give this the proper attention. However, anyone is welcome to pick it up where I left off!
continued in #1487