add C speedups for key_slot & pack_command(s)
Pull Request check-list
- [x] Does `$ tox` pass with this change (including linting)?
- [x] Do the CI tests pass with this change (enable it first in your forked repo and wait for the github action build to finish)?
- [x] Is the new or changed code fully tested?
- [x] Is a documentation update included (if this change modifies existing APIs, or introduces new ones)?
- [x] Is there an example added to the examples folder (if applicable)?
- [x] Was the change added to CHANGES file?
Description of change
Closes #2162
The C versions are up to 6x faster than the Python versions. Currently, pack_command takes ~10% of the time in a request and pack_commands takes ~20%. With the C versions, normal requests are ~5% faster and pipeline requests are ~15% faster.
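For reference, here is a simplified pure-Python sketch of the two hot paths. It is based on the Redis Cluster hash-slot spec (CRC16 over the key or its hash tag, modulo 16384) and the RESP wire format, not a copy of the redis-py internals, so treat the exact signatures as illustrative:

```python
# Simplified pure-Python reference for the two hot paths this PR moves to C.
# Sketches based on the Redis Cluster spec and RESP, not redis-py's code.
from binascii import crc_hqx  # CRC-16/XMODEM, the CRC used by Redis Cluster

REDIS_CLUSTER_HASH_SLOTS = 16384


def key_slot(key: bytes) -> int:
    """Map a key to one of 16384 cluster hash slots (hash-tag aware)."""
    start = key.find(b"{")
    if start > -1:
        end = key.find(b"}", start + 1)
        if end > -1 and end != start + 1:
            key = key[start + 1 : end]  # hash only the {tag} contents
    return crc_hqx(key, 0) % REDIS_CLUSTER_HASH_SLOTS


def pack_command(*args: bytes) -> bytes:
    """Encode one command as a RESP array of bulk strings."""
    out = [b"*%d\r\n" % len(args)]
    for arg in args:
        out.append(b"$%d\r\n%s\r\n" % (len(arg), arg))
    return b"".join(out)


# Keys sharing a hash tag land on the same slot; commands encode as RESP.
assert key_slot(b"user:{42}:name") == key_slot(b"session:{42}")
assert pack_command(b"SET", b"k", b"v") == b"*3\r\n$3\r\nSET\r\n$1\r\nk\r\n$1\r\nv\r\n"
```

These functions are tiny but run once (or once per command) on every request, which is why moving them to C shows up directly in end-to-end latency.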
Codecov Report
Merging #2118 (a2365d1) into master (a2365d1) will not change coverage. The diff coverage is n/a.
:exclamation: Current head a2365d1 differs from the pull request's most recent head e2401e2. Consider uploading reports for commit e2401e2 to get more accurate results.
```
@@           Coverage Diff           @@
##           master    #2118   +/-   ##
=======================================
  Coverage   91.84%   91.84%
=======================================
  Files         108      108
  Lines       27691    27691
=======================================
  Hits        25433    25433
  Misses       2258     2258
```
@chayim @dvora-h Thoughts?
@chayim @dvora-h Any updates on this? Will you consider merging this in? If the maintainability of C code is an issue, I can also write this in Rust. Ideally, we should move the whole Node determination part of the Cluster client into C/Rust.
I think the right answer is using the rust bridge - agreed. However, I'm worried about general maintainability in the client - this is my only concern.
As long as users without compilers (let's call them base users) can get what they need from a stock pip install, I see this being sane. But Rust over C, since that would at least be in line with what other modules (e.g. cryptography) have done.
@dvora-h comments?
I have not worked with Rust in Python yet, so it may take time to get started. I'll see when I have time. Regarding installation, the current C version or a Rust version would be transparent to pip/wheel users, but may not be to source users. I use orjson (a JSON module written in Rust) in our projects and have never faced any issues.
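To make the "transparent to base users" point concrete, here is a minimal sketch of the usual fallback pattern: use the compiled accelerator if it imported cleanly, otherwise keep the pure-Python code. The module name `redis_speedups` is hypothetical, not part of redis-py:

```python
# Sketch: prefer a compiled accelerator, fall back to pure Python otherwise.
from binascii import crc_hqx


def _py_key_slot(key: bytes, bucket: int = 16384) -> int:
    """Pure-Python fallback (hash-tag handling omitted for brevity)."""
    return crc_hqx(key, 0) % bucket


try:
    from redis_speedups import key_slot  # hypothetical compiled C/Rust module
except ImportError:
    # Source install without a compiler, or a platform with no prebuilt wheel.
    key_slot = _py_key_slot
```

With wheels shipping the compiled module, pip users never notice the difference; source installs without a toolchain silently stay on the Python path.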
> With the C version, the normal requests are ~5% faster & pipeline requests are ~15% faster.
Bumped into this PR while going over the open ones with some other bug fixes, and this is really awesome stuff as we are actually suffering from some performance issues and pack_commands indeed stood out from our profiling. Will keep a close look
Sounds like these speedups should be shipped as a separate opt-in module like hiredis is a separate opt-in parser for the protocol... (Even better, maybe this should be part of hiredis?)
> Sounds like these speedups should be shipped as a separate opt-in module like hiredis is a separate opt-in parser for the protocol... (Even better, maybe this should be part of hiredis?)
I think these belong in redis-py rather than hiredis. However, as mentioned, they really should be in rust. setuptools and core changes recently have led to embracing rust for these things - and it's something I fully support.
Is anyone interested in helping @dvora-h associate these in rust?
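For context, a sketch of what the opt-in Rust route could look like with setuptools-rust (the same packaging approach cryptography uses). All package, module, and crate paths here are hypothetical:

```python
# setup.py sketch for an optional Rust accelerator built via setuptools-rust.
from setuptools import setup
from setuptools_rust import Binding, RustExtension

setup(
    name="redis",
    rust_extensions=[
        RustExtension(
            "redis._speedups",       # hypothetical compiled module name
            path="rust/Cargo.toml",  # hypothetical crate location
            binding=Binding.PyO3,
            optional=True,           # build failure skips the extension
        )
    ],
    zip_safe=False,  # compiled extensions cannot be imported from a zip
)
```

With `optional=True`, a missing Rust toolchain does not break the install; the package falls back to the pure-Python implementations.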
@chayim I'd be happy to work on this. I'm just still waiting for the PRs I submitted 3-4 months ago to be released 😅
The upside, and downside of the size of this community is the number of PRs, and the speed of change :)
I only see this one opened, as authored by you. Is there another one I'm missing?
> The upside, and downside of the size of this community is the number of PRs, and the speed of change :)
> I only see this one opened, as authored by you. Is there another one I'm missing?
My PRs were merged back in July, but there has been no stable release since then. I'm unable to use this library in production due to a critical bug in the 4.3.x async cluster client. I'd like to start using it in production before making more contributions.
True. 4.4.x is so big that we're taking our time with milestones. Much like a couple of other big releases. FYI I'd like to have a 4.4 release shortly.
This pull request is marked stale. It will be closed in 30 days if it is not updated.