Valkey Fuzzer
Add Fuzzing Capability to Valkey
Overview
This PR adds a fuzzing capability to Valkey, allowing developers and users to stress test their Valkey deployments with randomly generated commands. The fuzzer is integrated with the existing valkey-benchmark tool, making it easy to use without requiring additional dependencies.
Key Features
• Command Generator: Automatically generates Valkey commands by retrieving command information directly from the server
• Two Fuzzing Modes:
- normal: Generates only valid commands, doesn't modify server configurations
- aggressive: Includes malformed commands and allows CONFIG SET operations
• Multi-threaded Testing: Each client runs in a dedicated thread to maximize interaction between clients and enable testing of complicated scenarios
• Integration with valkey-benchmark: Uses the existing CLI interface
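As a rough illustration of the thread-per-client model listed above, a minimal sketch might look like the following; the type and function names (fuzz_client_t, fuzz_client_run) are hypothetical, not the actual identifiers used in fuzzer_client.c:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical per-client state; illustrative only. */
typedef struct fuzz_client {
    int id;
    long long commands_to_send; /* total commands (-n) divided by clients (-c) */
} fuzz_client_t;

/* Each client thread independently generates and sends random commands. */
static void *fuzz_client_run(void *arg) {
    fuzz_client_t *client = arg;
    for (long long i = 0; i < client->commands_to_send; i++) {
        /* generate one random command, send it, and validate the reply */
    }
    return NULL;
}

/* Spawn one dedicated thread per client so clients interleave freely,
 * maximizing interaction and exercising concurrent code paths. */
static void run_fuzz_clients(int num_clients, long long total_commands) {
    pthread_t *threads = malloc(sizeof(*threads) * num_clients);
    fuzz_client_t *clients = malloc(sizeof(*clients) * num_clients);
    for (int i = 0; i < num_clients; i++) {
        clients[i].id = i;
        clients[i].commands_to_send = total_commands / num_clients;
        pthread_create(&threads[i], NULL, fuzz_client_run, &clients[i]);
    }
    for (int i = 0; i < num_clients; i++) pthread_join(threads[i], NULL);
    free(threads);
    free(clients);
}
```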
Implementation Details
• Added new files:
- fuzzer_command_generator.h/c: Dynamically generates Valkey commands
- fuzzer_client.c: Orchestrates all the client threads, reports test progress, and handles errors
• Modified existing files:
- valkey-benchmark.c: Added fuzzing mode options and integration
Command Generation Approach
The fuzzer dynamically retrieves command information from the server, allowing it to adapt to different Valkey versions and custom modules. Since the command information generated from the JSON files is sometimes limited, not all generated commands will be valid, but roughly 95% of generated commands are valid.
It is important to generate valid commands to cover as many code paths as possible, not just the invalid command/argument paths. The fuzzer prioritizes generating syntactically and semantically correct commands to ensure thorough testing of the server's core functionality, while still including a small percentage of invalid commands in aggressive mode to exercise error-handling paths.
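As a rough sketch of the dynamic-retrieval idea (the discussion below notes the fuzzer uses COMMAND DOCS), the snippet queries command metadata at runtime with a hiredis-style client; the real generator in fuzzer_command_generator.c is considerably more elaborate:

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* COMMAND DOCS returns one (name, documentation) pair per command;
     * in RESP2 this arrives as a flat array of alternating elements. */
    redisReply *reply = redisCommand(c, "COMMAND DOCS");
    if (reply && reply->type == REDIS_REPLY_ARRAY) {
        for (size_t i = 0; i + 1 < reply->elements; i += 2) {
            printf("command: %s\n", reply->element[i]->str);
            /* reply->element[i + 1] holds the documentation (summary,
             * arguments, etc.) the generator can use to build random
             * invocations. */
        }
    }
    freeReplyObject(reply);
    redisFree(c);
    return 0;
}
```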
Config Modification
For the CONFIG SET command, the situation is more complex, as the server currently provides only limited information through CONFIG GET *. Some hardcoded logic is implemented that will need to be updated in the future. Ideally, the server should provide self-inspection commands to retrieve config key-value pairs along with their properties (enum values, modifiability status, etc.).
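For illustration, the hardcoded logic might resemble a small table of known configs with hand-written type information; the config names, types, and value ranges below are hypothetical examples, not the fuzzer's actual table:

```c
#include <stdio.h>
#include <stdlib.h>

typedef enum { CONF_BOOL, CONF_INT, CONF_ENUM } conf_type_t;

typedef struct {
    const char *name;
    conf_type_t type;
    long min, max;            /* used for CONF_INT */
    const char *enum_vals[4]; /* used for CONF_ENUM, NULL-terminated */
} conf_entry_t;

/* Hand-written metadata the server does not currently expose. */
static const conf_entry_t conf_table[] = {
    {"maxmemory-samples", CONF_INT, 1, 64, {NULL}},
    {"lazyfree-lazy-expire", CONF_BOOL, 0, 0, {NULL}},
    {"maxmemory-policy", CONF_ENUM, 0, 0,
     {"noeviction", "allkeys-lru", "allkeys-random", NULL}},
};

/* Build a random "CONFIG SET <key> <value>" into buf. */
static void random_config_set(char *buf, size_t len) {
    size_t n_entries = sizeof(conf_table) / sizeof(conf_table[0]);
    const conf_entry_t *e = &conf_table[rand() % n_entries];
    switch (e->type) {
    case CONF_BOOL:
        snprintf(buf, len, "CONFIG SET %s %s", e->name, rand() % 2 ? "yes" : "no");
        break;
    case CONF_INT:
        snprintf(buf, len, "CONFIG SET %s %ld", e->name,
                 e->min + rand() % (e->max - e->min + 1));
        break;
    case CONF_ENUM: {
        int n = 0;
        while (e->enum_vals[n]) n++;
        snprintf(buf, len, "CONFIG SET %s %s", e->name, e->enum_vals[rand() % n]);
        break;
    }
    }
}
```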
Issue Detection
The fuzzer is designed to identify several types of issues:
• Server crashes
• Server memory corruptions / memory leaks (when compiled with ASAN)
• Server unresponsiveness
• Malformed server replies
For unresponsiveness detection, command timeout limits are implemented to ensure no command blocks for excessive periods. If a server doesn't respond within 30 seconds, the fuzzer signals that something is wrong.
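A minimal sketch of that per-command limit, assuming a hiredis-style synchronous client (the 30-second value comes from the paragraph above; the helper name is illustrative):

```c
#include <stdio.h>
#include <sys/time.h>
#include <hiredis/hiredis.h>

/* Illustrative helper: send one command with a hard 30-second limit. */
static int send_with_timeout(redisContext *c, const char *cmd) {
    struct timeval tv = {30, 0}; /* per-command timeout */
    redisSetTimeout(c, tv);      /* applies to subsequent socket reads */

    redisReply *reply = redisCommand(c, cmd);
    if (reply == NULL) {
        /* The read timed out or the connection broke; either way the
         * fuzzer treats the server as unresponsive. */
        fprintf(stderr, "no reply within 30s, server may be stuck: %s\n",
                c->errstr);
        return -1;
    }
    freeReplyObject(reply);
    return 0;
}
```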
Proven Effectiveness
When running against the latest unstable version, the fuzzer has already identified several issues, demonstrating its effectiveness:
- https://github.com/valkey-io/valkey/issues/2111
- https://github.com/valkey-io/valkey/issues/2112
- https://github.com/valkey-io/valkey/pull/2109
- https://github.com/valkey-io/valkey/pull/2113
- https://github.com/valkey-io/valkey/pull/2108
- https://github.com/valkey-io/valkey/pull/2137
- https://github.com/valkey-io/valkey/issues/2106
- https://github.com/valkey-io/valkey/pull/2347
How to Use
Run the fuzzer using the valkey-benchmark tool with the --fuzz flag:
```bash
# Basic usage (10000 commands total, 1000 commands per client, 10 clients)
./src/valkey-benchmark --fuzz -h 127.0.0.1 -p 6379 -n 10000 -c 10

# With aggressive fuzzing mode
./src/valkey-benchmark --fuzz --fuzz-level aggressive -h 127.0.0.1 -p 6379 -n 10000 -c 10

# With detailed logging
./src/valkey-benchmark --fuzz --fuzz-log-level debug -h 127.0.0.1 -p 6379 -n 10000 -c 10
```
The fuzzer supports existing valkey-benchmark options, including TLS and cluster mode configuration.
Codecov Report
:x: Patch coverage is 73.48969% with 373 lines in your changes missing coverage. Please review.
:warning: Please upload report for BASE (unstable@30ea139).
:warning: Report is 231 commits behind head on unstable.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/fuzzer_command_generator.c | 76.18% | 237 Missing :warning: |
| src/fuzzer_client.c | 69.48% | 119 Missing :warning: |
| src/valkey-benchmark.c | 22.72% | 17 Missing :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## unstable #2340 +/- ##
===========================================
Coverage ? 72.79%
===========================================
Files ? 130
Lines ? 71738
Branches ? 0
===========================================
Hits ? 52224
Misses ? 19514
Partials ? 0
| Files with missing lines | Coverage Δ | |
|---|---|---|
| src/valkey-benchmark.c | 60.63% <22.72%> (ø) | |
| src/fuzzer_client.c | 69.48% <69.48%> (ø) | |
| src/fuzzer_command_generator.c | 76.18% <76.18%> (ø) | |
@uriyage shall we also add a daily job to run the fuzzer?
I also thought about this, but I'm unsure about the triage part.
Nice work @uriyage! Gave it a run locally, works pretty well.
I had a question about the final report: do you manually analyze all the errors to spot the issues? What is your mechanism around it?
The errors can be divided into two groups: server-side-observable and client-side-observable.
For server-side issues, we can identify crashes, memory corruptions, or memory leaks. For client-side issues, we can observe timeouts (where the server is unexpectedly unresponsive) or malformed replies from the server that indicate problems with reply generation.
For the first two server-side issues, which are the most common, we can simply validate that the server didn't crash after the fuzzer run and that no memory issues were reported (when compiled with ASAN).
Client-side issues are more difficult to root cause and require manual work to understand what went wrong. We could potentially add in the future more server-side debug capabilities, such as having the server inspect its own output and crash whenever it sends a malformed reply. We could also add some thread monitoring capability to identify when the server becomes unresponsive.
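For the crash check mentioned above, the post-run liveness probe can be as simple as a PING; a hedged sketch, assuming a hiredis-style client:

```c
#include <string.h>
#include <hiredis/hiredis.h>

/* Returns 0 if the server replies PONG after the fuzzer run, -1 otherwise. */
static int server_is_alive(const char *host, int port) {
    redisContext *c = redisConnect(host, port);
    if (c == NULL || c->err) return -1; /* connection refused: likely crashed */

    redisReply *reply = redisCommand(c, "PING");
    int ok = (reply && reply->type == REDIS_REPLY_STATUS &&
              strcmp(reply->str, "PONG") == 0) ? 0 : -1;
    freeReplyObject(reply);
    redisFree(c);
    return ok;
}
```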
@uriyage shall we also add a daily job to run the fuzzer?
Thanks, I added it to the TCL tests so it will run with all the variations we currently use for TCL testing. The test will be considered a failure only if the server crashes or becomes unresponsive after the fuzzer run.
This isn't required for 9.0, but I would like us to try to get it merged after the 9.0 rc-1 goes out.
This is great! This seems quite thoroughly designed, and I like it a lot 😁
A few questions/suggestions:
- Can we add unit tests? Or make AI do it? (actual UTs, in src/unit and written in C++)
- How difficult is it to keep the fuzzer up to date if more commands or arguments are added in the future?
- Could it potentially support fuzzing module commands in the future?
@rainsupreme
- Yes, I will try to do it as a follow-up item.
- The fuzzer dynamically retrieves commands and arguments from the server at runtime, so new commands are automatically tested as long as they're properly defined in the JSON specification files. Manual updates are only needed for edge cases like filtering dangerous commands or handling special argument patterns.
- Yes, it already supports module commands. The fuzzer dynamically retrieves all commands via COMMAND DOCS, including module commands, so they're automatically tested as long as the module supplies proper documentation in the JSON.