Experiment request for custom benchmarks
Description
Add mutant-based benchmarks and update experiment data in the YAML file. This experiment only introduces new benchmarks as we want to address the saturated seed corpus problem through corpus reduction techniques.
We have decided to use AFL and AFL++ for this experiment to observe any difference in the outcomes due to the difference in these fuzzers.
We use four benchmarks:
- The original seed corpus from the lcms_cms_transform_fuzzer benchmark
- An unfiltered seed corpus from our saturated corpus
- Filtering strategy one applied to the seed corpus
- Filtering strategy two applied to the seed corpus
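As a generic illustration of what corpus reduction can look like in practice (this shows AFL's stock corpus minimizer, not necessarily either of the filtering strategies above), a saturated corpus can be reduced to the seeds that contribute unique coverage:

```sh
# Illustrative only: reduce a saturated corpus with AFL's corpus minimizer.
# The input/output directories and the target binary are placeholders,
# not the actual artifacts from this PR.
afl-cmin -i saturated_corpus/ -o reduced_corpus/ -- ./cms_transform_fuzzer @@
```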
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
View this failed invocation of the CLA check for more information.
For the most up to date status, view the checks section at the bottom of the pull request.
Updated the Description.
@DonggeLiu @jonathanmetzman Could you please have a look?
Hi @ardier, we are happy to run experiments for you, but could you please:
- Move the seeds directory in this PR to cloud storage (e.g., a GitHub repo) and download it in the Dockerfile (see the sketch after this list)? Otherwise the 'Files changed' tab becomes too slow or crashes.
- Would you mind making a trivial modification to `service/gcbrun_experiment.py`? This will allow me to launch experiments in this PR before merging. Here is an example that adds a dummy comment : )
- In addition, could you please write your experiment request in this format?
You can swap the `--experiment-name`, `--fuzzers`, and `--benchmarks` parameters with your values:
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name <YYYY-MM-DD-NAME> --fuzzers <FUZZERS> --benchmarks <BENCHMARKS>
We would really appreciate that.
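For the seed corpus, a download step in the benchmark's Dockerfile along these lines would be enough (the repository URL and destination path below are placeholders, not the actual ones for this PR):

```dockerfile
# Illustrative only: fetch the seed corpus at image-build time instead of
# committing the files to this PR. The URL and path are placeholders.
RUN git clone --depth 1 https://github.com/<your-org>/<seed-corpus-repo>.git \
    /opt/seed-corpus
```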
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-03-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants
@DonggeLiu, apologies that this took a while for me to get to. I have applied the changes you asked for. Please let me know if I should be taking any additional steps.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-03-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants
Hello. I don't see the results of this experiment anywhere. Am I missing something, or do I need to take other steps to generate the reports?
Sorry @ardier, it appears Cloud Build failed to pick up the previous experiment request command.
Let me retry this.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-11-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants
No problem. Thank you for looking into this.
Hi @ardier, the experiment request failed again for the same reason, and there is no further detail in the cloud logs.
Let's do it again, and I will spend time debugging it if it fails again.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-12-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants
Experiment 2024-09-12-afl-mutants data and results will be available later at:
- The experiment data.
- The experiment report.
- The experiment report (experimental).
A quick update on this:
- The experiment launched successfully this time (finally).
- No report was generated because of a known issue with `llvm-profdata` coverage measurement. It failed to measure coverage, hence no report could be generated.
- ~~From the cloud log, so far the issue is from the benchmark `lcms_cms_transform_fuzzer_dominator_mutants` only.~~
- ~~I will rerun the exp without that benchmark.~~
- Unfortunately, the error happened on all benchmarks; I reckon that's because they are all based on `lcms`.
- Would you mind using other benchmarks? If not, I can run some other benchmarks and let you know which ones work.
- We will look into ways to fix this, but currently I am fully occupied by other tasks and may take weeks before I can go back to this.
I created a simpler experiment here with the suggested changes to ensure everything works before creating a larger experiment.