
Experiment request for custom benchmarks

Open ardier opened this issue 1 year ago • 15 comments

Description

Add mutant-based benchmarks and update the experiment data in the YAML file. This experiment only introduces new benchmarks; our goal is to address the saturated seed corpus problem through corpus reduction techniques (a generic sketch of one such reduction pass follows the benchmark list below).

We have decided to use both AFL and AFL++ for this experiment so that we can observe whether the outcomes differ between the two fuzzers.

We use four benchmarks:

  1. The original seed corpus from the lcms_cms_transform_fuzzer benchmark
  2. An unfiltered version of our saturated seed corpus
  3. Filtering strategy one applied to the seed corpus
  4. Filtering strategy two applied to the seed corpus
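
As a rough illustration of the corpus reduction idea only (not the mutant-based filtering strategies used in this PR), here is a minimal sketch of a generic coverage-based minimization pass with AFL's afl-cmin; the corpus directories and the target binary name are placeholders:

```sh
# Illustration only: a generic coverage-based corpus minimization using
# afl-cmin, assuming an AFL-instrumented build of the target. This is not
# one of the mutant-based filtering strategies from this PR, and all paths
# are placeholders.
# '@@' marks where afl-cmin substitutes each candidate input file.
afl-cmin -i saturated_corpus/ -o reduced_corpus/ -- ./fuzz_target @@
```

The corpora in benchmarks 3 and 4 above come from the PR's own mutant-based filtering strategies rather than from afl-cmin.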

ardier avatar Aug 14 '24 13:08 ardier

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

google-cla[bot] avatar Aug 14 '24 13:08 google-cla[bot]

Updated the Description.

ardier avatar Aug 15 '24 09:08 ardier

@DonggeLiu @jonathanmetzman Could you please have a look?

ardier avatar Aug 22 '24 14:08 ardier

Hi @ardier, we are happy to run experiments for you, but could you please:

  1. Move the seeds directory in this PR to cloud storage (e.g., a GitHub repo) and download it in the Dockerfile? Otherwise the 'Files changed' tab becomes too slow or crashes. (A hypothetical sketch of such a download step follows this list.)
  2. Would you mind making a trivial modification to service/gcbrun_experiment.py? This will allow me to launch experiments in this PR before merging. Here is an example: adding a dummy comment : )
  3. In addition, could you please write your experiment request in this format? You can replace the --experiment-name, --fuzzers, and --benchmarks parameters with your values:
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name <YYYY-MM-DD-NAME>  --fuzzers <FUZZERS> --benchmarks <BENCHMARKS>
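
For item 1, here is a hypothetical sketch of the commands a Dockerfile RUN instruction could execute to fetch the seeds; the archive URL and the destination directory are placeholders, not taken from this PR:

```sh
# Hypothetical seed-corpus download for the Dockerfile (item 1 above).
# The archive URL and the destination directory are placeholders.
wget -q https://github.com/<USER>/<SEEDS_REPO>/archive/refs/heads/main.tar.gz -O /tmp/seeds.tar.gz
mkdir -p /opt/seeds
tar -xzf /tmp/seeds.tar.gz -C /opt/seeds --strip-components=1
rm /tmp/seeds.tar.gz
```

Keeping the seeds out of the PR diff keeps the 'Files changed' tab responsive while the benchmark build still receives the same corpus.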

We would really appreciate that.

DonggeLiu avatar Aug 23 '24 00:08 DonggeLiu

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-03-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

ardier avatar Sep 03 '24 09:09 ardier

@DonggeLiu, apologies that this took a while for me to get to. I have applied the changes you asked for. Please let me know if I should be taking any additional steps.

ardier avatar Sep 03 '24 10:09 ardier

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-03-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

DonggeLiu avatar Sep 03 '24 12:09 DonggeLiu

Hello. I don't see the results of this experiment anywhere. Am I missing something, or do I need to take other steps to generate the reports?

ardier avatar Sep 10 '24 22:09 ardier

Sorry @ardier, it appears Cloud Build failed to pick up the previous experiment request command.

Let me retry this.

DonggeLiu avatar Sep 11 '24 00:09 DonggeLiu

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-11-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

DonggeLiu avatar Sep 11 '24 00:09 DonggeLiu

No problem. Thank you for looking into this.

ardier avatar Sep 11 '24 07:09 ardier

Hi @ardier, the experiment request failed again for the same reason, and the cloud logs show no further details.

Let's do it again, and I will spend time debugging it if it fails again.

DonggeLiu avatar Sep 12 '24 01:09 DonggeLiu

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-12-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

DonggeLiu avatar Sep 12 '24 01:09 DonggeLiu

Experiment 2024-09-12-afl-mutants data and results will be available later at: the experiment data, the experiment report, and the experiment report (experimental).

DonggeLiu avatar Sep 12 '24 06:09 DonggeLiu

A quick update on this:

  1. The experiment launched successfully this time (finally).
  2. No report was generated because of a known issue with llvm-profdata coverage measurement: coverage could not be measured, so a report cannot be generated.
  3. ~~From the cloud log, so far the issue is from benchmark lcms_cms_transform_fuzzer_dominator_mutants only.~~
  4. ~~I will rerun the exp without that benchmark.~~
  5. Unfortunately, the error happened on all benchmarks; I reckon that's because they are all based on lcms.
  6. Would you mind using other benchmarks? If not, I can run some other benchmarks and let you know which ones work.
  7. We will look into ways to fix this, but I am currently fully occupied by other tasks, and it may take weeks before I can get back to this.

DonggeLiu avatar Sep 12 '24 06:09 DonggeLiu

I created a simpler experiment here with the suggested changes to ensure everything works before creating a larger experiment.

ardier avatar Nov 26 '24 19:11 ardier