Fail on failed checkout
What is going on here? This has revealed that our commit checkout is really, really broken.
By that, I mean our benchmarks are broken and this bug was hiding that fact.
BTW, this is the previous git checkout failure I mentioned: #1608.
But it seems to be unrelated to the errors here, even though they share similar error messages.
@jonathanmetzman Reasons for CI failures:
- The project is cloned with `--depth 1` or from a specific branch that may not have the commit hash. For example, `bloaty`.
- The project's `benchmark.yaml` does not have a commit. For example, `freetype2` and `harfbuzz`.

I did not check through all of them, but I suppose we can see the pattern here: we should not run `python3 -u /opt/fuzzbench/checkout_commit.py` on all benchmarks. We should do that on bug-based ones only.
Maybe some recent commit changed that? I don't recall seeing this error in the past.
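For reference, a minimal sketch of that "bug-based only" idea, assuming benchmark configs are plain YAML dicts and that `checkout_commit.py` accepts the commit as a command-line argument (the real script's interface may differ):

```python
# Hypothetical sketch only; paths and the checkout_commit.py interface
# are assumptions, not FuzzBench's actual code.
import os
import subprocess

import yaml  # PyYAML


def maybe_checkout_commit(benchmark_dir):
    """Run the checkout step only if the benchmark pins a commit."""
    config_path = os.path.join(benchmark_dir, 'benchmark.yaml')
    with open(config_path) as f:
        config = yaml.safe_load(f) or {}
    commit = config.get('commit')
    if not commit:
        print(f'{benchmark_dir}: no commit pinned, skipping checkout.')
        return
    subprocess.run(
        ['python3', '-u', '/opt/fuzzbench/checkout_commit.py', commit],
        check=True)
```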
I think the correct thing is to check out on every benchmark. Otherwise it breaks merging experiments (e.g. you do experiment 1 on Monday with commit A, then do experiment 2 on Thursday with commit B; you really shouldn't merge them anymore).
Also, I suspect most of these are actually bug-based.
No commit because it's bug-draft (delete?): njs_njs_process_script_fuzzer, libhevc_hevc_dec_fuzzer, libgit2_objects_fuzzer, muparser_set_eval_fuzzer, libhtp_fuzz_htp, zstd, wireshark, usrsctp, systemd_fuzz-varlink, quickjs_eval-2020-01-05, proj4_standard_fuzzer, poppler_pdf_fuzzer, php_php-fuzz-parser, php_php-fuzz-execute, njs_njs_process_script_fuzzer, matio_matio_fuzzer, libxml2_xml, libarchive_libarchive_fuzzer

Code benchmarks (add commit): stb_stbi_read_fuzzer, libjpeg-turbo-07-2017, php_php-fuzz-parser-2020-07-25, openthread-2019-12-23, openh264_decoder_fuzzer, mbedtls_fuzz_dtlsclient, libxslt_xpath, libxml2-v2.9.2, libpng-1.6.38, harfbuzz-1.3.2, jsoncpp_jsoncpp_fuzzer, bloaty_fuzz_target, freetype2-2017
@Alan32Liu I don't think any of these failed because of depth 1 or rewriting git history at all.
I'm going to fix this in a few ways.
- Most of these bug-draft benchmarks are going to be axed. They aren't useful anymore if they can't build where the bug occurred. Dongge, did you try fixing all of these and were unsuccessful?
- Some of these, like mruby and njs, I'm going to keep but take out of CI because some folks like @andreafioraldi want them. But I think unless they can be fixed, they need to be removed too.
- We should try to make each code benchmark check out the correct commit too.
I see, thanks!
I will do the following two things today:
- Remove `--depth 1` from benchmarks' Dockerfiles.
- Add commits for coverage-based benchmarks.
For bug-drafts, could we keep them for now?
My plan is to gradually revive them by adding back the ID from OSS-Fuzz. I will need some time to find the best ID for each, but we know this should work from past examples.
We use the bug-draft label as a middle ground for this situation. I was a bit distracted in the past few weeks but should be able to work on them later.
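As a rough aid for the second item above, a hedged sketch of how one might list the benchmarks that still need a commit added (the `benchmarks/*/benchmark.yaml` layout and the `commit` key are assumptions):

```python
# Hypothetical helper; assumes benchmark configs live at
# benchmarks/<name>/benchmark.yaml with an optional 'commit' key.
import glob

import yaml  # PyYAML


def benchmarks_missing_commit(benchmarks_root='benchmarks'):
    """Return benchmark.yaml paths that do not pin a commit."""
    missing = []
    for path in sorted(glob.glob(f'{benchmarks_root}/*/benchmark.yaml')):
        with open(path) as f:
            config = yaml.safe_load(f) or {}
        if not config.get('commit'):
            missing.append(path)
    return missing


if __name__ == '__main__':
    for path in benchmarks_missing_commit():
        print(f'{path}: missing commit')
```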
I'm not sure what you mean by middle ground? Are these benchmarks that you didn't get a chance to fix, or are they broken? If the former, we can keep them; if the latter, we should delete them.
Yep, I meant the benchmarks I did not have a chance to fix. Recall that after updating Ubuntu, we need to update benchmarks for compatibility reasons. To update bug-based benchmarks, I need to find a suitable buggy commit ID from OSS-Fuzz for each. I will need more time to find the commit IDs and fix them.
This shouldn't be failing for so many tests still. :-(
OK, I will look into this today.
Thanks!
CI failures and reasons:
- `lcms`: No commit ID. Nit: rename it with the project name and benchmark name, and omit the date.
- `mruby`: No commit ID. Nit: rename it with the project name and benchmark name, and omit the date.
- `njs`: No commit ID.
- `proj4`: No commit ID. Nit: rename it with the project name and benchmark name, and omit the date.
- `quickjs_eval`: No commit ID. Nit: rename it with the project name and benchmark name, and omit the date.
- `re2`: No commit ID. Nit: rename it with the project name and benchmark name, and omit the date.
- `sqlite3`: No commit ID. It has a `commit` attribute but does not specify any ID; is this an intended design? It is suffixed with `ossfuzz`, not sure what that represents.
- `vorbis`: No commit ID. Nit: rename it with the project name and benchmark name, and omit the date.
- `woff2`: No commit ID. Nit: rename it with the project name and benchmark name, and omit the date.
@jonathanmetzman All failures are due to missing commit ID. I will add them now.
Thanks. I guess an alternative is to mark the commits as "none" and allow `checkout_commit` to fail gracefully. I think we will need to implement this for sqlite3, which doesn't have a git repo.
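A minimal sketch of that "fail gracefully" idea, assuming the checkout helper receives the commit and the source directory as arguments (this is an illustration, not the actual `checkout_commit.py`):

```python
# Illustration only: skip the checkout when the commit is missing or
# explicitly marked as "none" (e.g. sqlite3, which has no git repo).
import subprocess
import sys


def checkout(commit, src_dir):
    if not commit or commit.lower() == 'none':
        print(f'No commit to check out in {src_dir}; skipping.')
        return 0
    result = subprocess.run(['git', '-C', src_dir, 'checkout', commit])
    return result.returncode


if __name__ == '__main__':
    if len(sys.argv) < 3:
        sys.exit('usage: checkout.py <commit|none> <src_dir>')
    sys.exit(checkout(sys.argv[1], sys.argv[2]))
```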
@jonathanmetzman Please let me know if 21f1fe6 makes sense.
Previously, when no commit ID was parsed from `benchmark.yaml`, the env variable `CHECKOUT_COMMIT` would be empty, which misled the script `check_out.py` into thinking its first parameter was `$SRC`.
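For illustration (script and argument names are simplified and assumed, not the actual fix in 21f1fe6): if a wrapper runs something like `python3 check_out.py $CHECKOUT_COMMIT $SRC` with an unquoted, empty `CHECKOUT_COMMIT`, the shell drops the empty word and the arguments shift, so the source directory arrives as the first parameter. A defensive parsing sketch:

```python
# Hypothetical defensive parsing: detect the shifted-argument case
# caused by an empty, unquoted CHECKOUT_COMMIT.
import re
import sys


def parse_commit(argv):
    """Return the commit argument, or None if it is missing or invalid."""
    if len(argv) < 2:
        return None
    candidate = argv[1]
    # A real commit ID is a hex string; when CHECKOUT_COMMIT is empty,
    # the next argument (e.g. $SRC, a path) lands here instead.
    if re.fullmatch(r'[0-9a-fA-F]{7,40}', candidate):
        return candidate
    return None


if __name__ == '__main__':
    commit = parse_commit(sys.argv)
    if not commit:
        print('No valid commit provided; nothing to check out.')
        sys.exit(0)
    print(f'Would check out {commit}.')
```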