Enable the Scorecard GitHub Action and badge
Closes #926
Hi, here is the PR with the changes to enable the OpenSSF Scorecard GitHub Action, as described in #926.
Thanks for your attention; if you have any doubts or concerns, please reach out to me.
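For reference, this is roughly the shape of the workflow being added — a minimal sketch based on the upstream ossf/scorecard-action documentation (the version tags and options here are illustrative; the file in this PR is the source of truth):

```yaml
# Minimal sketch of a Scorecard workflow, following the ossf/scorecard-action
# README; version tags below are illustrative, not the exact pins in this PR.
name: Scorecard analysis
on:
  schedule:
    - cron: '30 1 * * 6'  # weekly run
  push:
    branches: [ main ]

permissions: read-all

jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write  # upload SARIF results to code scanning
      id-token: write         # publish results, needed for the badge
    steps:
      - uses: actions/checkout@v3
        with:
          persist-credentials: false
      - uses: ossf/scorecard-action@v2
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true
      - uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: results.sarif
```

With `publish_results: true`, the README badge is then served from https://api.securityscorecards.dev.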
Thanks for the contribution! One comment: we looked at doing this before, but we have a number of binaries and similar artifacts committed to the repository for testing. Is there a way to explicitly omit such things from Scorecard?
About this topic, I've looked into the details to give a more accurate answer: it seems it is not yet possible to configure an "ignore list", but it is something the Scorecard team is working on (https://github.com/ossf/scorecard/issues/1270).
I would also like to point out that only one check actually looks at binary files: the Binary-Artifacts check. It simply reports whether or not there are binary files in the repo; none of the other checks look at or scan binary files.
@kzantow I don't think the binary one hurts the score very much anyway; it looked like a 9 to me for that one. The check that brings it down the most appears to be not pinning GitHub Actions to a hash.
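To make the pinning point concrete, here is what the change looks like for a single step (the action and SHA are illustrative, not a specific line from our workflows):

```yaml
# Unpinned: a tag like v3 is mutable and can be repointed at different code
- uses: actions/checkout@v3

# Pinned: a full commit SHA is immutable (illustrative SHA; tools such as
# Dependabot can keep the trailing version comment up to date)
- uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3.6.0
```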
Hi @joycebrum -- sorry for the delay getting back to this. I've had a look through the scorecard results and I'm trying to understand a few things. First of all, the results I see are:
Overall score: 6.4
Based on what I found in the JSON:
| Check | Score |
|---|---|
| Maintained | 10 |
| Code review | 10 |
| CII best practices | <missing> |
| Vulnerabilities | 10 |
| Packaging | -1 |
| License | 10 |
| Dangerous workflow | 10 |
| Token permissions | <missing> |
| Binary artifacts | 9 |
| Security policy | <missing> |
| Signed releases | <missing> |
| Branch protection | 8 |
| Dependency update tool | 10 |
| Fuzzing | <missing> |
| Pinned dependencies | 5 |
| SAST | -1 |
I'd like to understand how this score is calculated. I assume missing values are zero? What about -1 values -- are these unknown and omitted? (I just found the section explaining the weighting, so haven't done the math with weighted numbers yet...)
Some of these seem to be misleading; for example, we are already running CodeQL (albeit only on pushes to main, due to the time the check takes): https://github.com/anchore/syft/blob/main/.github/workflows/codeql-analysis.yml
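(I'd guess the trigger section is what Scorecard keys on here; a sketch of the shape, assuming the workflow currently only triggers on pushes to main, and what the SAST check would want added:)

```yaml
# Sketch of the assumed trigger section, not copied verbatim from the repo
on:
  push:
    branches: [ main ]
  # Scorecard's SAST check wants analysis on every PR/commit as well, e.g.:
  # pull_request:
  #   branches: [ main ]
```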
And we are signing at least some of the release artifacts (e.g. macOS builds).
Is there a way to exclude specific checks because they aren't applicable or giving incorrect results?
Hi @kzantow, don't worry about the time at all.
Running Scorecard locally, the results I got are a little more human friendly, so they might help us better understand the scores:
| SCORE | NAME | REASON |
|---|---|---|
| 9/10 | Binary-Artifacts | There is at least one binary artifact in the source code |
| 8/10 | Branch-Protection | The score is 0, 3, 6, 8, 9 or 10 according to https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection |
| 10/10 | CI-Tests | |
| 0/10 | CII-Best-Practices | The missing here really means 0, because the evaluation of this check depends on whether the project has earned an OpenSSF (formerly CII) Best Practices badge |
| 10/10 | Code-Review | |
| 10/10 | Contributors | |
| 10/10 | Dangerous-Workflow | |
| 10/10 | Dependency-Update-Tool | |
| 0/10 | Fuzzing | Project is not Fuzzed |
| 10/10 | License | |
| 10/10 | Maintained | |
| ? | Packaging | "no published package detected". The missing here doesn't mean 0; it means the check is not applicable, so it is not counted in the score |
| 5/10 | Pinned-Dependencies | "dependency not pinned by hash detected -- score normalized to 5" |
| 7/10 | SAST | "SAST tool detected but do not run on all commits" |
| 0/10 | Security-Policy | "Security Policy file not detected". This is a 0-or-10 check, but it is also a quick win |
| 0/10 | Signed-Releases | "0 out of 5 artifacts are signed or have provenance". It seems to have found 5 release artifacts that were not signed |
| 0/10 | Token-Permissions | "non read-only tokens detected in Github Workflow". This check determines whether the project's automated workflows' tokens are set to read-only by default (see the sketch after this table) |
| 10/10 | Vulnerabilities | |
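On the Token-Permissions point, this is the kind of change Scorecard looks for (a sketch, not a diff of a specific workflow in this repo): declare a read-only default at the top level, then grant write scopes only in the jobs that need them.

```yaml
# Sketch: restrict the default GITHUB_TOKEN to read-only at the top level
permissions: read-all

jobs:
  release:
    runs-on: ubuntu-latest
    # Grant write scopes per job, only where actually needed
    permissions:
      contents: write
    steps:
      - uses: actions/checkout@v3
```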
About the score calculation, these are the weights:
- “Critical” risk checks are weighted at 10
- “High” risk checks are weighted at 7.5
- “Medium” risk checks are weighted at 5
- “Low” risk checks are weighted at 2.5
Thus we would have (Packaging is excluded from both the numerator and the denominator because it is not applicable):

(10 * Dangerous-Workflow
+ 7.5 * (Binary-Artifacts + Branch-Protection + Code-Review + Dependency-Update-Tool + Maintained + Signed-Releases + Token-Permissions + Vulnerabilities)
+ 5 * (Fuzzing + Pinned-Dependencies + SAST + Security-Policy)
+ 2.5 * (CI-Tests + CII-Best-Practices + Contributors + License))
/ (10 + 60 + 20 + 10)

= (100 + 427.5 + 60 + 75) / 100 = 6.625, which is the value Scorecard calculated in the CLI.

@kzantow The explanation from @joycebrum LGTM. I think we can merge this and improve on some of the quick wins she highlighted in future smaller PRs. WDYT?
And it looks like fixing Signed-Releases and Token-Permissions would boost the score the most?
Thanks for the contribution @joycebrum -- we're going to move forward with this as-is and work to make incremental improvements 👍
Great to hear that, @kzantow. I would also like to offer my help working on any issue from the Scorecard checks (or any other security issue, tbh) that you might want to implement. Feel free to ping me and/or assign it to me 😄