
OpenSSF "Analysis" section

tarilabs opened this issue 9 months ago • 3 comments

We need to document the following sections, with the minimal amount of effort required to meet the criteria at https://www.bestpractices.dev/en/projects/9937#analysis

Static code analysis

  • [x] At least one static code analysis tool (beyond compiler warnings and "safe" language modes) MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language. [static_analysis]
  • [x] It is SUGGESTED that at least one of the static analysis tools used for the static_analysis criterion include rules or approaches to look for common vulnerabilities in the analyzed language or environment. [static_analysis_common_vulnerabilities]
  • [x] All medium and higher severity exploitable vulnerabilities discovered with static code analysis MUST be fixed in a timely way after they are confirmed. [static_analysis_fixed]
  • [x] It is SUGGESTED that static source code analysis occur on every commit or at least daily. [static_analysis_often]

Dynamic code analysis

  • [ ] It is SUGGESTED that at least one dynamic analysis tool be applied to any proposed major production release of the software before its release. [dynamic_analysis]
  • [ ] It is SUGGESTED that if the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites. If the project does not produce software written in a memory-unsafe language, choose "not applicable" (N/A). [dynamic_analysis_unsafe]
  • [ ] It is SUGGESTED that the project use a configuration for at least some dynamic analysis (such as testing or fuzzing) which enables many assertions. In many cases these assertions should not be enabled in production builds. [dynamic_analysis_enable_assertions]
  • [ ] All medium and higher severity exploitable vulnerabilities discovered with dynamic code analysis MUST be fixed in a timely way after they are confirmed. [dynamic_analysis_fixed]

tarilabs avatar May 08 '25 13:05 tarilabs

At least one static code analysis tool (beyond compiler warnings and "safe" language modes) MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language. [static_analysis]

We use Ruff for the Model Registry Python client and golangci-lint for the Go code. These tools meet this requirement.

It is SUGGESTED that at least one of the static analysis tools used for the static_analysis criterion include rules or approaches to look for common vulnerabilities in the analyzed language or environment. [static_analysis_common_vulnerabilities]

This is the case for both Go (golangci-lint) and Python (Ruff).
For example, the golangci-lint (gosec) rule G204, "Audit use of command execution".
For example, the Ruff rule S307, "suspicious-eval-usage".
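To illustrate the kind of issue these rules catch, here is a minimal sketch (not code from the Model Registry repository) of the pattern Ruff's S307 flags, together with the usual safe substitute:

```python
import ast

def parse_user_expression(text: str):
    # Using eval(text) here would be flagged by Ruff rule S307
    # ("suspicious-eval-usage"), because evaluating an arbitrary string
    # can execute attacker-controlled code.
    # ast.literal_eval only accepts Python literals (lists, dicts,
    # numbers, strings, ...), so it is a safe substitute when the input
    # is expected to be a literal value.
    return ast.literal_eval(text)

print(parse_user_expression("[1, 2, 3]"))  # → [1, 2, 3]
```

Anything that is not a plain literal, such as `__import__('os')`, makes `ast.literal_eval` raise `ValueError` instead of executing it.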

All medium and higher severity exploitable vulnerabilities discovered with static code analysis MUST be fixed in a timely way after they are confirmed. [static_analysis_fixed]

Setting this as "N/A".

For the Model Registry project, there have been no occurrences of exploitable vulnerabilities discovered with static code analysis in the main branch.

It is SUGGESTED that static source code analysis occur on every commit or at least daily. [static_analysis_often]

The static code analysis tools are configured in the Makefiles, and also run as part of the GitHub Actions workflows.
This requirement is met.
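As a rough sketch of the per-commit setup (the project's actual workflow files and action versions may differ, so treat every name here as illustrative), a GitHub Actions workflow that runs both linters on every push and pull request could look like:

```yaml
# Hypothetical workflow sketch -- job names, action versions and
# triggers are illustrative, not the project's actual configuration.
name: static-analysis
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run golangci-lint (Go)
        uses: golangci/golangci-lint-action@v6
      - name: Run Ruff (Python)
        uses: astral-sh/ruff-action@v3
```

Triggering on `push` and `pull_request` satisfies the "on every commit" suggestion of [static_analysis_often].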

tarilabs avatar May 19 '25 07:05 tarilabs

looks good @tarilabs

dbasunag avatar May 19 '25 12:05 dbasunag

lgtm @tarilabs

lugi0 avatar May 19 '25 16:05 lugi0

It is SUGGESTED that at least one dynamic analysis tool be applied to any proposed major production release of the software before its release. [dynamic_analysis] A dynamic analysis tool examines the software by executing it with specific inputs. For example, the project MAY use a fuzzing tool (e.g., American Fuzzy Lop) or a web application scanner (e.g., OWASP ZAP or w3af). In some cases the OSS-Fuzz project may be willing to apply fuzz testing to your project. For purposes of this criterion the dynamic analysis tool needs to vary the inputs in some way to look for various kinds of problems or be an automated test suite with at least 80% branch coverage. The Wikipedia page on dynamic analysis and the OWASP page on fuzzing identify some dynamic analysis tools. The analysis tool(s) MAY be focused on looking for security vulnerabilities, but this is not required.

We use unit, integration, and E2E testing as dynamic analysis for both the Go and Python deliverables; in addition, during E2E testing we exercise the REST API invocations with a fuzzer tool.
More specifically: for Go, we use the standard Go testing package for unit tests, testcontainers-go for integration tests, and other testing frameworks such as testify and Ginkgo where meaningful. For Python, we use pytest for unit, integration, and E2E testing against a live deployment in a KinD cluster. We use Schemathesis as the fuzzer tool.
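In the same spirit as the Schemathesis-based REST fuzzing described above, the idea of "varying the inputs to look for problems" can be sketched in a few lines. The handler and its validation rules below are entirely hypothetical, standing in for a real endpoint:

```python
import random
import string

def register_model(name: str) -> dict:
    # Hypothetical handler standing in for a REST endpoint under test.
    # It must either succeed or reject invalid input with a controlled
    # error -- never crash in an uncontrolled way.
    if not name or len(name) > 63:
        raise ValueError("invalid model name")
    return {"name": name, "status": "registered"}

def fuzz_register_model(trials: int = 200, seed: int = 0) -> int:
    # Feed randomly varied inputs to the handler, mirroring what a
    # fuzzer does against an API: accept controlled rejections
    # (ValueError here) and count them, but treat any other exception
    # or malformed response as a failure.
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        length = rng.randint(0, 100)
        name = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            result = register_model(name)
            assert result["status"] == "registered"
        except ValueError:
            rejections += 1  # controlled rejection of invalid input is fine
    return rejections

fuzz_register_model()
```

A real setup like the one described above would instead point Schemathesis at the service's OpenAPI schema and let it generate the request variations.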

It is SUGGESTED that if the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites. If the project does not produce software written in a memory-unsafe language, choose "not applicable" (N/A). [dynamic_analysis_unsafe] Examples of mechanisms to detect memory safety problems include Address Sanitizer (ASAN) (available in GCC and LLVM), Memory Sanitizer, and valgrind. Other potentially-used tools include thread sanitizer and undefined behavior sanitizer. Widespread assertions would also work.

We use Go and Python, both memory-safe languages.

It is SUGGESTED that the project use a configuration for at least some dynamic analysis (such as testing or fuzzing) which enables many assertions. In many cases these assertions should not be enabled in production builds. [dynamic_analysis_enable_assertions] This criterion does not suggest enabling assertions during production; that is entirely up to the project and its users to decide. This criterion's focus is instead to improve fault detection during dynamic analysis before deployment. Enabling assertions in production use is completely different from enabling assertions during dynamic analysis (such as testing). In some cases enabling assertions in production use is extremely unwise (especially in high-integrity components). There are many arguments against enabling assertions in production, e.g., libraries should not crash callers, their presence may cause rejection by app stores, and/or activating an assertion in production may expose private data such as private keys. Beware that in many Linux distributions NDEBUG is not defined, so C/C++ assert() will by default be enabled for production in those environments. It may be important to use a different assertion mechanism or defining NDEBUG for production in those environments.

As above.
More specifically: for Go, assertions are checked during `go test` runs with testify, and for Python, assertions are validated during pytest runs.
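On the Python side, the point of this criterion can be shown with a minimal, hypothetical example: plain `assert` statements are active when the test suite runs, even though they can be stripped from production invocations with `python -O`.

```python
def normalize_version(tag: str) -> str:
    # Hypothetical helper: strip a leading "v" prefix, e.g. "v1.2.3" -> "1.2.3".
    assert isinstance(tag, str), "tag must be a string"
    return tag[1:] if tag.startswith("v") else tag

def test_normalize_version():
    # These asserts are enabled under pytest; running Python with -O
    # would strip them, which is why assertion-heavy checking belongs
    # in dynamic analysis runs rather than necessarily in production.
    assert normalize_version("v1.2.3") == "1.2.3"
    assert normalize_version("1.2.3") == "1.2.3"

test_normalize_version()
```

testify plays the analogous role on the Go side, since Go has no built-in assert statement.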

All medium and higher severity exploitable vulnerabilities discovered with dynamic code analysis MUST be fixed in a timely way after they are confirmed. [dynamic_analysis_fixed] If you are not running dynamic code analysis and thus have not found any vulnerabilities in this way, choose "not applicable" (N/A). A vulnerability is considered medium or higher severity if its Common Vulnerability Scoring System (CVSS) base qualitative score is medium or higher. In CVSS versions 2.0 through 3.1, this is equivalent to a CVSS score of 4.0 or higher. Projects may use the CVSS score as published in a widely-used vulnerability database (such as the National Vulnerability Database) using the most-recent version of CVSS reported in that database. Projects may instead calculate the severity themselves using the latest version of CVSS at the time of the vulnerability disclosure, if the calculation inputs are publicly revealed once the vulnerability is publicly known.

No exploitable vulnerabilities have been discovered and confirmed via dynamic code analysis so far on the project.

tarilabs avatar Jul 04 '25 05:07 tarilabs

This section is complete, as all the questions have been answered. If you are reading this message and would like to change any of the answers, don't hesitate to reopen this issue or create a new one with your suggestions. cc @fege

tarilabs avatar Jul 04 '25 05:07 tarilabs