[Tracking] localhost `test/vm` test suite - occasionally capture results tables
Motivation
See the most recent README of the localhost VM-based test suite: https://github.com/falcosecurity/libs/tree/master/test/vm.
Feature
Occasionally capture results tables to keep some history of our drivers' kernel compatibility across compiler versions.
Note that since Aug 2023, libs features official CI-powered kernel version testing, and the results are shared with each driver release.
2022-11-16

(results plot not preserved)

2023-03-14

(results plot not preserved)
Wow, the last plot is quite worrying; to be honest, I would think again about #940. WDYT? @incertum @FedeDP @Molter73
Maybe it is not clear, but the final patch I have in mind will simplify the logic a lot; maybe we can work on it in a separate branch, and if you like the final result we can think about merging it (?)
I fully agree, we should definitely spend some time fixing those issues; perhaps not all of them will suddenly go away by splitting bpf_val_to_ring, but experience says that about 90% of past issues were because of it.
June 7, 2023
libs master commit 4db06c67e943b5d6dc6ff66d30c1fc3c7ddb1930 (corresponding approximately to the 0.11.2 release)
The build containers have been optimized, resulting in fewer gaps due to failed builds, especially for kmod. In addition, two separate results tables are now reported: [compiled] vs [compiled + success].
The kernel grid has been adjusted as well; it now includes 2.6.32 and 3.10 kernels to check the kmod build, but those are not tested in a VM.
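For reference, the two reporting tiers roughly work like this; a minimal sketch with a hypothetical record layout, not the suite's actual code:

```python
# Hypothetical result record: one (kernel, compiler) cell of the grid.
# "compiled" means the driver built; "ran_ok" means the runtime test also passed.
def cell_status(compiled: bool, ran_ok: bool) -> tuple[str, str]:
    """Return the symbols for the [compiled] and [compiled + success] tables."""
    compiled_sym = "🔵" if compiled else "❌"
    success_sym = "🟢" if compiled and ran_ok else "❌"
    return compiled_sym, success_sym

# Example: a driver that built but failed at runtime shows up as 🔵 / ❌.
print(cell_status(compiled=True, ran_ok=False))  # ('🔵', '❌')
```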
Uhh, that's super cool, thank you for this! We need to move forward with this test grid; IMHO it should be one of the main tasks of the coming weeks!
Aug 7, 2023
libs master commit bbcc5c747ed177bf8b6ec5847fa9866f8a2dcf9a
Now using markdown tables, since we needed to remove the matplotlib dependency.
Still need to add modern bpf ... need to check again on the upstream builder containers; that was the blocker.
In addition, I was chatting with @FedeDP about the possibility of including such reports (multiple compiler versions) alongside the new awesome official CI-powered kernel version testing in the next iterations of the test frameworks.
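For the curious, emitting such a matrix as markdown is simple enough; a minimal sketch assuming a hypothetical `{kernel: {compiler: symbol}}` data shape, not the suite's actual code:

```python
# Minimal sketch: render a {kernel: {compiler: symbol}} mapping as a markdown
# table like the ones below. Data shape and names are hypothetical.
def to_markdown(results: dict[str, dict[str, str]], compilers: list[str]) -> str:
    lines = ["| kernel_uname_r | " + " | ".join(compilers) + " |",
             "|---" * (len(compilers) + 1) + "|"]
    for kernel, row in sorted(results.items()):
        cells = [row.get(c, "❌") for c in compilers]
        lines.append("| " + kernel + " | " + " | ".join(cells) + " |")
    return "\n".join(lines)

compilers = ["clang-14", "gcc-11"]
results = {"5.19.17-051917-generic": {"clang-14": "🔵", "gcc-11": "❌"}}
print(to_markdown(results, compilers))
```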
Driver (clang -> bpf, gcc -> kmod) kernel compatibility matrix [compiled] (🔵 = driver compiled, ❌ = failed)
| kernel_uname_r | clang-7 | clang-12 | clang-14 | clang-16 | gcc-5 | gcc-9 | gcc-11 | gcc-13 |
|---|---|---|---|---|---|---|---|---|
| 3.10.0-1160.49.1.el7.x86_64 | ❌ | ❌ | ❌ | ❌ | 🔵 | ❌ | ❌ | ❌ |
| 4.14.296-222.539.amzn2.x86_64 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 |
| 4.16.18-041618-generic | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 |
| 4.19.277-0419277-generic | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 |
| 5.4.247-1.el7.elrepo.x86_64 | 🔵 | 🔵 | 🔵 | 🔵 | ❌ | 🔵 | 🔵 | 🔵 |
| 5.10.9-1.el7.elrepo.x86_64 | 🔵 | 🔵 | 🔵 | 🔵 | ❌ | 🔵 | 🔵 | 🔵 |
| 5.14.15-1.el7.elrepo.x86_64 | 🔵 | 🔵 | 🔵 | 🔵 | ❌ | 🔵 | 🔵 | 🔵 |
| 5.19.17-051917-generic | ❌ | 🔵 | 🔵 | 🔵 | ❌ | ❌ | ❌ | 🔵 |
| 6.3.5-060305-generic | ❌ | 🔵 | 🔵 | 🔵 | ❌ | ❌ | ❌ | 🔵 |
| 6.3.8-1.el7.elrepo.x86_64 | 🔵 | 🔵 | 🔵 | 🔵 | ❌ | 🔵 | 🔵 | 🔵 |
Driver (clang -> bpf, gcc -> kmod) kernel compatibility matrix [compiled + success] (🟢 = compiled and ran successfully, ❌ = failed)
| kernel_uname_r | clang-7 | clang-12 | clang-14 | clang-16 | gcc-5 | gcc-9 | gcc-11 | gcc-13 |
|---|---|---|---|---|---|---|---|---|
| 4.14.296-222.539.amzn2.x86_64 | 🟢 | 🟢 | 🟢 | ❌ | 🟢 | 🟢 | 🟢 | ❌ |
| 4.16.18-041618-generic | 🟢 | 🟢 | 🟢 | ❌ | 🟢 | 🟢 | 🟢 | 🟢 |
| 4.19.277-0419277-generic | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| 5.4.247-1.el7.elrepo.x86_64 | 🟢 | 🟢 | 🟢 | 🟢 | ❌ | 🟢 | 🟢 | 🟢 |
| 5.10.9-1.el7.elrepo.x86_64 | 🟢 | ❌ | 🟢 | 🟢 | ❌ | 🟢 | 🟢 | 🟢 |
| 5.14.15-1.el7.elrepo.x86_64 | 🟢 | 🟢 | 🟢 | 🟢 | ❌ | 🟢 | 🟢 | 🟢 |
| 5.19.17-051917-generic | ❌ | 🟢 | 🟢 | 🟢 | ❌ | ❌ | ❌ | 🟢 |
| 6.3.5-060305-generic | ❌ | 🟢 | 🟢 | 🟢 | ❌ | ❌ | ❌ | 🟢 |
| 6.3.8-1.el7.elrepo.x86_64 | 🟢 | 🟢 | 🟢 | 🟢 | ❌ | 🟢 | 🟢 | 🟢 |
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle stale
/remove-lifecycle stale
7.0.0+driver
Driver (clang -> bpf, gcc -> kmod) kernel compatibility matrix [compiled]
| kernel_uname_r | clang-7 | clang-12 | clang-14 | clang-16 | gcc-5 | gcc-9 | gcc-11 | gcc-13 |
|---|---|---|---|---|---|---|---|---|
| 3.10.0-1160.49.1.el7.x86_64 | ❌ | ❌ | ❌ | ❌ | 🔵 | ❌ | ❌ | ❌ |
| 4.14.296-222.539.amzn2.x86_64 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 |
| 4.16.18-041618-generic | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 |
| 4.19.296-0419296-generic | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 | 🔵 |
| 5.4.247-1.el7.elrepo.x86_64 | 🔵 | 🔵 | 🔵 | 🔵 | ❌ | 🔵 | 🔵 | 🔵 |
| 5.10.9-1.el7.elrepo.x86_64 | 🔵 | 🔵 | 🔵 | 🔵 | ❌ | 🔵 | 🔵 | 🔵 |
| 5.14.15-1.el7.elrepo.x86_64 | 🔵 | 🔵 | 🔵 | 🔵 | ❌ | 🔵 | 🔵 | 🔵 |
| 5.19.17-051917-generic | ❌ | 🔵 | 🔵 | 🔵 | ❌ | ❌ | ❌ | 🔵 |
| 6.5.0-060500-generic | ❌ | ❌ | ❌ | 🔵 | ❌ | ❌ | ❌ | 🔵 |
| 6.5.8-1.el7.elrepo.x86_64 | 🔵 | 🔵 | 🔵 | 🔵 | ❌ | 🔵 | 🔵 | 🔵 |
Driver (clang -> bpf, gcc -> kmod) kernel compatibility matrix [compiled + success]
| kernel_uname_r | clang-7 | clang-12 | clang-14 | clang-16 | gcc-5 | gcc-9 | gcc-11 | gcc-13 |
|---|---|---|---|---|---|---|---|---|
| 4.14.296-222.539.amzn2.x86_64 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | ❌ |
| 4.16.18-041618-generic | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | ❌ |
| 4.19.296-0419296-generic | 🟢 | 🟢 | 🟢 | 🟢 | ❌ | 🟢 | 🟢 | 🟢 |
| 5.4.247-1.el7.elrepo.x86_64 | 🟢 | 🟢 | 🟢 | 🟢 | ❌ | 🟢 | 🟢 | 🟢 |
| 5.10.9-1.el7.elrepo.x86_64 | 🟢 | 🟢 | 🟢 | 🟢 | ❌ | 🟢 | 🟢 | 🟢 |
| 5.14.15-1.el7.elrepo.x86_64 | 🟢 | 🟢 | 🟢 | 🟢 | ❌ | 🟢 | 🟢 | 🟢 |
| 5.19.17-051917-generic | ❌ | 🟢 | 🟢 | 🟢 | ❌ | ❌ | ❌ | 🟢 |
| 6.5.0-060500-generic | ❌ | ❌ | ❌ | 🟢 | ❌ | ❌ | ❌ | 🟢 |
| 6.5.8-1.el7.elrepo.x86_64 | 🟢 | 🟢 | 🟢 | 🟢 | ❌ | 🟢 | 🟢 | 🟢 |
@FedeDP and @Andreagit97 once again, in the last dev sprint leading up to 7.0.0, not just the official CI kernel tests but also these tests across multiple compiler versions were pivotal in pinpointing some eBPF verifier issues in the legacy eBPF driver. I was thinking: couldn't we just run this as-is on the CNCF test server as an ad-hoc, on-demand CI pipeline (non-required)? While the vagrant and vbox approach was more intended for your dev laptops, I don't see why it shouldn't easily work as well, given that we now have the bare-metal server. WDYT?
> I was thinking: couldn't we just run this as-is on the CNCF test server as an ad-hoc, on-demand CI pipeline (non-required)? While the vagrant and vbox approach was more intended for your dev laptops, I don't see why it shouldn't easily work as well, given that we now have the bare-metal server. WDYT?

I think it makes sense! It belongs in https://github.com/falcosecurity/libs/blob/master/.github/workflows/reusable_kernel_tests.yaml, and we might also add a new GitHub page about it (with its matrix): https://github.com/falcosecurity/libs/blob/master/.github/workflows/pages.yml. It should be fairly simple, since it is like running it on your local machine ;)
Okie, after the release in Feb I'll reach out about how to help set it up and do the testing. For example, I could get interim SSH access to test and verify an install script for all the dependencies. CC @LucaGuerra, I believe you manage the bare-metal access, right? Also, no rush on that.
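For the ad-hoc on-demand idea, a run could be kicked off via the GitHub REST API's workflow_dispatch endpoint; a minimal sketch, assuming a hypothetical workflow file name:

```python
# Minimal sketch: trigger an on-demand run of a (hypothetical) VM test workflow
# via the GitHub REST API's workflow_dispatch endpoint. Requires a token with
# "actions: write" permission; the workflow file name here is an assumption.
import urllib.request
import json

REPO = "falcosecurity/libs"
WORKFLOW = "vm_kernel_tests.yaml"  # hypothetical; a real workflow file would be needed
TOKEN = "ghp_..."  # personal access token

req = urllib.request.Request(
    f"https://api.github.com/repos/{REPO}/actions/workflows/{WORKFLOW}/dispatches",
    data=json.dumps({"ref": "master"}).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    method="POST",
)
urllib.request.urlopen(req)  # returns 204 No Content on success
```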
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://github.com/falcosecurity/community.
/close
@poiana: Closing this issue.
In response to this:
> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen.
> Mark the issue as fresh with /remove-lifecycle rotten.
> Provide feedback via https://github.com/falcosecurity/community.
> /close