chore: Update oci-ta tasks, add Renovate config
@msugakov: I took over this PR, adding more maintenance changes on top of Renovate's update. It's best to review this PR commit by commit.
Relates to ROX-22359
Hi @red-hat-konflux[bot]. Thanks for your PR.
I'm waiting for a stackrox member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Edited/Blocked Notification
Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR.
You can manually request rebase by checking the rebase/retry box above.
⚠️ Warning: custom changes will be lost.
/renovate rebase
This is going to be fun: there's a behavior change in the task such that it no longer seems to allow passing artifacts through the task chain. The DB builds failed but the scanner builds did not, probably because the DB builds don't do any actual dependency prefetch. I think we should just move the blobs fetch before the dependency prefetch to solve the issue.
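A minimal sketch of the proposed reordering, assuming a Tekton-style pipeline definition with `runAfter` ordering; the task names here are illustrative, not taken from this repo:

```yaml
# Hypothetical sketch: make the dependency prefetch run after the blobs
# fetch by adjusting runAfter. Task and param names are illustrative only.
tasks:
  - name: fetch-blobs
    runAfter:
      - clone-repository        # was previously ordered after prefetch
  - name: prefetch-dependencies
    runAfter:
      - fetch-blobs             # blobs fetch now precedes the prefetch
```

The point of the change is purely ordering: with `runAfter` flipped, the prefetch task sees the fetched blobs and the pass-through behavior should work for both the DB and scanner builds.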
/ok-to-test
/retest
It seems to me that the prefetch task puts down a skip marker file, so that the next attempt to save the OCI artifact bails out, respecting that file.
The pipeline run where this happened is https://console.redhat.com/application-pipeline/workspaces/rh-acs/applications/acs/pipelineruns/scanner-db-build-vtkz6
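A minimal sketch of how such a skip-marker guard typically behaves, assuming the save step checks for a marker file left by an earlier task; the file name and path here are hypothetical, not the actual task code:

```shell
#!/usr/bin/env sh
# Hypothetical sketch: a save step that bails out early when a previous
# task (e.g. the prefetch) has left a skip marker file behind.
workdir="$(mktemp -d)"

# Simulate the marker the prefetch task would leave (name is illustrative).
touch "$workdir/.skip-artifact-upload"

if [ -f "$workdir/.skip-artifact-upload" ]; then
  echo "skip marker found; not saving OCI artifact"
else
  echo "saving OCI artifact"
fi
```

If this is what's happening, the symptom matches: the second task in the chain respects a marker it didn't create and silently skips the save.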
👍 Could you check that the pipeline run timeout is still accurate?
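For reference, a sketch of where a run-level timeout usually lives in a Tekton `PipelineRun`, assuming the standard `spec.timeouts` stanza; the name and value below are illustrative, not this repo's actual settings:

```yaml
# Hypothetical sketch: pipeline-level timeout on a Tekton PipelineRun.
# The 2h value is a placeholder, not the repo's real setting.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: scanner-db-build
spec:
  timeouts:
    pipeline: "2h"   # overall run timeout to double-check
```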
@red-hat-konflux[bot]: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/scale-tests | aac7b788176d510d04987665e0c3b46ed5c9cd98 | link | false | /test scale-tests |
| ci/prow/slim-e2e-tests | aac7b788176d510d04987665e0c3b46ed5c9cd98 | link | false | /test slim-e2e-tests |
Full PR test history. Your PR dashboard.
/retest scanner-db-build
@msugakov: The /retest command does not accept any targets.
The following commands are available to trigger optional jobs:
/test e2e-tests
/test scale-tests
/test slim-e2e-tests
Use /test all to run all jobs.
In response to this:
/retest scanner-db-build
scanner-db-build, even though it timed out, successfully reached the last two tasks. It was just slow due to multi-arch builds and pipeline slowness. I don't want to bump the *-db-* timeouts just yet. Will merge.