albertschwarzkopf

11 comments by albertschwarzkopf

Thank you, great work! The patched version has fixed some, but not all, of the vulnerabilities, so we have to wait for a new Debian version.

Thanks @faust64, in my case this solved the same issue!

Today I again have a "scan-vulnerabilityreport" pod and a corresponding job in status "Completed", but the Starboard operator logs the following error:

```
{"level":"error","ts":1647341024.8277369,"logger":"controller.job","msg":"Reconciler error","reconciler group":"batch","reconciler kind":"Job","name":"scan-vulnerabilityreport-7b89599899","namespace":"starboard-system","error":"unexpected EOF","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:227"}
```

`kubectl -n starboard-system...

> What about VulnerabilityReport? Is it created after all?

Yes, the VR for the specific image exists, but new VRs were not created.

> I'm not sure I understood. What do you mean by "new VR"?

We use the `OPERATOR_VULNERABILITY_SCANNER_REPORT_TTL` (`vulnerabilityScannerReportTTL`) parameter, so the VulnerabilityReports (VRs) are regenerated every 24h.
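For context, a minimal sketch of how that TTL is typically wired into the operator Deployment as an environment variable. The env var name comes from the comment above; the surrounding manifest structure (names, namespace) is illustrative, not taken from this cluster:

```yaml
# Hedged sketch: setting the report TTL on the Starboard operator.
# Only OPERATOR_VULNERABILITY_SCANNER_REPORT_TTL is from the discussion;
# the Deployment skeleton around it is an assumption.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: starboard-operator
  namespace: starboard-system
spec:
  template:
    spec:
      containers:
        - name: operator
          env:
            - name: OPERATOR_VULNERABILITY_SCANNER_REPORT_TTL
              value: "24h"   # reports expire and get regenerated after 24 hours
```

With a TTL set, each VulnerabilityReport is deleted once it ages past the TTL, which triggers a fresh scan, hence the expectation of new VRs every 24h.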

Today it happened again:

```
╰─ kubectl -n starboard-system get pods
NAME                                        READY   STATUS      RESTARTS   AGE
scan-vulnerabilityreport-77444bf746-lzlq7   0/1     Completed   0          24h
starboard-exporter-6fc5c8f9c6-6bhx5         1/1     Running     0          53d
starboard-operator-866776846f-tdcg8         1/1     Running     0          ...
```

@travisghansen thanks for your answer. I tried with an additional volume mounted at /tmp, but that did not work.
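For reference, the attempted workaround probably looked something like the following sketch: an `emptyDir` volume mounted at `/tmp` in the scan job's pod template. All names here are hypothetical; only the `/tmp` mount point comes from the comment:

```yaml
# Illustrative sketch of the workaround that was tried (and did not help):
# giving the scan container a writable emptyDir at /tmp.
apiVersion: v1
kind: Pod
metadata:
  name: scan-vulnerabilityreport-example   # hypothetical name
spec:
  containers:
    - name: scanner                        # hypothetical container name
      image: example/scanner:latest        # placeholder image
      volumeMounts:
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir: {}
```

This pattern is common when a container runs with a read-only root filesystem and the scanner needs scratch space, but per the comment it did not resolve this particular issue.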

> Same here... I have already done some EKS cluster upgrades, and I think that Karpenter replaces the nodes too fast. E.g. if some pods take longer to start, this can...
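One common mitigation for nodes being recycled out from under slow-starting pods is to annotate those pods so Karpenter will not voluntarily disrupt the node they run on. This is a general Karpenter technique, not something from the thread, and the exact annotation key depends on the Karpenter version (`karpenter.sh/do-not-evict` in older releases, `karpenter.sh/do-not-disrupt` in newer ones):

```yaml
# Hedged sketch: opting a pod out of Karpenter's voluntary node disruption.
# Pod name is hypothetical; check the annotation key against your
# installed Karpenter version before relying on it.
apiVersion: v1
kind: Pod
metadata:
  name: slow-starting-pod                  # hypothetical name
  annotations:
    karpenter.sh/do-not-disrupt: "true"    # "do-not-evict" on older Karpenter
spec:
  containers:
    - name: app                            # hypothetical container
      image: example/app:latest            # placeholder image
```

This does not prevent the node replacement during an upgrade entirely, but it stops Karpenter from consolidating or replacing the node while such a pod is scheduled on it.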