
Not working with kube-image-keeper mutating webhook

Open ppapp92 opened this issue 8 months ago • 1 comment

What happened:

kubeclarity-runtime-k8s-scanner throws an error when trying to scan a Docker image.

What you expected to happen:

I expect the Docker images in the cluster to be scanned successfully.

How to reproduce it (as minimally and precisely as possible):

  1. Install kube-image-keeper
  2. Run KubeClarity Run-Time scan against a Namespace that has images cached by kube-image-keeper
  3. The scan errors out because the kube-image-keeper mutating webhook rewrites the image reference to a localhost:7439/ prefix, which the scanner cannot reach (see the sketch after this list)
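
For context, this is roughly what a pod looks like after kube-image-keeper's mutating webhook has rewritten the image reference. The pod name, namespace, and original tag are illustrative; the rewritten reference matches the imageID reported in the KubeClarity logs below. The kubelet on the node can pull through that prefix (it reaches the node-local cache proxy), but another pod such as the scanner resolves localhost to its own loopback, which appears to be why the pull fails.

# Illustrative pod after kube-image-keeper's webhook has mutated it.
# Name, namespace, and original tag are examples; the rewritten image
# reference matches the imageID in the KubeClarity logs below.
apiVersion: v1
kind: Pod
metadata:
  name: typesense
  namespace: cached-apps
spec:
  containers:
    - name: typesense
      # Originally something like "typesense/typesense:<tag>"; the webhook
      # prepends localhost:7439/ so the kubelet pulls through the node-local
      # cache proxy. From another pod, localhost is that pod's own loopback.
      image: localhost:7439/typesense/typesense@sha256:035ccfbc3fd8fb9085ea205fdcb62de63eaefdbebd710e88e57f978a30f2090d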

Are there any error messages in KubeClarity logs?

kubeclarity-kubeclarity-wait-for-pg-db kubeclarity-kubeclarity-postgresql:5432 - accepting connections
kubeclarity 
kubeclarity 2024/05/28 21:07:14 /build/backend/pkg/database/scheduler.go:58 record not found
kubeclarity [1.032ms] [rows:0] SELECT * FROM "scheduler" ORDER BY "scheduler"."id" LIMIT 1
kubeclarity 2024/05/28 21:07:14 Serving kube clarity runtime scan APIs at http://:8888
kubeclarity 2024/05/28 21:07:14 Serving kube clarity APIs at http://:8080
kubeclarity time="2024-05-28T21:07:46Z" level=warning msg="Vulnerabilities scan of imageID \"localhost:7439/typesense/typesense@sha256:035ccfbc3fd8fb9085ea205fdcb62de63eaefdbebd710e88e57f978a30f2090d\" has failed: &{failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: unable to load image: unable to use OciRegistry source: failed to get image descriptor from registry: Get \"https://localhost:7439/v2/\": dial tcp [::1]:7439: connect: connection refused; Get \"http://localhost:7439/v2/\": dial tcp [::1]:7439: connect: connection refused TBD}" func="github.com/openclarity/kubeclarity/runtime_scan/pkg/scanner.(*Scanner).HandleScanResults" file="/build/runtime_scan/pkg/scanner/scanner.go:415" scanner id=24da9132-749b-4e9d-943d-327af7a67275
kubeclarity time="2024-05-28T21:09:30Z" level=warning msg="Vulnerabilities scan of imageID \"localhost:7439/typesense/typesense@sha256:035ccfbc3fd8fb9085ea205fdcb62de63eaefdbebd710e88e57f978a30f2090d\" has failed: &{failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: unable to load image: unable to use OciRegistry source: failed to get image descriptor from registry: Get \"https://localhost:7439/v2/\": dial tcp [::1]:7439: connect: connection refused; Get \"http://localhost:7439/v2/\": dial tcp [::1]:7439: connect: connection refused TBD}" func="github.com/openclarity/kubeclarity/runtime_scan/pkg/scanner.(*Scanner).HandleScanResults" file="/build/runtime_scan/pkg/scanner/scanner.go:415" scanner id=776eb73a-263a-423d-aa5f-01f25a901dee
kubeclarity 
kubeclarity 2024/05/28 21:09:36 /build/backend/pkg/database/refresh_materialized_views.go:155 SLOW SQL >= 200ms
kubeclarity [1907.530ms] [rows:0] REFRESH MATERIALIZED VIEW CONCURRENTLY packages_view;
kubeclarity 
kubeclarity 2024/05/28 21:09:36 /build/backend/pkg/database/refresh_materialized_views.go:155 SLOW SQL >= 200ms
kubeclarity [1912.167ms] [rows:0] REFRESH MATERIALIZED VIEW CONCURRENTLY vulnerabilities_view;
kubeclarity 
kubeclarity 2024/05/28 21:09:37 /build/backend/pkg/database/application.go:236 record not found
kubeclarity [6.892ms] [rows:0] SELECT * FROM "applications" WHERE applications.id = 'b17e8e84-3330-5f16-93aa-3b425dd46e40' ORDER BY "applications"."id" LIMIT 1
kubeclarity-kubeclarity-wait-for-sbom-db + curl -sw '%{http_code}' http://kubeclarity-kubeclarity-sbom-db:8081/healthz/ready -o /dev/null
kubeclarity-kubeclarity-wait-for-sbom-db + '[' 200 -ne 200 ]
kubeclarity-kubeclarity-wait-for-grype-server + curl -sw '%{http_code}' http://kubeclarity-kubeclarity-grype-server:8080/healthz/ready -o /dev/null
kubeclarity-kubeclarity-wait-for-grype-server + '[' 200 -ne 200 ]
Stream closed EOF for kubeclarity-test/kubeclarity-kubeclarity-6ddcd445b8-pnvdt (kubeclarity-kubeclarity-wait-for-pg-db)
Stream closed EOF for kubeclarity-test/kubeclarity-kubeclarity-6ddcd445b8-pnvdt (kubeclarity-kubeclarity-wait-for-sbom-db)
Stream closed EOF for kubeclarity-test/kubeclarity-kubeclarity-6ddcd445b8-pnvdt (kubeclarity-kubeclarity-wait-for-grype-server)

Anything else we need to know?:
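
A possible (but not ideal) workaround is to opt individual pods out of kube-image-keeper so their image references are never rewritten and KubeClarity can pull them directly from the original registry. The sketch below assumes the per-pod opt-out label documented by kube-image-keeper (kube-image-keeper.enix.io/image-caching-policy: ignore); please verify the label name against the installed version. Ideally KubeClarity would instead resolve the original registry reference behind the localhost:7439/ prefix.

# Workaround sketch: label a pod so kube-image-keeper's webhook skips it and
# leaves the image reference untouched. Label name taken from the
# kube-image-keeper docs; verify it against the version deployed in the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: typesense
  labels:
    kube-image-keeper.enix.io/image-caching-policy: ignore
spec:
  containers:
    - name: typesense
      image: typesense/typesense:0.25.2   # example image/tag, pulled directly from the registry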

Environment:

  • Kubernetes version: EKS 1.28
  • Helm version (use helm version): v3.14.4
  • KubeClarity version: latest
  • KubeClarity Helm Chart version: latest
  • Cloud provider or hardware configuration: AWS

ppapp92 · May 28 '24 21:05