
feat(helm): update chart cilium (1.16.6 → 1.17.3)

bot-akira[bot] opened this pull request 9 months ago • 8 comments

This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| cilium (source) | minor | 1.16.6 -> 1.17.3 |

Release Notes

cilium/cilium (cilium)

v1.17.3: 1.17.3

Compare Source


Docker Manifests
cilium

quay.io/cilium/cilium:v1.17.3@sha256:1782794aeac951af139315c10eff34050aa7579c12827ee9ec376bb719b82873
quay.io/cilium/cilium:stable@sha256:1782794aeac951af139315c10eff34050aa7579c12827ee9ec376bb719b82873

clustermesh-apiserver

quay.io/cilium/clustermesh-apiserver:v1.17.3@sha256:98d5feaf67dd9b5d8d219ff5990de10539566eedc5412bcf52df75920896ad42
quay.io/cilium/clustermesh-apiserver:stable@sha256:98d5feaf67dd9b5d8d219ff5990de10539566eedc5412bcf52df75920896ad42

docker-plugin

quay.io/cilium/docker-plugin:v1.17.3@sha256:aece31ec01842f78ae30009b5ca42ab5abd4b042a6fff49b48d06f0f37eddef9
quay.io/cilium/docker-plugin:stable@sha256:aece31ec01842f78ae30009b5ca42ab5abd4b042a6fff49b48d06f0f37eddef9

hubble-relay

quay.io/cilium/hubble-relay:v1.17.3@sha256:f8674b5139111ac828a8818da7f2d344b4a5bfbaeb122c5dc9abed3e74000c55
quay.io/cilium/hubble-relay:stable@sha256:f8674b5139111ac828a8818da7f2d344b4a5bfbaeb122c5dc9abed3e74000c55

operator-alibabacloud

quay.io/cilium/operator-alibabacloud:v1.17.3@sha256:e9a9ab227c6e833985bde6537b4d1540b0907f21a84319de4b7d62c5302eed5c
quay.io/cilium/operator-alibabacloud:stable@sha256:e9a9ab227c6e833985bde6537b4d1540b0907f21a84319de4b7d62c5302eed5c

operator-aws

quay.io/cilium/operator-aws:v1.17.3@sha256:40f235111fb2bca209ee65b12f81742596e881a0a3ee4d159776d78e3091ba7f
quay.io/cilium/operator-aws:stable@sha256:40f235111fb2bca209ee65b12f81742596e881a0a3ee4d159776d78e3091ba7f

operator-azure

quay.io/cilium/operator-azure:v1.17.3@sha256:6a3294ec8a2107048254179c3ac5121866f90d20fccf12f1d70960e61f304713
quay.io/cilium/operator-azure:stable@sha256:6a3294ec8a2107048254179c3ac5121866f90d20fccf12f1d70960e61f304713

operator-generic

quay.io/cilium/operator-generic:v1.17.3@sha256:8bd38d0e97a955b2d725929d60df09d712fb62b60b930551a29abac2dd92e597
quay.io/cilium/operator-generic:stable@sha256:8bd38d0e97a955b2d725929d60df09d712fb62b60b930551a29abac2dd92e597

operator

quay.io/cilium/operator:v1.17.3@sha256:169c137515459fe0ea4c483021f704dba8901ac5180bdee4e05f5901dbfd7115
quay.io/cilium/operator:stable@sha256:169c137515459fe0ea4c483021f704dba8901ac5180bdee4e05f5901dbfd7115
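Every manifest above pins a mutable tag to an immutable content digest, so the two always travel together in one reference. A small illustrative Python sketch (a hypothetical helper, not part of this PR) that splits such a pinned reference into its parts:

```python
import re

# A pinned image reference combines repo:tag with an @sha256 digest.
REF = re.compile(
    r"^(?P<repo>[\w./-]+):(?P<tag>[\w.-]+)@sha256:(?P<digest>[0-9a-f]{64})$"
)

def parse_pinned(ref: str) -> dict:
    """Split a tag+digest-pinned image reference into repo, tag, and digest."""
    m = REF.match(ref)
    if not m:
        raise ValueError(f"not a pinned reference: {ref}")
    return m.groupdict()

parts = parse_pinned(
    "quay.io/cilium/cilium:v1.17.3@sha256:"
    "1782794aeac951af139315c10eff34050aa7579c12827ee9ec376bb719b82873"
)
```

Running it on the cilium reference above yields repo `quay.io/cilium/cilium`, tag `v1.17.3`, and the 64-character hex digest.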

v1.17.2: 1.17.2

Compare Source


Docker Manifests
cilium

quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
quay.io/cilium/cilium:stable@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1

clustermesh-apiserver

quay.io/cilium/clustermesh-apiserver:v1.17.2@sha256:981250ebdc6e66e190992eaf75cfca169113a8f08d5c3793fe15822176980398
quay.io/cilium/clustermesh-apiserver:stable@sha256:981250ebdc6e66e190992eaf75cfca169113a8f08d5c3793fe15822176980398

docker-plugin

quay.io/cilium/docker-plugin:v1.17.2@sha256:a599893f1fc76fc31afad2bbb73af7e7f618adbf02043b2098fafeca4adf551c
quay.io/cilium/docker-plugin:stable@sha256:a599893f1fc76fc31afad2bbb73af7e7f618adbf02043b2098fafeca4adf551c

hubble-relay

quay.io/cilium/hubble-relay:v1.17.2@sha256:42a8db5c256c516cacb5b8937c321b2373ad7a6b0a1e5a5120d5028433d586cc
quay.io/cilium/hubble-relay:stable@sha256:42a8db5c256c516cacb5b8937c321b2373ad7a6b0a1e5a5120d5028433d586cc

operator-alibabacloud

quay.io/cilium/operator-alibabacloud:v1.17.2@sha256:7cb8c23417f65348bb810fe92fb05b41d926f019d77442f3fa1058d17fea7ffe
quay.io/cilium/operator-alibabacloud:stable@sha256:7cb8c23417f65348bb810fe92fb05b41d926f019d77442f3fa1058d17fea7ffe

operator-aws

quay.io/cilium/operator-aws:v1.17.2@sha256:955096183e22a203bbb198ca66e3266ce4dbc2b63f1a2fbd03f9373dcd97893c
quay.io/cilium/operator-aws:stable@sha256:955096183e22a203bbb198ca66e3266ce4dbc2b63f1a2fbd03f9373dcd97893c

operator-azure

quay.io/cilium/operator-azure:v1.17.2@sha256:455fb88b558b1b8ba09d63302ccce76b4930581be89def027184ab04335c20e0
quay.io/cilium/operator-azure:stable@sha256:455fb88b558b1b8ba09d63302ccce76b4930581be89def027184ab04335c20e0


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • [ ] If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.
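The configuration summary above maps to a handful of repository-level Renovate options; a hypothetical `renovate.json5` fragment (keys assumed, not taken from this repo) that would produce this behavior:

```json5
{
  // No "schedule" key: branch creation may happen at any time.
  "automerge": false,          // automerge disabled; merge manually
  "rebaseWhen": "conflicted"   // rebase whenever the PR becomes conflicted
}
```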

bot-akira[bot] commented Feb 04 '25 16:02

--- kubernetes/apps/observability/kromgo/app Kustomization: flux-system/kromgo HelmRelease: observability/kromgo

+++ kubernetes/apps/observability/kromgo/app Kustomization: flux-system/kromgo HelmRelease: observability/kromgo

@@ -38,13 +38,13 @@

               HEALTH_PORT: 8888
               PROMETHEUS_URL: http://prometheus-operated.observability.svc.cluster.local:9090
               SERVER_HOST: 0.0.0.0
               SERVER_PORT: 8080
             image:
               repository: ghcr.io/kashalls/kromgo
-              tag: v0.5.1@sha256:1f86c6151c676fa6d368230f1b228d67ed030fd4409ae0a53763c60d522ea425
+              tag: v0.4.4@sha256:4f6770a49ffa2d1a96517761d677ababe5fa966a5da398530cc35ee4714c315b
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/kube-system/external-secrets/app Kustomization: flux-system/cluster-apps-external-secrets HelmRelease: kube-system/external-secrets

+++ kubernetes/apps/kube-system/external-secrets/app Kustomization: flux-system/cluster-apps-external-secrets HelmRelease: kube-system/external-secrets

@@ -13,13 +13,13 @@

       chart: external-secrets
       interval: 15m
       sourceRef:
         kind: HelmRepository
         name: external-secrets
         namespace: flux-system
-      version: 0.15.0
+      version: 0.13.0
   install:
     createNamespace: true
     remediation:
       retries: 3
   interval: 15m
   maxHistory: 3
--- kubernetes/apps/network/e1000e-fix/app Kustomization: flux-system/e1000e-fix HelmRelease: network/e1000e-fix

+++ kubernetes/apps/network/e1000e-fix/app Kustomization: flux-system/e1000e-fix HelmRelease: network/e1000e-fix

@@ -1,8 +1,8 @@

 ---
-apiVersion: helm.toolkit.fluxcd.io/v2
+apiVersion: helm.toolkit.fluxcd.io/v2beta2
 kind: HelmRelease
 metadata:
   labels:
     app.kubernetes.io/name: e1000e-fix
     kustomize.toolkit.fluxcd.io/name: e1000e-fix
     kustomize.toolkit.fluxcd.io/namespace: flux-system
--- kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium

+++ kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium

@@ -13,13 +13,13 @@

     spec:
       chart: cilium
       sourceRef:
         kind: HelmRepository
         name: cilium
         namespace: flux-system
-      version: 1.16.6
+      version: 1.17.2
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
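The hunk above is the whole substance of the cilium change: a one-line chart version bump inside a Flux HelmRelease. A minimal sketch of that resource, using the fields shown in the diff with the remaining values assumed:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2   # assumed; not shown in this hunk
kind: HelmRelease
metadata:
  name: cilium
  namespace: kube-system
spec:
  interval: 30m
  chart:
    spec:
      chart: cilium
      version: 1.17.2   # bumped from 1.16.6 by this diff
      sourceRef:
        kind: HelmRepository
        name: cilium
        namespace: flux-system
  install:
    remediation:
      retries: 3
  upgrade:
    cleanupOnFail: true
```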
--- kubernetes/apps/kube-system/kubelet-csr-approver/app Kustomization: flux-system/kubelet-csr-approver HelmRelease: kube-system/kubelet-csr-approver

+++ kubernetes/apps/kube-system/kubelet-csr-approver/app Kustomization: flux-system/kubelet-csr-approver HelmRelease: kube-system/kubelet-csr-approver

@@ -13,13 +13,13 @@

     spec:
       chart: kubelet-csr-approver
       sourceRef:
         kind: HelmRepository
         name: postfinance
         namespace: flux-system
-      version: 1.2.6
+      version: 1.2.5
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/cert-manager/cert-manager/app Kustomization: flux-system/cert-manager HelmRelease: cert-manager/cert-manager

+++ kubernetes/apps/cert-manager/cert-manager/app Kustomization: flux-system/cert-manager HelmRelease: cert-manager/cert-manager

@@ -13,13 +13,13 @@

     spec:
       chart: cert-manager
       sourceRef:
         kind: HelmRepository
         name: jetstack
         namespace: flux-system
-      version: v1.17.1
+      version: v1.16.3
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/kube-system/coredns/app Kustomization: flux-system/coredns HelmRelease: kube-system/coredns

+++ kubernetes/apps/kube-system/coredns/app Kustomization: flux-system/coredns HelmRelease: kube-system/coredns

@@ -13,13 +13,13 @@

     spec:
       chart: coredns
       sourceRef:
         kind: HelmRepository
         name: coredns
         namespace: flux-system
-      version: 1.39.1
+      version: 1.37.3
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/network/echo-server/app Kustomization: flux-system/echo-server HelmRelease: network/echo-server

+++ kubernetes/apps/network/echo-server/app Kustomization: flux-system/echo-server HelmRelease: network/echo-server

@@ -34,13 +34,13 @@

               HTTP_PORT: 8080
               LOG_IGNORE_PATH: /healthz
               LOG_WITHOUT_NEWLINE: true
               PROMETHEUS_ENABLED: true
             image:
               repository: ghcr.io/mendhak/http-https-echo
-              tag: 36
+              tag: 35
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/kyverno/kyverno/app Kustomization: flux-system/kyverno HelmRelease: kyverno/kyverno

+++ kubernetes/apps/kyverno/kyverno/app Kustomization: flux-system/kyverno HelmRelease: kyverno/kyverno

@@ -12,13 +12,13 @@

     spec:
       chart: kyverno
       sourceRef:
         kind: HelmRepository
         name: kyverno
         namespace: flux-system
-      version: 3.3.7
+      version: 3.3.4
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/media/komga/app Kustomization: flux-system/komga HelmRelease: media/komga

+++ kubernetes/apps/media/komga/app Kustomization: flux-system/komga HelmRelease: media/komga

@@ -35,13 +35,13 @@

           app:
             env:
               SERVER_PORT: 8080
               TZ: Europe/Prague
             image:
               repository: gotson/komga
-              tag: 1.21.2@sha256:ba587695d786f0e8f4de8598b8aa2785cc8c671098ef1cb624819c2bb812789c
+              tag: 1.19.0@sha256:b7bd32bc66159d020d682702f4b010e5977fecf37351903ed8b959c32c759638
             resources:
               limits:
                 memory: 2Gi
               requests:
                 cpu: 15m
                 memory: 1Gi
@@ -51,28 +51,21 @@

       app:
         annotations:
           gatus.io/enabled: 'true'
           hajimari.io/icon: mdi:thought-bubble-outline
         className: internal
         hosts:
-        - host: '{{ .Release.Name }}.juno.moe'
-          paths:
-          - path: /
-            service:
-              identifier: app
-              port: http
-        - host: comics.juno.moe
+        - host: '{{ .Release.Name }}...PLACEHOLDER_SECRET_DOMAIN..'
           paths:
           - path: /
             service:
               identifier: app
               port: http
         tls:
         - hosts:
-          - '{{ .Release.Name }}.juno.moe'
-          - comics.juno.moe
+          - '{{ .Release.Name }}...PLACEHOLDER_SECRET_DOMAIN..'
     persistence:
       config:
         existingClaim: komga
       media:
         globalMounts:
         - path: /data
--- kubernetes/apps/default/piped/app Kustomization: flux-system/piped HelmRelease: default/piped

+++ kubernetes/apps/default/piped/app Kustomization: flux-system/piped HelmRelease: default/piped

@@ -15,13 +15,13 @@

     spec:
       chart: piped
       sourceRef:
         kind: HelmRepository
         name: piped
         namespace: flux-system
-      version: 7.2.2
+      version: 7.0.1
   install:
     createNamespace: true
     remediation:
       retries: 5
   interval: 15m
   upgrade:
--- kubernetes/apps/network/external-dns/unifi Kustomization: flux-system/cluster-apps-external-dns-unifi HelmRelease: network/external-dns-unifi

+++ kubernetes/apps/network/external-dns/unifi Kustomization: flux-system/cluster-apps-external-dns-unifi HelmRelease: network/external-dns-unifi

@@ -49,13 +49,13 @@

           valueFrom:
             secretKeyRef:
               key: UNIFI_PASS
               name: external-dns-unifi-secret
         image:
           repository: ghcr.io/kashalls/external-dns-unifi-webhook
-          tag: v0.5.1@sha256:fc031337a83e3a7d5f3407c931373455fe6842e085b47e4bb1e73708cb054b06
+          tag: v0.4.1@sha256:5c01923d9a2c050362335c1750c2361046c0d2caf1ab796661c215da47446aad
         livenessProbe:
           httpGet:
             path: /healthz
             port: http-webhook
           initialDelaySeconds: 10
           timeoutSeconds: 5
--- kubernetes/apps/observability/kube-prometheus-stack/app Kustomization: flux-system/kube-prometheus-stack HelmRelease: observability/kube-prometheus-stack

+++ kubernetes/apps/observability/kube-prometheus-stack/app Kustomization: flux-system/kube-prometheus-stack HelmRelease: observability/kube-prometheus-stack

@@ -13,13 +13,13 @@

     spec:
       chart: kube-prometheus-stack
       sourceRef:
         kind: HelmRepository
         name: prometheus-community
         namespace: flux-system
-      version: 68.5.0
+      version: 68.3.2
   dependsOn:
   - name: rook-ceph-cluster
     namespace: rook-ceph
   install:
     crds: CreateReplace
     remediation:
--- kubernetes/apps/observability/gatus/app Kustomization: flux-system/gatus HelmRelease: observability/gatus

+++ kubernetes/apps/observability/gatus/app Kustomization: flux-system/gatus HelmRelease: observability/gatus

@@ -40,13 +40,13 @@

               TZ: Europe/Prague
             envFrom:
             - secretRef:
                 name: gatus-secret
             image:
               repository: ghcr.io/twin/gatus
-              tag: v5.17.0@sha256:a8c53f9e9f1a3876cd00e44a42c80fc984e118d5ba0bdbaf08980cb627d61512
+              tag: v5.15.0@sha256:45686324db605e57dfa8b0931d8d57fe06298f52685f06aa9654a1f710d461bb
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/network/ingress-nginx/internal Kustomization: flux-system/ingress-nginx-internal HelmRelease: network/ingress-nginx-internal

+++ kubernetes/apps/network/ingress-nginx/internal Kustomization: flux-system/ingress-nginx-internal HelmRelease: network/ingress-nginx-internal

@@ -55,13 +55,13 @@

         - name: TEMPLATE_NAME
           value: lost-in-space
         - name: SHOW_DETAILS
           value: 'false'
         image:
           repository: ghcr.io/tarampampam/error-pages
-          tag: 3.3.2
+          tag: 3.3.1
       extraArgs:
         default-ssl-certificate: network/-.PLACEHOLDER_SECRET_DOMAIN..-production-tls
       ingressClassResource:
         controllerValue: k8s.io/internal
         default: true
         name: internal
--- kubernetes/apps/media/sabnzbd/app Kustomization: flux-system/sabnzbd HelmRelease: media/sabnzbd

+++ kubernetes/apps/media/sabnzbd/app Kustomization: flux-system/sabnzbd HelmRelease: media/sabnzbd

@@ -41,13 +41,13 @@

               TZ: Europe/Prague
             envFrom:
             - secretRef:
                 name: sabnzbd-secret
             image:
               repository: ghcr.io/buroa/sabnzbd
-              tag: 4.4.1@sha256:146646057a9049b4eca4b9996b3e2d3135a520402cf64f00abba0ef17f00d1d1
+              tag: 4.4.1@sha256:440fe03b57692411378f88697f3dfe099438af60d947f1f795eaf3f52dcdb622
             probes:
               liveness:
                 enabled: true
               readiness:
                 enabled: true
               startup:
--- kubernetes/apps/media/recyclarr/app Kustomization: flux-system/recyclarr HelmRelease: media/recyclarr

+++ kubernetes/apps/media/recyclarr/app Kustomization: flux-system/recyclarr HelmRelease: media/recyclarr

@@ -40,13 +40,13 @@

               TZ: Europe/Prague
             envFrom:
             - secretRef:
                 name: recyclarr-secret
             image:
               repository: ghcr.io/recyclarr/recyclarr
-              tag: 7.4.1@sha256:759540877f95453eca8a26c1a93593e783a7a824c324fbd57523deffb67f48e1
+              tag: 7.4.0@sha256:619c3b8920a179f2c578acd0f54e9a068f57c049aff840469eed66e93a4be2cf
             resources:
               limits:
                 memory: 128Mi
               requests:
                 cpu: 10m
             securityContext:
--- kubernetes/apps/observability/karma/app Kustomization: flux-system/karma HelmRelease: observability/karma

+++ kubernetes/apps/observability/karma/app Kustomization: flux-system/karma HelmRelease: observability/karma

@@ -34,13 +34,13 @@

         containers:
           app:
             env:
               CONFIG_FILE: /config/config.yaml
             image:
               repository: ghcr.io/prymitive/karma
-              tag: v0.121@sha256:9f0ad820df1b1d0af562de3b3c545a52ddfce8d7492f434a2276e45f3a1f7e28
+              tag: v0.120@sha256:733bff15f2529065f1c1b50b13e4a56a541d3c0615dbc6b4b6a07befbfcc27ff
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/network/cloudflared/app Kustomization: flux-system/cloudflared HelmRelease: network/cloudflared

+++ kubernetes/apps/network/cloudflared/app Kustomization: flux-system/cloudflared HelmRelease: network/cloudflared

@@ -49,13 +49,13 @@

               TUNNEL_METRICS: 0.0.0.0:8080
               TUNNEL_ORIGIN_ENABLE_HTTP2: true
               TUNNEL_POST_QUANTUM: true
               TUNNEL_TRANSPORT_PROTOCOL: quic
             image:
               repository: docker.io/cloudflare/cloudflared
-              tag: 2025.2.1
+              tag: 2025.1.1
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3

bot-akira[bot] commented Feb 04 '25 16:02

--- HelmRelease: kube-system/kubelet-csr-approver Deployment: kube-system/kubelet-csr-approver

+++ HelmRelease: kube-system/kubelet-csr-approver Deployment: kube-system/kubelet-csr-approver

@@ -33,13 +33,13 @@

           readOnlyRootFilesystem: true
           runAsGroup: 65532
           runAsNonRoot: true
           runAsUser: 65532
           seccompProfile:
             type: RuntimeDefault
-        image: ghcr.io/postfinance/kubelet-csr-approver:v1.2.6
+        image: ghcr.io/postfinance/kubelet-csr-approver:v1.2.5
         imagePullPolicy: IfNotPresent
         args:
         - -metrics-bind-address
         - :8080
         - -health-probe-bind-address
         - :8081
--- HelmRelease: media/komga Deployment: media/komga

+++ HelmRelease: media/komga Deployment: media/komga

@@ -38,13 +38,13 @@

       containers:
       - env:
         - name: SERVER_PORT
           value: '8080'
         - name: TZ
           value: Europe/Prague
-        image: gotson/komga:1.21.2@sha256:ba587695d786f0e8f4de8598b8aa2785cc8c671098ef1cb624819c2bb812789c
+        image: gotson/komga:1.19.0@sha256:b7bd32bc66159d020d682702f4b010e5977fecf37351903ed8b959c32c759638
         name: app
         resources:
           limits:
             memory: 2Gi
           requests:
             cpu: 15m
--- HelmRelease: media/komga Ingress: media/komga

+++ HelmRelease: media/komga Ingress: media/komga

@@ -11,26 +11,15 @@

     gatus.io/enabled: 'true'
     hajimari.io/icon: mdi:thought-bubble-outline
 spec:
   ingressClassName: internal
   tls:
   - hosts:
-    - komga.juno.moe
-    - comics.juno.moe
+    - komga...PLACEHOLDER_SECRET_DOMAIN..
   rules:
-  - host: komga.juno.moe
-    http:
-      paths:
-      - path: /
-        pathType: Prefix
-        backend:
-          service:
-            name: komga
-            port:
-              number: 8080
-  - host: comics.juno.moe
+  - host: komga...PLACEHOLDER_SECRET_DOMAIN..
     http:
       paths:
       - path: /
         pathType: Prefix
         backend:
           service:
--- HelmRelease: network/echo-server Deployment: network/echo-server

+++ HelmRelease: network/echo-server Deployment: network/echo-server

@@ -45,13 +45,13 @@

         - name: LOG_IGNORE_PATH
           value: /healthz
         - name: LOG_WITHOUT_NEWLINE
           value: 'true'
         - name: PROMETHEUS_ENABLED
           value: 'true'
-        image: ghcr.io/mendhak/http-https-echo:36
+        image: ghcr.io/mendhak/http-https-echo:35
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /healthz
             port: 8080
           initialDelaySeconds: 0
--- HelmRelease: observability/kromgo Deployment: observability/kromgo

+++ HelmRelease: observability/kromgo Deployment: observability/kromgo

@@ -58,13 +58,13 @@

         - name: PROMETHEUS_URL
           value: http://prometheus-operated.observability.svc.cluster.local:9090
         - name: SERVER_HOST
           value: 0.0.0.0
         - name: SERVER_PORT
           value: '8080'
-        image: ghcr.io/kashalls/kromgo:v0.5.1@sha256:1f86c6151c676fa6d368230f1b228d67ed030fd4409ae0a53763c60d522ea425
+        image: ghcr.io/kashalls/kromgo:v0.4.4@sha256:4f6770a49ffa2d1a96517761d677ababe5fa966a5da398530cc35ee4714c315b
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /-/ready
             port: 8888
           initialDelaySeconds: 0
--- HelmRelease: kube-system/coredns Deployment: kube-system/coredns

+++ HelmRelease: kube-system/coredns Deployment: kube-system/coredns

@@ -48,13 +48,13 @@

         operator: Exists
       - effect: NoSchedule
         key: node-role.kubernetes.io/control-plane
         operator: Exists
       containers:
       - name: coredns
-        image: coredns/coredns:1.12.0
+        image: coredns/coredns:1.11.4
         imagePullPolicy: IfNotPresent
         args:
         - -conf
         - /etc/coredns/Corefile
         volumeMounts:
         - name: config-volume
--- HelmRelease: default/piped Deployment: default/piped-ytproxy

+++ HelmRelease: default/piped Deployment: default/piped-ytproxy

@@ -25,13 +25,13 @@

       serviceAccountName: default
       automountServiceAccountToken: null
       dnsPolicy: ClusterFirst
       enableServiceLinks: null
       containers:
       - name: piped-ytproxy
-        image: 1337kavin/piped-proxy:latest@sha256:880b1117b6087e32b82c0204a96210fb87de61a874a3a2681361cc6d905e4d0e
+        image: 1337kavin/piped-proxy:latest@sha256:833ca24c048619c9cd6fe58e2d210bfc7b1e43875ba5108aeddea0b171f04dbd
         imagePullPolicy: IfNotPresent
         command:
         - /app/piped-proxy
         livenessProbe:
           tcpSocket:
             port: 8080
--- HelmRelease: network/cloudflared Deployment: network/cloudflared

+++ HelmRelease: network/cloudflared Deployment: network/cloudflared

@@ -62,13 +62,13 @@

         - name: TUNNEL_ORIGIN_ENABLE_HTTP2
           value: 'true'
         - name: TUNNEL_POST_QUANTUM
           value: 'true'
         - name: TUNNEL_TRANSPORT_PROTOCOL
           value: quic
-        image: docker.io/cloudflare/cloudflared:2025.2.1
+        image: docker.io/cloudflare/cloudflared:2025.1.1
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /ready
             port: 8080
           initialDelaySeconds: 0
--- HelmRelease: media/sabnzbd Deployment: media/sabnzbd

+++ HelmRelease: media/sabnzbd Deployment: media/sabnzbd

@@ -52,13 +52,13 @@

           value: '8080'
         - name: TZ
           value: Europe/Prague
         envFrom:
         - secretRef:
             name: sabnzbd-secret
-        image: ghcr.io/buroa/sabnzbd:4.4.1@sha256:146646057a9049b4eca4b9996b3e2d3135a520402cf64f00abba0ef17f00d1d1
+        image: ghcr.io/buroa/sabnzbd:4.4.1@sha256:440fe03b57692411378f88697f3dfe099438af60d947f1f795eaf3f52dcdb622
         livenessProbe:
           failureThreshold: 3
           initialDelaySeconds: 0
           periodSeconds: 10
           tcpSocket:
             port: 8080
--- HelmRelease: observability/gatus Deployment: observability/gatus

+++ HelmRelease: observability/gatus Deployment: observability/gatus

@@ -95,13 +95,13 @@

           value: '80'
         - name: TZ
           value: Europe/Prague
         envFrom:
         - secretRef:
             name: gatus-secret
-        image: ghcr.io/twin/gatus:v5.17.0@sha256:a8c53f9e9f1a3876cd00e44a42c80fc984e118d5ba0bdbaf08980cb627d61512
+        image: ghcr.io/twin/gatus:v5.15.0@sha256:45686324db605e57dfa8b0931d8d57fe06298f52685f06aa9654a1f710d461bb
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /health
             port: 80
           initialDelaySeconds: 0
--- HelmRelease: network/external-dns-unifi Deployment: network/external-dns-unifi

+++ HelmRelease: network/external-dns-unifi Deployment: network/external-dns-unifi

@@ -76,13 +76,13 @@

             port: http
           initialDelaySeconds: 5
           periodSeconds: 10
           successThreshold: 1
           timeoutSeconds: 5
       - name: webhook
-        image: ghcr.io/kashalls/external-dns-unifi-webhook:v0.5.1@sha256:fc031337a83e3a7d5f3407c931373455fe6842e085b47e4bb1e73708cb054b06
+        image: ghcr.io/kashalls/external-dns-unifi-webhook:v0.4.1@sha256:5c01923d9a2c050362335c1750c2361046c0d2caf1ab796661c215da47446aad
         imagePullPolicy: IfNotPresent
         env:
         - name: UNIFI_HOST
           value: https://192.168.69.1
         - name: UNIFI_USER
           valueFrom:
--- HelmRelease: media/recyclarr CronJob: media/recyclarr

+++ HelmRelease: media/recyclarr CronJob: media/recyclarr

@@ -50,13 +50,13 @@

             env:
             - name: TZ
               value: Europe/Prague
             envFrom:
             - secretRef:
                 name: recyclarr-secret
-            image: ghcr.io/recyclarr/recyclarr:7.4.1@sha256:759540877f95453eca8a26c1a93593e783a7a824c324fbd57523deffb67f48e1
+            image: ghcr.io/recyclarr/recyclarr:7.4.0@sha256:619c3b8920a179f2c578acd0f54e9a068f57c049aff840469eed66e93a4be2cf
             name: app
             resources:
               limits:
                 memory: 128Mi
               requests:
                 cpu: 10m
--- HelmRelease: observability/karma Deployment: observability/karma

+++ HelmRelease: observability/karma Deployment: observability/karma

@@ -50,13 +50,13 @@

         topologyKey: kubernetes.io/hostname
         whenUnsatisfiable: DoNotSchedule
       containers:
       - env:
         - name: CONFIG_FILE
           value: /config/config.yaml
-        image: ghcr.io/prymitive/karma:v0.121@sha256:9f0ad820df1b1d0af562de3b3c545a52ddfce8d7492f434a2276e45f3a1f7e28
+        image: ghcr.io/prymitive/karma:v0.120@sha256:733bff15f2529065f1c1b50b13e4a56a541d3c0615dbc6b4b6a07befbfcc27ff
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /health
             port: 8080
           initialDelaySeconds: 0
--- HelmRelease: observability/kube-prometheus-stack Service: observability/kube-state-metrics

+++ HelmRelease: observability/kube-prometheus-stack Service: observability/kube-state-metrics

@@ -8,12 +8,14 @@

     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
     release: kube-prometheus-stack
+  annotations:
+    prometheus.io/scrape: 'true'
 spec:
   type: ClusterIP
   ports:
   - name: http
     protocol: TCP
     port: 8080
--- HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-kubernetes-system-kubelet

+++ HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-kubernetes-system-kubelet

@@ -18,16 +18,14 @@

     - alert: KubeNodeNotReady
       annotations:
         description: '{{ $labels.node }} has been unready for more than 15 minutes
           on cluster {{ $labels.cluster }}.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubenodenotready
         summary: Node is not ready.
-      expr: |-
-        kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"} == 0
-        and on (cluster, node)
-        kube_node_spec_unschedulable{job="kube-state-metrics"} == 0
+      expr: kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"}
+        == 0
       for: 15m
       labels:
         severity: warning
     - alert: KubeNodeUnreachable
       annotations:
         description: '{{ $labels.node }} is unreachable and some workloads may be
@@ -64,16 +62,14 @@

     - alert: KubeNodeReadinessFlapping
       annotations:
         description: The readiness status of node {{ $labels.node }} has changed {{
           $value }} times in the last 15 minutes on cluster {{ $labels.cluster }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubenodereadinessflapping
         summary: Node readiness status is flapping.
-      expr: |-
-        sum(changes(kube_node_status_condition{job="kube-state-metrics",status="true",condition="Ready"}[15m])) by (cluster, node) > 2
-        and on (cluster, node)
-        kube_node_spec_unschedulable{job="kube-state-metrics"} == 0
+      expr: sum(changes(kube_node_status_condition{job="kube-state-metrics",status="true",condition="Ready"}[15m]))
+        by (cluster, node) > 2
       for: 15m
       labels:
         severity: warning
     - alert: KubeletPlegDurationHigh
       annotations:
         description: The Kubelet Pod Lifecycle Event Generator has a 99th percentile
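The two `expr` changes above are the most material effect of the kube-prometheus-stack rollback: chart 68.5.0 gates the KubeNodeNotReady and KubeNodeReadinessFlapping alerts on `kube_node_spec_unschedulable == 0` so that cordoned nodes do not fire them, and reverting to 68.3.2 drops that guard. The gated form removed by this diff, reassembled as a PrometheusRule fragment:

```yaml
# Newer-chart (68.5.0) form: alert only when the node is NotReady
# AND still schedulable (kube_node_spec_unschedulable == 0).
- alert: KubeNodeNotReady
  expr: |-
    kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"} == 0
    and on (cluster, node)
    kube_node_spec_unschedulable{job="kube-state-metrics"} == 0
  for: 15m
  labels:
    severity: warning
```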
--- HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager-cainjector

+++ HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager-cainjector

@@ -31,13 +31,13 @@

       securityContext:
         runAsNonRoot: true
         seccompProfile:
           type: RuntimeDefault
       containers:
       - name: cert-manager-cainjector
-        image: quay.io/jetstack/cert-manager-cainjector:v1.17.1
+        image: quay.io/jetstack/cert-manager-cainjector:v1.16.3
         imagePullPolicy: IfNotPresent
         args:
         - --v=2
         - --leader-election-namespace=kube-system
         ports:
         - containerPort: 9402
--- HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager

+++ HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager

@@ -31,19 +31,19 @@

       securityContext:
         runAsNonRoot: true
         seccompProfile:
           type: RuntimeDefault
       containers:
       - name: cert-manager-controller
-        image: quay.io/jetstack/cert-manager-controller:v1.17.1
+        image: quay.io/jetstack/cert-manager-controller:v1.16.3
         imagePullPolicy: IfNotPresent
         args:
         - --v=2
         - --cluster-resource-namespace=$(POD_NAMESPACE)
         - --leader-election-namespace=kube-system
-        - --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.17.1
+        - --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.16.3
         - --max-concurrent-challenges=60
         - --dns01-recursive-nameservers-only=true
         - --dns01-recursive-nameservers=https://1.1.1.1:443/dns-query,https://1.0.0.1:443/dns-query
         ports:
         - containerPort: 9402
           name: http-metrics
--- HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager-webhook

+++ HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager-webhook

@@ -31,13 +31,13 @@

       securityContext:
         runAsNonRoot: true
         seccompProfile:
           type: RuntimeDefault
       containers:
       - name: cert-manager-webhook
-        image: quay.io/jetstack/cert-manager-webhook:v1.17.1
+        image: quay.io/jetstack/cert-manager-webhook:v1.16.3
         imagePullPolicy: IfNotPresent
         args:
         - --v=2
         - --secure-port=10250
         - --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE)
         - --dynamic-serving-ca-secret-name=cert-manager-webhook-ca
--- HelmRelease: cert-manager/cert-manager Job: cert-manager/cert-manager-startupapicheck

+++ HelmRelease: cert-manager/cert-manager Job: cert-manager/cert-manager-startupapicheck

@@ -31,13 +31,13 @@

       securityContext:
         runAsNonRoot: true
         seccompProfile:
           type: RuntimeDefault
       containers:
       - name: cert-manager-startupapicheck
-        image: quay.io/jetstack/cert-manager-startupapicheck:v1.17.1
+        image: quay.io/jetstack/cert-manager-startupapicheck:v1.16.3
         imagePullPolicy: IfNotPresent
         args:
         - check
         - api
         - --wait=1m
         - -v
--- HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-controller

+++ HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-controller

@@ -13,13 +13,12 @@

   resources:
   - secretstores
   - clustersecretstores
   - externalsecrets
   - clusterexternalsecrets
   - pushsecrets
-  - clusterpushsecrets
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - external-secrets.io
@@ -36,32 +35,16 @@

   - clusterexternalsecrets
   - clusterexternalsecrets/status
   - clusterexternalsecrets/finalizers
   - pushsecrets
   - pushsecrets/status
   - pushsecrets/finalizers
-  - clusterpushsecrets
-  - clusterpushsecrets/status
-  - clusterpushsecrets/finalizers
   verbs:
   - get
   - update
   - patch
-- apiGroups:
-  - generators.external-secrets.io
-  resources:
-  - generatorstates
-  verbs:
-  - get
-  - list
-  - watch
-  - create
-  - update
-  - patch
-  - delete
-  - deletecollection
 - apiGroups:
   - generators.external-secrets.io
   resources:
   - acraccesstokens
   - clustergenerators
   - ecrauthorizationtokens
@@ -71,13 +54,12 @@

   - quayaccesstokens
   - passwords
   - stssessiontokens
   - uuids
   - vaultdynamicsecrets
   - webhooks
-  - grafanas
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - ''
@@ -126,15 +108,7 @@

   resources:
   - externalsecrets
   verbs:
   - create
   - update
   - delete
-- apiGroups:
-  - external-secrets.io
-  resources:
-  - pushsecrets
-  verbs:
-  - create
-  - update
-  - delete
 
--- HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-view

+++ HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-view

@@ -15,13 +15,12 @@

   - external-secrets.io
   resources:
   - externalsecrets
   - secretstores
   - clustersecretstores
   - pushsecrets
-  - clusterpushsecrets
   verbs:
   - get
   - watch
   - list
 - apiGroups:
   - generators.external-secrets.io
@@ -33,13 +32,11 @@

   - gcraccesstokens
   - githubaccesstokens
   - quayaccesstokens
   - passwords
   - vaultdynamicsecrets
   - webhooks
-  - grafanas
-  - generatorstates
   verbs:
   - get
   - watch
   - list
 
--- HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-edit

+++ HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-edit

@@ -14,13 +14,12 @@

   - external-secrets.io
   resources:
   - externalsecrets
   - secretstores
   - clustersecretstores
   - pushsecrets
-  - clusterpushsecrets
   verbs:
   - create
   - delete
   - deletecollection
   - patch
   - update
@@ -34,14 +33,12 @@

   - gcraccesstokens
   - githubaccesstokens
   - quayaccesstokens
   - passwords
   - vaultdynamicsecrets
   - webhooks
-  - grafanas
-  - generatorstates
   verbs:
   - create
   - delete
   - deletecollection
   - patch
   - update
--- HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-servicebindings

+++ HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-servicebindings

@@ -10,12 +10,11 @@

     app.kubernetes.io/managed-by: Helm
 rules:
 - apiGroups:
   - external-secrets.io
   resources:
   - externalsecrets
-  - pushsecrets
   verbs:
   - get
   - list
   - watch
 
--- HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets-cert-controller

+++ HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets-cert-controller

@@ -34,13 +34,13 @@

             - ALL
           readOnlyRootFilesystem: true
           runAsNonRoot: true
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: oci.external-secrets.io/external-secrets/external-secrets:v0.15.0
+        image: oci.external-secrets.io/external-secrets/external-secrets:v0.13.0
         imagePullPolicy: IfNotPresent
         args:
         - certcontroller
         - --crd-requeue-interval=5m
         - --service-name=external-secrets-webhook
         - --service-namespace=kube-system
--- HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets

+++ HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets

@@ -34,13 +34,13 @@

             - ALL
           readOnlyRootFilesystem: true
           runAsNonRoot: true
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: oci.external-secrets.io/external-secrets/external-secrets:v0.15.0
+        image: oci.external-secrets.io/external-secrets/external-secrets:v0.13.0
         imagePullPolicy: IfNotPresent
         args:
         - --enable-leader-election=true
         - --concurrent=1
         - --metrics-addr=:8080
         - --loglevel=info
--- HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets-webhook

+++ HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets-webhook

@@ -34,13 +34,13 @@

             - ALL
           readOnlyRootFilesystem: true
           runAsNonRoot: true
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: oci.external-secrets.io/external-secrets/external-secrets:v0.15.0
+        image: oci.external-secrets.io/external-secrets/external-secrets:v0.13.0
         imagePullPolicy: IfNotPresent
         args:
         - webhook
         - --port=10250
         - --dns-name=external-secrets-webhook.kube-system.svc
         - --cert-dir=/tmp/certs
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard

@@ -15,261 +15,323 @@

   cilium-dashboard.json: |
     {
       "annotations": {
         "list": [
           {
             "builtIn": 1,
-            "datasource": "-- Grafana --",
+            "datasource": {
+              "type": "datasource",
+              "uid": "grafana"
+            },
             "enable": true,
             "hide": true,
             "iconColor": "rgba(0, 211, 255, 1)",
             "name": "Annotations & Alerts",
             "type": "dashboard"
           }
         ]
       },
       "description": "Dashboard for Cilium (https://cilium.io/) metrics",
       "editable": true,
-      "gnetId": null,
+      "fiscalYearStartMonth": 0,
       "graphTooltip": 1,
-      "iteration": 1606309591568,
+      "id": 1,
       "links": [],
       "panels": [
         {
-          "aliasColors": {
-            "error": "#890f02",
-            "warning": "#c15c17"
-          },
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
           "datasource": {
             "type": "prometheus",
             "uid": "${DS_PROMETHEUS}"
           },
           "fieldConfig": {
             "defaults": {
-              "custom": {}
-            },
-            "overrides": []
-          },
-          "fill": 1,
-          "fillGradient": 0,
+              "color": {
+                "mode": "palette-classic"
+              },
+              "custom": {
+                "axisBorderShow": false,
+                "axisCenteredZero": false,
+                "axisColorMode": "text",
+                "axisLabel": "",
+                "axisPlacement": "auto",
+                "barAlignment": 0,
+                "drawStyle": "line",
+                "fillOpacity": 10,
+                "gradientMode": "none",
+                "hideFrom": {
+                  "legend": false,
+                  "tooltip": false,
+                  "viz": false
+                },
+                "insertNulls": false,
+                "lineInterpolation": "linear",
+                "lineWidth": 1,
+                "pointSize": 5,
+                "scaleDistribution": {
+                  "type": "linear"
+                },
+                "showPoints": "never",
+                "spanNulls": false,
+                "stacking": {
+                  "group": "A",
+                  "mode": "none"
+                },
+                "thresholdsStyle": {
+                  "mode": "off"
+                }
+              },
+              "links": [],
+              "mappings": [],
+              "thresholds": {
+                "mode": "absolute",
+                "steps": [
+                  {
+                    "color": "green",
+                    "value": null
+                  },
+                  {
+                    "color": "red",
+                    "value": 80
+                  }
+                ]
+              },
+              "unit": "opm"
+            },
+            "overrides": [
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "error"
+                },
+                "properties": [
+                  {
+                    "id": "color",
+                    "value": {
+                      "fixedColor": "#890f02",
+                      "mode": "fixed"
+                    }
+                  }
+                ]
+              },
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "warning"
+                },
+                "properties": [
+                  {
+                    "id": "color",
+                    "value": {
+                      "fixedColor": "#c15c17",
+                      "mode": "fixed"
+                    }
+                  }
+                ]
+              }
+            ]
+          },
           "gridPos": {
             "h": 5,
             "w": 12,
             "x": 0,
             "y": 0
           },
-          "hiddenSeries": false,
           "id": 76,
-          "legend": {
-            "avg": false,
-            "current": false,
-            "max": false,
-            "min": false,
-            "show": true,
-            "total": false,
-            "values": false
-          },
-          "lines": true,
-          "linewidth": 1,
-          "links": [],
-          "nullPointMode": "null",
           "options": {
-            "dataLinks": []
-          },
-          "paceLength": 10,
-          "percentage": false,
-          "pointradius": 5,
-          "points": false,
-          "renderer": "flot",
-          "seriesOverrides": [
-            {
-              "alias": "error",
-              "yaxis": 2
-            }
-          ],
-          "spaceLength": 10,
-          "stack": false,
-          "steppedLine": false,
+            "legend": {
+              "calcs": [],
+              "displayMode": "list",
+              "placement": "bottom",
+              "showLegend": true
+            },
+            "tooltip": {
+              "mode": "multi",
+              "sort": "none"
+            }
+          },
+          "pluginVersion": "10.4.3",
           "targets": [
             {
+              "datasource": {
+                "type": "prometheus",
+                "uid": "${DS_PROMETHEUS}"
+              },
+              "editorMode": "code",
               "expr": "sum(rate(cilium_errors_warnings_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m])) by (pod, level) * 60",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "{{level}}",
+              "range": true,
               "refId": "A"
             }
           ],
-          "thresholds": [],
-          "timeFrom": null,
-          "timeRegions": [],
-          "timeShift": null,
           "title": "Errors & Warnings",
-          "tooltip": {
-            "shared": true,
-            "sort": 0,
-            "value_type": "individual"
-          },
-          "type": "graph",
-          "xaxis": {
-            "buckets": null,
-            "mode": "time",
-            "name": null,
-            "show": true,
-            "values": []
-          },
-          "yaxes": [
-            {
-              "format": "opm",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            },
-            {
-              "format": "opm",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            }
-          ],
-          "yaxis": {
-            "align": false,
-            "alignLevel": null
-          }
+          "type": "timeseries"
         },
         {
-          "aliasColors": {
-            "avg": "#cffaff"
-          },
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
           "datasource": {
             "type": "prometheus",
             "uid": "${DS_PROMETHEUS}"
           },
           "fieldConfig": {
             "defaults": {
-              "custom": {}
-            },
-            "overrides": []
-          },
-          "fill": 0,
-          "fillGradient": 0,
+              "color": {
+                "mode": "palette-classic"
+              },
+              "custom": {
+                "axisBorderShow": false,
+                "axisCenteredZero": false,
+                "axisColorMode": "text",
+                "axisLabel": "",
+                "axisPlacement": "auto",
+                "barAlignment": 0,
+                "drawStyle": "line",
+                "fillOpacity": 35,
+                "gradientMode": "none",
+                "hideFrom": {
+                  "legend": false,
+                  "tooltip": false,
+                  "viz": false
+                },
+                "insertNulls": false,
+                "lineInterpolation": "linear",
+                "lineWidth": 1,
+                "pointSize": 5,
+                "scaleDistribution": {
+                  "type": "linear"
+                },
+                "showPoints": "never",
+                "spanNulls": false,
+                "stacking": {
+                  "group": "A",
+                  "mode": "none"
+                },
+                "thresholdsStyle": {
+                  "mode": "off"
+                }
+              },
+              "links": [],
+              "mappings": [],
+              "thresholds": {
+                "mode": "absolute",
+                "steps": [
+                  {
+                    "color": "green",
+                    "value": null
+                  },
+                  {
+                    "color": "red",
+                    "value": 80
+                  }
+                ]
+              },
+              "unit": "percent"
+            },
+            "overrides": [
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "avg"
+                },
+                "properties": [
+                  {
+                    "id": "color",
+                    "value": {
+                      "fixedColor": "#cffaff",
+                      "mode": "fixed"
+                    }
+                  }
+                ]
+              },
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "max"
+                },
+                "properties": [
+                  {
+                    "id": "custom.fillBelowTo",
+                    "value": "min"
+                  },
+                  {
+                    "id": "custom.lineWidth",
+                    "value": 0
+                  }
+                ]
+              },
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "min"
+                },
+                "properties": [
+                  {
+                    "id": "custom.lineWidth",
+                    "value": 0
+                  }
+                ]
+              }
+            ]
+          },
           "gridPos": {
[Diff truncated by flux-local]
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

@@ -16,42 +16,52 @@

   policy-cidr-match-mode: ''
   prometheus-serve-addr: :9962
   controller-group-metrics: write-cni-file sync-host-ips sync-lb-maps-with-k8s-services
   proxy-prometheus-port: '9964'
   operator-prometheus-serve-addr: :9963
   enable-metrics: 'true'
+  enable-policy-secrets-sync: 'true'
+  policy-secrets-only-from-secrets-namespace: 'true'
+  policy-secrets-namespace: cilium-secrets
   enable-ipv4: 'true'
   enable-ipv6: 'false'
   custom-cni-conf: 'false'
   enable-bpf-clock-probe: 'false'
   monitor-aggregation: medium
   monitor-aggregation-interval: 5s
   monitor-aggregation-flags: all
   bpf-map-dynamic-size-ratio: '0.0025'
   bpf-policy-map-max: '16384'
   bpf-lb-map-max: '65536'
   bpf-lb-external-clusterip: 'false'
+  bpf-lb-source-range-all-types: 'false'
+  bpf-lb-algorithm-annotation: 'false'
+  bpf-lb-mode-annotation: 'false'
+  bpf-distributed-lru: 'false'
   bpf-events-drop-enabled: 'true'
   bpf-events-policy-verdict-enabled: 'true'
   bpf-events-trace-enabled: 'true'
   preallocate-bpf-maps: 'false'
   cluster-name: home-kubernetes
   cluster-id: '1'
   routing-mode: native
+  tunnel-protocol: vxlan
+  tunnel-source-port-range: 0-0
   service-no-backend-response: reject
   enable-l7-proxy: 'true'
   enable-ipv4-masquerade: 'true'
   enable-ipv4-big-tcp: 'false'
   enable-ipv6-big-tcp: 'false'
   enable-ipv6-masquerade: 'true'
   enable-tcx: 'true'
   datapath-mode: veth
   enable-bpf-masquerade: 'false'
   enable-masquerade-to-route-source: 'false'
   enable-xt-socket-fallback: 'true'
   install-no-conntrack-iptables-rules: 'false'
+  iptables-random-fully: 'false'
   auto-direct-node-routes: 'true'
   direct-routing-skip-unreachable: 'false'
   enable-local-redirect-policy: 'true'
   ipv4-native-routing-cidr: 10.69.0.0/16
   enable-runtime-device-detection: 'true'
   kube-proxy-replacement: 'true'
@@ -63,24 +73,27 @@

   enable-health-check-loadbalancer-ip: 'false'
   node-port-bind-protection: 'true'
   enable-auto-protect-node-port-range: 'true'
   bpf-lb-mode: dsr
   bpf-lb-algorithm: maglev
   bpf-lb-acceleration: disabled
+  enable-experimental-lb: 'false'
   enable-svc-source-range-check: 'true'
   enable-l2-neigh-discovery: 'true'
   arping-refresh-period: 30s
   k8s-require-ipv4-pod-cidr: 'false'
   k8s-require-ipv6-pod-cidr: 'false'
   enable-endpoint-routes: 'true'
   enable-k8s-networkpolicy: 'true'
+  enable-endpoint-lockdown-on-policy-overflow: 'false'
   write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
   cni-exclusive: 'false'
   cni-log-file: /var/run/cilium/cilium-cni.log
   enable-endpoint-health-checking: 'true'
   enable-health-checking: 'true'
+  health-check-icmp-failure-threshold: '3'
   enable-well-known-identities: 'false'
   enable-node-selector-labels: 'false'
   synchronize-k8s-nodes: 'true'
   operator-api-serve-addr: 127.0.0.1:9234
   enable-hubble: 'true'
   hubble-socket-path: /var/run/cilium/hubble.sock
@@ -94,35 +107,34 @@

   hubble-disable-tls: 'false'
   hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
   hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
   hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
   ipam: kubernetes
   ipam-cilium-node-update-rate: 15s
+  default-lb-service-ipam: lbipam
   egress-gateway-reconciliation-trigger-interval: 1s
   enable-vtep: 'false'
   vtep-endpoint: ''
   vtep-cidr: ''
   vtep-mask: ''
   vtep-mac: ''
   enable-l2-announcements: 'true'
   procfs: /host/proc
   bpf-root: /sys/fs/bpf
   cgroup-root: /sys/fs/cgroup
   enable-k8s-terminating-endpoint: 'true'
   enable-sctp: 'false'
-  k8s-client-qps: '10'
-  k8s-client-burst: '20'
   remove-cilium-node-taints: 'true'
   set-cilium-node-taints: 'true'
   set-cilium-is-up-condition: 'true'
   unmanaged-pod-watcher-interval: '15'
   dnsproxy-enable-transparent-mode: 'true'
   dnsproxy-socket-linger-timeout: '10'
   tofqdns-dns-reject-response-code: refused
   tofqdns-enable-dns-compression: 'true'
-  tofqdns-endpoint-max-ip-per-hostname: '50'
+  tofqdns-endpoint-max-ip-per-hostname: '1000'
   tofqdns-idle-connection-grace-period: 0s
   tofqdns-max-deferred-connection-deletes: '10000'
   tofqdns-proxy-response-max-delay: 100ms
   agent-not-ready-taint-key: node.cilium.io/agent-not-ready
   mesh-auth-enabled: 'true'
   mesh-auth-queue-size: '1024'
@@ -132,15 +144,22 @@

   proxy-xff-num-trusted-hops-egress: '0'
   proxy-connect-timeout: '2'
   proxy-initial-fetch-timeout: '30'
   proxy-max-requests-per-connection: '0'
   proxy-max-connection-duration-seconds: '0'
   proxy-idle-timeout-seconds: '60'
+  proxy-max-concurrent-retries: '128'
+  http-retry-count: '3'
   external-envoy-proxy: 'false'
   envoy-base-id: '0'
+  envoy-access-log-buffer-size: '4096'
   envoy-keep-cap-netbindservice: 'false'
   max-connected-clusters: '255'
   clustermesh-enable-endpoint-sync: 'false'
   clustermesh-enable-mcs-api: 'false'
   nat-map-stats-entries: '32'
   nat-map-stats-interval: 30s
+  enable-internal-traffic-policy: 'true'
+  enable-lb-ipam: 'true'
+  enable-non-default-deny-policies: 'true'
+  enable-source-ip-verification: 'true'
 
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard

@@ -1013,13 +1013,19 @@

       ],
       "refresh": false,
       "schemaVersion": 25,
       "style": "dark",
       "tags": [],
       "templating": {
-        "list": []
+        "list": [
+          {
+            "type": "datasource",
+            "name": "DS_PROMETHEUS",
+            "query": "prometheus"
+          }
+        ]
       },
       "time": {
         "from": "now-30m",
         "to": "now"
       },
       "timepicker": {
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config

@@ -3,12 +3,11 @@

 kind: ConfigMap
 metadata:
   name: hubble-relay-config
   namespace: kube-system
 data:
   config.yaml: "cluster-name: home-kubernetes\npeer-service: \"hubble-peer.kube-system.svc.cluster.local.:443\"\
-    \nlisten-address: :4245\ngops: true\ngops-port: \"9893\"\ndial-timeout: \nretry-timeout:\
-    \ \nsort-buffer-len-max: \nsort-buffer-drain-timeout: \ntls-hubble-client-cert-file:\
-    \ /var/lib/hubble-relay/tls/client.crt\ntls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key\n\
-    tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt\n\n\
-    disable-server-tls: true\n"
+    \nlisten-address: :4245\ngops: true\ngops-port: \"9893\"\nretry-timeout: \nsort-buffer-len-max:\
+    \ \nsort-buffer-drain-timeout: \ntls-hubble-client-cert-file: /var/lib/hubble-relay/tls/client.crt\n\
+    tls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key\ntls-hubble-server-ca-files:\
+    \ /var/lib/hubble-relay/tls/hubble-server-ca.crt\n\ndisable-server-tls: true\n"
 
--- HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

+++ HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

@@ -53,12 +53,13 @@

   - update
   - patch
 - apiGroups:
   - ''
   resources:
   - namespaces
+  - secrets
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - ''
@@ -135,12 +136,19 @@

   - update
   - get
   - list
   - watch
   - delete
   - patch
+- apiGroups:
+  - cilium.io
+  resources:
+  - ciliumbgpclusterconfigs/status
+  - ciliumbgppeerconfigs/status
+  verbs:
+  - update
 - apiGroups:
   - apiextensions.k8s.io
   resources:
   - customresourcedefinitions
   verbs:
   - create
@@ -181,12 +189,13 @@

   resources:
   - ciliumloadbalancerippools
   - ciliumpodippools
   - ciliumbgppeeringpolicies
   - ciliumbgpclusterconfigs
   - ciliumbgpnodeconfigoverrides
+  - ciliumbgppeerconfigs
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - cilium.io
--- HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

+++ HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

@@ -16,24 +16,24 @@

     rollingUpdate:
       maxUnavailable: 2
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/cilium-configmap-checksum: de8cf26ceabe378b2f47632fd3fd210ee1e5b4ab5d6f3f888abe408c8a29cf7f
+        cilium.io/cilium-configmap-checksum: c2b3f8939ff075b3054d84e6bcb1e96deee779252f45c7ac22279a50f2809b60
       labels:
         k8s-app: cilium
         app.kubernetes.io/name: cilium-agent
         app.kubernetes.io/part-of: cilium
     spec:
       securityContext:
         appArmorProfile:
           type: Unconfined
       containers:
       - name: cilium-agent
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         command:
         - cilium-agent
         args:
         - --config-dir=/tmp/cilium/config-map
         startupProbe:
@@ -197,13 +197,13 @@

           mountPath: /var/lib/cilium/tls/hubble
           readOnly: true
         - name: tmp
           mountPath: /tmp
       initContainers:
       - name: config
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         command:
         - cilium-dbg
         - build-config
         env:
         - name: K8S_NODE_NAME
@@ -222,13 +222,13 @@

           value: '7445'
         volumeMounts:
         - name: tmp
           mountPath: /tmp
         terminationMessagePolicy: FallbackToLogsOnError
       - name: mount-cgroup
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         env:
         - name: CGROUP_ROOT
           value: /sys/fs/cgroup
         - name: BIN_PATH
           value: /opt/cni/bin
@@ -254,13 +254,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: apply-sysctl-overwrites
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         env:
         - name: BIN_PATH
           value: /opt/cni/bin
         command:
         - sh
@@ -284,13 +284,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: mount-bpf-fs
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         args:
         - mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
         command:
         - /bin/bash
         - -c
@@ -300,13 +300,13 @@

           privileged: true
         volumeMounts:
         - name: bpf-maps
           mountPath: /sys/fs/bpf
           mountPropagation: Bidirectional
       - name: clean-cilium-state
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         command:
         - /init-container.sh
         env:
         - name: CILIUM_ALL_STATE
           valueFrom:
@@ -348,13 +348,13 @@

         - name: cilium-cgroup
           mountPath: /sys/fs/cgroup
           mountPropagation: HostToContainer
         - name: cilium-run
           mountPath: /var/run/cilium
       - name: install-cni-binaries
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         command:
         - /install-plugin.sh
         resources:
           requests:
             cpu: 100m
--- HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

+++ HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

@@ -20,22 +20,22 @@

       maxSurge: 25%
       maxUnavailable: 100%
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/cilium-configmap-checksum: de8cf26ceabe378b2f47632fd3fd210ee1e5b4ab5d6f3f888abe408c8a29cf7f
+        cilium.io/cilium-configmap-checksum: c2b3f8939ff075b3054d84e6bcb1e96deee779252f45c7ac22279a50f2809b60
       labels:
         io.cilium/app: operator
         name: cilium-operator
         app.kubernetes.io/part-of: cilium
         app.kubernetes.io/name: cilium-operator
     spec:
       containers:
       - name: cilium-operator
-        image: quay.io/cilium/operator-generic:v1.16.6@sha256:13d32071d5a52c069fb7c35959a56009c6914439adc73e99e098917646d154fc
+        image: quay.io/cilium/operator-generic:v1.17.2@sha256:81f2d7198366e8dec2903a3a8361e4c68d47d19c68a0d42f0b7b6e3f0523f249
         imagePullPolicy: IfNotPresent
         command:
         - cilium-operator-generic
         args:
         - --config-dir=/tmp/cilium/config-map
         - --debug=$(CILIUM_DEBUG)
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay

+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay

@@ -17,13 +17,13 @@

     rollingUpdate:
       maxUnavailable: 1
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/hubble-relay-configmap-checksum: 7013f296857a469857f02e7d0b7e0933fcdf29925c02e28162c33b4a8a00baca
+        cilium.io/hubble-relay-configmap-checksum: eff0e5f47a53fa4b010591dc8fd68bffd75ccd6298d9d502cc7125e0b3fede93
       labels:
         k8s-app: hubble-relay
         app.kubernetes.io/name: hubble-relay
         app.kubernetes.io/part-of: cilium
     spec:
       securityContext:
@@ -34,13 +34,13 @@

           capabilities:
             drop:
             - ALL
           runAsGroup: 65532
           runAsNonRoot: true
           runAsUser: 65532
-        image: quay.io/cilium/hubble-relay:v1.16.6@sha256:ca8dcaa5a81a37743b1397ba2221d16d5d63e4a47607584f1bf50a3b0882bf3b
+        image: quay.io/cilium/hubble-relay:v1.17.2@sha256:42a8db5c256c516cacb5b8937c321b2373ad7a6b0a1e5a5120d5028433d586cc
         imagePullPolicy: IfNotPresent
         command:
         - hubble-relay
         args:
         - serve
         ports:
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui

+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui

@@ -32,13 +32,13 @@

         runAsUser: 1001
       priorityClassName: null
       serviceAccountName: hubble-ui
       automountServiceAccountToken: true
       containers:
       - name: frontend
-        image: quay.io/cilium/hubble-ui:v0.13.1@sha256:e2e9313eb7caf64b0061d9da0efbdad59c6c461f6ca1752768942bfeda0796c6
+        image: quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392
         imagePullPolicy: IfNotPresent
         ports:
         - name: http
           containerPort: 8081
         livenessProbe:
           httpGet:
@@ -53,13 +53,13 @@

           mountPath: /etc/nginx/conf.d/default.conf
           subPath: nginx.conf
         - name: tmp-dir
           mountPath: /tmp
         terminationMessagePolicy: FallbackToLogsOnError
       - name: backend
-        image: quay.io/cilium/hubble-ui-backend:v0.13.1@sha256:0e0eed917653441fded4e7cdb096b7be6a3bddded5a2dd10812a27b1fc6ed95b
+        image: quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15
         imagePullPolicy: IfNotPresent
         env:
         - name: EVENTS_SERVER_PORT
           value: '8090'
         - name: FLOWS_API_ADDR
           value: hubble-relay:80
--- HelmRelease: kube-system/cilium ServiceMonitor: kube-system/cilium-agent

+++ HelmRelease: kube-system/cilium ServiceMonitor: kube-system/cilium-agent

@@ -6,13 +6,13 @@

   namespace: kube-system
   labels:
     app.kubernetes.io/part-of: cilium
 spec:
   selector:
     matchLabels:
-      k8s-app: cilium
+      app.kubernetes.io/name: cilium-agent
   namespaceSelector:
     matchNames:
     - kube-system
   endpoints:
   - port: metrics
     interval: 10s
--- HelmRelease: kube-system/cilium Namespace: kube-system/cilium-secrets

+++ HelmRelease: kube-system/cilium Namespace: kube-system/cilium-secrets

@@ -0,0 +1,8 @@

+---
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+
--- HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-tlsinterception-secrets

@@ -0,0 +1,18 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: cilium-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+rules:
+- apiGroups:
+  - ''
+  resources:
+  - secrets
+  verbs:
+  - get
+  - list
+  - watch
+
--- HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-operator-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-operator-tlsinterception-secrets

@@ -0,0 +1,19 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: cilium-operator-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+rules:
+- apiGroups:
+  - ''
+  resources:
+  - secrets
+  verbs:
+  - create
+  - delete
+  - update
+  - patch
+
--- HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-tlsinterception-secrets

@@ -0,0 +1,17 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: cilium-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: cilium-tlsinterception-secrets
+subjects:
+- kind: ServiceAccount
+  name: cilium
+  namespace: kube-system
+
--- HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-operator-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-operator-tlsinterception-secrets

@@ -0,0 +1,17 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: cilium-operator-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: cilium-operator-tlsinterception-secrets
+subjects:
+- kind: ServiceAccount
+  name: cilium-operator
+  namespace: kube-system
+
--- HelmRelease: kyverno/kyverno ConfigMap: kyverno/kyverno

+++ HelmRelease: kyverno/kyverno ConfigMap: kyverno/kyverno

@@ -60,9 +60,11 @@

     [Service,kyverno,kyverno-cleanup-controller-metrics] [Service/*,kyverno,kyverno-cleanup-controller-metrics]
     [Service,kyverno,kyverno-reports-controller-metrics] [Service/*,kyverno,kyverno-reports-controller-metrics]
     [ServiceMonitor,kyverno,kyverno-admission-controller] [ServiceMonitor,kyverno,kyverno-background-controller]
     [ServiceMonitor,kyverno,kyverno-cleanup-controller] [ServiceMonitor,kyverno,kyverno-reports-controller]
     [Secret,kyverno,kyverno-svc.kyverno.svc.*] [Secret,kyverno,kyverno-cleanup-controller.kyverno.svc.*]'
   updateRequestThreshold: '1000'
-  webhooks: '{"namespaceSelector":{"matchExpressions":[{"key":"kubernetes.io/metadata.name","operator":"NotIn","values":["kube-system"]},{"key":"kubernetes.io/metadata.name","operator":"NotIn","values":["kyverno"]}],"matchLabels":null}}'
+  webhooks: |2-
+
+      {"namespaceSelector":{"matchExpressions":[{"key":"kubernetes.io/metadata.name","operator":"NotIn","values":["kube-system"]},{"key":"kubernetes.io/metadata.name","operator":"NotIn","values":["kyverno"]}],"matchLabels":null}}
   webhookAnnotations: '{"admissions.enforcer/disabled":"true"}'
 
--- HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-admission-controller

+++ HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-admission-controller

@@ -43,13 +43,13 @@

                   - admission-controller
               topologyKey: kubernetes.io/hostname
             weight: 1
       serviceAccountName: kyverno-admission-controller
       initContainers:
       - name: kyverno-pre
-        image: ghcr.io/kyverno/kyvernopre:v1.13.4
+        image: ghcr.io/kyverno/kyvernopre:v1.13.2
         imagePullPolicy: IfNotPresent
         args:
         - --loggingFormat=text
         - --v=2
         resources:
           limits:
@@ -88,13 +88,13 @@

         - name: KYVERNO_DEPLOYMENT
           value: kyverno-admission-controller
         - name: KYVERNO_SVC
           value: kyverno-svc
       containers:
       - name: kyverno
-        image: ghcr.io/kyverno/kyverno:v1.13.4
+        image: ghcr.io/kyverno/kyverno:v1.13.2
         imagePullPolicy: IfNotPresent
         args:
         - --caSecretName=kyverno-svc.kyverno.svc.kyverno-tls-ca
         - --tlsSecretName=kyverno-svc.kyverno.svc.kyverno-tls-pair
         - --backgroundServiceAccountName=system:serviceaccount:kyverno:kyverno-background-controller
         - --reportsServiceAccountName=system:serviceaccount:kyverno:kyverno-reports-controller
--- HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-background-controller

+++ HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-background-controller

@@ -43,13 +43,13 @@

                   - background-controller
               topologyKey: kubernetes.io/hostname
             weight: 1
       serviceAccountName: kyverno-background-controller
       containers:
       - name: controller
-        image: ghcr.io/kyverno/background-controller:v1.13.4
+        image: ghcr.io/kyverno/background-controller:v1.13.2
         imagePullPolicy: IfNotPresent
         ports:
         - containerPort: 9443
           name: https
           protocol: TCP
         - containerPort: 8000
--- HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-cleanup-controller

+++ HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-cleanup-controller

@@ -43,13 +43,13 @@

                   - cleanup-controller
               topologyKey: kubernetes.io/hostname
             weight: 1
       serviceAccountName: kyverno-cleanup-controller
       containers:
       - name: controller
-        image: ghcr.io/kyverno/cleanup-controller:v1.13.4
+        image: ghcr.io/kyverno/cleanup-controller:v1.13.2
         imagePullPolicy: IfNotPresent
         ports:
         - containerPort: 9443
           name: https
           protocol: TCP
         - containerPort: 8000
--- HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-reports-controller

+++ HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-reports-controller

@@ -43,13 +43,13 @@

                   - reports-controller
               topologyKey: kubernetes.io/hostname
             weight: 1
       serviceAccountName: kyverno-reports-controller
       containers:
       - name: controller
-        image: ghcr.io/kyverno/reports-controller:v1.13.4
+        image: ghcr.io/kyverno/reports-controller:v1.13.2
         imagePullPolicy: IfNotPresent
         ports:
         - containerPort: 9443
           name: https
           protocol: TCP
         - containerPort: 8000
--- HelmRelease: kyverno/kyverno Job: kyverno/kyverno-migrate-resources

+++ HelmRelease: kyverno/kyverno Job: kyverno/kyverno-migrate-resources

@@ -19,13 +19,13 @@

     metadata: null
     spec:
       serviceAccount: kyverno-migrate-resources
       restartPolicy: Never
       containers:
       - name: kubectl
-        image: ghcr.io/kyverno/kyverno-cli:v1.13.4
+        image: ghcr.io/kyverno/kyverno-cli:v1.13.2
         imagePullPolicy: IfNotPresent
         args:
         - migrate
         - --resource
         - cleanuppolicies.kyverno.io
         - --resource

bot-akira[bot] · Feb 04 '25 16:02
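
Every image bump in the diff above follows the pinned OCI reference convention `repository:tag@sha256:digest`. A minimal sketch of splitting such a reference into its three parts — the helper name is illustrative, not from any library:

```python
def parse_image_ref(ref: str):
    """Split an OCI image reference into (repository, tag, digest).

    Handles refs of the form repo[:tag][@digest], as seen in the
    diffs above. Tag and digest are None when absent.
    """
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    # Only treat a colon as a tag separator if it appears after the
    # last "/" (a colon earlier could be a registry port).
    last_segment = ref.rsplit("/", 1)[-1]
    if ":" in last_segment:
        repo, tag = ref.rsplit(":", 1)
    else:
        repo, tag = ref, None
    return repo, tag, digest
```

For example, `parse_image_ref("quay.io/cilium/cilium:v1.17.3@sha256:1782…")` yields the repository, the tag, and the digest separately, which is useful when checking that a Renovate bump changed both the tag and the pinned digest together.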

--- kubernetes/apps/kube-system/external-secrets/app Kustomization: flux-system/cluster-apps-external-secrets HelmRelease: kube-system/external-secrets

+++ kubernetes/apps/kube-system/external-secrets/app Kustomization: flux-system/cluster-apps-external-secrets HelmRelease: kube-system/external-secrets

@@ -13,13 +13,13 @@

       chart: external-secrets
       interval: 15m
       sourceRef:
         kind: HelmRepository
         name: external-secrets
         namespace: flux-system
-      version: 0.15.0
+      version: 0.13.0
   install:
     createNamespace: true
     remediation:
       retries: 3
   interval: 15m
   maxHistory: 3
--- kubernetes/apps/kube-system/kubelet-csr-approver/app Kustomization: flux-system/kubelet-csr-approver HelmRelease: kube-system/kubelet-csr-approver

+++ kubernetes/apps/kube-system/kubelet-csr-approver/app Kustomization: flux-system/kubelet-csr-approver HelmRelease: kube-system/kubelet-csr-approver

@@ -13,13 +13,13 @@

     spec:
       chart: kubelet-csr-approver
       sourceRef:
         kind: HelmRepository
         name: postfinance
         namespace: flux-system
-      version: 1.2.6
+      version: 1.2.5
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/network/echo-server/app Kustomization: flux-system/echo-server HelmRelease: network/echo-server

+++ kubernetes/apps/network/echo-server/app Kustomization: flux-system/echo-server HelmRelease: network/echo-server

@@ -34,13 +34,13 @@

               HTTP_PORT: 8080
               LOG_IGNORE_PATH: /healthz
               LOG_WITHOUT_NEWLINE: true
               PROMETHEUS_ENABLED: true
             image:
               repository: ghcr.io/mendhak/http-https-echo
-              tag: 36
+              tag: 35
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/network/e1000e-fix/app Kustomization: flux-system/e1000e-fix HelmRelease: network/e1000e-fix

+++ kubernetes/apps/network/e1000e-fix/app Kustomization: flux-system/e1000e-fix HelmRelease: network/e1000e-fix

@@ -1,8 +1,8 @@

 ---
-apiVersion: helm.toolkit.fluxcd.io/v2
+apiVersion: helm.toolkit.fluxcd.io/v2beta2
 kind: HelmRelease
 metadata:
   labels:
     app.kubernetes.io/name: e1000e-fix
     kustomize.toolkit.fluxcd.io/name: e1000e-fix
     kustomize.toolkit.fluxcd.io/namespace: flux-system
--- kubernetes/apps/kyverno/kyverno/app Kustomization: flux-system/kyverno HelmRelease: kyverno/kyverno

+++ kubernetes/apps/kyverno/kyverno/app Kustomization: flux-system/kyverno HelmRelease: kyverno/kyverno

@@ -12,13 +12,13 @@

     spec:
       chart: kyverno
       sourceRef:
         kind: HelmRepository
         name: kyverno
         namespace: flux-system
-      version: 3.3.7
+      version: 3.3.4
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium

+++ kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium

@@ -13,13 +13,13 @@

     spec:
       chart: cilium
       sourceRef:
         kind: HelmRepository
         name: cilium
         namespace: flux-system
-      version: 1.16.6
+      version: 1.17.2
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/kube-system/coredns/app Kustomization: flux-system/coredns HelmRelease: kube-system/coredns

+++ kubernetes/apps/kube-system/coredns/app Kustomization: flux-system/coredns HelmRelease: kube-system/coredns

@@ -13,13 +13,13 @@

     spec:
       chart: coredns
       sourceRef:
         kind: HelmRepository
         name: coredns
         namespace: flux-system
-      version: 1.39.1
+      version: 1.37.3
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/observability/kromgo/app Kustomization: flux-system/kromgo HelmRelease: observability/kromgo

+++ kubernetes/apps/observability/kromgo/app Kustomization: flux-system/kromgo HelmRelease: observability/kromgo

@@ -38,13 +38,13 @@

               HEALTH_PORT: 8888
               PROMETHEUS_URL: http://prometheus-operated.observability.svc.cluster.local:9090
               SERVER_HOST: 0.0.0.0
               SERVER_PORT: 8080
             image:
               repository: ghcr.io/kashalls/kromgo
-              tag: v0.5.1@sha256:1f86c6151c676fa6d368230f1b228d67ed030fd4409ae0a53763c60d522ea425
+              tag: v0.4.4@sha256:4f6770a49ffa2d1a96517761d677ababe5fa966a5da398530cc35ee4714c315b
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/cert-manager/cert-manager/app Kustomization: flux-system/cert-manager HelmRelease: cert-manager/cert-manager

+++ kubernetes/apps/cert-manager/cert-manager/app Kustomization: flux-system/cert-manager HelmRelease: cert-manager/cert-manager

@@ -13,13 +13,13 @@

     spec:
       chart: cert-manager
       sourceRef:
         kind: HelmRepository
         name: jetstack
         namespace: flux-system
-      version: v1.17.1
+      version: v1.16.3
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/default/piped/app Kustomization: flux-system/piped HelmRelease: default/piped

+++ kubernetes/apps/default/piped/app Kustomization: flux-system/piped HelmRelease: default/piped

@@ -15,13 +15,13 @@

     spec:
       chart: piped
       sourceRef:
         kind: HelmRepository
         name: piped
         namespace: flux-system
-      version: 7.2.2
+      version: 7.0.1
   install:
     createNamespace: true
     remediation:
       retries: 5
   interval: 15m
   upgrade:
--- kubernetes/apps/media/komga/app Kustomization: flux-system/komga HelmRelease: media/komga

+++ kubernetes/apps/media/komga/app Kustomization: flux-system/komga HelmRelease: media/komga

@@ -35,13 +35,13 @@

           app:
             env:
               SERVER_PORT: 8080
               TZ: Europe/Prague
             image:
               repository: gotson/komga
-              tag: 1.21.2@sha256:ba587695d786f0e8f4de8598b8aa2785cc8c671098ef1cb624819c2bb812789c
+              tag: 1.19.0@sha256:b7bd32bc66159d020d682702f4b010e5977fecf37351903ed8b959c32c759638
             resources:
               limits:
                 memory: 2Gi
               requests:
                 cpu: 15m
                 memory: 1Gi
@@ -51,28 +51,21 @@

       app:
         annotations:
           gatus.io/enabled: 'true'
           hajimari.io/icon: mdi:thought-bubble-outline
         className: internal
         hosts:
-        - host: '{{ .Release.Name }}.juno.moe'
-          paths:
-          - path: /
-            service:
-              identifier: app
-              port: http
-        - host: comics.juno.moe
+        - host: '{{ .Release.Name }}...PLACEHOLDER_SECRET_DOMAIN..'
           paths:
           - path: /
             service:
               identifier: app
               port: http
         tls:
         - hosts:
-          - '{{ .Release.Name }}.juno.moe'
-          - comics.juno.moe
+          - '{{ .Release.Name }}...PLACEHOLDER_SECRET_DOMAIN..'
     persistence:
       config:
         existingClaim: komga
       media:
         globalMounts:
         - path: /data
--- kubernetes/apps/network/external-dns/unifi Kustomization: flux-system/cluster-apps-external-dns-unifi HelmRelease: network/external-dns-unifi

+++ kubernetes/apps/network/external-dns/unifi Kustomization: flux-system/cluster-apps-external-dns-unifi HelmRelease: network/external-dns-unifi

@@ -49,13 +49,13 @@

           valueFrom:
             secretKeyRef:
               key: UNIFI_PASS
               name: external-dns-unifi-secret
         image:
           repository: ghcr.io/kashalls/external-dns-unifi-webhook
-          tag: v0.5.1@sha256:fc031337a83e3a7d5f3407c931373455fe6842e085b47e4bb1e73708cb054b06
+          tag: v0.4.1@sha256:5c01923d9a2c050362335c1750c2361046c0d2caf1ab796661c215da47446aad
         livenessProbe:
           httpGet:
             path: /healthz
             port: http-webhook
           initialDelaySeconds: 10
           timeoutSeconds: 5
--- kubernetes/apps/observability/gatus/app Kustomization: flux-system/gatus HelmRelease: observability/gatus

+++ kubernetes/apps/observability/gatus/app Kustomization: flux-system/gatus HelmRelease: observability/gatus

@@ -40,13 +40,13 @@

               TZ: Europe/Prague
             envFrom:
             - secretRef:
                 name: gatus-secret
             image:
               repository: ghcr.io/twin/gatus
-              tag: v5.17.0@sha256:a8c53f9e9f1a3876cd00e44a42c80fc984e118d5ba0bdbaf08980cb627d61512
+              tag: v5.15.0@sha256:45686324db605e57dfa8b0931d8d57fe06298f52685f06aa9654a1f710d461bb
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/observability/kube-prometheus-stack/app Kustomization: flux-system/kube-prometheus-stack HelmRelease: observability/kube-prometheus-stack

+++ kubernetes/apps/observability/kube-prometheus-stack/app Kustomization: flux-system/kube-prometheus-stack HelmRelease: observability/kube-prometheus-stack

@@ -13,13 +13,13 @@

     spec:
       chart: kube-prometheus-stack
       sourceRef:
         kind: HelmRepository
         name: prometheus-community
         namespace: flux-system
-      version: 68.5.0
+      version: 68.3.2
   dependsOn:
   - name: rook-ceph-cluster
     namespace: rook-ceph
   install:
     crds: CreateReplace
     remediation:
--- kubernetes/apps/network/cloudflared/app Kustomization: flux-system/cloudflared HelmRelease: network/cloudflared

+++ kubernetes/apps/network/cloudflared/app Kustomization: flux-system/cloudflared HelmRelease: network/cloudflared

@@ -49,13 +49,13 @@

               TUNNEL_METRICS: 0.0.0.0:8080
               TUNNEL_ORIGIN_ENABLE_HTTP2: true
               TUNNEL_POST_QUANTUM: true
               TUNNEL_TRANSPORT_PROTOCOL: quic
             image:
               repository: docker.io/cloudflare/cloudflared
-              tag: 2025.2.1
+              tag: 2025.1.1
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/network/ingress-nginx/internal Kustomization: flux-system/ingress-nginx-internal HelmRelease: network/ingress-nginx-internal

+++ kubernetes/apps/network/ingress-nginx/internal Kustomization: flux-system/ingress-nginx-internal HelmRelease: network/ingress-nginx-internal

@@ -55,13 +55,13 @@

         - name: TEMPLATE_NAME
           value: lost-in-space
         - name: SHOW_DETAILS
           value: 'false'
         image:
           repository: ghcr.io/tarampampam/error-pages
-          tag: 3.3.2
+          tag: 3.3.1
       extraArgs:
         default-ssl-certificate: network/-.PLACEHOLDER_SECRET_DOMAIN..-production-tls
       ingressClassResource:
         controllerValue: k8s.io/internal
         default: true
         name: internal
--- kubernetes/apps/observability/karma/app Kustomization: flux-system/karma HelmRelease: observability/karma

+++ kubernetes/apps/observability/karma/app Kustomization: flux-system/karma HelmRelease: observability/karma

@@ -34,13 +34,13 @@

         containers:
           app:
             env:
               CONFIG_FILE: /config/config.yaml
             image:
               repository: ghcr.io/prymitive/karma
-              tag: v0.121@sha256:9f0ad820df1b1d0af562de3b3c545a52ddfce8d7492f434a2276e45f3a1f7e28
+              tag: v0.120@sha256:733bff15f2529065f1c1b50b13e4a56a541d3c0615dbc6b4b6a07befbfcc27ff
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/media/sabnzbd/app Kustomization: flux-system/sabnzbd HelmRelease: media/sabnzbd

+++ kubernetes/apps/media/sabnzbd/app Kustomization: flux-system/sabnzbd HelmRelease: media/sabnzbd

@@ -41,13 +41,13 @@

               TZ: Europe/Prague
             envFrom:
             - secretRef:
                 name: sabnzbd-secret
             image:
               repository: ghcr.io/buroa/sabnzbd
-              tag: 4.4.1@sha256:146646057a9049b4eca4b9996b3e2d3135a520402cf64f00abba0ef17f00d1d1
+              tag: 4.4.1@sha256:440fe03b57692411378f88697f3dfe099438af60d947f1f795eaf3f52dcdb622
             probes:
               liveness:
                 enabled: true
               readiness:
                 enabled: true
               startup:
--- kubernetes/apps/media/recyclarr/app Kustomization: flux-system/recyclarr HelmRelease: media/recyclarr

+++ kubernetes/apps/media/recyclarr/app Kustomization: flux-system/recyclarr HelmRelease: media/recyclarr

@@ -40,13 +40,13 @@

               TZ: Europe/Prague
             envFrom:
             - secretRef:
                 name: recyclarr-secret
             image:
               repository: ghcr.io/recyclarr/recyclarr
-              tag: 7.4.1@sha256:759540877f95453eca8a26c1a93593e783a7a824c324fbd57523deffb67f48e1
+              tag: 7.4.0@sha256:619c3b8920a179f2c578acd0f54e9a068f57c049aff840469eed66e93a4be2cf
             resources:
               limits:
                 memory: 128Mi
               requests:
                 cpu: 10m
             securityContext:

bot-akira[bot] · Feb 04 '25 16:02

--- HelmRelease: network/external-dns-unifi Deployment: network/external-dns-unifi

+++ HelmRelease: network/external-dns-unifi Deployment: network/external-dns-unifi

@@ -76,13 +76,13 @@

             port: http
           initialDelaySeconds: 5
           periodSeconds: 10
           successThreshold: 1
           timeoutSeconds: 5
       - name: webhook
-        image: ghcr.io/kashalls/external-dns-unifi-webhook:v0.5.1@sha256:fc031337a83e3a7d5f3407c931373455fe6842e085b47e4bb1e73708cb054b06
+        image: ghcr.io/kashalls/external-dns-unifi-webhook:v0.4.1@sha256:5c01923d9a2c050362335c1750c2361046c0d2caf1ab796661c215da47446aad
         imagePullPolicy: IfNotPresent
         env:
         - name: UNIFI_HOST
           value: https://192.168.69.1
         - name: UNIFI_USER
           valueFrom:
--- HelmRelease: network/echo-server Deployment: network/echo-server

+++ HelmRelease: network/echo-server Deployment: network/echo-server

@@ -45,13 +45,13 @@

         - name: LOG_IGNORE_PATH
           value: /healthz
         - name: LOG_WITHOUT_NEWLINE
           value: 'true'
         - name: PROMETHEUS_ENABLED
           value: 'true'
-        image: ghcr.io/mendhak/http-https-echo:36
+        image: ghcr.io/mendhak/http-https-echo:35
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /healthz
             port: 8080
           initialDelaySeconds: 0
--- HelmRelease: network/cloudflared Deployment: network/cloudflared

+++ HelmRelease: network/cloudflared Deployment: network/cloudflared

@@ -62,13 +62,13 @@

         - name: TUNNEL_ORIGIN_ENABLE_HTTP2
           value: 'true'
         - name: TUNNEL_POST_QUANTUM
           value: 'true'
         - name: TUNNEL_TRANSPORT_PROTOCOL
           value: quic
-        image: docker.io/cloudflare/cloudflared:2025.2.1
+        image: docker.io/cloudflare/cloudflared:2025.1.1
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /ready
             port: 8080
           initialDelaySeconds: 0
--- HelmRelease: kube-system/coredns Deployment: kube-system/coredns

+++ HelmRelease: kube-system/coredns Deployment: kube-system/coredns

@@ -48,13 +48,13 @@

         operator: Exists
       - effect: NoSchedule
         key: node-role.kubernetes.io/control-plane
         operator: Exists
       containers:
       - name: coredns
-        image: coredns/coredns:1.12.0
+        image: coredns/coredns:1.11.4
         imagePullPolicy: IfNotPresent
         args:
         - -conf
         - /etc/coredns/Corefile
         volumeMounts:
         - name: config-volume
--- HelmRelease: media/komga Deployment: media/komga

+++ HelmRelease: media/komga Deployment: media/komga

@@ -38,13 +38,13 @@

       containers:
       - env:
         - name: SERVER_PORT
           value: '8080'
         - name: TZ
           value: Europe/Prague
-        image: gotson/komga:1.21.2@sha256:ba587695d786f0e8f4de8598b8aa2785cc8c671098ef1cb624819c2bb812789c
+        image: gotson/komga:1.19.0@sha256:b7bd32bc66159d020d682702f4b010e5977fecf37351903ed8b959c32c759638
         name: app
         resources:
           limits:
             memory: 2Gi
           requests:
             cpu: 15m
--- HelmRelease: media/komga Ingress: media/komga

+++ HelmRelease: media/komga Ingress: media/komga

@@ -11,26 +11,15 @@

     gatus.io/enabled: 'true'
     hajimari.io/icon: mdi:thought-bubble-outline
 spec:
   ingressClassName: internal
   tls:
   - hosts:
-    - komga.juno.moe
-    - comics.juno.moe
+    - komga...PLACEHOLDER_SECRET_DOMAIN..
   rules:
-  - host: komga.juno.moe
-    http:
-      paths:
-      - path: /
-        pathType: Prefix
-        backend:
-          service:
-            name: komga
-            port:
-              number: 8080
-  - host: comics.juno.moe
+  - host: komga...PLACEHOLDER_SECRET_DOMAIN..
     http:
       paths:
       - path: /
         pathType: Prefix
         backend:
           service:
--- HelmRelease: media/recyclarr CronJob: media/recyclarr

+++ HelmRelease: media/recyclarr CronJob: media/recyclarr

@@ -50,13 +50,13 @@

             env:
             - name: TZ
               value: Europe/Prague
             envFrom:
             - secretRef:
                 name: recyclarr-secret
-            image: ghcr.io/recyclarr/recyclarr:7.4.1@sha256:759540877f95453eca8a26c1a93593e783a7a824c324fbd57523deffb67f48e1
+            image: ghcr.io/recyclarr/recyclarr:7.4.0@sha256:619c3b8920a179f2c578acd0f54e9a068f57c049aff840469eed66e93a4be2cf
             name: app
             resources:
               limits:
                 memory: 128Mi
               requests:
                 cpu: 10m
--- HelmRelease: media/sabnzbd Deployment: media/sabnzbd

+++ HelmRelease: media/sabnzbd Deployment: media/sabnzbd

@@ -52,13 +52,13 @@

           value: '8080'
         - name: TZ
           value: Europe/Prague
         envFrom:
         - secretRef:
             name: sabnzbd-secret
-        image: ghcr.io/buroa/sabnzbd:4.4.1@sha256:146646057a9049b4eca4b9996b3e2d3135a520402cf64f00abba0ef17f00d1d1
+        image: ghcr.io/buroa/sabnzbd:4.4.1@sha256:440fe03b57692411378f88697f3dfe099438af60d947f1f795eaf3f52dcdb622
         livenessProbe:
           failureThreshold: 3
           initialDelaySeconds: 0
           periodSeconds: 10
           tcpSocket:
             port: 8080
--- HelmRelease: default/piped Deployment: default/piped-ytproxy

+++ HelmRelease: default/piped Deployment: default/piped-ytproxy

@@ -25,13 +25,13 @@

       serviceAccountName: default
       automountServiceAccountToken: null
       dnsPolicy: ClusterFirst
       enableServiceLinks: null
       containers:
       - name: piped-ytproxy
-        image: 1337kavin/piped-proxy:latest@sha256:880b1117b6087e32b82c0204a96210fb87de61a874a3a2681361cc6d905e4d0e
+        image: 1337kavin/piped-proxy:latest@sha256:833ca24c048619c9cd6fe58e2d210bfc7b1e43875ba5108aeddea0b171f04dbd
         imagePullPolicy: IfNotPresent
         command:
         - /app/piped-proxy
         livenessProbe:
           tcpSocket:
             port: 8080
--- HelmRelease: kube-system/kubelet-csr-approver Deployment: kube-system/kubelet-csr-approver

+++ HelmRelease: kube-system/kubelet-csr-approver Deployment: kube-system/kubelet-csr-approver

@@ -33,13 +33,13 @@

           readOnlyRootFilesystem: true
           runAsGroup: 65532
           runAsNonRoot: true
           runAsUser: 65532
           seccompProfile:
             type: RuntimeDefault
-        image: ghcr.io/postfinance/kubelet-csr-approver:v1.2.6
+        image: ghcr.io/postfinance/kubelet-csr-approver:v1.2.5
         imagePullPolicy: IfNotPresent
         args:
         - -metrics-bind-address
         - :8080
         - -health-probe-bind-address
         - :8081
--- HelmRelease: observability/karma Deployment: observability/karma

+++ HelmRelease: observability/karma Deployment: observability/karma

@@ -50,13 +50,13 @@

         topologyKey: kubernetes.io/hostname
         whenUnsatisfiable: DoNotSchedule
       containers:
       - env:
         - name: CONFIG_FILE
           value: /config/config.yaml
-        image: ghcr.io/prymitive/karma:v0.121@sha256:9f0ad820df1b1d0af562de3b3c545a52ddfce8d7492f434a2276e45f3a1f7e28
+        image: ghcr.io/prymitive/karma:v0.120@sha256:733bff15f2529065f1c1b50b13e4a56a541d3c0615dbc6b4b6a07befbfcc27ff
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /health
             port: 8080
           initialDelaySeconds: 0
--- HelmRelease: observability/gatus Deployment: observability/gatus

+++ HelmRelease: observability/gatus Deployment: observability/gatus

@@ -95,13 +95,13 @@

           value: '80'
         - name: TZ
           value: Europe/Prague
         envFrom:
         - secretRef:
             name: gatus-secret
-        image: ghcr.io/twin/gatus:v5.17.0@sha256:a8c53f9e9f1a3876cd00e44a42c80fc984e118d5ba0bdbaf08980cb627d61512
+        image: ghcr.io/twin/gatus:v5.15.0@sha256:45686324db605e57dfa8b0931d8d57fe06298f52685f06aa9654a1f710d461bb
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /health
             port: 80
           initialDelaySeconds: 0
--- HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-controller

+++ HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-controller

@@ -13,13 +13,12 @@

   resources:
   - secretstores
   - clustersecretstores
   - externalsecrets
   - clusterexternalsecrets
   - pushsecrets
-  - clusterpushsecrets
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - external-secrets.io
@@ -36,32 +35,16 @@

   - clusterexternalsecrets
   - clusterexternalsecrets/status
   - clusterexternalsecrets/finalizers
   - pushsecrets
   - pushsecrets/status
   - pushsecrets/finalizers
-  - clusterpushsecrets
-  - clusterpushsecrets/status
-  - clusterpushsecrets/finalizers
   verbs:
   - get
   - update
   - patch
-- apiGroups:
-  - generators.external-secrets.io
-  resources:
-  - generatorstates
-  verbs:
-  - get
-  - list
-  - watch
-  - create
-  - update
-  - patch
-  - delete
-  - deletecollection
 - apiGroups:
   - generators.external-secrets.io
   resources:
   - acraccesstokens
   - clustergenerators
   - ecrauthorizationtokens
@@ -71,13 +54,12 @@

   - quayaccesstokens
   - passwords
   - stssessiontokens
   - uuids
   - vaultdynamicsecrets
   - webhooks
-  - grafanas
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - ''
@@ -126,15 +108,7 @@

   resources:
   - externalsecrets
   verbs:
   - create
   - update
   - delete
-- apiGroups:
-  - external-secrets.io
-  resources:
-  - pushsecrets
-  verbs:
-  - create
-  - update
-  - delete
 
--- HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-view

+++ HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-view

@@ -15,13 +15,12 @@

   - external-secrets.io
   resources:
   - externalsecrets
   - secretstores
   - clustersecretstores
   - pushsecrets
-  - clusterpushsecrets
   verbs:
   - get
   - watch
   - list
 - apiGroups:
   - generators.external-secrets.io
@@ -33,13 +32,11 @@

   - gcraccesstokens
   - githubaccesstokens
   - quayaccesstokens
   - passwords
   - vaultdynamicsecrets
   - webhooks
-  - grafanas
-  - generatorstates
   verbs:
   - get
   - watch
   - list
 
--- HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-edit

+++ HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-edit

@@ -14,13 +14,12 @@

   - external-secrets.io
   resources:
   - externalsecrets
   - secretstores
   - clustersecretstores
   - pushsecrets
-  - clusterpushsecrets
   verbs:
   - create
   - delete
   - deletecollection
   - patch
   - update
@@ -34,14 +33,12 @@

   - gcraccesstokens
   - githubaccesstokens
   - quayaccesstokens
   - passwords
   - vaultdynamicsecrets
   - webhooks
-  - grafanas
-  - generatorstates
   verbs:
   - create
   - delete
   - deletecollection
   - patch
   - update
--- HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-servicebindings

+++ HelmRelease: kube-system/external-secrets ClusterRole: kube-system/external-secrets-servicebindings

@@ -10,12 +10,11 @@

     app.kubernetes.io/managed-by: Helm
 rules:
 - apiGroups:
   - external-secrets.io
   resources:
   - externalsecrets
-  - pushsecrets
   verbs:
   - get
   - list
   - watch
 
--- HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets-cert-controller

+++ HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets-cert-controller

@@ -34,13 +34,13 @@

             - ALL
           readOnlyRootFilesystem: true
           runAsNonRoot: true
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: oci.external-secrets.io/external-secrets/external-secrets:v0.15.0
+        image: oci.external-secrets.io/external-secrets/external-secrets:v0.13.0
         imagePullPolicy: IfNotPresent
         args:
         - certcontroller
         - --crd-requeue-interval=5m
         - --service-name=external-secrets-webhook
         - --service-namespace=kube-system
--- HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets

+++ HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets

@@ -34,13 +34,13 @@

             - ALL
           readOnlyRootFilesystem: true
           runAsNonRoot: true
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: oci.external-secrets.io/external-secrets/external-secrets:v0.15.0
+        image: oci.external-secrets.io/external-secrets/external-secrets:v0.13.0
         imagePullPolicy: IfNotPresent
         args:
         - --enable-leader-election=true
         - --concurrent=1
         - --metrics-addr=:8080
         - --loglevel=info
--- HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets-webhook

+++ HelmRelease: kube-system/external-secrets Deployment: kube-system/external-secrets-webhook

@@ -34,13 +34,13 @@

             - ALL
           readOnlyRootFilesystem: true
           runAsNonRoot: true
           runAsUser: 1000
           seccompProfile:
             type: RuntimeDefault
-        image: oci.external-secrets.io/external-secrets/external-secrets:v0.15.0
+        image: oci.external-secrets.io/external-secrets/external-secrets:v0.13.0
         imagePullPolicy: IfNotPresent
         args:
         - webhook
         - --port=10250
         - --dns-name=external-secrets-webhook.kube-system.svc
         - --cert-dir=/tmp/certs
--- HelmRelease: observability/kromgo Deployment: observability/kromgo

+++ HelmRelease: observability/kromgo Deployment: observability/kromgo

@@ -58,13 +58,13 @@

         - name: PROMETHEUS_URL
           value: http://prometheus-operated.observability.svc.cluster.local:9090
         - name: SERVER_HOST
           value: 0.0.0.0
         - name: SERVER_PORT
           value: '8080'
-        image: ghcr.io/kashalls/kromgo:v0.5.1@sha256:1f86c6151c676fa6d368230f1b228d67ed030fd4409ae0a53763c60d522ea425
+        image: ghcr.io/kashalls/kromgo:v0.4.4@sha256:4f6770a49ffa2d1a96517761d677ababe5fa966a5da398530cc35ee4714c315b
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /-/ready
             port: 8888
           initialDelaySeconds: 0
--- HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager-cainjector

+++ HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager-cainjector

@@ -31,13 +31,13 @@

       securityContext:
         runAsNonRoot: true
         seccompProfile:
           type: RuntimeDefault
       containers:
       - name: cert-manager-cainjector
-        image: quay.io/jetstack/cert-manager-cainjector:v1.17.1
+        image: quay.io/jetstack/cert-manager-cainjector:v1.16.3
         imagePullPolicy: IfNotPresent
         args:
         - --v=2
         - --leader-election-namespace=kube-system
         ports:
         - containerPort: 9402
--- HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager

+++ HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager

@@ -31,19 +31,19 @@

       securityContext:
         runAsNonRoot: true
         seccompProfile:
           type: RuntimeDefault
       containers:
       - name: cert-manager-controller
-        image: quay.io/jetstack/cert-manager-controller:v1.17.1
+        image: quay.io/jetstack/cert-manager-controller:v1.16.3
         imagePullPolicy: IfNotPresent
         args:
         - --v=2
         - --cluster-resource-namespace=$(POD_NAMESPACE)
         - --leader-election-namespace=kube-system
-        - --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.17.1
+        - --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.16.3
         - --max-concurrent-challenges=60
         - --dns01-recursive-nameservers-only=true
         - --dns01-recursive-nameservers=https://1.1.1.1:443/dns-query,https://1.0.0.1:443/dns-query
         ports:
         - containerPort: 9402
           name: http-metrics
--- HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager-webhook

+++ HelmRelease: cert-manager/cert-manager Deployment: cert-manager/cert-manager-webhook

@@ -31,13 +31,13 @@

       securityContext:
         runAsNonRoot: true
         seccompProfile:
           type: RuntimeDefault
       containers:
       - name: cert-manager-webhook
-        image: quay.io/jetstack/cert-manager-webhook:v1.17.1
+        image: quay.io/jetstack/cert-manager-webhook:v1.16.3
         imagePullPolicy: IfNotPresent
         args:
         - --v=2
         - --secure-port=10250
         - --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE)
         - --dynamic-serving-ca-secret-name=cert-manager-webhook-ca
--- HelmRelease: cert-manager/cert-manager Job: cert-manager/cert-manager-startupapicheck

+++ HelmRelease: cert-manager/cert-manager Job: cert-manager/cert-manager-startupapicheck

@@ -31,13 +31,13 @@

       securityContext:
         runAsNonRoot: true
         seccompProfile:
           type: RuntimeDefault
       containers:
       - name: cert-manager-startupapicheck
-        image: quay.io/jetstack/cert-manager-startupapicheck:v1.17.1
+        image: quay.io/jetstack/cert-manager-startupapicheck:v1.16.3
         imagePullPolicy: IfNotPresent
         args:
         - check
         - api
         - --wait=1m
         - -v
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard

@@ -15,261 +15,323 @@

   cilium-dashboard.json: |
     {
       "annotations": {
         "list": [
           {
             "builtIn": 1,
-            "datasource": "-- Grafana --",
+            "datasource": {
+              "type": "datasource",
+              "uid": "grafana"
+            },
             "enable": true,
             "hide": true,
             "iconColor": "rgba(0, 211, 255, 1)",
             "name": "Annotations & Alerts",
             "type": "dashboard"
           }
         ]
       },
       "description": "Dashboard for Cilium (https://cilium.io/) metrics",
       "editable": true,
-      "gnetId": null,
+      "fiscalYearStartMonth": 0,
       "graphTooltip": 1,
-      "iteration": 1606309591568,
+      "id": 1,
       "links": [],
       "panels": [
         {
-          "aliasColors": {
-            "error": "#890f02",
-            "warning": "#c15c17"
-          },
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
           "datasource": {
             "type": "prometheus",
             "uid": "${DS_PROMETHEUS}"
           },
           "fieldConfig": {
             "defaults": {
-              "custom": {}
-            },
-            "overrides": []
-          },
-          "fill": 1,
-          "fillGradient": 0,
+              "color": {
+                "mode": "palette-classic"
+              },
+              "custom": {
+                "axisBorderShow": false,
+                "axisCenteredZero": false,
+                "axisColorMode": "text",
+                "axisLabel": "",
+                "axisPlacement": "auto",
+                "barAlignment": 0,
+                "drawStyle": "line",
+                "fillOpacity": 10,
+                "gradientMode": "none",
+                "hideFrom": {
+                  "legend": false,
+                  "tooltip": false,
+                  "viz": false
+                },
+                "insertNulls": false,
+                "lineInterpolation": "linear",
+                "lineWidth": 1,
+                "pointSize": 5,
+                "scaleDistribution": {
+                  "type": "linear"
+                },
+                "showPoints": "never",
+                "spanNulls": false,
+                "stacking": {
+                  "group": "A",
+                  "mode": "none"
+                },
+                "thresholdsStyle": {
+                  "mode": "off"
+                }
+              },
+              "links": [],
+              "mappings": [],
+              "thresholds": {
+                "mode": "absolute",
+                "steps": [
+                  {
+                    "color": "green",
+                    "value": null
+                  },
+                  {
+                    "color": "red",
+                    "value": 80
+                  }
+                ]
+              },
+              "unit": "opm"
+            },
+            "overrides": [
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "error"
+                },
+                "properties": [
+                  {
+                    "id": "color",
+                    "value": {
+                      "fixedColor": "#890f02",
+                      "mode": "fixed"
+                    }
+                  }
+                ]
+              },
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "warning"
+                },
+                "properties": [
+                  {
+                    "id": "color",
+                    "value": {
+                      "fixedColor": "#c15c17",
+                      "mode": "fixed"
+                    }
+                  }
+                ]
+              }
+            ]
+          },
           "gridPos": {
             "h": 5,
             "w": 12,
             "x": 0,
             "y": 0
           },
-          "hiddenSeries": false,
           "id": 76,
-          "legend": {
-            "avg": false,
-            "current": false,
-            "max": false,
-            "min": false,
-            "show": true,
-            "total": false,
-            "values": false
-          },
-          "lines": true,
-          "linewidth": 1,
-          "links": [],
-          "nullPointMode": "null",
           "options": {
-            "dataLinks": []
-          },
-          "paceLength": 10,
-          "percentage": false,
-          "pointradius": 5,
-          "points": false,
-          "renderer": "flot",
-          "seriesOverrides": [
-            {
-              "alias": "error",
-              "yaxis": 2
-            }
-          ],
-          "spaceLength": 10,
-          "stack": false,
-          "steppedLine": false,
+            "legend": {
+              "calcs": [],
+              "displayMode": "list",
+              "placement": "bottom",
+              "showLegend": true
+            },
+            "tooltip": {
+              "mode": "multi",
+              "sort": "none"
+            }
+          },
+          "pluginVersion": "10.4.3",
           "targets": [
             {
+              "datasource": {
+                "type": "prometheus",
+                "uid": "${DS_PROMETHEUS}"
+              },
+              "editorMode": "code",
               "expr": "sum(rate(cilium_errors_warnings_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m])) by (pod, level) * 60",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "{{level}}",
+              "range": true,
               "refId": "A"
             }
           ],
-          "thresholds": [],
-          "timeFrom": null,
-          "timeRegions": [],
-          "timeShift": null,
           "title": "Errors & Warnings",
-          "tooltip": {
-            "shared": true,
-            "sort": 0,
-            "value_type": "individual"
-          },
-          "type": "graph",
-          "xaxis": {
-            "buckets": null,
-            "mode": "time",
-            "name": null,
-            "show": true,
-            "values": []
-          },
-          "yaxes": [
-            {
-              "format": "opm",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            },
-            {
-              "format": "opm",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            }
-          ],
-          "yaxis": {
-            "align": false,
-            "alignLevel": null
-          }
+          "type": "timeseries"
         },
         {
-          "aliasColors": {
-            "avg": "#cffaff"
-          },
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
           "datasource": {
             "type": "prometheus",
             "uid": "${DS_PROMETHEUS}"
           },
           "fieldConfig": {
             "defaults": {
-              "custom": {}
-            },
-            "overrides": []
-          },
-          "fill": 0,
-          "fillGradient": 0,
+              "color": {
+                "mode": "palette-classic"
+              },
+              "custom": {
+                "axisBorderShow": false,
+                "axisCenteredZero": false,
+                "axisColorMode": "text",
+                "axisLabel": "",
+                "axisPlacement": "auto",
+                "barAlignment": 0,
+                "drawStyle": "line",
+                "fillOpacity": 35,
+                "gradientMode": "none",
+                "hideFrom": {
+                  "legend": false,
+                  "tooltip": false,
+                  "viz": false
+                },
+                "insertNulls": false,
+                "lineInterpolation": "linear",
+                "lineWidth": 1,
+                "pointSize": 5,
+                "scaleDistribution": {
+                  "type": "linear"
+                },
+                "showPoints": "never",
+                "spanNulls": false,
+                "stacking": {
+                  "group": "A",
+                  "mode": "none"
+                },
+                "thresholdsStyle": {
+                  "mode": "off"
+                }
+              },
+              "links": [],
+              "mappings": [],
+              "thresholds": {
+                "mode": "absolute",
+                "steps": [
+                  {
+                    "color": "green",
+                    "value": null
+                  },
+                  {
+                    "color": "red",
+                    "value": 80
+                  }
+                ]
+              },
+              "unit": "percent"
+            },
+            "overrides": [
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "avg"
+                },
+                "properties": [
+                  {
+                    "id": "color",
+                    "value": {
+                      "fixedColor": "#cffaff",
+                      "mode": "fixed"
+                    }
+                  }
+                ]
+              },
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "max"
+                },
+                "properties": [
+                  {
+                    "id": "custom.fillBelowTo",
+                    "value": "min"
+                  },
+                  {
+                    "id": "custom.lineWidth",
+                    "value": 0
+                  }
+                ]
+              },
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "min"
+                },
+                "properties": [
+                  {
+                    "id": "custom.lineWidth",
+                    "value": 0
+                  }
+                ]
+              }
+            ]
+          },
           "gridPos": {
[Diff truncated by flux-local]
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

@@ -16,42 +16,52 @@

   policy-cidr-match-mode: ''
   prometheus-serve-addr: :9962
   controller-group-metrics: write-cni-file sync-host-ips sync-lb-maps-with-k8s-services
   proxy-prometheus-port: '9964'
   operator-prometheus-serve-addr: :9963
   enable-metrics: 'true'
+  enable-policy-secrets-sync: 'true'
+  policy-secrets-only-from-secrets-namespace: 'true'
+  policy-secrets-namespace: cilium-secrets
   enable-ipv4: 'true'
   enable-ipv6: 'false'
   custom-cni-conf: 'false'
   enable-bpf-clock-probe: 'false'
   monitor-aggregation: medium
   monitor-aggregation-interval: 5s
   monitor-aggregation-flags: all
   bpf-map-dynamic-size-ratio: '0.0025'
   bpf-policy-map-max: '16384'
   bpf-lb-map-max: '65536'
   bpf-lb-external-clusterip: 'false'
+  bpf-lb-source-range-all-types: 'false'
+  bpf-lb-algorithm-annotation: 'false'
+  bpf-lb-mode-annotation: 'false'
+  bpf-distributed-lru: 'false'
   bpf-events-drop-enabled: 'true'
   bpf-events-policy-verdict-enabled: 'true'
   bpf-events-trace-enabled: 'true'
   preallocate-bpf-maps: 'false'
   cluster-name: home-kubernetes
   cluster-id: '1'
   routing-mode: native
+  tunnel-protocol: vxlan
+  tunnel-source-port-range: 0-0
   service-no-backend-response: reject
   enable-l7-proxy: 'true'
   enable-ipv4-masquerade: 'true'
   enable-ipv4-big-tcp: 'false'
   enable-ipv6-big-tcp: 'false'
   enable-ipv6-masquerade: 'true'
   enable-tcx: 'true'
   datapath-mode: veth
   enable-bpf-masquerade: 'false'
   enable-masquerade-to-route-source: 'false'
   enable-xt-socket-fallback: 'true'
   install-no-conntrack-iptables-rules: 'false'
+  iptables-random-fully: 'false'
   auto-direct-node-routes: 'true'
   direct-routing-skip-unreachable: 'false'
   enable-local-redirect-policy: 'true'
   ipv4-native-routing-cidr: 10.69.0.0/16
   enable-runtime-device-detection: 'true'
   kube-proxy-replacement: 'true'
@@ -63,24 +73,27 @@

   enable-health-check-loadbalancer-ip: 'false'
   node-port-bind-protection: 'true'
   enable-auto-protect-node-port-range: 'true'
   bpf-lb-mode: dsr
   bpf-lb-algorithm: maglev
   bpf-lb-acceleration: disabled
+  enable-experimental-lb: 'false'
   enable-svc-source-range-check: 'true'
   enable-l2-neigh-discovery: 'true'
   arping-refresh-period: 30s
   k8s-require-ipv4-pod-cidr: 'false'
   k8s-require-ipv6-pod-cidr: 'false'
   enable-endpoint-routes: 'true'
   enable-k8s-networkpolicy: 'true'
+  enable-endpoint-lockdown-on-policy-overflow: 'false'
   write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
   cni-exclusive: 'false'
   cni-log-file: /var/run/cilium/cilium-cni.log
   enable-endpoint-health-checking: 'true'
   enable-health-checking: 'true'
+  health-check-icmp-failure-threshold: '3'
   enable-well-known-identities: 'false'
   enable-node-selector-labels: 'false'
   synchronize-k8s-nodes: 'true'
   operator-api-serve-addr: 127.0.0.1:9234
   enable-hubble: 'true'
   hubble-socket-path: /var/run/cilium/hubble.sock
@@ -94,35 +107,34 @@

   hubble-disable-tls: 'false'
   hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
   hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
   hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
   ipam: kubernetes
   ipam-cilium-node-update-rate: 15s
+  default-lb-service-ipam: lbipam
   egress-gateway-reconciliation-trigger-interval: 1s
   enable-vtep: 'false'
   vtep-endpoint: ''
   vtep-cidr: ''
   vtep-mask: ''
   vtep-mac: ''
   enable-l2-announcements: 'true'
   procfs: /host/proc
   bpf-root: /sys/fs/bpf
   cgroup-root: /sys/fs/cgroup
   enable-k8s-terminating-endpoint: 'true'
   enable-sctp: 'false'
-  k8s-client-qps: '10'
-  k8s-client-burst: '20'
   remove-cilium-node-taints: 'true'
   set-cilium-node-taints: 'true'
   set-cilium-is-up-condition: 'true'
   unmanaged-pod-watcher-interval: '15'
   dnsproxy-enable-transparent-mode: 'true'
   dnsproxy-socket-linger-timeout: '10'
   tofqdns-dns-reject-response-code: refused
   tofqdns-enable-dns-compression: 'true'
-  tofqdns-endpoint-max-ip-per-hostname: '50'
+  tofqdns-endpoint-max-ip-per-hostname: '1000'
   tofqdns-idle-connection-grace-period: 0s
   tofqdns-max-deferred-connection-deletes: '10000'
   tofqdns-proxy-response-max-delay: 100ms
   agent-not-ready-taint-key: node.cilium.io/agent-not-ready
   mesh-auth-enabled: 'true'
   mesh-auth-queue-size: '1024'
@@ -132,15 +144,22 @@

   proxy-xff-num-trusted-hops-egress: '0'
   proxy-connect-timeout: '2'
   proxy-initial-fetch-timeout: '30'
   proxy-max-requests-per-connection: '0'
   proxy-max-connection-duration-seconds: '0'
   proxy-idle-timeout-seconds: '60'
+  proxy-max-concurrent-retries: '128'
+  http-retry-count: '3'
   external-envoy-proxy: 'false'
   envoy-base-id: '0'
+  envoy-access-log-buffer-size: '4096'
   envoy-keep-cap-netbindservice: 'false'
   max-connected-clusters: '255'
   clustermesh-enable-endpoint-sync: 'false'
   clustermesh-enable-mcs-api: 'false'
   nat-map-stats-entries: '32'
   nat-map-stats-interval: 30s
+  enable-internal-traffic-policy: 'true'
+  enable-lb-ipam: 'true'
+  enable-non-default-deny-policies: 'true'
+  enable-source-ip-verification: 'true'
 
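The `cilium-config` ConfigMap diff above reduces to three kinds of key-level changes: keys added by the 1.17 chart (e.g. `tunnel-protocol`), keys removed (e.g. `k8s-client-qps`), and keys whose value changed (e.g. `tofqdns-endpoint-max-ip-per-hostname` going from `50` to `1000`). As a minimal sketch of that classification — a hypothetical helper, not part of flux-local or the chart — using values taken from the diff:

```python
def diff_config(old: dict, new: dict) -> dict:
    """Classify key-level changes between two ConfigMap `data` mappings."""
    added = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: (old[k], new[k])
               for k in old.keys() & new.keys() if old[k] != new[k]}
    return {"added": added, "removed": removed, "changed": changed}

# Sample keys/values copied from the cilium-config diff above.
old = {"tofqdns-endpoint-max-ip-per-hostname": "50", "k8s-client-qps": "10"}
new = {"tofqdns-endpoint-max-ip-per-hostname": "1000", "tunnel-protocol": "vxlan"}
print(diff_config(old, new))
```

This mirrors how flux-local presents the change set: only keys that differ between the rendered 1.16.6 and 1.17.x manifests appear in the diff, so reviewing the three buckets above is equivalent to reviewing the ConfigMap hunk.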
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard

@@ -1013,13 +1013,19 @@

       ],
       "refresh": false,
       "schemaVersion": 25,
       "style": "dark",
       "tags": [],
       "templating": {
-        "list": []
+        "list": [
+          {
+            "type": "datasource",
+            "name": "DS_PROMETHEUS",
+            "query": "prometheus"
+          }
+        ]
       },
       "time": {
         "from": "now-30m",
         "to": "now"
       },
       "timepicker": {
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config

@@ -3,12 +3,11 @@

 kind: ConfigMap
 metadata:
   name: hubble-relay-config
   namespace: kube-system
 data:
   config.yaml: "cluster-name: home-kubernetes\npeer-service: \"hubble-peer.kube-system.svc.cluster.local.:443\"\
-    \nlisten-address: :4245\ngops: true\ngops-port: \"9893\"\ndial-timeout: \nretry-timeout:\
-    \ \nsort-buffer-len-max: \nsort-buffer-drain-timeout: \ntls-hubble-client-cert-file:\
-    \ /var/lib/hubble-relay/tls/client.crt\ntls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key\n\
-    tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt\n\n\
-    disable-server-tls: true\n"
+    \nlisten-address: :4245\ngops: true\ngops-port: \"9893\"\nretry-timeout: \nsort-buffer-len-max:\
+    \ \nsort-buffer-drain-timeout: \ntls-hubble-client-cert-file: /var/lib/hubble-relay/tls/client.crt\n\
+    tls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key\ntls-hubble-server-ca-files:\
+    \ /var/lib/hubble-relay/tls/hubble-server-ca.crt\n\ndisable-server-tls: true\n"
 
--- HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

+++ HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

@@ -53,12 +53,13 @@

   - update
   - patch
 - apiGroups:
   - ''
   resources:
   - namespaces
+  - secrets
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - ''
@@ -135,12 +136,19 @@

   - update
   - get
   - list
   - watch
   - delete
   - patch
+- apiGroups:
+  - cilium.io
+  resources:
+  - ciliumbgpclusterconfigs/status
+  - ciliumbgppeerconfigs/status
+  verbs:
+  - update
 - apiGroups:
   - apiextensions.k8s.io
   resources:
   - customresourcedefinitions
   verbs:
   - create
@@ -181,12 +189,13 @@

   resources:
   - ciliumloadbalancerippools
   - ciliumpodippools
   - ciliumbgppeeringpolicies
   - ciliumbgpclusterconfigs
   - ciliumbgpnodeconfigoverrides
+  - ciliumbgppeerconfigs
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - cilium.io
--- HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

+++ HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

@@ -16,24 +16,24 @@

     rollingUpdate:
       maxUnavailable: 2
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/cilium-configmap-checksum: de8cf26ceabe378b2f47632fd3fd210ee1e5b4ab5d6f3f888abe408c8a29cf7f
+        cilium.io/cilium-configmap-checksum: c2b3f8939ff075b3054d84e6bcb1e96deee779252f45c7ac22279a50f2809b60
       labels:
         k8s-app: cilium
         app.kubernetes.io/name: cilium-agent
         app.kubernetes.io/part-of: cilium
     spec:
       securityContext:
         appArmorProfile:
           type: Unconfined
       containers:
       - name: cilium-agent
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         command:
         - cilium-agent
         args:
         - --config-dir=/tmp/cilium/config-map
         startupProbe:
@@ -197,13 +197,13 @@

           mountPath: /var/lib/cilium/tls/hubble
           readOnly: true
         - name: tmp
           mountPath: /tmp
       initContainers:
       - name: config
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         command:
         - cilium-dbg
         - build-config
         env:
         - name: K8S_NODE_NAME
@@ -222,13 +222,13 @@

           value: '7445'
         volumeMounts:
         - name: tmp
           mountPath: /tmp
         terminationMessagePolicy: FallbackToLogsOnError
       - name: mount-cgroup
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         env:
         - name: CGROUP_ROOT
           value: /sys/fs/cgroup
         - name: BIN_PATH
           value: /opt/cni/bin
@@ -254,13 +254,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: apply-sysctl-overwrites
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         env:
         - name: BIN_PATH
           value: /opt/cni/bin
         command:
         - sh
@@ -284,13 +284,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: mount-bpf-fs
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         args:
         - mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
         command:
         - /bin/bash
         - -c
@@ -300,13 +300,13 @@

           privileged: true
         volumeMounts:
         - name: bpf-maps
           mountPath: /sys/fs/bpf
           mountPropagation: Bidirectional
       - name: clean-cilium-state
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         command:
         - /init-container.sh
         env:
         - name: CILIUM_ALL_STATE
           valueFrom:
@@ -348,13 +348,13 @@

         - name: cilium-cgroup
           mountPath: /sys/fs/cgroup
           mountPropagation: HostToContainer
         - name: cilium-run
           mountPath: /var/run/cilium
       - name: install-cni-binaries
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1
         imagePullPolicy: IfNotPresent
         command:
         - /install-plugin.sh
         resources:
           requests:
             cpu: 100m
--- HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

+++ HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

@@ -20,22 +20,22 @@

       maxSurge: 25%
       maxUnavailable: 100%
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/cilium-configmap-checksum: de8cf26ceabe378b2f47632fd3fd210ee1e5b4ab5d6f3f888abe408c8a29cf7f
+        cilium.io/cilium-configmap-checksum: c2b3f8939ff075b3054d84e6bcb1e96deee779252f45c7ac22279a50f2809b60
       labels:
         io.cilium/app: operator
         name: cilium-operator
         app.kubernetes.io/part-of: cilium
         app.kubernetes.io/name: cilium-operator
     spec:
       containers:
       - name: cilium-operator
-        image: quay.io/cilium/operator-generic:v1.16.6@sha256:13d32071d5a52c069fb7c35959a56009c6914439adc73e99e098917646d154fc
+        image: quay.io/cilium/operator-generic:v1.17.2@sha256:81f2d7198366e8dec2903a3a8361e4c68d47d19c68a0d42f0b7b6e3f0523f249
         imagePullPolicy: IfNotPresent
         command:
         - cilium-operator-generic
         args:
         - --config-dir=/tmp/cilium/config-map
         - --debug=$(CILIUM_DEBUG)
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay

+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay

@@ -17,13 +17,13 @@

     rollingUpdate:
       maxUnavailable: 1
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/hubble-relay-configmap-checksum: 7013f296857a469857f02e7d0b7e0933fcdf29925c02e28162c33b4a8a00baca
+        cilium.io/hubble-relay-configmap-checksum: eff0e5f47a53fa4b010591dc8fd68bffd75ccd6298d9d502cc7125e0b3fede93
       labels:
         k8s-app: hubble-relay
         app.kubernetes.io/name: hubble-relay
         app.kubernetes.io/part-of: cilium
     spec:
       securityContext:
@@ -34,13 +34,13 @@

           capabilities:
             drop:
             - ALL
           runAsGroup: 65532
           runAsNonRoot: true
           runAsUser: 65532
-        image: quay.io/cilium/hubble-relay:v1.16.6@sha256:ca8dcaa5a81a37743b1397ba2221d16d5d63e4a47607584f1bf50a3b0882bf3b
+        image: quay.io/cilium/hubble-relay:v1.17.2@sha256:42a8db5c256c516cacb5b8937c321b2373ad7a6b0a1e5a5120d5028433d586cc
         imagePullPolicy: IfNotPresent
         command:
         - hubble-relay
         args:
         - serve
         ports:
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui

+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui

@@ -32,13 +32,13 @@

         runAsUser: 1001
       priorityClassName: null
       serviceAccountName: hubble-ui
       automountServiceAccountToken: true
       containers:
       - name: frontend
-        image: quay.io/cilium/hubble-ui:v0.13.1@sha256:e2e9313eb7caf64b0061d9da0efbdad59c6c461f6ca1752768942bfeda0796c6
+        image: quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392
         imagePullPolicy: IfNotPresent
         ports:
         - name: http
           containerPort: 8081
         livenessProbe:
           httpGet:
@@ -53,13 +53,13 @@

           mountPath: /etc/nginx/conf.d/default.conf
           subPath: nginx.conf
         - name: tmp-dir
           mountPath: /tmp
         terminationMessagePolicy: FallbackToLogsOnError
       - name: backend
-        image: quay.io/cilium/hubble-ui-backend:v0.13.1@sha256:0e0eed917653441fded4e7cdb096b7be6a3bddded5a2dd10812a27b1fc6ed95b
+        image: quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15
         imagePullPolicy: IfNotPresent
         env:
         - name: EVENTS_SERVER_PORT
           value: '8090'
         - name: FLOWS_API_ADDR
           value: hubble-relay:80
--- HelmRelease: kube-system/cilium ServiceMonitor: kube-system/cilium-agent

+++ HelmRelease: kube-system/cilium ServiceMonitor: kube-system/cilium-agent

@@ -6,13 +6,13 @@

   namespace: kube-system
   labels:
     app.kubernetes.io/part-of: cilium
 spec:
   selector:
     matchLabels:
-      k8s-app: cilium
+      app.kubernetes.io/name: cilium-agent
   namespaceSelector:
     matchNames:
     - kube-system
   endpoints:
   - port: metrics
     interval: 10s
--- HelmRelease: kube-system/cilium Namespace: kube-system/cilium-secrets

+++ HelmRelease: kube-system/cilium Namespace: kube-system/cilium-secrets

@@ -0,0 +1,8 @@

+---
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+
--- HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-tlsinterception-secrets

@@ -0,0 +1,18 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: cilium-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+rules:
+- apiGroups:
+  - ''
+  resources:
+  - secrets
+  verbs:
+  - get
+  - list
+  - watch
+
--- HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-operator-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-operator-tlsinterception-secrets

@@ -0,0 +1,19 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: cilium-operator-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+rules:
+- apiGroups:
+  - ''
+  resources:
+  - secrets
+  verbs:
+  - create
+  - delete
+  - update
+  - patch
+
--- HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-tlsinterception-secrets

@@ -0,0 +1,17 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: cilium-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: cilium-tlsinterception-secrets
+subjects:
+- kind: ServiceAccount
+  name: cilium
+  namespace: kube-system
+
--- HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-operator-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-operator-tlsinterception-secrets

@@ -0,0 +1,17 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: cilium-operator-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: cilium-operator-tlsinterception-secrets
+subjects:
+- kind: ServiceAccount
+  name: cilium-operator
+  namespace: kube-system
+
--- HelmRelease: observability/kube-prometheus-stack Service: observability/kube-state-metrics

+++ HelmRelease: observability/kube-prometheus-stack Service: observability/kube-state-metrics

@@ -8,12 +8,14 @@

     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
     release: kube-prometheus-stack
+  annotations:
+    prometheus.io/scrape: 'true'
 spec:
   type: ClusterIP
   ports:
   - name: http
     protocol: TCP
     port: 8080
--- HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-kubernetes-system-kubelet

+++ HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-kubernetes-system-kubelet

@@ -18,16 +18,14 @@

     - alert: KubeNodeNotReady
       annotations:
         description: '{{ $labels.node }} has been unready for more than 15 minutes
           on cluster {{ $labels.cluster }}.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubenodenotready
         summary: Node is not ready.
-      expr: |-
-        kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"} == 0
-        and on (cluster, node)
-        kube_node_spec_unschedulable{job="kube-state-metrics"} == 0
+      expr: kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"}
+        == 0
       for: 15m
       labels:
         severity: warning
     - alert: KubeNodeUnreachable
       annotations:
         description: '{{ $labels.node }} is unreachable and some workloads may be
@@ -64,16 +62,14 @@

     - alert: KubeNodeReadinessFlapping
       annotations:
         description: The readiness status of node {{ $labels.node }} has changed {{
           $value }} times in the last 15 minutes on cluster {{ $labels.cluster }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubenodereadinessflapping
         summary: Node readiness status is flapping.
-      expr: |-
-        sum(changes(kube_node_status_condition{job="kube-state-metrics",status="true",condition="Ready"}[15m])) by (cluster, node) > 2
-        and on (cluster, node)
-        kube_node_spec_unschedulable{job="kube-state-metrics"} == 0
+      expr: sum(changes(kube_node_status_condition{job="kube-state-metrics",status="true",condition="Ready"}[15m]))
+        by (cluster, node) > 2
       for: 15m
       labels:
         severity: warning
     - alert: KubeletPlegDurationHigh
       annotations:
         description: The Kubelet Pod Lifecycle Event Generator has a 99th percentile
--- HelmRelease: kyverno/kyverno ConfigMap: kyverno/kyverno

+++ HelmRelease: kyverno/kyverno ConfigMap: kyverno/kyverno

@@ -60,9 +60,11 @@

     [Service,kyverno,kyverno-cleanup-controller-metrics] [Service/*,kyverno,kyverno-cleanup-controller-metrics]
     [Service,kyverno,kyverno-reports-controller-metrics] [Service/*,kyverno,kyverno-reports-controller-metrics]
     [ServiceMonitor,kyverno,kyverno-admission-controller] [ServiceMonitor,kyverno,kyverno-background-controller]
     [ServiceMonitor,kyverno,kyverno-cleanup-controller] [ServiceMonitor,kyverno,kyverno-reports-controller]
     [Secret,kyverno,kyverno-svc.kyverno.svc.*] [Secret,kyverno,kyverno-cleanup-controller.kyverno.svc.*]'
   updateRequestThreshold: '1000'
-  webhooks: '{"namespaceSelector":{"matchExpressions":[{"key":"kubernetes.io/metadata.name","operator":"NotIn","values":["kube-system"]},{"key":"kubernetes.io/metadata.name","operator":"NotIn","values":["kyverno"]}],"matchLabels":null}}'
+  webhooks: |2-
+
+      {"namespaceSelector":{"matchExpressions":[{"key":"kubernetes.io/metadata.name","operator":"NotIn","values":["kube-system"]},{"key":"kubernetes.io/metadata.name","operator":"NotIn","values":["kyverno"]}],"matchLabels":null}}
   webhookAnnotations: '{"admissions.enforcer/disabled":"true"}'
 
--- HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-admission-controller

+++ HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-admission-controller

@@ -43,13 +43,13 @@

                   - admission-controller
               topologyKey: kubernetes.io/hostname
             weight: 1
       serviceAccountName: kyverno-admission-controller
       initContainers:
       - name: kyverno-pre
-        image: ghcr.io/kyverno/kyvernopre:v1.13.4
+        image: ghcr.io/kyverno/kyvernopre:v1.13.2
         imagePullPolicy: IfNotPresent
         args:
         - --loggingFormat=text
         - --v=2
         resources:
           limits:
@@ -88,13 +88,13 @@

         - name: KYVERNO_DEPLOYMENT
           value: kyverno-admission-controller
         - name: KYVERNO_SVC
           value: kyverno-svc
       containers:
       - name: kyverno
-        image: ghcr.io/kyverno/kyverno:v1.13.4
+        image: ghcr.io/kyverno/kyverno:v1.13.2
         imagePullPolicy: IfNotPresent
         args:
         - --caSecretName=kyverno-svc.kyverno.svc.kyverno-tls-ca
         - --tlsSecretName=kyverno-svc.kyverno.svc.kyverno-tls-pair
         - --backgroundServiceAccountName=system:serviceaccount:kyverno:kyverno-background-controller
         - --reportsServiceAccountName=system:serviceaccount:kyverno:kyverno-reports-controller
--- HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-background-controller

+++ HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-background-controller

@@ -43,13 +43,13 @@

                   - background-controller
               topologyKey: kubernetes.io/hostname
             weight: 1
       serviceAccountName: kyverno-background-controller
       containers:
       - name: controller
-        image: ghcr.io/kyverno/background-controller:v1.13.4
+        image: ghcr.io/kyverno/background-controller:v1.13.2
         imagePullPolicy: IfNotPresent
         ports:
         - containerPort: 9443
           name: https
           protocol: TCP
         - containerPort: 8000
--- HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-cleanup-controller

+++ HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-cleanup-controller

@@ -43,13 +43,13 @@

                   - cleanup-controller
               topologyKey: kubernetes.io/hostname
             weight: 1
       serviceAccountName: kyverno-cleanup-controller
       containers:
       - name: controller
-        image: ghcr.io/kyverno/cleanup-controller:v1.13.4
+        image: ghcr.io/kyverno/cleanup-controller:v1.13.2
         imagePullPolicy: IfNotPresent
         ports:
         - containerPort: 9443
           name: https
           protocol: TCP
         - containerPort: 8000
--- HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-reports-controller

+++ HelmRelease: kyverno/kyverno Deployment: kyverno/kyverno-reports-controller

@@ -43,13 +43,13 @@

                   - reports-controller
               topologyKey: kubernetes.io/hostname
             weight: 1
       serviceAccountName: kyverno-reports-controller
       containers:
       - name: controller
-        image: ghcr.io/kyverno/reports-controller:v1.13.4
+        image: ghcr.io/kyverno/reports-controller:v1.13.2
         imagePullPolicy: IfNotPresent
         ports:
         - containerPort: 9443
           name: https
           protocol: TCP
         - containerPort: 8000
--- HelmRelease: kyverno/kyverno Job: kyverno/kyverno-migrate-resources

+++ HelmRelease: kyverno/kyverno Job: kyverno/kyverno-migrate-resources

@@ -19,13 +19,13 @@

     metadata: null
     spec:
       serviceAccount: kyverno-migrate-resources
       restartPolicy: Never
       containers:
       - name: kubectl
-        image: ghcr.io/kyverno/kyverno-cli:v1.13.4
+        image: ghcr.io/kyverno/kyverno-cli:v1.13.2
         imagePullPolicy: IfNotPresent
         args:
         - migrate
         - --resource
         - cleanuppolicies.kyverno.io
         - --resource

bot-akira[bot] avatar Feb 04 '25 16:02 bot-akira[bot]

🦙 MegaLinter status: ✅ SUCCESS


See detailed report in MegaLinter reports. Set `VALIDATE_ALL_CODEBASE: true` in mega-linter.yml to validate all sources, not only the diff.

MegaLinter is graciously provided by OX Security

axeII avatar Feb 04 '25 16:02 axeII

Hold off for now. I will upgrade to this version once I finish the PR migrating to Cilium ingress.
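
For context, the Cilium ingress migration mentioned above usually comes down to enabling the chart's built-in ingress controller in the Helm values. A minimal sketch, assuming LB-IPAM hands out the load-balancer IP (the annotation value and IP below are illustrative assumptions, not taken from this repo's actual values):

```yaml
# Illustrative Cilium Helm values enabling the embedded ingress controller.
# The IP and annotation value are placeholders, not this cluster's config.
ingressController:
  enabled: true
  default: true             # register "cilium" as the default IngressClass
  loadbalancerMode: shared  # one LoadBalancer Service shared by all Ingresses
  service:
    annotations:
      io.cilium/lb-ipam-ips: 192.168.69.110  # example LB-IPAM pool address
```

With `loadbalancerMode: shared`, every Ingress is served through a single `cilium-ingress` Service, which keeps the number of LB IPs down in a home-lab setup; `dedicated` mode would instead allocate one Service per Ingress.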

axeII avatar Mar 31 '25 08:03 axeII

--- kubernetes/apps/security/authentik/app Kustomization: flux-system/authentik HelmRelease: security/authentik

+++ kubernetes/apps/security/authentik/app Kustomization: flux-system/authentik HelmRelease: security/authentik

@@ -14,13 +14,13 @@

       chart: authentik
       interval: 5m
       sourceRef:
         kind: HelmRepository
         name: authentik
         namespace: flux-system
-      version: 2025.4.1
+      version: 2025.4.0
   install:
     remediation:
       retries: 3
   interval: 1h
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/kube-system/nvidia-device-plugin/app Kustomization: flux-system/cluster-apps-nvidia HelmRelease: kube-system/nvidia-device-plugin

+++ kubernetes/apps/kube-system/nvidia-device-plugin/app Kustomization: flux-system/cluster-apps-nvidia HelmRelease: kube-system/nvidia-device-plugin

@@ -18,11 +18,11 @@

         namespace: flux-system
       version: 0.17.1
   interval: 15m
   values:
     image:
       repository: nvcr.io/nvidia/k8s-device-plugin
-      tag: v0.17.2
+      tag: v0.17.1
     nodeSelector:
       feature.node.kubernetes.io/custom-nvidia-gpu: 'true'
     runtimeClassName: nvidia
 
--- kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium

+++ kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium

@@ -13,13 +13,13 @@

     spec:
       chart: cilium
       sourceRef:
         kind: HelmRepository
         name: cilium
         namespace: flux-system
-      version: 1.16.6
+      version: 1.17.4
   install:
     remediation:
       retries: 3
   interval: 1h
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/network/echo-server/app Kustomization: flux-system/echo-server HelmRelease: network/echo-server

+++ kubernetes/apps/network/echo-server/app Kustomization: flux-system/echo-server HelmRelease: network/echo-server

@@ -34,13 +34,13 @@

               HTTP_PORT: 8080
               LOG_IGNORE_PATH: /healthz
               LOG_WITHOUT_NEWLINE: true
               PROMETHEUS_ENABLED: true
             image:
               repository: ghcr.io/mendhak/http-https-echo
-              tag: 37
+              tag: 36
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/media/komga/app Kustomization: flux-system/komga HelmRelease: media/komga

+++ kubernetes/apps/media/komga/app Kustomization: flux-system/komga HelmRelease: media/komga

@@ -35,13 +35,13 @@

           app:
             env:
               SERVER_PORT: 8080
               TZ: Europe/Prague
             image:
               repository: gotson/komga
-              tag: 1.21.3@sha256:72dc9f81a0a528752e953028a7d3ca6a83f8eabe2a617e3c7e53cfa594c84256
+              tag: 1.21.2@sha256:ba587695d786f0e8f4de8598b8aa2785cc8c671098ef1cb624819c2bb812789c
             resources:
               limits:
                 memory: 2Gi
               requests:
                 cpu: 15m
                 memory: 1Gi
--- kubernetes/apps/default/glance/app Kustomization: flux-system/glance HelmRelease: default/glance

+++ kubernetes/apps/default/glance/app Kustomization: flux-system/glance HelmRelease: default/glance

@@ -29,13 +29,13 @@

         containers:
           app:
             env:
               TZ: Europe/Prague
             image:
               repository: glanceapp/glance
-              tag: v0.8.3@sha256:1fa252b1651c061cbe7a023326b314248bb820f81ee89a89970347b83684414c
+              tag: v0.8.2@sha256:fe80231892d3b7d7a2770a58d8058c190ac1c27eb806f5e2f09ee4cf101780eb
             resources:
               limits:
                 memory: 1Gi
               requests:
                 cpu: 10m
             securityContext:
@@ -50,13 +50,13 @@

             fsGroupChangePolicy: OnRootMismatch
             runAsGroup: 1000
             runAsNonRoot: true
             runAsUser: 1000
     ingress:
       app:
-        className: external
+        className: internal
         hosts:
         - host: glance.juno.moe
           paths:
           - path: /
             service:
               identifier: app
--- kubernetes/apps/default/glance/app Kustomization: flux-system/glance ConfigMap: default/glance-configmap

+++ kubernetes/apps/default/glance/app Kustomization: flux-system/glance ConfigMap: default/glance-configmap

@@ -1,28 +1,11 @@

 ---
 apiVersion: v1
 data:
   glance.yml: |
     ---
-    theme:
-      background-color: 100 20 10
-      primary-color: 40 90 40
-      contrast-multiplier: 1.1
-
-      presets:
-        default-dark:
-          background-color: 0 0 16
-          primary-color: 43 59 81
-          positive-color: 61 66 44
-          negative-color: 6 96 59
-
-        default-light:
-          light: true
-          background-color: 0 0 95
-          primary-color: 0 0 10
-          negative-color: 0 90 50
     pages:
       - name: Home
         # Optionally, if you only have a single page you can hide the desktop navigation for a cleaner look
         # hide-desktop-navigation: true
         columns:
           - size: small
@@ -54,44 +37,32 @@

 
           - size: full
             widgets:
               - type: group
                 widgets:
                   - type: hacker-news
-                    sort-by: best
                   - type: lobsters
-                    sort-by: hot
 
               - type: videos
                 channels:
                   - UCBJycsmduvYEL83R_U4JriQ # Marques Brownlee
                   - UCkWQ0gDrqOCarmUKmppD7GQ
                   - UCbpMy0Fg74eXXkvxJrtEn3w
                   - UCwpHKudUkP5tNgmMdexB3ow
                   - UCJ901NqoRaXMnIm7aOjLyuA
                   - UCUyeluBRhGPCW4rPe_UvBZQ
                   - UCHnyfMqiRRG1u-2MsSQLbXA # Veritasium
-                  - UCgBVkKoOAr3ajSdFFLp13_A # krazam
 
               - type: group
                 widgets:
                   - type: reddit
                     subreddit: technology
                     show-thumbnails: true
-                    sort-by: "top"
-                    comments-url-template: "https://red.artemislena.eu/{POST-PATH}"
                   - type: reddit
                     subreddit: selfhosted
                     show-thumbnails: true
-                    sort-by: "top"
-                    comments-url-template: "https://red.artemislena.eu/{POST-PATH}"
-                  - type: reddit
-                    subreddit: kubernetes
-                    show-thumbnails: true
-                    sort-by: "top"
-                    comments-url-template: "https://red.artemislena.eu/{POST-PATH}"
 
           - size: small
             widgets:
               - type: weather
                 location: Brno, Czechia
                 units: metric
@@ -119,14 +90,27 @@

               - type: releases
                 cache: 1d
                 repositories:
                   - glanceapp/glance
                   - nikitabobko/AeroSpace
                   - freelensapp/freelens
-                  - toeverything/AFFiNE
-                  - docmost/docmost
+
+              - type: monitor
+                cache: 15m
+                title: Services
+                sites:
+                  - title: Rook
+                    url: https://rook...PLACEHOLDER_SECRET_DOMAIN..
+                  - title: Grafana
+                    url: https://grafana...PLACEHOLDER_SECRET_DOMAIN..
+                  - title: Karma
+                    url: https://karma...PLACEHOLDER_SECRET_DOMAIN..
+                  - title: Minio
+                    url: https://minio...PLACEHOLDER_SECRET_DOMAIN..
+                  - title: Requests
+                    url: https://requests...PLACEHOLDER_SECRET_DOMAIN..
 kind: ConfigMap
 metadata:
   labels:
     app.kubernetes.io/name: glance
     kustomize.toolkit.fluxcd.io/name: glance
     kustomize.toolkit.fluxcd.io/namespace: flux-system
--- kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/docmost

+++ kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/docmost

@@ -1,51 +0,0 @@

----
-apiVersion: kustomize.toolkit.fluxcd.io/v1
-kind: Kustomization
-metadata:
-  labels:
-    kustomize.toolkit.fluxcd.io/name: cluster-apps
-    kustomize.toolkit.fluxcd.io/namespace: flux-system
-  name: docmost
-  namespace: flux-system
-spec:
-  commonMetadata:
-    labels:
-      app.kubernetes.io/name: docmost
-  components:
-  - ../../../../components/volsync
-  decryption:
-    provider: sops
-    secretRef:
-      name: sops-age
-  dependsOn:
-  - name: rook-ceph-cluster
-    namespace: flux-system
-  - name: volsync
-    namespace: flux-system
-  interval: 1h
-  path: ./kubernetes/apps/default/docmost/app
-  postBuild:
-    substitute:
-      APP: docmost
-      VOLSYNC_CAPACITY: 1Gi
-    substituteFrom:
-    - kind: ConfigMap
-      name: cluster-settings
-    - kind: Secret
-      name: cluster-secrets
-    - kind: ConfigMap
-      name: cluster-user-settings
-      optional: true
-    - kind: Secret
-      name: cluster-user-secrets
-      optional: true
-  prune: true
-  retryInterval: 2m
-  sourceRef:
-    kind: GitRepository
-    name: home-kubernetes
-    namespace: flux-system
-  targetNamespace: default
-  timeout: 5m
-  wait: true
-
--- kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/affine

+++ kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/affine

@@ -0,0 +1,51 @@

+---
+apiVersion: kustomize.toolkit.fluxcd.io/v1
+kind: Kustomization
+metadata:
+  labels:
+    kustomize.toolkit.fluxcd.io/name: cluster-apps
+    kustomize.toolkit.fluxcd.io/namespace: flux-system
+  name: affine
+  namespace: flux-system
+spec:
+  commonMetadata:
+    labels:
+      app.kubernetes.io/name: affine
+  components:
+  - ../../../../components/volsync
+  decryption:
+    provider: sops
+    secretRef:
+      name: sops-age
+  dependsOn:
+  - name: rook-ceph-cluster
+    namespace: flux-system
+  - name: volsync
+    namespace: flux-system
+  interval: 1h
+  path: ./kubernetes/apps/default/affine/app
+  postBuild:
+    substitute:
+      APP: affine
+      VOLSYNC_CAPACITY: 2Gi
+    substituteFrom:
+    - kind: ConfigMap
+      name: cluster-settings
+    - kind: Secret
+      name: cluster-secrets
+    - kind: ConfigMap
+      name: cluster-user-settings
+      optional: true
+    - kind: Secret
+      name: cluster-user-secrets
+      optional: true
+  prune: true
+  retryInterval: 2m
+  sourceRef:
+    kind: GitRepository
+    name: home-kubernetes
+    namespace: flux-system
+  targetNamespace: default
+  timeout: 5m
+  wait: true
+
--- kubernetes/apps/media/plex/app Kustomization: flux-system/plex HelmRelease: media/plex

+++ kubernetes/apps/media/plex/app Kustomization: flux-system/plex HelmRelease: media/plex

@@ -34,13 +34,13 @@

               ADVERTISE_IP: https://plex...PLACEHOLDER_SECRET_DOMAIN..,http://192.168.69.101:32400
               NVIDIA_DRIVER_CAPABILITIES: all
               NVIDIA_VISIBLE_DEVICES: all
               TZ: Europe/Prague
             image:
               repository: ghcr.io/home-operations/plex
-              tag: 1.41.7.9799@sha256:0c31ee1ebee0b63ead2de2accc6c52b1e65e9322b39118715eeac8b2f2bb786f
+              tag: 1.41.6.9685@sha256:37d36646471fb905a0080daaaa1f09ad3370b06149ed5f94dad73ead591cad0e
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/network/cloudflared/app Kustomization: flux-system/cloudflared HelmRelease: network/cloudflared

+++ kubernetes/apps/network/cloudflared/app Kustomization: flux-system/cloudflared HelmRelease: network/cloudflared

@@ -49,13 +49,13 @@

               TUNNEL_METRICS: 0.0.0.0:8080
               TUNNEL_ORIGIN_ENABLE_HTTP2: true
               TUNNEL_POST_QUANTUM: true
               TUNNEL_TRANSPORT_PROTOCOL: quic
             image:
               repository: docker.io/cloudflare/cloudflared
-              tag: 2025.5.0
+              tag: 2025.4.2
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/observability/loki/app Kustomization: flux-system/loki HelmRelease: observability/loki

+++ kubernetes/apps/observability/loki/app Kustomization: flux-system/loki HelmRelease: observability/loki

@@ -13,13 +13,13 @@

     spec:
       chart: loki
       sourceRef:
         kind: HelmRepository
         name: grafana
         namespace: flux-system
-      version: 6.30.0
+      version: 6.29.0
   install:
     crds: Skip
     remediation:
       retries: 3
   interval: 1h
   upgrade:
--- kubernetes/apps/media/radarr/app Kustomization: flux-system/radarr HelmRelease: media/radarr

+++ kubernetes/apps/media/radarr/app Kustomization: flux-system/radarr HelmRelease: media/radarr

@@ -44,13 +44,13 @@

               TZ: Europe/Prague
             envFrom:
             - secretRef:
                 name: radarr-secret
             image:
               repository: ghcr.io/home-operations/radarr
-              tag: 5.23.3.9987@sha256:a415c932fc51b43477d38f125d4c82848b27984bb5a574e03907eaefd7aa7490
+              tag: 5.23.1.9914@sha256:794fb31c2773491429cdf50906443c301c61298b1e53f1e95ccf723c30c73d3f
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/media/prowlarr/app Kustomization: flux-system/prowlarr HelmRelease: media/prowlarr

+++ kubernetes/apps/media/prowlarr/app Kustomization: flux-system/prowlarr HelmRelease: media/prowlarr

@@ -44,13 +44,13 @@

               TZ: Europe/Prague
             envFrom:
             - secretRef:
                 name: prowlarr-secret
             image:
               repository: ghcr.io/home-operations/prowlarr
-              tag: 1.36.2.5059@sha256:8b998084a1696afb0bdc2e4c2a9750ac4e0f26528fc3db6fa77d7339811f305f
+              tag: 1.36.1.5049@sha256:94504dfaeccc5a72ae5cb9c8d776ebdddf91ab709a40bbacaf68bf7509f368d4
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/observability/grafana/app Kustomization: flux-system/grafana HelmRelease: observability/grafana

+++ kubernetes/apps/observability/grafana/app Kustomization: flux-system/grafana HelmRelease: observability/grafana

@@ -13,13 +13,13 @@

     spec:
       chart: grafana
       sourceRef:
         kind: HelmRepository
         name: grafana
         namespace: flux-system
-      version: 9.2.0
+      version: 9.0.0
   install:
     remediation:
       retries: 3
   interval: 1h
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/observability/kube-prometheus-stack/app Kustomization: flux-system/kube-prometheus-stack HelmRelease: observability/kube-prometheus-stack

+++ kubernetes/apps/observability/kube-prometheus-stack/app Kustomization: flux-system/kube-prometheus-stack HelmRelease: observability/kube-prometheus-stack

@@ -13,13 +13,13 @@

     spec:
       chart: kube-prometheus-stack
       sourceRef:
         kind: HelmRepository
         name: prometheus-community
         namespace: flux-system
-      version: 72.6.2
+      version: 72.4.0
   dependsOn:
   - name: rook-ceph-cluster
     namespace: rook-ceph
   install:
     crds: CreateReplace
     remediation:
--- kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost ExternalSecret: default/docmost

+++ kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost ExternalSecret: default/docmost

@@ -1,32 +0,0 @@

----
-apiVersion: external-secrets.io/v1beta1
-kind: ExternalSecret
-metadata:
-  labels:
-    app.kubernetes.io/name: docmost
-    kustomize.toolkit.fluxcd.io/name: docmost
-    kustomize.toolkit.fluxcd.io/namespace: flux-system
-  name: docmost
-  namespace: default
-spec:
-  dataFrom:
-  - extract:
-      key: docmost
-  - extract:
-      key: cloudnative-pg
-  secretStoreRef:
-    kind: ClusterSecretStore
-    name: onepassword-connect
-  target:
-    name: docmost-secret
-    template:
-      data:
-        APP_SECRET: '{{ .DOCMOST_APP_SECRET }}'
-        DATABASE_URL: postgres://{{ .DOCMOST_POSTGRES_USER }}:{{ .DOCMOST_POSTGRES_PASS
-          }}@192.168.69.107/docmost?sslmode=disable
-        INIT_POSTGRES_DBNAME: docmost
-        INIT_POSTGRES_HOST: 192.168.69.107
-        INIT_POSTGRES_PASS: '{{ .DOCMOST_POSTGRES_PASS }}'
-        INIT_POSTGRES_SUPER_PASS: '{{ .POSTGRES_SUPER_PASS }}'
-        INIT_POSTGRES_USER: '{{ .DOCMOST_POSTGRES_USER }}'
-
--- kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost HelmRelease: default/docmost

+++ kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost HelmRelease: default/docmost

@@ -1,88 +0,0 @@

----
-apiVersion: helm.toolkit.fluxcd.io/v2
-kind: HelmRelease
-metadata:
-  labels:
-    app.kubernetes.io/name: docmost
-    kustomize.toolkit.fluxcd.io/name: docmost
-    kustomize.toolkit.fluxcd.io/namespace: flux-system
-  name: docmost
-  namespace: default
-spec:
-  chartRef:
-    kind: OCIRepository
-    name: app-template
-    namespace: flux-system
-  install:
-    remediation:
-      retries: -1
-  interval: 1h
-  upgrade:
-    cleanupOnFail: true
-    remediation:
-      retries: 3
-  values:
-    controllers:
-      docmost:
-        annotations:
-          reloader.stakater.com/auto: 'true'
-        containers:
-          app:
-            env:
-              APP_URL: https://nt.juno.moe
-              PORT: '3000'
-              REDIS_URL: redis://dragonfly.database.svc.cluster.local:6379
-              TZ: Europe/Prague
-            envFrom:
-            - secretRef:
-                name: docmost-secret
-            image:
-              repository: docmost/docmost
-              tag: 0.20.4
-            probes:
-              liveness:
-                enabled: true
-              readiness:
-                enabled: true
-            resources:
-              requests:
-                cpu: 25m
-                memory: 105M
-        initContainers:
-          init-db:
-            envFrom:
-            - secretRef:
-                name: docmost-secret
-            image:
-              repository: ghcr.io/home-operations/postgres-init
-              tag: 17
-        pod:
-          securityContext:
-            fsGroup: 1000
-            fsGroupChangePolicy: OnRootMismatch
-            runAsGroup: 1000
-            runAsNonRoot: true
-            runAsUser: 1000
-    ingress:
-      app:
-        className: internal
-        enabled: true
-        hosts:
-        - host: nt.juno.moe
-          paths:
-          - path: /
-            service:
-              identifier: app
-              port: http
-    persistence:
-      data:
-        existingClaim: docmost
-        globalMounts:
-        - path: /app/data/storage
-    service:
-      app:
-        controller: docmost
-        ports:
-          http:
-            port: 3000
-
--- kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost ExternalSecret: default/docmost-restic

+++ kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost ExternalSecret: default/docmost-restic

@@ -1,26 +0,0 @@

----
-apiVersion: external-secrets.io/v1beta1
-kind: ExternalSecret
-metadata:
-  labels:
-    app.kubernetes.io/name: docmost
-    kustomize.toolkit.fluxcd.io/name: docmost
-    kustomize.toolkit.fluxcd.io/namespace: flux-system
-  name: docmost-restic
-  namespace: default
-spec:
-  dataFrom:
-  - extract:
-      key: volsync-restic-template
-  secretStoreRef:
-    kind: ClusterSecretStore
-    name: onepassword-connect
-  target:
-    name: docmost-restic-secret
-    template:
-      data:
-        AWS_ACCESS_KEY_ID: '{{ .AWS_ACCESS_KEY_ID }}'
-        AWS_SECRET_ACCESS_KEY: '{{ .AWS_SECRET_ACCESS_KEY }}'
-        RESTIC_PASSWORD: '{{ .RESTIC_PASSWORD }}'
-        RESTIC_REPOSITORY: '{{ .REPOSITORY_TEMPLATE }}/docmost'
-
--- kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost ReplicationDestination: default/docmost

+++ kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost ReplicationDestination: default/docmost

@@ -1,33 +0,0 @@

----
-apiVersion: volsync.backube/v1alpha1
-kind: ReplicationDestination
-metadata:
-  labels:
-    app.kubernetes.io/name: docmost
-    kustomize.toolkit.fluxcd.io/name: docmost
-    kustomize.toolkit.fluxcd.io/namespace: flux-system
-  name: docmost
-  namespace: default
-spec:
-  restic:
-    accessModes:
-    - ReadWriteOnce
-    cacheAccessModes:
-    - ReadWriteOnce
-    cacheCapacity: 2Gi
-    cacheStorageClassName: ceph-block
-    capacity: 1Gi
-    cleanupCachePVC: true
-    cleanupTempPVC: true
-    copyMethod: Snapshot
-    enableFileDeletion: true
-    moverSecurityContext:
-      fsGroup: 1000
-      runAsGroup: 1000
-      runAsUser: 1000
-    repository: docmost-restic-secret
-    storageClassName: ceph-block
-    volumeSnapshotClassName: csi-ceph-blockpool
-  trigger:
-    manual: restore-once
-
--- kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost ReplicationSource: default/docmost

+++ kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost ReplicationSource: default/docmost

@@ -1,35 +0,0 @@

----
-apiVersion: volsync.backube/v1alpha1
-kind: ReplicationSource
-metadata:
-  labels:
-    app.kubernetes.io/name: docmost
-    kustomize.toolkit.fluxcd.io/name: docmost
-    kustomize.toolkit.fluxcd.io/namespace: flux-system
-  name: docmost
-  namespace: default
-spec:
-  restic:
-    accessModes:
-    - ReadWriteOnce
-    cacheAccessModes:
-    - ReadWriteOnce
-    cacheCapacity: 2Gi
-    cacheStorageClassName: ceph-block
-    copyMethod: Snapshot
-    moverSecurityContext:
-      fsGroup: 1000
-      runAsGroup: 1000
-      runAsUser: 1000
-    pruneIntervalDays: 14
-    repository: docmost-restic-secret
-    retain:
-      daily: 7
-      hourly: 24
-      weekly: 5
-    storageClassName: ceph-block
-    volumeSnapshotClassName: csi-ceph-blockpool
-  sourcePVC: docmost
-  trigger:
-    schedule: 15 */8 * * *
-
--- kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost PersistentVolumeClaim: default/docmost

+++ kubernetes/apps/default/docmost/app Kustomization: flux-system/docmost PersistentVolumeClaim: default/docmost

@@ -1,22 +0,0 @@

----
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  labels:
-    app.kubernetes.io/name: docmost
-    kustomize.toolkit.fluxcd.io/name: docmost
-    kustomize.toolkit.fluxcd.io/namespace: flux-system
-  name: docmost
-  namespace: default
-spec:
-  accessModes:
-  - ReadWriteOnce
-  dataSourceRef:
-    apiGroup: volsync.backube
-    kind: ReplicationDestination
-    name: docmost
-  resources:
-    requests:
-      storage: 1Gi
-  storageClassName: ceph-block
-
--- kubernetes/apps/default/affine/app Kustomization: flux-system/affine ExternalSecret: default/affine

+++ kubernetes/apps/default/affine/app Kustomization: flux-system/affine ExternalSecret: default/affine

@@ -0,0 +1,34 @@

+---
+apiVersion: external-secrets.io/v1beta1
+kind: ExternalSecret
+metadata:
+  labels:
+    app.kubernetes.io/name: affine
+    kustomize.toolkit.fluxcd.io/name: affine
+    kustomize.toolkit.fluxcd.io/namespace: flux-system
+  name: affine
+  namespace: default
+spec:
+  dataFrom:
+  - extract:
+      key: affine
+  - extract:
+      key: cloudnative-pg
+  secretStoreRef:
+    kind: ClusterSecretStore
+    name: onepassword-connect
+  target:
+    name: affine-secret
+    template:
+      data:
+        AFFINE_SERVER_HOST: nt.juno.moe
+        AFFINE_SERVER_HTTPS: 'true'
+        DATABASE_URL: postgresql://{{ .POSTGRES_USER }}:{{ .POSTGRES_PASS }}@192.168.69.107:5432/affine
+        INIT_POSTGRES_DBNAME: affine
+        INIT_POSTGRES_ENCODING: UTF8
+        INIT_POSTGRES_HOST: 192.168.69.107
+        INIT_POSTGRES_PASS: '{{ .POSTGRES_PASS }}'
+        INIT_POSTGRES_SUPER_PASS: '{{ .POSTGRES_SUPER_PASS }}'
+        INIT_POSTGRES_USER: '{{ .POSTGRES_USER }}'
+        REDIS_SERVER_HOST: dragonfly.database.svc.cluster.local
+
--- kubernetes/apps/default/affine/app Kustomization: flux-system/affine HelmRelease: default/affine

+++ kubernetes/apps/default/affine/app Kustomization: flux-system/affine HelmRelease: default/affine

@@ -0,0 +1,100 @@

+---
+apiVersion: helm.toolkit.fluxcd.io/v2
+kind: HelmRelease
+metadata:
+  labels:
+    app.kubernetes.io/name: affine
+    kustomize.toolkit.fluxcd.io/name: affine
+    kustomize.toolkit.fluxcd.io/namespace: flux-system
+  name: affine
+  namespace: default
+spec:
+  chartRef:
+    kind: OCIRepository
+    name: app-template
+    namespace: flux-system
+  install:
+    remediation:
+      retries: -1
+  interval: 1h
+  upgrade:
+    cleanupOnFail: true
+    remediation:
+      retries: 3
+  values:
+    controllers:
+      affine:
+        annotations:
+          reloader.stakater.com/auto: 'true'
+        containers:
+          app:
+            env:
+              TZ: Europe/Prague
+            envFrom:
+            - secretRef:
+                name: affine-secret
+            image:
+              repository: ghcr.io/toeverything/affine-graphql
+              tag: stable-0ab8655@sha256:b461dd09b968bd2f067e98ed3c4988f4711dd811df7624f19d53c899061c4347
+            probes:
+              liveness:
+                enabled: true
+                path: /
+                port: 3010
+              readiness:
+                enabled: true
+                path: /
+                port: 3010
+            resources:
+              requests:
+                cpu: 25m
+                memory: 105M
+        initContainers:
+          init-config:
+            args:
+            - |
+              node ./scripts/self-host-predeploy.js
+            command:
+            - /bin/sh
+            - -c
+            envFrom:
+            - secretRef:
+                name: affine-secret
+            image:
+              repository: ghcr.io/toeverything/affine-graphql
+              tag: stable-0ab8655@sha256:b461dd09b968bd2f067e98ed3c4988f4711dd811df7624f19d53c899061c4347
+            resources:
+              limits:
+                memory: 1Gi
+              requests:
+                cpu: 20m
+        pod:
+          securityContext:
+            runAsUser: 0
+    ingress:
+      app:
+        className: internal
+        enabled: true
+        hosts:
+        - host: nt.juno.moe
+          paths:
+          - path: /
+            service:
+              identifier: app
+              port: http
+        tls:
+        - hosts:
+          - nt.juno.moe
+    persistence:
+      workspace:
+        enabled: true
+        existingClaim: affine
+        globalMounts:
+        - path: /root/.affine
+    service:
+      app:
+        controller: affine
+        ports:
+          http:
+            port: 3010
+
--- kubernetes/apps/default/affine/app Kustomization: flux-system/affine ExternalSecret: default/affine-restic

+++ kubernetes/apps/default/affine/app Kustomization: flux-system/affine ExternalSecret: default/affine-restic

@@ -0,0 +1,26 @@

+---
+apiVersion: external-secrets.io/v1beta1
+kind: ExternalSecret
+metadata:
+  labels:
+    app.kubernetes.io/name: affine
+    kustomize.toolkit.fluxcd.io/name: affine
+    kustomize.toolkit.fluxcd.io/namespace: flux-system
+  name: affine-restic
+  namespace: default
+spec:
+  dataFrom:
+  - extract:
+      key: volsync-restic-template
+  secretStoreRef:
+    kind: ClusterSecretStore
+    name: onepassword-connect
+  target:
+    name: affine-restic-secret
+    template:
+      data:
+        AWS_ACCESS_KEY_ID: '{{ .AWS_ACCESS_KEY_ID }}'
+        AWS_SECRET_ACCESS_KEY: '{{ .AWS_SECRET_ACCESS_KEY }}'
+        RESTIC_PASSWORD: '{{ .RESTIC_PASSWORD }}'
+        RESTIC_REPOSITORY: '{{ .REPOSITORY_TEMPLATE }}/affine'
+
--- kubernetes/apps/default/affine/app Kustomization: flux-system/affine ReplicationDestination: default/affine

+++ kubernetes/apps/default/affine/app Kustomization: flux-system/affine ReplicationDestination: default/affine

@@ -0,0 +1,33 @@

+---
+apiVersion: volsync.backube/v1alpha1
+kind: ReplicationDestination
+metadata:
+  labels:
+    app.kubernetes.io/name: affine
+    kustomize.toolkit.fluxcd.io/name: affine
+    kustomize.toolkit.fluxcd.io/namespace: flux-system
+  name: affine
+  namespace: default
+spec:
+  restic:
+    accessModes:
+    - ReadWriteOnce
+    cacheAccessModes:
+    - ReadWriteOnce
+    cacheCapacity: 2Gi
+    cacheStorageClassName: ceph-block
+    capacity: 2Gi
+    cleanupCachePVC: true
+    cleanupTempPVC: true
+    copyMethod: Snapshot
+    enableFileDeletion: true
+    moverSecurityContext:
+      fsGroup: 1000
+      runAsGroup: 1000
+      runAsUser: 1000
+    repository: affine-restic-secret
+    storageClassName: ceph-block
+    volumeSnapshotClassName: csi-ceph-blockpool
+  trigger:
+    manual: restore-once
+
--- kubernetes/apps/default/affine/app Kustomization: flux-system/affine ReplicationSource: default/affine

+++ kubernetes/apps/default/affine/app Kustomization: flux-system/affine ReplicationSource: default/affine

@@ -0,0 +1,35 @@

+---
+apiVersion: volsync.backube/v1alpha1
+kind: ReplicationSource
+metadata:
+  labels:
+    app.kubernetes.io/name: affine
+    kustomize.toolkit.fluxcd.io/name: affine
+    kustomize.toolkit.fluxcd.io/namespace: flux-system
+  name: affine
+  namespace: default
+spec:
+  restic:
+    accessModes:
+    - ReadWriteOnce
+    cacheAccessModes:
+    - ReadWriteOnce
+    cacheCapacity: 2Gi
+    cacheStorageClassName: ceph-block
+    copyMethod: Snapshot
+    moverSecurityContext:
+      fsGroup: 1000
+      runAsGroup: 1000
+      runAsUser: 1000
+    pruneIntervalDays: 14
+    repository: affine-restic-secret
+    retain:
+      daily: 7
+      hourly: 24
+      weekly: 5
+    storageClassName: ceph-block
+    volumeSnapshotClassName: csi-ceph-blockpool
+  sourcePVC: affine
+  trigger:
+    schedule: 15 */8 * * *
+
--- kubernetes/apps/default/affine/app Kustomization: flux-system/affine PersistentVolumeClaim: default/affine

+++ kubernetes/apps/default/affine/app Kustomization: flux-system/affine PersistentVolumeClaim: default/affine

@@ -0,0 +1,22 @@

+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  labels:
+    app.kubernetes.io/name: affine
+    kustomize.toolkit.fluxcd.io/name: affine
+    kustomize.toolkit.fluxcd.io/namespace: flux-system
+  name: affine
+  namespace: default
+spec:
+  accessModes:
+  - ReadWriteOnce
+  dataSourceRef:
+    apiGroup: volsync.backube
+    kind: ReplicationDestination
+    name: affine
+  resources:
+    requests:
+      storage: 2Gi
+  storageClassName: ceph-block
+

bot-akira[bot] commented Apr 21 '25 20:04

--- HelmRelease: media/komga Deployment: media/komga

+++ HelmRelease: media/komga Deployment: media/komga

@@ -38,13 +38,13 @@

       containers:
       - env:
         - name: SERVER_PORT
           value: '8080'
         - name: TZ
           value: Europe/Prague
-        image: gotson/komga:1.21.3@sha256:72dc9f81a0a528752e953028a7d3ca6a83f8eabe2a617e3c7e53cfa594c84256
+        image: gotson/komga:1.21.2@sha256:ba587695d786f0e8f4de8598b8aa2785cc8c671098ef1cb624819c2bb812789c
         name: app
         resources:
           limits:
             memory: 2Gi
           requests:
             cpu: 15m
--- HelmRelease: media/plex Deployment: media/plex

+++ HelmRelease: media/plex Deployment: media/plex

@@ -58,13 +58,13 @@

         - name: NVIDIA_DRIVER_CAPABILITIES
           value: all
         - name: NVIDIA_VISIBLE_DEVICES
           value: all
         - name: TZ
           value: Europe/Prague
-        image: ghcr.io/home-operations/plex:1.41.7.9799@sha256:0c31ee1ebee0b63ead2de2accc6c52b1e65e9322b39118715eeac8b2f2bb786f
+        image: ghcr.io/home-operations/plex:1.41.6.9685@sha256:37d36646471fb905a0080daaaa1f09ad3370b06149ed5f94dad73ead591cad0e
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /identity
             port: 32400
           initialDelaySeconds: 0
--- HelmRelease: media/prowlarr Deployment: media/prowlarr

+++ HelmRelease: media/prowlarr Deployment: media/prowlarr

@@ -67,13 +67,13 @@

           value: develop
         - name: TZ
           value: Europe/Prague
         envFrom:
         - secretRef:
             name: prowlarr-secret
-        image: ghcr.io/home-operations/prowlarr:1.36.2.5059@sha256:8b998084a1696afb0bdc2e4c2a9750ac4e0f26528fc3db6fa77d7339811f305f
+        image: ghcr.io/home-operations/prowlarr:1.36.1.5049@sha256:94504dfaeccc5a72ae5cb9c8d776ebdddf91ab709a40bbacaf68bf7509f368d4
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /ping
             port: 9696
           initialDelaySeconds: 0
--- HelmRelease: default/glance Deployment: default/glance

+++ HelmRelease: default/glance Deployment: default/glance

@@ -42,13 +42,13 @@

       hostPID: false
       dnsPolicy: ClusterFirst
       containers:
       - env:
         - name: TZ
           value: Europe/Prague
-        image: glanceapp/glance:v0.8.3@sha256:1fa252b1651c061cbe7a023326b314248bb820f81ee89a89970347b83684414c
+        image: glanceapp/glance:v0.8.2@sha256:fe80231892d3b7d7a2770a58d8058c190ac1c27eb806f5e2f09ee4cf101780eb
         name: app
         resources:
           limits:
             memory: 1Gi
           requests:
             cpu: 10m
--- HelmRelease: default/glance Ingress: default/glance

+++ HelmRelease: default/glance Ingress: default/glance

@@ -6,13 +6,13 @@

   labels:
     app.kubernetes.io/instance: glance
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/name: glance
   namespace: default
 spec:
-  ingressClassName: external
+  ingressClassName: internal
   tls:
   - hosts:
     - glance.juno.moe
   rules:
   - host: glance.juno.moe
     http:
--- HelmRelease: default/docmost Service: default/docmost

+++ HelmRelease: default/docmost Service: default/docmost

@@ -1,23 +0,0 @@

----
-apiVersion: v1
-kind: Service
-metadata:
-  name: docmost
-  labels:
-    app.kubernetes.io/instance: docmost
-    app.kubernetes.io/managed-by: Helm
-    app.kubernetes.io/name: docmost
-    app.kubernetes.io/service: docmost
-  namespace: default
-spec:
-  type: ClusterIP
-  ports:
-  - port: 3000
-    targetPort: 3000
-    protocol: TCP
-    name: http
-  selector:
-    app.kubernetes.io/component: docmost
-    app.kubernetes.io/instance: docmost
-    app.kubernetes.io/name: docmost
-
--- HelmRelease: default/docmost Deployment: default/docmost

+++ HelmRelease: default/docmost Deployment: default/docmost

@@ -1,93 +0,0 @@

----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: docmost
-  labels:
-    app.kubernetes.io/component: docmost
-    app.kubernetes.io/instance: docmost
-    app.kubernetes.io/managed-by: Helm
-    app.kubernetes.io/name: docmost
-  annotations:
-    reloader.stakater.com/auto: 'true'
-  namespace: default
-spec:
-  revisionHistoryLimit: 3
-  replicas: 1
-  strategy:
-    type: Recreate
-  selector:
-    matchLabels:
-      app.kubernetes.io/component: docmost
-      app.kubernetes.io/name: docmost
-      app.kubernetes.io/instance: docmost
-  template:
-    metadata:
-      labels:
-        app.kubernetes.io/component: docmost
-        app.kubernetes.io/instance: docmost
-        app.kubernetes.io/name: docmost
-    spec:
-      enableServiceLinks: false
-      serviceAccountName: default
-      automountServiceAccountToken: true
-      securityContext:
-        fsGroup: 1000
-        fsGroupChangePolicy: OnRootMismatch
-        runAsGroup: 1000
-        runAsNonRoot: true
-        runAsUser: 1000
-      hostIPC: false
-      hostNetwork: false
-      hostPID: false
-      dnsPolicy: ClusterFirst
-      initContainers:
-      - envFrom:
-        - secretRef:
-            name: docmost-secret
-        image: ghcr.io/home-operations/postgres-init:17
-        name: init-db
-        volumeMounts:
-        - mountPath: /app/data/storage
-          name: data
-      containers:
-      - env:
-        - name: APP_URL
-          value: https://nt.juno.moe
-        - name: PORT
-          value: '3000'
-        - name: REDIS_URL
-          value: redis://dragonfly.database.svc.cluster.local:6379
-        - name: TZ
-          value: Europe/Prague
-        envFrom:
-        - secretRef:
-            name: docmost-secret
-        image: docmost/docmost:0.20.4
-        livenessProbe:
-          failureThreshold: 3
-          initialDelaySeconds: 0
-          periodSeconds: 10
-          tcpSocket:
-            port: 3000
-          timeoutSeconds: 1
-        name: app
-        readinessProbe:
-          failureThreshold: 3
-          initialDelaySeconds: 0
-          periodSeconds: 10
-          tcpSocket:
-            port: 3000
-          timeoutSeconds: 1
-        resources:
-          requests:
-            cpu: 25m
-            memory: 105M
-        volumeMounts:
-        - mountPath: /app/data/storage
-          name: data
-      volumes:
-      - name: data
-        persistentVolumeClaim:
-          claimName: docmost
-
--- HelmRelease: default/docmost Ingress: default/docmost

+++ HelmRelease: default/docmost Ingress: default/docmost

@@ -1,24 +0,0 @@

----
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  name: docmost
-  labels:
-    app.kubernetes.io/instance: docmost
-    app.kubernetes.io/managed-by: Helm
-    app.kubernetes.io/name: docmost
-  namespace: default
-spec:
-  ingressClassName: internal
-  rules:
-  - host: nt.juno.moe
-    http:
-      paths:
-      - path: /
-        pathType: Prefix
-        backend:
-          service:
-            name: docmost
-            port:
-              number: 3000
-
--- HelmRelease: network/echo-server Deployment: network/echo-server

+++ HelmRelease: network/echo-server Deployment: network/echo-server

@@ -45,13 +45,13 @@

         - name: LOG_IGNORE_PATH
           value: /healthz
         - name: LOG_WITHOUT_NEWLINE
           value: 'true'
         - name: PROMETHEUS_ENABLED
           value: 'true'
-        image: ghcr.io/mendhak/http-https-echo:37
+        image: ghcr.io/mendhak/http-https-echo:36
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /healthz
             port: 8080
           initialDelaySeconds: 0
--- HelmRelease: kube-system/nvidia-device-plugin DaemonSet: kube-system/nvidia-device-plugin

+++ HelmRelease: kube-system/nvidia-device-plugin DaemonSet: kube-system/nvidia-device-plugin

@@ -23,13 +23,13 @@

       annotations: {}
     spec:
       priorityClassName: system-node-critical
       runtimeClassName: nvidia
       securityContext: {}
       containers:
-      - image: nvcr.io/nvidia/k8s-device-plugin:v0.17.2
+      - image: nvcr.io/nvidia/k8s-device-plugin:v0.17.1
         imagePullPolicy: IfNotPresent
         name: nvidia-device-plugin-ctr
         command:
         - nvidia-device-plugin
         env:
         - name: MPS_ROOT
--- HelmRelease: kube-system/nvidia-device-plugin DaemonSet: kube-system/nvidia-device-plugin-mps-control-daemon

+++ HelmRelease: kube-system/nvidia-device-plugin DaemonSet: kube-system/nvidia-device-plugin-mps-control-daemon

@@ -23,25 +23,25 @@

       annotations: {}
     spec:
       priorityClassName: system-node-critical
       runtimeClassName: nvidia
       securityContext: {}
       initContainers:
-      - image: nvcr.io/nvidia/k8s-device-plugin:v0.17.2
+      - image: nvcr.io/nvidia/k8s-device-plugin:v0.17.1
         name: mps-control-daemon-mounts
         command:
         - mps-control-daemon
         - mount-shm
         securityContext:
           privileged: true
         volumeMounts:
         - name: mps-root
           mountPath: /mps
           mountPropagation: Bidirectional
       containers:
-      - image: nvcr.io/nvidia/k8s-device-plugin:v0.17.2
+      - image: nvcr.io/nvidia/k8s-device-plugin:v0.17.1
         imagePullPolicy: IfNotPresent
         name: mps-control-daemon-ctr
         command:
         - mps-control-daemon
         env:
         - name: NODE_NAME
--- HelmRelease: media/radarr Deployment: media/radarr

+++ HelmRelease: media/radarr Deployment: media/radarr

@@ -87,13 +87,13 @@

           value: develop
         - name: TZ
           value: Europe/Prague
         envFrom:
         - secretRef:
             name: radarr-secret
-        image: ghcr.io/home-operations/radarr:5.23.3.9987@sha256:a415c932fc51b43477d38f125d4c82848b27984bb5a574e03907eaefd7aa7490
+        image: ghcr.io/home-operations/radarr:5.23.1.9914@sha256:794fb31c2773491429cdf50906443c301c61298b1e53f1e95ccf723c30c73d3f
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /ping
             port: 7878
           initialDelaySeconds: 0
--- HelmRelease: security/authentik Deployment: security/authentik-server

+++ HelmRelease: security/authentik Deployment: security/authentik-server

@@ -26,25 +26,25 @@

         app.kubernetes.io/name: authentik
         app.kubernetes.io/instance: authentik
         app.kubernetes.io/component: server
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/part-of: authentik
       annotations:
-        checksum/secret: bf767a99a0e25bb1444db13938751f90135923a3a1b56e38b43559e4682d3c7d
+        checksum/secret: 6a453d36b3e9442e9dcdffca088b4fbea5c55689b98166a66b6ce7ab3d36387c
         secret.reloader.stakater.com/reload: authentik-secret
     spec:
       terminationGracePeriodSeconds: 30
       initContainers:
       - envFrom:
         - secretRef:
             name: authentik-secret
         image: ghcr.io/onedr0p/postgres-init:16
         name: init-db
       containers:
       - name: server
-        image: ghcr.io/goauthentik/server:2025.4.1
+        image: ghcr.io/goauthentik/server:2025.4.0
         imagePullPolicy: IfNotPresent
         args:
         - server
         env:
         - name: AUTHENTIK_LISTEN__HTTP
           value: 0.0.0.0:9000
--- HelmRelease: security/authentik Deployment: security/authentik-worker

+++ HelmRelease: security/authentik Deployment: security/authentik-worker

@@ -26,20 +26,20 @@

         app.kubernetes.io/name: authentik
         app.kubernetes.io/instance: authentik
         app.kubernetes.io/component: worker
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/part-of: authentik
       annotations:
-        checksum/secret: bf767a99a0e25bb1444db13938751f90135923a3a1b56e38b43559e4682d3c7d
+        checksum/secret: 6a453d36b3e9442e9dcdffca088b4fbea5c55689b98166a66b6ce7ab3d36387c
         secret.reloader.stakater.com/reload: authentik-secret
     spec:
       serviceAccountName: authentik
       terminationGracePeriodSeconds: 30
       containers:
       - name: worker
-        image: ghcr.io/goauthentik/server:2025.4.1
+        image: ghcr.io/goauthentik/server:2025.4.0
         imagePullPolicy: IfNotPresent
         args:
         - worker
         env: null
         envFrom:
         - secretRef:
--- HelmRelease: network/cloudflared Deployment: network/cloudflared

+++ HelmRelease: network/cloudflared Deployment: network/cloudflared

@@ -62,13 +62,13 @@

         - name: TUNNEL_ORIGIN_ENABLE_HTTP2
           value: 'true'
         - name: TUNNEL_POST_QUANTUM
           value: 'true'
         - name: TUNNEL_TRANSPORT_PROTOCOL
           value: quic
-        image: docker.io/cloudflare/cloudflared:2025.5.0
+        image: docker.io/cloudflare/cloudflared:2025.4.2
         livenessProbe:
           failureThreshold: 3
           httpGet:
             path: /ready
             port: 8080
           initialDelaySeconds: 0
--- HelmRelease: observability/loki Deployment: observability/loki-gateway

+++ HelmRelease: observability/loki Deployment: observability/loki-gateway

@@ -32,13 +32,13 @@

         runAsGroup: 101
         runAsNonRoot: true
         runAsUser: 101
       terminationGracePeriodSeconds: 30
       containers:
       - name: nginx
-        image: docker.io/nginxinc/nginx-unprivileged:1.28-alpine
+        image: docker.io/nginxinc/nginx-unprivileged:1.27-alpine
         imagePullPolicy: IfNotPresent
         ports:
         - name: http-metrics
           containerPort: 8080
           protocol: TCP
         readinessProbe:
--- HelmRelease: observability/loki StatefulSet: observability/loki

+++ HelmRelease: observability/loki StatefulSet: observability/loki

@@ -41,13 +41,13 @@

         runAsGroup: 10001
         runAsNonRoot: true
         runAsUser: 10001
       terminationGracePeriodSeconds: 30
       containers:
       - name: loki-sc-rules
-        image: ghcr.io/kiwigrid/k8s-sidecar:1.30.3
+        image: ghcr.io/kiwigrid/k8s-sidecar:1.30.2
         imagePullPolicy: IfNotPresent
         env:
         - name: METHOD
           value: WATCH
         - name: LABEL
           value: loki_rule
@@ -72,13 +72,13 @@

             - ALL
           readOnlyRootFilesystem: true
         volumeMounts:
         - name: sc-rules-volume
           mountPath: /rules/fake
       - name: loki
-        image: docker.io/grafana/loki:3.5.0
+        image: docker.io/grafana/loki:3.4.2
         imagePullPolicy: IfNotPresent
         args:
         - -config.file=/etc/loki/config/config.yaml
         - -target=all
         ports:
         - name: http-metrics
--- HelmRelease: observability/grafana Deployment: observability/grafana

+++ HelmRelease: observability/grafana Deployment: observability/grafana

@@ -19,13 +19,13 @@

   template:
     metadata:
       labels:
         app.kubernetes.io/name: grafana
         app.kubernetes.io/instance: grafana
       annotations:
-        checksum/dashboards-json-config: 88a744de296a7a49524285b4cad49e1b27f3fd5a6b98612246e9a91b8e37c474
+        checksum/dashboards-json-config: d91e34269f1b2b5f9fcba50ae70a62edf11f78f719fc2c8e889a7951b659d5a1
         checksum/sc-dashboard-provider-config: c942752180ddff51a3ab63b7d256cf3d856d90757b6f804cbc420562989d5a84
         kubectl.kubernetes.io/default-container: grafana
     spec:
       serviceAccountName: grafana
       automountServiceAccountToken: true
       shareProcessNamespace: false
@@ -135,13 +135,13 @@

           seccompProfile:
             type: RuntimeDefault
         volumeMounts:
         - name: sc-datasources-volume
           mountPath: /etc/grafana/provisioning/datasources
       - name: grafana
-        image: docker.io/grafana/grafana:12.0.0-security-01
+        image: docker.io/grafana/grafana:12.0.0
         imagePullPolicy: IfNotPresent
         securityContext:
           allowPrivilegeEscalation: false
           capabilities:
             drop:
             - ALL
--- HelmRelease: observability/kube-prometheus-stack Prometheus: observability/kube-prometheus-stack

+++ HelmRelease: observability/kube-prometheus-stack Prometheus: observability/kube-prometheus-stack

@@ -82,13 +82,13 @@

           labelSelector:
             matchExpressions:
             - key: app.kubernetes.io/name
               operator: In
               values:
               - prometheus
-            - key: app.kubernetes.io/instance
+            - key: prometheus
               operator: In
               values:
               - kube-prometheus-stack
   portName: http-web
   hostNetwork: false
 
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard

@@ -15,261 +15,323 @@

   cilium-dashboard.json: |
     {
       "annotations": {
         "list": [
           {
             "builtIn": 1,
-            "datasource": "-- Grafana --",
+            "datasource": {
+              "type": "datasource",
+              "uid": "grafana"
+            },
             "enable": true,
             "hide": true,
             "iconColor": "rgba(0, 211, 255, 1)",
             "name": "Annotations & Alerts",
             "type": "dashboard"
           }
         ]
       },
       "description": "Dashboard for Cilium (https://cilium.io/) metrics",
       "editable": true,
-      "gnetId": null,
+      "fiscalYearStartMonth": 0,
       "graphTooltip": 1,
-      "iteration": 1606309591568,
+      "id": 1,
       "links": [],
       "panels": [
         {
-          "aliasColors": {
-            "error": "#890f02",
-            "warning": "#c15c17"
-          },
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
           "datasource": {
             "type": "prometheus",
             "uid": "${DS_PROMETHEUS}"
           },
           "fieldConfig": {
             "defaults": {
-              "custom": {}
-            },
-            "overrides": []
-          },
-          "fill": 1,
-          "fillGradient": 0,
+              "color": {
+                "mode": "palette-classic"
+              },
+              "custom": {
+                "axisBorderShow": false,
+                "axisCenteredZero": false,
+                "axisColorMode": "text",
+                "axisLabel": "",
+                "axisPlacement": "auto",
+                "barAlignment": 0,
+                "drawStyle": "line",
+                "fillOpacity": 10,
+                "gradientMode": "none",
+                "hideFrom": {
+                  "legend": false,
+                  "tooltip": false,
+                  "viz": false
+                },
+                "insertNulls": false,
+                "lineInterpolation": "linear",
+                "lineWidth": 1,
+                "pointSize": 5,
+                "scaleDistribution": {
+                  "type": "linear"
+                },
+                "showPoints": "never",
+                "spanNulls": false,
+                "stacking": {
+                  "group": "A",
+                  "mode": "none"
+                },
+                "thresholdsStyle": {
+                  "mode": "off"
+                }
+              },
+              "links": [],
+              "mappings": [],
+              "thresholds": {
+                "mode": "absolute",
+                "steps": [
+                  {
+                    "color": "green",
+                    "value": null
+                  },
+                  {
+                    "color": "red",
+                    "value": 80
+                  }
+                ]
+              },
+              "unit": "opm"
+            },
+            "overrides": [
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "error"
+                },
+                "properties": [
+                  {
+                    "id": "color",
+                    "value": {
+                      "fixedColor": "#890f02",
+                      "mode": "fixed"
+                    }
+                  }
+                ]
+              },
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "warning"
+                },
+                "properties": [
+                  {
+                    "id": "color",
+                    "value": {
+                      "fixedColor": "#c15c17",
+                      "mode": "fixed"
+                    }
+                  }
+                ]
+              }
+            ]
+          },
           "gridPos": {
             "h": 5,
             "w": 12,
             "x": 0,
             "y": 0
           },
-          "hiddenSeries": false,
           "id": 76,
-          "legend": {
-            "avg": false,
-            "current": false,
-            "max": false,
-            "min": false,
-            "show": true,
-            "total": false,
-            "values": false
-          },
-          "lines": true,
-          "linewidth": 1,
-          "links": [],
-          "nullPointMode": "null",
           "options": {
-            "dataLinks": []
-          },
-          "paceLength": 10,
-          "percentage": false,
-          "pointradius": 5,
-          "points": false,
-          "renderer": "flot",
-          "seriesOverrides": [
-            {
-              "alias": "error",
-              "yaxis": 2
-            }
-          ],
-          "spaceLength": 10,
-          "stack": false,
-          "steppedLine": false,
+            "legend": {
+              "calcs": [],
+              "displayMode": "list",
+              "placement": "bottom",
+              "showLegend": true
+            },
+            "tooltip": {
+              "mode": "multi",
+              "sort": "none"
+            }
+          },
+          "pluginVersion": "10.4.3",
           "targets": [
             {
+              "datasource": {
+                "type": "prometheus",
+                "uid": "${DS_PROMETHEUS}"
+              },
+              "editorMode": "code",
               "expr": "sum(rate(cilium_errors_warnings_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m])) by (pod, level) * 60",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "{{level}}",
+              "range": true,
               "refId": "A"
             }
           ],
-          "thresholds": [],
-          "timeFrom": null,
-          "timeRegions": [],
-          "timeShift": null,
           "title": "Errors & Warnings",
-          "tooltip": {
-            "shared": true,
-            "sort": 0,
-            "value_type": "individual"
-          },
-          "type": "graph",
-          "xaxis": {
-            "buckets": null,
-            "mode": "time",
-            "name": null,
-            "show": true,
-            "values": []
-          },
-          "yaxes": [
-            {
-              "format": "opm",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            },
-            {
-              "format": "opm",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            }
-          ],
-          "yaxis": {
-            "align": false,
-            "alignLevel": null
-          }
+          "type": "timeseries"
         },
         {
-          "aliasColors": {
-            "avg": "#cffaff"
-          },
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
           "datasource": {
             "type": "prometheus",
             "uid": "${DS_PROMETHEUS}"
           },
           "fieldConfig": {
             "defaults": {
-              "custom": {}
-            },
-            "overrides": []
-          },
-          "fill": 0,
-          "fillGradient": 0,
+              "color": {
+                "mode": "palette-classic"
+              },
+              "custom": {
+                "axisBorderShow": false,
+                "axisCenteredZero": false,
+                "axisColorMode": "text",
+                "axisLabel": "",
+                "axisPlacement": "auto",
+                "barAlignment": 0,
+                "drawStyle": "line",
+                "fillOpacity": 35,
+                "gradientMode": "none",
+                "hideFrom": {
+                  "legend": false,
+                  "tooltip": false,
+                  "viz": false
+                },
+                "insertNulls": false,
+                "lineInterpolation": "linear",
+                "lineWidth": 1,
+                "pointSize": 5,
+                "scaleDistribution": {
+                  "type": "linear"
+                },
+                "showPoints": "never",
+                "spanNulls": false,
+                "stacking": {
+                  "group": "A",
+                  "mode": "none"
+                },
+                "thresholdsStyle": {
+                  "mode": "off"
+                }
+              },
+              "links": [],
+              "mappings": [],
+              "thresholds": {
+                "mode": "absolute",
+                "steps": [
+                  {
+                    "color": "green",
+                    "value": null
+                  },
+                  {
+                    "color": "red",
+                    "value": 80
+                  }
+                ]
+              },
+              "unit": "percent"
+            },
+            "overrides": [
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "avg"
+                },
+                "properties": [
+                  {
+                    "id": "color",
+                    "value": {
+                      "fixedColor": "#cffaff",
+                      "mode": "fixed"
+                    }
+                  }
+                ]
+              },
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "max"
+                },
+                "properties": [
+                  {
+                    "id": "custom.fillBelowTo",
+                    "value": "min"
+                  },
+                  {
+                    "id": "custom.lineWidth",
+                    "value": 0
+                  }
+                ]
+              },
+              {
+                "matcher": {
+                  "id": "byName",
+                  "options": "min"
+                },
+                "properties": [
+                  {
+                    "id": "custom.lineWidth",
+                    "value": 0
+                  }
+                ]
+              }
+            ]
+          },
           "gridPos": {
[Diff truncated by flux-local]
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

@@ -16,42 +16,52 @@

   policy-cidr-match-mode: ''
   prometheus-serve-addr: :9962
   controller-group-metrics: write-cni-file sync-host-ips sync-lb-maps-with-k8s-services
   proxy-prometheus-port: '9964'
   operator-prometheus-serve-addr: :9963
   enable-metrics: 'true'
+  enable-policy-secrets-sync: 'true'
+  policy-secrets-only-from-secrets-namespace: 'true'
+  policy-secrets-namespace: cilium-secrets
   enable-ipv4: 'true'
   enable-ipv6: 'false'
   custom-cni-conf: 'false'
   enable-bpf-clock-probe: 'false'
   monitor-aggregation: medium
   monitor-aggregation-interval: 5s
   monitor-aggregation-flags: all
   bpf-map-dynamic-size-ratio: '0.0025'
   bpf-policy-map-max: '16384'
   bpf-lb-map-max: '65536'
   bpf-lb-external-clusterip: 'false'
+  bpf-lb-source-range-all-types: 'false'
+  bpf-lb-algorithm-annotation: 'false'
+  bpf-lb-mode-annotation: 'false'
+  bpf-distributed-lru: 'false'
   bpf-events-drop-enabled: 'true'
   bpf-events-policy-verdict-enabled: 'true'
   bpf-events-trace-enabled: 'true'
   preallocate-bpf-maps: 'false'
   cluster-name: home-kubernetes
   cluster-id: '1'
   routing-mode: native
+  tunnel-protocol: vxlan
+  tunnel-source-port-range: 0-0
   service-no-backend-response: reject
   enable-l7-proxy: 'true'
   enable-ipv4-masquerade: 'true'
   enable-ipv4-big-tcp: 'false'
   enable-ipv6-big-tcp: 'false'
   enable-ipv6-masquerade: 'true'
   enable-tcx: 'true'
   datapath-mode: veth
   enable-bpf-masquerade: 'false'
   enable-masquerade-to-route-source: 'false'
   enable-xt-socket-fallback: 'true'
   install-no-conntrack-iptables-rules: 'false'
+  iptables-random-fully: 'false'
   auto-direct-node-routes: 'true'
   direct-routing-skip-unreachable: 'false'
   enable-local-redirect-policy: 'true'
   ipv4-native-routing-cidr: 10.69.0.0/16
   enable-runtime-device-detection: 'true'
   kube-proxy-replacement: 'true'
@@ -63,66 +73,68 @@

   enable-health-check-loadbalancer-ip: 'false'
   node-port-bind-protection: 'true'
   enable-auto-protect-node-port-range: 'true'
   bpf-lb-mode: dsr
   bpf-lb-algorithm: maglev
   bpf-lb-acceleration: disabled
+  enable-experimental-lb: 'false'
   enable-svc-source-range-check: 'true'
   enable-l2-neigh-discovery: 'true'
   arping-refresh-period: 30s
   k8s-require-ipv4-pod-cidr: 'false'
   k8s-require-ipv6-pod-cidr: 'false'
   enable-endpoint-routes: 'true'
   enable-k8s-networkpolicy: 'true'
+  enable-endpoint-lockdown-on-policy-overflow: 'false'
   write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
   cni-exclusive: 'false'
   cni-log-file: /var/run/cilium/cilium-cni.log
   enable-endpoint-health-checking: 'true'
   enable-health-checking: 'true'
+  health-check-icmp-failure-threshold: '3'
   enable-well-known-identities: 'false'
   enable-node-selector-labels: 'false'
   synchronize-k8s-nodes: 'true'
   operator-api-serve-addr: 127.0.0.1:9234
   enable-hubble: 'true'
   hubble-socket-path: /var/run/cilium/hubble.sock
   hubble-metrics-server: :9965
   hubble-metrics-server-enable-tls: 'false'
+  enable-hubble-open-metrics: 'false'
   hubble-metrics: dns:query drop tcp flow port-distribution icmp http
-  enable-hubble-open-metrics: 'false'
   hubble-export-file-max-size-mb: '10'
   hubble-export-file-max-backups: '5'
   hubble-listen-address: :4244
   hubble-disable-tls: 'false'
   hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
   hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
   hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
   ipam: kubernetes
   ipam-cilium-node-update-rate: 15s
+  default-lb-service-ipam: lbipam
   egress-gateway-reconciliation-trigger-interval: 1s
   enable-vtep: 'false'
   vtep-endpoint: ''
   vtep-cidr: ''
   vtep-mask: ''
   vtep-mac: ''
   enable-l2-announcements: 'true'
   procfs: /host/proc
   bpf-root: /sys/fs/bpf
   cgroup-root: /sys/fs/cgroup
   enable-k8s-terminating-endpoint: 'true'
   enable-sctp: 'false'
-  k8s-client-qps: '10'
-  k8s-client-burst: '20'
   remove-cilium-node-taints: 'true'
   set-cilium-node-taints: 'true'
   set-cilium-is-up-condition: 'true'
   unmanaged-pod-watcher-interval: '15'
   dnsproxy-enable-transparent-mode: 'true'
   dnsproxy-socket-linger-timeout: '10'
   tofqdns-dns-reject-response-code: refused
   tofqdns-enable-dns-compression: 'true'
-  tofqdns-endpoint-max-ip-per-hostname: '50'
+  tofqdns-endpoint-max-ip-per-hostname: '1000'
   tofqdns-idle-connection-grace-period: 0s
   tofqdns-max-deferred-connection-deletes: '10000'
   tofqdns-proxy-response-max-delay: 100ms
   agent-not-ready-taint-key: node.cilium.io/agent-not-ready
   mesh-auth-enabled: 'true'
   mesh-auth-queue-size: '1024'
@@ -132,15 +144,22 @@

   proxy-xff-num-trusted-hops-egress: '0'
   proxy-connect-timeout: '2'
   proxy-initial-fetch-timeout: '30'
   proxy-max-requests-per-connection: '0'
   proxy-max-connection-duration-seconds: '0'
   proxy-idle-timeout-seconds: '60'
+  proxy-max-concurrent-retries: '128'
+  http-retry-count: '3'
   external-envoy-proxy: 'false'
   envoy-base-id: '0'
+  envoy-access-log-buffer-size: '4096'
   envoy-keep-cap-netbindservice: 'false'
   max-connected-clusters: '255'
   clustermesh-enable-endpoint-sync: 'false'
   clustermesh-enable-mcs-api: 'false'
   nat-map-stats-entries: '32'
   nat-map-stats-interval: 30s
+  enable-internal-traffic-policy: 'true'
+  enable-lb-ipam: 'true'
+  enable-non-default-deny-policies: 'true'
+  enable-source-ip-verification: 'true'
 
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard

@@ -1013,13 +1013,19 @@

       ],
       "refresh": false,
       "schemaVersion": 25,
       "style": "dark",
       "tags": [],
       "templating": {
-        "list": []
+        "list": [
+          {
+            "type": "datasource",
+            "name": "DS_PROMETHEUS",
+            "query": "prometheus"
+          }
+        ]
       },
       "time": {
         "from": "now-30m",
         "to": "now"
       },
       "timepicker": {
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config

@@ -3,12 +3,11 @@

 kind: ConfigMap
 metadata:
   name: hubble-relay-config
   namespace: kube-system
 data:
   config.yaml: "cluster-name: home-kubernetes\npeer-service: \"hubble-peer.kube-system.svc.cluster.local.:443\"\
-    \nlisten-address: :4245\ngops: true\ngops-port: \"9893\"\ndial-timeout: \nretry-timeout:\
-    \ \nsort-buffer-len-max: \nsort-buffer-drain-timeout: \ntls-hubble-client-cert-file:\
-    \ /var/lib/hubble-relay/tls/client.crt\ntls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key\n\
-    tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt\n\n\
-    disable-server-tls: true\n"
+    \nlisten-address: :4245\ngops: true\ngops-port: \"9893\"\nretry-timeout: \nsort-buffer-len-max:\
+    \ \nsort-buffer-drain-timeout: \ntls-hubble-client-cert-file: /var/lib/hubble-relay/tls/client.crt\n\
+    tls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key\ntls-hubble-server-ca-files:\
+    \ /var/lib/hubble-relay/tls/hubble-server-ca.crt\n\ndisable-server-tls: true\n"
 
--- HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

+++ HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

@@ -53,12 +53,13 @@

   - update
   - patch
 - apiGroups:
   - ''
   resources:
   - namespaces
+  - secrets
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - ''
@@ -135,12 +136,19 @@

   - update
   - get
   - list
   - watch
   - delete
   - patch
+- apiGroups:
+  - cilium.io
+  resources:
+  - ciliumbgpclusterconfigs/status
+  - ciliumbgppeerconfigs/status
+  verbs:
+  - update
 - apiGroups:
   - apiextensions.k8s.io
   resources:
   - customresourcedefinitions
   verbs:
   - create
@@ -181,12 +189,13 @@

   resources:
   - ciliumloadbalancerippools
   - ciliumpodippools
   - ciliumbgppeeringpolicies
   - ciliumbgpclusterconfigs
   - ciliumbgpnodeconfigoverrides
+  - ciliumbgppeerconfigs
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - cilium.io
--- HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

+++ HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

@@ -16,24 +16,24 @@

     rollingUpdate:
       maxUnavailable: 2
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/cilium-configmap-checksum: de8cf26ceabe378b2f47632fd3fd210ee1e5b4ab5d6f3f888abe408c8a29cf7f
+        cilium.io/cilium-configmap-checksum: 1cffe0b0e916997525c158d60cc907547f3574d4b5fd576451983655372006cc
       labels:
         k8s-app: cilium
         app.kubernetes.io/name: cilium-agent
         app.kubernetes.io/part-of: cilium
     spec:
       securityContext:
         appArmorProfile:
           type: Unconfined
       containers:
       - name: cilium-agent
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.4@sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a
         imagePullPolicy: IfNotPresent
         command:
         - cilium-agent
         args:
         - --config-dir=/tmp/cilium/config-map
         startupProbe:
@@ -55,12 +55,14 @@

             path: /healthz
             port: 9879
             scheme: HTTP
             httpHeaders:
             - name: brief
               value: 'true'
+            - name: require-k8s-connectivity
+              value: 'false'
           periodSeconds: 30
           successThreshold: 1
           failureThreshold: 10
           timeoutSeconds: 5
         readinessProbe:
           httpGet:
@@ -197,13 +199,13 @@

           mountPath: /var/lib/cilium/tls/hubble
           readOnly: true
         - name: tmp
           mountPath: /tmp
       initContainers:
       - name: config
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.4@sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a
         imagePullPolicy: IfNotPresent
         command:
         - cilium-dbg
         - build-config
         env:
         - name: K8S_NODE_NAME
@@ -222,13 +224,13 @@

           value: '7445'
         volumeMounts:
         - name: tmp
           mountPath: /tmp
         terminationMessagePolicy: FallbackToLogsOnError
       - name: mount-cgroup
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.4@sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a
         imagePullPolicy: IfNotPresent
         env:
         - name: CGROUP_ROOT
           value: /sys/fs/cgroup
         - name: BIN_PATH
           value: /opt/cni/bin
@@ -254,13 +256,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: apply-sysctl-overwrites
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.4@sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a
         imagePullPolicy: IfNotPresent
         env:
         - name: BIN_PATH
           value: /opt/cni/bin
         command:
         - sh
@@ -284,13 +286,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: mount-bpf-fs
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.4@sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a
         imagePullPolicy: IfNotPresent
         args:
         - mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
         command:
         - /bin/bash
         - -c
@@ -300,13 +302,13 @@

           privileged: true
         volumeMounts:
         - name: bpf-maps
           mountPath: /sys/fs/bpf
           mountPropagation: Bidirectional
       - name: clean-cilium-state
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.4@sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a
         imagePullPolicy: IfNotPresent
         command:
         - /init-container.sh
         env:
         - name: CILIUM_ALL_STATE
           valueFrom:
@@ -348,13 +350,13 @@

         - name: cilium-cgroup
           mountPath: /sys/fs/cgroup
           mountPropagation: HostToContainer
         - name: cilium-run
           mountPath: /var/run/cilium
       - name: install-cni-binaries
-        image: quay.io/cilium/cilium:v1.16.6@sha256:1e0896b1c4c188b4812c7e0bed7ec3f5631388ca88325c1391a0ef9172c448da
+        image: quay.io/cilium/cilium:v1.17.4@sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a
         imagePullPolicy: IfNotPresent
         command:
         - /install-plugin.sh
         resources:
           requests:
             cpu: 100m
--- HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

+++ HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

@@ -20,22 +20,22 @@

       maxSurge: 25%
       maxUnavailable: 100%
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/cilium-configmap-checksum: de8cf26ceabe378b2f47632fd3fd210ee1e5b4ab5d6f3f888abe408c8a29cf7f
+        cilium.io/cilium-configmap-checksum: 1cffe0b0e916997525c158d60cc907547f3574d4b5fd576451983655372006cc
       labels:
         io.cilium/app: operator
         name: cilium-operator
         app.kubernetes.io/part-of: cilium
         app.kubernetes.io/name: cilium-operator
     spec:
       containers:
       - name: cilium-operator
-        image: quay.io/cilium/operator-generic:v1.16.6@sha256:13d32071d5a52c069fb7c35959a56009c6914439adc73e99e098917646d154fc
+        image: quay.io/cilium/operator-generic:v1.17.4@sha256:a3906412f477b09904f46aac1bed28eb522bef7899ed7dd81c15f78b7aa1b9b5
         imagePullPolicy: IfNotPresent
         command:
         - cilium-operator-generic
         args:
         - --config-dir=/tmp/cilium/config-map
         - --debug=$(CILIUM_DEBUG)
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay

+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay

@@ -17,13 +17,13 @@

     rollingUpdate:
       maxUnavailable: 1
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/hubble-relay-configmap-checksum: 7013f296857a469857f02e7d0b7e0933fcdf29925c02e28162c33b4a8a00baca
+        cilium.io/hubble-relay-configmap-checksum: eff0e5f47a53fa4b010591dc8fd68bffd75ccd6298d9d502cc7125e0b3fede93
       labels:
         k8s-app: hubble-relay
         app.kubernetes.io/name: hubble-relay
         app.kubernetes.io/part-of: cilium
     spec:
       securityContext:
@@ -34,13 +34,13 @@

           capabilities:
             drop:
             - ALL
           runAsGroup: 65532
           runAsNonRoot: true
           runAsUser: 65532
-        image: quay.io/cilium/hubble-relay:v1.16.6@sha256:ca8dcaa5a81a37743b1397ba2221d16d5d63e4a47607584f1bf50a3b0882bf3b
+        image: quay.io/cilium/hubble-relay:v1.17.4@sha256:c16de12a64b8b56de62b15c1652d036253b40cd7fa643d7e1a404dc71dc66441
         imagePullPolicy: IfNotPresent
         command:
         - hubble-relay
         args:
         - serve
         ports:
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui

+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui

@@ -32,13 +32,13 @@

         runAsUser: 1001
       priorityClassName: null
       serviceAccountName: hubble-ui
       automountServiceAccountToken: true
       containers:
       - name: frontend
-        image: quay.io/cilium/hubble-ui:v0.13.1@sha256:e2e9313eb7caf64b0061d9da0efbdad59c6c461f6ca1752768942bfeda0796c6
+        image: quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392
         imagePullPolicy: IfNotPresent
         ports:
         - name: http
           containerPort: 8081
         livenessProbe:
           httpGet:
@@ -53,13 +53,13 @@

           mountPath: /etc/nginx/conf.d/default.conf
           subPath: nginx.conf
         - name: tmp-dir
           mountPath: /tmp
         terminationMessagePolicy: FallbackToLogsOnError
       - name: backend
-        image: quay.io/cilium/hubble-ui-backend:v0.13.1@sha256:0e0eed917653441fded4e7cdb096b7be6a3bddded5a2dd10812a27b1fc6ed95b
+        image: quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15
         imagePullPolicy: IfNotPresent
         env:
         - name: EVENTS_SERVER_PORT
           value: '8090'
         - name: FLOWS_API_ADDR
           value: hubble-relay:80
--- HelmRelease: kube-system/cilium ServiceMonitor: kube-system/cilium-agent

+++ HelmRelease: kube-system/cilium ServiceMonitor: kube-system/cilium-agent

@@ -6,13 +6,13 @@

   namespace: kube-system
   labels:
     app.kubernetes.io/part-of: cilium
 spec:
   selector:
     matchLabels:
-      k8s-app: cilium
+      app.kubernetes.io/name: cilium-agent
   namespaceSelector:
     matchNames:
     - kube-system
   endpoints:
   - port: metrics
     interval: 10s
--- HelmRelease: kube-system/cilium Namespace: kube-system/cilium-secrets

+++ HelmRelease: kube-system/cilium Namespace: kube-system/cilium-secrets

@@ -0,0 +1,8 @@

+---
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+
--- HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-tlsinterception-secrets

@@ -0,0 +1,18 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: cilium-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+rules:
+- apiGroups:
+  - ''
+  resources:
+  - secrets
+  verbs:
+  - get
+  - list
+  - watch
+
--- HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-operator-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium Role: cilium-secrets/cilium-operator-tlsinterception-secrets

@@ -0,0 +1,19 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: cilium-operator-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+rules:
+- apiGroups:
+  - ''
+  resources:
+  - secrets
+  verbs:
+  - create
+  - delete
+  - update
+  - patch
+
--- HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-tlsinterception-secrets

@@ -0,0 +1,17 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: cilium-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: cilium-tlsinterception-secrets
+subjects:
+- kind: ServiceAccount
+  name: cilium
+  namespace: kube-system
+
--- HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-operator-tlsinterception-secrets

+++ HelmRelease: kube-system/cilium RoleBinding: cilium-secrets/cilium-operator-tlsinterception-secrets

@@ -0,0 +1,17 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: cilium-operator-tlsinterception-secrets
+  namespace: cilium-secrets
+  labels:
+    app.kubernetes.io/part-of: cilium
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: cilium-operator-tlsinterception-secrets
+subjects:
+- kind: ServiceAccount
+  name: cilium-operator
+  namespace: kube-system
+
--- HelmRelease: default/affine Service: default/affine

+++ HelmRelease: default/affine Service: default/affine

@@ -0,0 +1,23 @@

+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: affine
+  labels:
+    app.kubernetes.io/instance: affine
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: affine
+    app.kubernetes.io/service: affine
+  namespace: default
+spec:
+  type: ClusterIP
+  ports:
+  - port: 3010
+    targetPort: 3010
+    protocol: TCP
+    name: http
+  selector:
+    app.kubernetes.io/component: affine
+    app.kubernetes.io/instance: affine
+    app.kubernetes.io/name: affine
+
--- HelmRelease: default/affine Deployment: default/affine

+++ HelmRelease: default/affine Deployment: default/affine

@@ -0,0 +1,94 @@

+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: affine
+  labels:
+    app.kubernetes.io/component: affine
+    app.kubernetes.io/instance: affine
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: affine
+  annotations:
+    reloader.stakater.com/auto: 'true'
+  namespace: default
+spec:
+  revisionHistoryLimit: 3
+  replicas: 1
+  strategy:
+    type: Recreate
+  selector:
+    matchLabels:
+      app.kubernetes.io/component: affine
+      app.kubernetes.io/name: affine
+      app.kubernetes.io/instance: affine
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/component: affine
+        app.kubernetes.io/instance: affine
+        app.kubernetes.io/name: affine
+    spec:
+      enableServiceLinks: false
+      serviceAccountName: default
+      automountServiceAccountToken: true
+      securityContext:
+        runAsUser: 0
+      hostIPC: false
+      hostNetwork: false
+      hostPID: false
+      dnsPolicy: ClusterFirst
+      initContainers:
+      - args:
+        - |
+          node ./scripts/self-host-predeploy.js
+        command:
+        - /bin/sh
+        - -c
+        envFrom:
+        - secretRef:
+            name: affine-secret
+        image: ghcr.io/toeverything/affine-graphql:stable-0ab8655@sha256:b461dd09b968bd2f067e98ed3c4988f4711dd811df7624f19d53c899061c4347
+        name: init-config
+        resources:
+          limits:
+            memory: 1Gi
+          requests:
+            cpu: 20m
+        volumeMounts:
+        - mountPath: /root/.affine
+          name: workspace
+      containers:
+      - env:
+        - name: TZ
+          value: Europe/Prague
+        envFrom:
+        - secretRef:
+            name: affine-secret
+        image: ghcr.io/toeverything/affine-graphql:stable-0ab8655@sha256:b461dd09b968bd2f067e98ed3c4988f4711dd811df7624f19d53c899061c4347
+        livenessProbe:
+          failureThreshold: 3
+          initialDelaySeconds: 0
+          periodSeconds: 10
+          tcpSocket:
+            port: 3010
+          timeoutSeconds: 1
+        name: app
+        readinessProbe:
+          failureThreshold: 3
+          initialDelaySeconds: 0
+          periodSeconds: 10
+          tcpSocket:
+            port: 3010
+          timeoutSeconds: 1
+        resources:
+          requests:
+            cpu: 25m
+            memory: 105M
+        volumeMounts:
+        - mountPath: /root/.affine
+          name: workspace
+      volumes:
+      - name: workspace
+        persistentVolumeClaim:
+          claimName: affine
+
--- HelmRelease: default/affine Ingress: default/affine

+++ HelmRelease: default/affine Ingress: default/affine

@@ -0,0 +1,27 @@

+---
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: affine
+  labels:
+    app.kubernetes.io/instance: affine
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: affine
+  namespace: default
+spec:
+  ingressClassName: internal
+  tls:
+  - hosts:
+    - nt.juno.moe
+  rules:
+  - host: nt.juno.moe
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: affine
+            port:
+              number: 3010
+

bot-akira[bot] · Apr 21 '25 20:04