
feat(helm): update cilium to v1.16.1

Open: bot-akira[bot] opened this issue 1 year ago • 3 comments

This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| cilium (source) | minor | 1.15.4 -> 1.16.1 |
| cilium (source) | minor | 1.15.7 -> 1.16.1 |

[!WARNING] Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

cilium/cilium (cilium)

v1.16.1

Compare Source

Security Advisories

This release addresses the following security vulnerabilities:

  • https://github.com/cilium/cilium/security/advisories/GHSA-vwf8-q6fw-4wcm
  • https://github.com/cilium/cilium/security/advisories/GHSA-qcm3-7879-xcww

Summary of Changes

Minor Changes:

Bugfixes:

  • auth: Fix data race in Upsert (Backport PR #​34158, Upstream PR #​33905, @​chaunceyjiang)
  • BGPv1 + BGPv2: Fix incorrect service reconciliation in setups with multiple BGP instances (virtual routers) (Backport PR #​34297, Upstream PR #​34177, @​rastislavs)
  • bgpv1: Fix data race in bgppSelection (Backport PR #​34158, Upstream PR #​33904, @​chaunceyjiang)
  • bgpv2: Avoid duplicate route policy naming (Backport PR #​34158, Upstream PR #​34031, @​rastislavs)
  • BGPv2: Fix Service advertisement selector: do not require matching CiliumLoadBalancerIPPool (Backport PR #​34201, Upstream PR #​34182, @​rastislavs)
  • Fix a nil dereference crash during cilium-agent initialization affecting setups with FQDN policies. The crash is triggered when a restored endpoint performs a DNS request at just the right time during early cilium-agent restoration. The problem is not expected to be persistent, and the agent should get past the problematic part of the initialization on restart. (Backport PR #​34158, Upstream PR #​34059, @​joamaki)
  • Fix appArmorProfile condition for CronJob helm template (Backport PR #​34297, Upstream PR #​34100, @​sathieu)
  • Fix bug causing etcd upsertion/deletion events to be potentially missed during the initial synchronization, when Cilium operates in KVStore mode, or Cluster Mesh is enabled. (Backport PR #​34181, Upstream PR #​34091, @​giorio94)
  • Fix an issue in picking node IP addresses from the loopback device. This fixes a regression in v1.15 and v1.16 where VIPs assigned to the lo device were not considered by Cilium. Also fix spurious updates to node addresses to avoid unnecessary datapath reinitializations. (Backport PR #​34085, Upstream PR #​34012, @​joamaki)
  • Fix possible connection disruption on agent restart with WireGuard + kvstore (Backport PR #​34158, Upstream PR #​34062, @​giorio94)
  • Fixes DNS proxy "connect: cannot assign requested address" errors in transparent mode, which were due to opening multiple TCP connections to the upstream DNS server. (Backport PR #​34201, Upstream PR #​33989, @​bimmlerd)
  • gateway-api: Add HTTP method condition in sortable routes (Backport PR #​34158, Upstream PR #​34109, @​sayboras)
  • gateway-api: Enqueue gateway for Reference Grant changes (Backport PR #​34158, Upstream PR #​34032, @​sayboras)
  • lbipam: fixed bug in sharing key logic (Backport PR #​34158, Upstream PR #​34106, @​dylandreimerink)
  • policy: Fix policy cache covers context lookup. (#​34322, @​nathanjsweet)
  • service: Relax protocol matching for L7 Service (Backport PR #​34195, Upstream PR #​34131, @​sayboras)

CI Changes:

Misc Changes:

Other Changes:

Docker Manifests
cilium

quay.io/cilium/cilium:v1.16.1@​sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
quay.io/cilium/cilium:stable@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39

clustermesh-apiserver

quay.io/cilium/clustermesh-apiserver:v1.16.1@​sha256:e9c77417cd474cc943b2303a76c5cf584ac7024dd513ebb8d608cb62fe28896f
quay.io/cilium/clustermesh-apiserver:stable@sha256:e9c77417cd474cc943b2303a76c5cf584ac7024dd513ebb8d608cb62fe28896f

docker-plugin

quay.io/cilium/docker-plugin:v1.16.1@​sha256:243fd7759818d990a7f9b33df3eb685a9f250a12020e22f660547f9516b76320
quay.io/cilium/docker-plugin:stable@sha256:243fd7759818d990a7f9b33df3eb685a9f250a12020e22f660547f9516b76320

hubble-relay

quay.io/cilium/hubble-relay:v1.16.1@​sha256:2e1b4c739a676ae187d4c2bfc45c3e865bda2567cc0320a90cb666657fcfcc35
quay.io/cilium/hubble-relay:stable@sha256:2e1b4c739a676ae187d4c2bfc45c3e865bda2567cc0320a90cb666657fcfcc35

operator-alibabacloud

quay.io/cilium/operator-alibabacloud:v1.16.1@​sha256:4381adf48d76ec482551183947e537d44bcac9b6c31a635a9ac63f696d978804
quay.io/cilium/operator-alibabacloud:stable@sha256:4381adf48d76ec482551183947e537d44bcac9b6c31a635a9ac63f696d978804

operator-aws

quay.io/cilium/operator-aws:v1.16.1@​sha256:e3876fcaf2d6ccc8d5b4aaaded7b1efa971f3f4175eaa2c8a499878d58c39df4
quay.io/cilium/operator-aws:stable@sha256:e3876fcaf2d6ccc8d5b4aaaded7b1efa971f3f4175eaa2c8a499878d58c39df4

operator-azure

quay.io/cilium/operator-azure:v1.16.1@​sha256:e55c222654a44ceb52db7ade3a7b9e8ef05681ff84c14ad1d46fea34869a7a22
quay.io/cilium/operator-azure:stable@sha256:e55c222654a44ceb52db7ade3a7b9e8ef05681ff84c14ad1d46fea34869a7a22

operator-generic

quay.io/cilium/operator-generic:v1.16.1@​sha256:3bc7e7a43bc4a4d8989cb7936c5d96675dd2d02c306adf925ce0a7c35aa27dc4
quay.io/cilium/operator-generic:stable@sha256:3bc7e7a43bc4a4d8989cb7936c5d96675dd2d02c306adf925ce0a7c35aa27dc4

operator

quay.io/cilium/operator:v1.16.1@​sha256:258b28fefc9f3fe1cbcb21a3b2c4c96dcc72f6ee258eed0afebe9b0ac47f462b
quay.io/cilium/operator:stable@sha256:258b28fefc9f3fe1cbcb21a3b2c4c96dcc72f6ee258eed0afebe9b0ac47f462b

v1.16.0

Compare Source

We are excited to announce the Cilium 1.16.0 release. A total of 2969 new commits have been contributed to this release by a growing community of over 750 developers, and the project now has over 19300 GitHub stars! :star_struck:

To keep up to date with all the latest Cilium releases, join #release on Slack.

Here's what's new in v1.16.0:
  • :mountain_cableway: Networking

    • :speedboat: Cilium NetKit: container-network throughput and latency as fast as host-network.
    • :globe_with_meridians: BGPv2: Fresh new API for Cilium's BGP feature.
    • :loudspeaker: BGP ClusterIP Advertisement: BGP advertisements of ExternalIP and Cluster IP Services.
    • :twisted_rightwards_arrows: Service Traffic Distribution: Kubernetes 1.30 Service Traffic Distribution can be enabled directly in the Service spec instead of using annotations (see the first sketch after this list).
    • :arrows_counterclockwise: Local Redirect Policy promoted to Stable: Redirecting the traffic bound for services to the local backend, such as node-local DNS.
    • :satellite: Multicast Datapath: Define multicast groups in Cilium.
    • :label: Per-Pod Fixed MAC Address: Specify the MAC address used on a pod.
  • :spider_web: Service Mesh & Ingress/Gateway API

    • :compass: Gateway API GAMMA Support: East-west traffic management for the cluster via Gateway API (see the HTTPRoute sketch after this list).
    • :shinto_shrine: Gateway API 1.1 Support: Cilium now supports Gateway API 1.1.
    • :passport_control: ExternalTrafficPolicy support for Ingress/Gateway API: External traffic can now be routed to node-local or cluster-wide endpoints.
    • :spider_web: L7 Envoy Proxy as dedicated DaemonSet: With a dedicated DaemonSet, Envoy and Cilium can have a separate life-cycle from each other. Now on by default for new installs.
    • :card_index_dividers: NodeSelector support for CiliumEnvoyConfig: Instead of being applied to all nodes, it's now possible to select the nodes to which a particular CiliumEnvoyConfig applies.
  • :guardswoman: Security

    • :signal_strength: Port Range support in Network Policies: This long-awaited feature has been implemented in Cilium (see the policy sketch after this list).
    • :clipboard: Network Policy Validation Status: kubectl describe cnp will be able to tell if the Cilium Network Policy is valid or invalid.
    • :no_entry: Control Cilium Network Policy Default Deny behavior: Policies usually enable default deny for the subject of the policies, but this can now be disabled on a per-policy basis.
    • :busts_in_silhouette: CIDRGroups support for Egress and Deny rules: Add support for matching CiliumCIDRGroups in Egress policy rules.
    • :floppy_disk: Load "default" Network Policies from Filesystem: In addition to reading policies from Kubernetes, Cilium can be configured to read policies locally.
    • :card_index_dividers: Support to Select Nodes as Target of Cilium Network Policies: With new ToNodes/FromNodes selectors, traffic can be allowed or denied based on the labels of the target Node in the cluster.
  • :sunrise: Day 2 Operations and Scale

    • :elf: New ELF Loader Logic: With this new loader logic, the median memory usage of Cilium was decreased by 24%.
    • :rocket: Improved DNS-based network policy performance: DNS-based network policies had up to 5x reduction in tail latency.
    • :spider_web: KVStoreMesh default option for ClusterMesh: Introduced in Cilium 1.14, and after a lot of adoption and feedback from the community, KVStoreMesh is now the default way to deploy ClusterMesh.
  • :artificial_satellite: Hubble & Observability

    • :speaking_head: CEL Filters Support: Hubble supports the Common Expression Language (CEL), allowing more complex filter conditions that cannot be expressed using the existing flow filters.
    • :bar_chart: Improved HTTP metrics: There are additional metrics to count HTTP requests and measure their duration (see the Helm values sketch after this list).
    • :straight_ruler: Improved BPF map pressure metrics: New metric to track the BPF map pressure metric for the Connection Tracking BPF map.
    • :eyes: Improvements for Egress Traffic Path Observability: Some metrics were added on this release to help troubleshooting Cilium Egress Routing.
    • :microscope: K8S Event Generation on Packet Drop: Hubble can now generate a Kubernetes event for a packet dropped from a pod, which can be verified with kubectl get events.
    • :card_index_dividers: Filtering Hubble flows by node labels: Filter Hubble flows observed on nodes matching the given label.
  • :houses: Community
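
A few hedged sketches may help illustrate the items above. First, Service Traffic Distribution: a minimal Service opting in via its spec rather than an annotation (the service name and selector are hypothetical):

```yaml
# Hypothetical Service using the Kubernetes 1.30 trafficDistribution field.
apiVersion: v1
kind: Service
metadata:
  name: echo-server
spec:
  selector:
    app: echo-server
  ports:
    - port: 8080
  # Replaces the older topology annotations: prefer topologically
  # closer (e.g. same-zone) endpoints when they are available.
  trafficDistribution: PreferClose
```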
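
Next, Gateway API GAMMA support: east-west traffic management means an HTTPRoute can attach to a Service instead of a Gateway. A minimal sketch, with hypothetical Service and backend names:

```yaml
# Hypothetical GAMMA route: traffic addressed to the echo-server Service
# is split between two in-cluster backend Services.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo-split
  namespace: default
spec:
  parentRefs:
    - group: ""        # core group: the parent is a Service, not a Gateway
      kind: Service
      name: echo-server
      port: 8080
  rules:
    - backendRefs:
        - name: echo-server-v1
          port: 8080
          weight: 90
        - name: echo-server-v2
          port: 8080
          weight: 10
```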
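
For the port-range support, a sketch of a CiliumNetworkPolicy that allows a whole TCP range via endPort (the labels are hypothetical; verify the field shape against the v1.16 CRD):

```yaml
# Hypothetical policy: allow egress to TCP ports 30000-32767.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-ephemeral-range
spec:
  endpointSelector:
    matchLabels:
      app: media-server
  egress:
    - toEndpoints:
        - matchLabels:
            app: transcoder
      toPorts:
        - ports:
            - port: "30000"   # start of the range (string, as in L4 rules)
              endPort: 32767  # end of the range (integer)
              protocol: TCP
```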
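
Finally, for the improved HTTP metrics, a hedged Helm values sketch enabling Hubble's HTTP metrics with label context (the option string follows the chart's documented metric syntax; adjust the context labels to taste):

```yaml
# Hypothetical values: the httpV2 plugin exports counters such as
# hubble_http_requests_total and histograms such as
# hubble_http_request_duration_seconds, labelled with the chosen context.
hubble:
  metrics:
    enabled:
      - dns:query
      - drop
      - "httpV2:labelsContext=source_namespace,destination_namespace"
```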

And finally, we would like to thank all of the contributors to Cilium who helped directly and indirectly with the project. The success of Cilium could not happen without all of you. :heart:

For a full summary of changes, see https://github.com/cilium/cilium/blob/v1.16.0/CHANGELOG.md.

Docker Manifests
cilium

quay.io/cilium/cilium:v1.16.0@​sha256:46ffa4ef3cf6d8885dcc4af5963b0683f7d59daa90d49ed9fb68d3b1627fe058
quay.io/cilium/cilium:stable@sha256:46ffa4ef3cf6d8885dcc4af5963b0683f7d59daa90d49ed9fb68d3b1627fe058

clustermesh-apiserver

quay.io/cilium/clustermesh-apiserver:v1.16.0@​sha256:a1597b7de97cfa03f1330e6b784df1721eb69494cd9efb0b3a6930680dfe7a8e
quay.io/cilium/clustermesh-apiserver:stable@sha256:a1597b7de97cfa03f1330e6b784df1721eb69494cd9efb0b3a6930680dfe7a8e

docker-plugin

quay.io/cilium/docker-plugin:v1.16.0@​sha256:024a17aa8ec70d42f0ac1a4407ad9f8fd1411aa85fd8019938af582e20522efe
quay.io/cilium/docker-plugin:stable@sha256:024a17aa8ec70d42f0ac1a4407ad9f8fd1411aa85fd8019938af582e20522efe

hubble-relay

quay.io/cilium/hubble-relay:v1.16.0@​sha256:33fca7776fc3d7b2abe08873319353806dc1c5e07e12011d7da4da05f836ce8d
quay.io/cilium/hubble-relay:stable@sha256:33fca7776fc3d7b2abe08873319353806dc1c5e07e12011d7da4da05f836ce8d

operator-alibabacloud

quay.io/cilium/operator-alibabacloud:v1.16.0@​sha256:d2d9f450f2fc650d74d4b3935f4c05736e61145b9c6927520ea52e1ebcf4f3ea
quay.io/cilium/operator-alibabacloud:stable@sha256:d2d9f450f2fc650d74d4b3935f4c05736e61145b9c6927520ea52e1ebcf4f3ea

operator-aws

quay.io/cilium/operator-aws:v1.16.0@​sha256:8dbe47a77ba8e1a5b111647a43db10c213d1c7dfc9f9aab5ef7279321ad21a2f
quay.io/cilium/operator-aws:stable@sha256:8dbe47a77ba8e1a5b111647a43db10c213d1c7dfc9f9aab5ef7279321ad21a2f

operator-azure

quay.io/cilium/operator-azure:v1.16.0@​sha256:dd7562e20bc72b55c65e2110eb98dca1dd2bbf6688b7d8cea2bc0453992c121d
quay.io/cilium/operator-azure:stable@sha256:dd7562e20bc72b55c65e2110eb98dca1dd2bbf6688b7d8cea2bc0453992c121d

operator-generic

quay.io/cilium/operator-generic:v1.16.0@​sha256:d6621c11c4e4943bf2998af7febe05be5ed6fdcf812b27ad4388f47022190316
quay.io/cilium/operator-generic:stable@sha256:d6621c11c4e4943bf2998af7febe05be5ed6fdcf812b27ad4388f47022190316

operator

quay.io/cilium/operator:v1.16.0@​sha256:6aaa05737f21993ff51abe0ffe7ea4be88d518aa05266c3482364dce65643488
quay.io/cilium/operator:stable@sha256:6aaa05737f21993ff51abe0ffe7ea4be88d518aa05266c3482364dce65643488

v1.15.8

Compare Source

Security Advisories

This release addresses the following security vulnerabilities:

  • https://github.com/cilium/cilium/security/advisories/GHSA-vwf8-q6fw-4wcm
  • https://github.com/cilium/cilium/security/advisories/GHSA-qcm3-7879-xcww
  • https://github.com/cilium/cilium/security/advisories/GHSA-q7w8-72mr-vpgw

Summary of Changes

Minor Changes:

Bugfixes:

  • add support for validation of stringToString values in ConfigMap (Backport PR #​33962, Upstream PR #​33779, @​alex-berger)
  • auth: Fix data race in Upsert (Backport PR #​34157, Upstream PR #​33905, @​chaunceyjiang)
  • auth: fix fatal error: concurrent map iteration and map write (Backport PR #​33809, Upstream PR #​33634, @​chaunceyjiang)
  • cert: Adding H2 Protocol Support when Get gRPC Config For Client (Backport PR #​33809, Upstream PR #​33616, @​mrproliu)
  • DNS Proxy: Allow SO_LINGER to be set to the socket to upstream (Backport PR #​33809, Upstream PR #​33592, @​gandro)
  • Fix an issue in updates to node addresses which may have caused missing NodePort frontend IP addresses. This may have affected NodePort/LoadBalancer services for users running with runtime device detection enabled when a node's IP addresses changed after Cilium had started. The node IP as defined in the Kubernetes Node object is now preferred when selecting the NodePort frontend IPs. (Backport PR #​33818, Upstream PR #​33629, @​joamaki)
  • Fix bug causing etcd upsertion/deletion events to be potentially missed during the initial synchronization, when Cilium operates in KVStore mode, or Cluster Mesh is enabled. (Backport PR #​34183, Upstream PR #​34091, @​giorio94)
  • Fix an issue in picking node IP addresses from the loopback device. This fixes a regression in v1.15 and v1.16 where VIPs assigned to the lo device were not considered by Cilium. Also fix spurious updates to node addresses to avoid unnecessary datapath reinitializations. (Backport PR #​34086, Upstream PR #​34012, @​joamaki)
  • Fix rare race condition afflicting clustermesh while stopping the retrieval of the remote cluster configuration, possibly causing a deadlock (Backport PR #​33809, Upstream PR #​33735, @​giorio94)
  • Fixes a race condition during agent startup that causes the k8s node label updates to not get propagated to the host endpoint. (Backport PR #​33663, Upstream PR #​33511, @​skmatti)
  • gateway-api: Add HTTP method condition in sortable routes (Backport PR #​34157, Upstream PR #​34109, @​sayboras)
  • gateway-api: Enqueue gateway for Reference Grant changes (Backport PR #​34157, Upstream PR #​34032, @​sayboras)
  • helm: remove duplicate metrics for Envoy pod (Backport PR #​34157, Upstream PR #​33803, @​mhofstetter)
  • lbipam: fixed bug in sharing key logic (Backport PR #​34157, Upstream PR #​34106, @​dylandreimerink)
  • pkg/metrics: fix data race warning on metrics init hook. (Backport PR #​33962, Upstream PR #​33823, @​tommyp1ckles)
  • Reduce conntrack lifetime for closing service connections. (Backport PR #​33962, Upstream PR #​33907, @​julianwiedmann)
  • Skip regenerating host endpoint on k8s node labels update if identity labels are unchanged (Backport PR #​33809, Upstream PR #​33306, @​skmatti)
  • The cilium agent will now recover from stale nodeID mappings which could occur in clusters with high node churn, possibly manifesting itself in dropped IPsec traffic. (Backport PR #​34157, Upstream PR #​33666, @​bimmlerd)

CI Changes:

Misc Changes:

Other Changes:

Docker Manifests
cilium

quay.io/cilium/cilium:v1.15.8@​sha256:3b5b0477f696502c449eaddff30019a7d399f077b7814bcafabc636829d194c7

clustermesh-apiserver

quay.io/cilium/clustermesh-apiserver:v1.15.8@​sha256:4c1f33aae2b76392b57e867820471b5472f0886f7358513d47ee80c09af15a0e

docker-plugin

quay.io/cilium/docker-plugin:v1.15.8@​sha256:15b1b6e83e1c0eea97df179660c1898661c1d0da5d431c68f98c702581e29310

hubble-relay

quay.io/cilium/hubble-relay:v1.15.8@​sha256:47e8a19f60d0d226ec3d2c675ec63908f1f2fb936a39897f2e3255b3bab01ad6

operator-alibabacloud

quay.io/cilium/operator-alibabacloud:v1.15.8@​sha256:388ef72febd719bc9d16d5ee47fe6f846f73f0d8a6f9586ada04cb39eb2962d1

operator-aws

quay.io/cilium/operator-aws:v1.15.8@​sha256:3807dd23c2b5f90489824ddd13dca6e84e714dc9eae44e5718acfe86c855b7a1

operator-azure

quay.io/cilium/operator-azure:v1.15.8@​sha256:c517db3d12fcf038a9a4a81b88027a19672078bf8c2fcd6b2563f3eff9514d21

operator-generic

quay.io/cilium/operator-generic:v1.15.8@​sha256:e77ae6fc8a978f98363cf74d3c883dfaa6454c6e23ec417a60952f29408e2f18

operator

quay.io/cilium/operator:v1.15.8@​sha256:e9cf35fe3dc86933ccf3fdfdb7620d218c50aaca5f14e4ba5f422460ea4cb23c

v1.15.7

Compare Source

Summary of Changes

We are pleased to release Cilium v1.15.7, which makes the load balancer class of the Clustermesh API server configurable and includes stability and bug fixes. Thanks to all contributors, reviewers, testers, and users!
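A hedged sketch of the newly configurable load balancer class in Helm values; the exact value path is assumed from the chart's clustermesh.apiserver.service block, and the class name is only an example:

```yaml
# Hypothetical values: expose the Clustermesh API server through a
# LoadBalancer Service claimed by Cilium's BGP control plane.
clustermesh:
  useAPIServer: true
  apiserver:
    service:
      type: LoadBalancer
      loadBalancerClass: io.cilium/bgp-control-plane  # example class
```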

Minor Changes:

Bugfixes:

CI Changes:

Misc Changes:


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • [ ] If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

bot-akira[bot] • May 15 '24 11:05

--- kubernetes/apps/kube-system/kube-vip/app Kustomization: flux-system/cluster-apps-kube-vip DaemonSet: kube-system/kube-vip

+++ kubernetes/apps/kube-system/kube-vip/app Kustomization: flux-system/cluster-apps-kube-vip DaemonSet: kube-system/kube-vip

@@ -57,13 +57,13 @@

         - name: vip_renewdeadline
           value: '10'
         - name: vip_retryperiod
           value: '2'
         - name: prometheus_server
           value: :2112
-        image: ghcr.io/kube-vip/kube-vip:v0.8.2
+        image: ghcr.io/kube-vip/kube-vip:v0.8.0
         imagePullPolicy: IfNotPresent
         name: kube-vip
         securityContext:
           capabilities:
             add:
             - NET_ADMIN
--- kubernetes/apps/networking/cloudflared/app Kustomization: flux-system/cloudflared HelmRelease: networking/cloudflared

+++ kubernetes/apps/networking/cloudflared/app Kustomization: flux-system/cloudflared HelmRelease: networking/cloudflared

@@ -50,13 +50,13 @@

               TUNNEL_METRICS: 0.0.0.0:2000
               TUNNEL_ORIGIN_ENABLE_HTTP2: true
               TUNNEL_POST_QUANTUM: true
               TUNNEL_TRANSPORT_PROTOCOL: quic
             image:
               repository: docker.io/cloudflare/cloudflared
-              tag: 2024.8.3@sha256:14d9c6b01b29d556569446b0cc5c9162dc129a92ce127afe27c3aae4534f8af1
+              tag: 2024.8.2@sha256:004f4b7b60bab652d478148c138843c24eae1feee4c58fddd435b9b79c953957
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/monitoring/goldilocks/app Kustomization: flux-system/cluster-apps-goldilocks HelmRelease: monitoring/goldilocks

+++ kubernetes/apps/monitoring/goldilocks/app Kustomization: flux-system/cluster-apps-goldilocks HelmRelease: monitoring/goldilocks

@@ -13,13 +13,13 @@

       chart: goldilocks
       interval: 5m
       sourceRef:
         kind: HelmRepository
         name: fairwinds
         namespace: flux-system
-      version: 9.0.0
+      version: 8.0.2
   interval: 5m
   values:
     dashboard:
       enabled: true
       ingress:
         annotations:
--- kubernetes/apps/kube-system/nvidia-device-plugin/app Kustomization: flux-system/cluster-apps-nvidia-plugin HelmRelease: kube-system/nvidia-device-plugin

+++ kubernetes/apps/kube-system/nvidia-device-plugin/app Kustomization: flux-system/cluster-apps-nvidia-plugin HelmRelease: kube-system/nvidia-device-plugin

@@ -13,16 +13,16 @@

       chart: nvidia-device-plugin
       interval: 15m
       sourceRef:
         kind: HelmRepository
         name: nvidia-device-plugin
         namespace: flux-system
-      version: 0.16.2
+      version: 0.15.0
   interval: 15m
   values:
     image:
       repository: nvcr.io/nvidia/k8s-device-plugin
-      tag: v0.16.2
+      tag: v0.15.0
     nodeSelector:
       feature.node.kubernetes.io/custom-nvidia-gpu: 'true'
     runtimeClassName: nvidia
 
--- kubernetes/apps/kube-system/reloader/app Kustomization: flux-system/cluster-apps-reloader HelmRelease: kube-system/reloader

+++ kubernetes/apps/kube-system/reloader/app Kustomization: flux-system/cluster-apps-reloader HelmRelease: kube-system/reloader

@@ -12,13 +12,13 @@

     spec:
       chart: reloader
       sourceRef:
         kind: HelmRepository
         name: stakater
         namespace: flux-system
-      version: 1.0.121
+      version: 1.0.115
   install:
     createNamespace: true
     remediation:
       retries: 3
   interval: 15m
   maxHistory: 3
--- kubernetes/apps/cert-manager/cert-manager/app Kustomization: flux-system/cluster-apps-cert-manager HelmRelease: cert-manager/cert-manager

+++ kubernetes/apps/cert-manager/cert-manager/app Kustomization: flux-system/cluster-apps-cert-manager HelmRelease: cert-manager/cert-manager

@@ -12,13 +12,13 @@

     spec:
       chart: cert-manager
       sourceRef:
         kind: HelmRepository
         name: jetstack
         namespace: flux-system
-      version: v1.15.3
+      version: v1.15.2
   install:
     createNamespace: true
     remediation:
       retries: 3
   interval: 5m
   upgrade:
--- kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium

+++ kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium

@@ -13,13 +13,13 @@

     spec:
       chart: cilium
       sourceRef:
         kind: HelmRepository
         name: cilium
         namespace: flux-system
-      version: 1.15.7
+      version: 1.16.1
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
--- kubernetes/apps/networking/echo-server/app Kustomization: flux-system/echo-server HelmRelease: networking/echo-server

+++ kubernetes/apps/networking/echo-server/app Kustomization: flux-system/echo-server HelmRelease: networking/echo-server

@@ -34,13 +34,13 @@

               HTTP_PORT: 8080
               LOG_IGNORE_PATH: /healthz
               LOG_WITHOUT_NEWLINE: true
               PROMETHEUS_ENABLED: true
             image:
               repository: ghcr.io/mendhak/http-https-echo
-              tag: 34
+              tag: 33
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/kube-system/nvidia-device-plugin/app Kustomization: flux-system/cluster-apps-nvidia HelmRelease: kube-system/nvidia-device-plugin

+++ kubernetes/apps/kube-system/nvidia-device-plugin/app Kustomization: flux-system/cluster-apps-nvidia HelmRelease: kube-system/nvidia-device-plugin

@@ -13,16 +13,16 @@

       chart: nvidia-device-plugin
       interval: 15m
       sourceRef:
         kind: HelmRepository
         name: nvidia-device-plugin
         namespace: flux-system
-      version: 0.16.2
+      version: 0.15.0
   interval: 15m
   values:
     image:
       repository: nvcr.io/nvidia/k8s-device-plugin
-      tag: v0.16.2
+      tag: v0.15.0
     nodeSelector:
       feature.node.kubernetes.io/custom-nvidia-gpu: 'true'
     runtimeClassName: nvidia
 
--- kubernetes/apps/default/nitter/app Kustomization: flux-system/nitter HelmRelease: default/nitter

+++ kubernetes/apps/default/nitter/app Kustomization: flux-system/nitter HelmRelease: default/nitter

@@ -1,108 +0,0 @@

----
-apiVersion: helm.toolkit.fluxcd.io/v2beta2
-kind: HelmRelease
-metadata:
-  labels:
-    app.kubernetes.io/name: nitter
-    kustomize.toolkit.fluxcd.io/name: nitter
-    kustomize.toolkit.fluxcd.io/namespace: flux-system
-  name: nitter
-  namespace: default
-spec:
-  chart:
-    spec:
-      chart: app-template
-      sourceRef:
-        kind: HelmRepository
-        name: bjw-s-charts
-        namespace: flux-system
-      version: 3.2.1
-  install:
-    createNamespace: true
-    remediation:
-      retries: 5
-  interval: 15m
-  upgrade:
-    remediation:
-      retries: 5
-  values:
-    controllers:
-      nitter:
-        annotations:
-          reloader.stakater.com/auto: 'true'
-        containers:
-          app:
-            image:
-              repository: registry.skysolutions.fi/library/nitter
-              tag: guest-accounts
-            probes:
-              liveness:
-                custom: true
-                enabled: false
-                spec:
-                  failureThreshold: 3
-                  httpGet:
-                    path: /settings
-                    port: 8080
-                  initialDelaySeconds: 0
-                  periodSeconds: 10
-                  timeoutSeconds: 1
-              readiness:
-                custom: true
-                enabled: false
-                spec:
-                  failureThreshold: 3
-                  httpGet:
-                    path: /settings
-                    port: 8080
-                  initialDelaySeconds: 0
-                  periodSeconds: 10
-                  timeoutSeconds: 1
-              startup:
-                enabled: false
-            resources:
-              limits:
-                memory: 250Mi
-              requests:
-                memory: 50Mi
-        replicas: 1
-        strategy: RollingUpdate
-    defaultPodOptions:
-      topologySpreadConstraints:
-      - labelSelector:
-          matchLabels:
-            app.kubernetes.io/name: nitter
-        maxSkew: 1
-        topologyKey: kubernetes.io/hostname
-        whenUnsatisfiable: DoNotSchedule
-    ingress:
-      app:
-        annotations:
-          hajimari.io/icon: twitter
-        className: internal
-        hosts:
-        - host: nitter...PLACEHOLDER..
-          paths:
-          - path: /
-            pathType: Prefix
-            service:
-              identifier: app
-              port: http
-      tls:
-      - hosts:
-        - nitter...PLACEHOLDER..
-    persistence:
-      config:
-        enabled: true
-        mountPath: /src/nitter.conf
-        name: nitter
-        readOnly: false
-        subPath: config.ini
-        type: configMap
-    service:
-      app:
-        controller: nitter
-        ports:
-          http:
-            port: 8080
-
--- kubernetes/apps/default/nitter/app Kustomization: flux-system/nitter ExternalSecret: default/gatus

+++ kubernetes/apps/default/nitter/app Kustomization: flux-system/nitter ExternalSecret: default/gatus

@@ -1,25 +0,0 @@

----
-apiVersion: external-secrets.io/v1beta1
-kind: ExternalSecret
-metadata:
-  labels:
-    app.kubernetes.io/name: nitter
-    kustomize.toolkit.fluxcd.io/name: nitter
-    kustomize.toolkit.fluxcd.io/namespace: flux-system
-  name: gatus
-  namespace: default
-spec:
-  dataFrom:
-  - extract:
-      key: gatus
-  secretStoreRef:
-    kind: ClusterSecretStore
-    name: onepassword-connect
-  target:
-    name: gatus-secret
-    template:
-      data:
-        CUSTOM_PUSHOVER_TOKEN: '{{ .GATUS_PUSHOVER_TOKEN }}'
-        CUSTOM_PUSHOVER_USER_KEY: '{{ .PUSHOVER_USER_KEY }}'
-      engineVersion: v2
-
--- kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/nitter

+++ kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/nitter

@@ -1,32 +0,0 @@

----
-apiVersion: kustomize.toolkit.fluxcd.io/v1
-kind: Kustomization
-metadata:
-  labels:
-    kustomize.toolkit.fluxcd.io/name: cluster-apps
-    kustomize.toolkit.fluxcd.io/namespace: flux-system
-  name: nitter
-  namespace: flux-system
-spec:
-  commonMetadata:
-    labels:
-      app.kubernetes.io/name: nitter
-  decryption:
-    provider: sops
-    secretRef:
-      name: sops-age
-  interval: 10m
-  path: ./kubernetes/apps/default/nitter/app
-  postBuild:
-    substituteFrom:
-    - kind: ConfigMap
-      name: cluster-settings
-    - kind: Secret
-      name: cluster-secrets
-  prune: true
-  sourceRef:
-    kind: GitRepository
-    name: home-kubernetes
-  targetNamespace: default
-  wait: false
-
--- kubernetes/apps/media/sonarr/app Kustomization: flux-system/cluster-apps-sonarr HelmRelease: media/sonarr

+++ kubernetes/apps/media/sonarr/app Kustomization: flux-system/cluster-apps-sonarr HelmRelease: media/sonarr

@@ -40,13 +40,13 @@

     - secretRef:
         name: sonarr
     global:
       nameOverride: sonarr
     image:
       repository: ghcr.io/onedr0p/sonarr-develop
-      tag: 4.0.8.2223@sha256:f4d8a1203d2f0cf4f1ab69b9682896ef1e73eaf04021edb4ce2a479af961e420
+      tag: 4.0.6.1820@sha256:3418fb8cd12b30fd70c026531e14f5a1222c7b4499d9560aad9f31ddf064f4fb
     ingress:
       main:
         annotations:
           gatus.io/enabled: 'true'
           gethomepage.dev/description: TV Downloads
           gethomepage.dev/enabled: 'true'
--- kubernetes/apps/media/radarr/app Kustomization: flux-system/cluster-apps-radarr HelmRelease: media/radarr

+++ kubernetes/apps/media/radarr/app Kustomization: flux-system/cluster-apps-radarr HelmRelease: media/radarr

@@ -34,13 +34,13 @@

       RADARR__INSTANCE_NAME: Radarr
       RADARR__LOG_LEVEL: info
       RADARR__PORT: 80
       TZ: Europe/Prague
     image:
       repository: ghcr.io/onedr0p/radarr-develop
-      tag: 5.10.0.9090@sha256:3802c38f08a3350637d6d9ba10a35a89b791afd95c2e4e7e7402e69c0910b50c
+      tag: 5.8.3.8933@sha256:da6094f6cc4dc95af194612a8a4d7db4fc27ff4a6e5748c2e6d5dd7be9ed69a7
     ingress:
       main:
         annotations:
           gatus.io/enabled: 'true'
           gethomepage.dev/description: Movie Downloads
           gethomepage.dev/enabled: 'true'
--- kubernetes/apps/media/plex/app Kustomization: flux-system/plex HelmRelease: media/plex

+++ kubernetes/apps/media/plex/app Kustomization: flux-system/plex HelmRelease: media/plex

@@ -34,13 +34,13 @@

               ADVERTISE_IP: https://plex...PLACEHOLDER..,http://192.168.69.101:32400
               NVIDIA_DRIVER_CAPABILITIES: all
               NVIDIA_VISIBLE_DEVICES: all
               TZ: Europe/Prague
             image:
               repository: ghcr.io/onedr0p/plex-beta
-              tag: 1.41.0.8911-1bd569c5f@sha256:970272244b9c638b596e88591516d091be007b6a35e81236d9e95c5c9c24b681
+              tag: 1.40.5.8854-f36c552fd@sha256:483bb8b03110e6874b2eea984c15039b7423d01c9fd5b436807aa14fe46ba0f2
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/media/sabnzbd/app Kustomization: flux-system/cluster-apps-sabnzbd HelmRelease: media/sabnzbd

+++ kubernetes/apps/media/sabnzbd/app Kustomization: flux-system/cluster-apps-sabnzbd HelmRelease: media/sabnzbd

@@ -35,13 +35,13 @@

               SABNZBD__HOST_WHITELIST_ENTRIES: sabnzbd, sabnzbd.media, sabnzbd.media.svc,
                 sabnzbd.media.svc.cluster, sabnzbd.media.svc.cluster.local, sabnzbd...PLACEHOLDER..
               SABNZBD__PORT: 80
               TZ: Europe/Prague
             image:
               repository: ghcr.io/onedr0p/sabnzbd
-              tag: 4.3.3@sha256:c8a03bbe260ba1646fd0e58f9b45dfda8f0a0c9f1f9b5f6f92440cd689cdf353
+              tag: 4.3.2@sha256:b23a4ecc680470e88fc04a6dc27097f4da68adcf9d1ad0d6407bab7010fefade
             probes:
               liveness:
                 custom: true
                 enabled: true
                 spec:
                   failureThreshold: 3
--- kubernetes/apps/networking/ingress-nginx/internal Kustomization: flux-system/ingress-nginx-internal HelmRelease: networking/ingress-nginx-internal

+++ kubernetes/apps/networking/ingress-nginx/internal Kustomization: flux-system/ingress-nginx-internal HelmRelease: networking/ingress-nginx-internal

@@ -13,13 +13,13 @@

     spec:
       chart: ingress-nginx
       sourceRef:
         kind: HelmRepository
         name: ingress-nginx
         namespace: flux-system
-      version: 4.11.2
+      version: 4.11.1
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true
@@ -77,9 +77,9 @@

       - name: TEMPLATE_NAME
         value: lost-in-space
       - name: SHOW_DETAILS
         value: 'false'
       image:
         repository: ghcr.io/tarampampam/error-pages
-        tag: 3.3.0
+        tag: 2.27.0
     fullnameOverride: ingress-nginx-internal
 
--- kubernetes/apps/networking/ingress-nginx/external Kustomization: flux-system/ingress-nginx-external HelmRelease: networking/ingress-nginx-external

+++ kubernetes/apps/networking/ingress-nginx/external Kustomization: flux-system/ingress-nginx-external HelmRelease: networking/ingress-nginx-external

@@ -13,13 +13,13 @@

     spec:
       chart: ingress-nginx
       sourceRef:
         kind: HelmRepository
         name: ingress-nginx
         namespace: flux-system
-      version: 4.11.2
+      version: 4.11.1
   dependsOn:
   - name: cloudflared
     namespace: networking
   install:
     remediation:
       retries: 3
--- kubernetes/apps/monitoring/gatus/app Kustomization: flux-system/gatus HelmRelease: monitoring/gatus

+++ kubernetes/apps/monitoring/gatus/app Kustomization: flux-system/gatus HelmRelease: monitoring/gatus

@@ -86,13 +86,13 @@

               METHOD: WATCH
               NAMESPACE: ALL
               RESOURCE: both
               UNIQUE_FILENAMES: true
             image:
               repository: ghcr.io/kiwigrid/k8s-sidecar
-              tag: 1.27.5@sha256:1fc88232e223a6977c626510372a74ca1f73af073e3c361719ccf02f223c8a12
+              tag: 1.27.4@sha256:f6ed71d0f9f1175df8c4b8c674b339a74785384d25fdad21b3c3dc0554109286
             resources:
               limits:
                 memory: 256Mi
               requests:
                 cpu: 10m
             restartPolicy: Always

bot-akira[bot] • May 15 '24 11:05

--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard

@@ -4703,27 +4703,27 @@

           ],
           "spaceLength": 10,
           "stack": false,
           "steppedLine": false,
           "targets": [
             {
-              "expr": "sum(rate(cilium_policy_l7_denied_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m]))",
+              "expr": "sum(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"denied\"}[1m]))",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "denied",
               "refId": "A"
             },
             {
-              "expr": "sum(rate(cilium_policy_l7_forwarded_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m]))",
+              "expr": "sum(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"forwarded\"}[1m]))",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "forwarded",
               "refId": "B"
             },
             {
-              "expr": "sum(rate(cilium_policy_l7_received_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m]))",
+              "expr": "sum(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"received\"}[1m]))",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "received",
               "refId": "C"
             }
           ],
@@ -4869,13 +4869,13 @@

           }
         },
         {
           "aliasColors": {
             "Max per node processingTime": "#e24d42",
             "Max per node upstreamTime": "#58140c",
-            "avg(cilium_policy_l7_parse_errors_total{pod=~\"cilium.*\"})": "#bf1b00",
+            "avg(cilium_policy_l7_total{pod=~\"cilium.*\", rule=\"parse_errors\"})": "#bf1b00",
             "parse errors": "#bf1b00"
           },
           "bars": true,
           "dashLength": 10,
           "dashes": false,
           "datasource": {
@@ -4928,13 +4928,13 @@

             },
             {
               "alias": "Max per node upstreamTime",
               "yaxis": 2
             },
             {
-              "alias": "avg(cilium_policy_l7_parse_errors_total{pod=~\"cilium.*\"})",
+              "alias": "avg(cilium_policy_l7_total{pod=~\"cilium.*\", rule=\"parse_errors\"})",
               "yaxis": 2
             },
             {
               "alias": "parse errors",
               "yaxis": 2
             }
@@ -4949,13 +4949,13 @@

               "interval": "",
               "intervalFactor": 1,
               "legendFormat": "{{scope}}",
               "refId": "A"
             },
             {
-              "expr": "avg(cilium_policy_l7_parse_errors_total{k8s_app=\"cilium\", pod=~\"$pod\"}) by (pod)",
+              "expr": "avg(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"parse_errors\"}) by (pod)",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "parse errors",
               "refId": "B"
             }
           ],
@@ -5307,13 +5307,13 @@

               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "Max {{scope}}",
               "refId": "B"
             },
             {
-              "expr": "max(rate(cilium_policy_l7_parse_errors_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m])) by (pod)",
+              "expr": "max(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"parse_errors\"}[1m])) by (pod)",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "parse errors",
               "refId": "A"
             }
           ],
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

@@ -7,20 +7,18 @@

 data:
   identity-allocation-mode: crd
   identity-heartbeat-timeout: 30m0s
   identity-gc-interval: 15m0s
   cilium-endpoint-gc-interval: 5m0s
   nodes-gc-interval: 5m0s
-  skip-cnp-status-startup-clean: 'false'
   debug: 'false'
   debug-verbose: ''
   enable-policy: default
   policy-cidr-match-mode: ''
   prometheus-serve-addr: :9962
   controller-group-metrics: write-cni-file sync-host-ips sync-lb-maps-with-k8s-services
-  proxy-prometheus-port: '9964'
   operator-prometheus-serve-addr: :9963
   enable-metrics: 'true'
   enable-ipv4: 'true'
   enable-ipv6: 'false'
   custom-cni-conf: 'false'
   enable-bpf-clock-probe: 'false'
@@ -28,58 +26,69 @@

   monitor-aggregation-interval: 5s
   monitor-aggregation-flags: all
   bpf-map-dynamic-size-ratio: '0.0025'
   bpf-policy-map-max: '16384'
   bpf-lb-map-max: '65536'
   bpf-lb-external-clusterip: 'false'
+  bpf-events-drop-enabled: 'true'
+  bpf-events-policy-verdict-enabled: 'true'
+  bpf-events-trace-enabled: 'true'
   preallocate-bpf-maps: 'false'
-  sidecar-istio-proxy-image: cilium/istio_proxy
   cluster-name: home-kubernetes
   cluster-id: '1'
   routing-mode: native
   service-no-backend-response: reject
   enable-l7-proxy: 'true'
   enable-ipv4-masquerade: 'true'
   enable-ipv4-big-tcp: 'false'
   enable-ipv6-big-tcp: 'false'
   enable-ipv6-masquerade: 'true'
+  enable-tcx: 'true'
+  datapath-mode: veth
   enable-bpf-masquerade: 'false'
   enable-masquerade-to-route-source: 'false'
   enable-xt-socket-fallback: 'true'
   install-no-conntrack-iptables-rules: 'false'
   auto-direct-node-routes: 'true'
+  direct-routing-skip-unreachable: 'false'
   enable-local-redirect-policy: 'true'
   ipv4-native-routing-cidr: 10.69.0.0/16
   devices: eno+ enp6s+ bond+
+  enable-runtime-device-detection: 'true'
   kube-proxy-replacement: 'true'
   kube-proxy-replacement-healthz-bind-address: 0.0.0.0:10256
   bpf-lb-sock: 'false'
+  bpf-lb-sock-terminate-pod-connections: 'false'
+  nodeport-addresses: ''
   enable-health-check-nodeport: 'true'
   enable-health-check-loadbalancer-ip: 'false'
   node-port-bind-protection: 'true'
   enable-auto-protect-node-port-range: 'true'
   bpf-lb-mode: dsr
   bpf-lb-algorithm: maglev
   bpf-lb-acceleration: disabled
   enable-svc-source-range-check: 'true'
   enable-l2-neigh-discovery: 'true'
   arping-refresh-period: 30s
+  k8s-require-ipv4-pod-cidr: 'false'
+  k8s-require-ipv6-pod-cidr: 'false'
   enable-endpoint-routes: 'true'
   enable-k8s-networkpolicy: 'true'
   write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
   cni-exclusive: 'true'
   cni-log-file: /var/run/cilium/cilium-cni.log
   enable-endpoint-health-checking: 'true'
   enable-health-checking: 'true'
   enable-well-known-identities: 'false'
-  enable-remote-node-identity: 'true'
+  enable-node-selector-labels: 'false'
   synchronize-k8s-nodes: 'true'
   operator-api-serve-addr: 127.0.0.1:9234
   enable-hubble: 'true'
   hubble-socket-path: /var/run/cilium/hubble.sock
   hubble-metrics-server: :9965
+  hubble-metrics-server-enable-tls: 'false'
   hubble-metrics: dns:query drop tcp flow port-distribution icmp http
   enable-hubble-open-metrics: 'false'
   hubble-export-file-max-size-mb: '10'
   hubble-export-file-max-backups: '5'
   hubble-listen-address: :4244
   hubble-disable-tls: 'false'
@@ -106,12 +115,13 @@

   k8s-client-burst: '20'
   remove-cilium-node-taints: 'true'
   set-cilium-node-taints: 'true'
   set-cilium-is-up-condition: 'true'
   unmanaged-pod-watcher-interval: '15'
   dnsproxy-enable-transparent-mode: 'true'
+  dnsproxy-socket-linger-timeout: '10'
   tofqdns-dns-reject-response-code: refused
   tofqdns-enable-dns-compression: 'true'
   tofqdns-endpoint-max-ip-per-hostname: '50'
   tofqdns-idle-connection-grace-period: 0s
   tofqdns-max-deferred-connection-deletes: '10000'
   tofqdns-proxy-response-max-delay: 100ms
@@ -123,9 +133,15 @@

   proxy-xff-num-trusted-hops-ingress: '0'
   proxy-xff-num-trusted-hops-egress: '0'
   proxy-connect-timeout: '2'
   proxy-max-requests-per-connection: '0'
   proxy-max-connection-duration-seconds: '0'
   proxy-idle-timeout-seconds: '60'
-  external-envoy-proxy: 'false'
+  external-envoy-proxy: 'true'
+  envoy-base-id: '0'
+  envoy-keep-cap-netbindservice: 'false'
   max-connected-clusters: '255'
+  clustermesh-enable-endpoint-sync: 'false'
+  clustermesh-enable-mcs-api: 'false'
+  nat-map-stats-entries: '32'
+  nat-map-stats-interval: 30s
 
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard

@@ -11,17 +11,30 @@

     grafana_dashboard: '1'
   annotations:
     grafana_folder: Cilium
 data:
   cilium-operator-dashboard.json: |
     {
+      "__inputs": [
+        {
+          "name": "DS_PROMETHEUS",
+          "label": "prometheus",
+          "description": "",
+          "type": "datasource",
+          "pluginId": "prometheus",
+          "pluginName": "Prometheus"
+        }
+      ],
       "annotations": {
         "list": [
           {
             "builtIn": 1,
-            "datasource": "-- Grafana --",
+            "datasource": {
+              "type": "datasource",
+              "uid": "grafana"
+            },
             "enable": true,
             "hide": true,
             "iconColor": "rgba(0, 211, 255, 1)",
             "name": "Annotations & Alerts",
             "type": "dashboard"
           }
@@ -37,13 +50,16 @@

           "aliasColors": {
             "avg": "#cffaff"
           },
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -163,13 +179,16 @@

           "aliasColors": {
             "MAX_resident_memory_bytes_max": "#e5ac0e"
           },
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -293,13 +312,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -390,13 +412,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -487,13 +512,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -584,13 +612,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -681,13 +712,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -778,13 +812,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -875,13 +912,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config

@@ -6,9 +6,9 @@

   namespace: kube-system
 data:
   config.yaml: "cluster-name: home-kubernetes\npeer-service: \"hubble-peer.kube-system.svc.cluster.local:443\"\
     \nlisten-address: :4245\ngops: true\ngops-port: \"9893\"\ndial-timeout: \nretry-timeout:\
     \ \nsort-buffer-len-max: \nsort-buffer-drain-timeout: \ntls-hubble-client-cert-file:\
     \ /var/lib/hubble-relay/tls/client.crt\ntls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key\n\
-    tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt\ndisable-server-tls:\
-    \ true\n"
+    tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt\n\n\
+    disable-server-tls: true\n"
 
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-dashboard

@@ -9,3256 +9,1059 @@

     app.kubernetes.io/name: hubble
     app.kubernetes.io/part-of: cilium
     grafana_dashboard: '1'
   annotations:
     grafana_folder: Cilium
 data:
-  hubble-dashboard.json: |
-    {
-      "annotations": {
-        "list": [
-          {
-            "builtIn": 1,
-            "datasource": "-- Grafana --",
-            "enable": true,
-            "hide": true,
-            "iconColor": "rgba(0, 211, 255, 1)",
-            "name": "Annotations & Alerts",
-            "type": "dashboard"
-          }
-        ]
-      },
-      "editable": true,
-      "gnetId": null,
-      "graphTooltip": 0,
-      "id": 3,
-      "links": [],
-      "panels": [
-        {
-          "collapsed": false,
-          "gridPos": {
-            "h": 1,
-            "w": 24,
-            "x": 0,
-            "y": 0
-          },
-          "id": 14,
-          "panels": [],
-          "title": "General Processing",
-          "type": "row"
-        },
-        {
-          "aliasColors": {},
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
-          "datasource": "prometheus",
-          "fill": 1,
-          "gridPos": {
-            "h": 5,
-            "w": 12,
-            "x": 0,
-            "y": 1
-          },
-          "id": 12,
-          "legend": {
-            "avg": false,
-            "current": false,
-            "max": false,
-            "min": false,
-            "show": true,
-            "total": false,
-            "values": false
-          },
-          "lines": true,
-          "linewidth": 1,
-          "links": [],
-          "nullPointMode": "null",
-          "options": {},
-          "percentage": false,
-          "pointradius": 2,
-          "points": false,
-          "renderer": "flot",
-          "seriesOverrides": [
-            {
-              "alias": "max",
-              "fillBelowTo": "avg",
-              "lines": false
-            },
-            {
-              "alias": "avg",
-              "fill": 0,
-              "fillBelowTo": "min"
-            },
-            {
-              "alias": "min",
-              "lines": false
-            }
-          ],
-          "spaceLength": 10,
-          "stack": false,
-          "steppedLine": false,
-          "targets": [
-            {
-              "expr": "avg(sum(rate(hubble_flows_processed_total[1m])) by (pod))",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "avg",
-              "refId": "A"
-            },
-            {
-              "expr": "min(sum(rate(hubble_flows_processed_total[1m])) by (pod))",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "min",
-              "refId": "B"
-            },
-            {
-              "expr": "max(sum(rate(hubble_flows_processed_total[1m])) by (pod))",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "max",
-              "refId": "C"
-            }
-          ],
-          "thresholds": [],
-          "timeFrom": null,
-          "timeRegions": [],
-          "timeShift": null,
-          "title": "Flows processed Per Node",
-          "tooltip": {
-            "shared": true,
-            "sort": 1,
-            "value_type": "individual"
-          },
-          "type": "graph",
-          "xaxis": {
-            "buckets": null,
-            "mode": "time",
-            "name": null,
-            "show": true,
-            "values": []
-          },
-          "yaxes": [
-            {
-              "format": "ops",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            },
-            {
-              "format": "short",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            }
-          ],
-          "yaxis": {
-            "align": false,
-            "alignLevel": null
-          }
-        },
-        {
-          "aliasColors": {},
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
-          "datasource": "prometheus",
-          "fill": 1,
-          "gridPos": {
-            "h": 5,
-            "w": 12,
-            "x": 12,
-            "y": 1
-          },
-          "id": 32,
-          "legend": {
-            "avg": false,
-            "current": false,
-            "max": false,
-            "min": false,
-            "show": true,
-            "total": false,
-            "values": false
-          },
-          "lines": true,
-          "linewidth": 1,
-          "links": [],
-          "nullPointMode": "null",
-          "options": {},
-          "percentage": false,
-          "pointradius": 2,
-          "points": false,
-          "renderer": "flot",
-          "seriesOverrides": [],
-          "spaceLength": 10,
-          "stack": true,
-          "steppedLine": false,
-          "targets": [
-            {
-              "expr": "sum(rate(hubble_flows_processed_total[1m])) by (pod, type)",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "{{type}}",
-              "refId": "A"
-            }
-          ],
-          "thresholds": [],
-          "timeFrom": null,
-          "timeRegions": [],
-          "timeShift": null,
-          "title": "Flows Types",
-          "tooltip": {
-            "shared": true,
-            "sort": 2,
-            "value_type": "individual"
-          },
-          "type": "graph",
-          "xaxis": {
-            "buckets": null,
-            "mode": "time",
-            "name": null,
-            "show": true,
-            "values": []
-          },
-          "yaxes": [
-            {
-              "format": "ops",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            },
-            {
-              "format": "short",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            }
-          ],
-          "yaxis": {
-            "align": false,
-            "alignLevel": null
-          }
-        },
-        {
-          "aliasColors": {},
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
-          "datasource": "prometheus",
-          "fill": 1,
-          "gridPos": {
-            "h": 5,
-            "w": 12,
-            "x": 0,
-            "y": 6
-          },
-          "id": 59,
-          "legend": {
-            "avg": false,
-            "current": false,
-            "max": false,
-            "min": false,
-            "show": true,
-            "total": false,
-            "values": false
-          },
-          "lines": true,
-          "linewidth": 1,
-          "links": [],
-          "nullPointMode": "null",
-          "options": {},
-          "percentage": false,
-          "pointradius": 2,
-          "points": false,
-          "renderer": "flot",
-          "seriesOverrides": [],
-          "spaceLength": 10,
-          "stack": true,
-          "steppedLine": false,
-          "targets": [
-            {
-              "expr": "sum(rate(hubble_flows_processed_total{type=\"L7\"}[1m])) by (pod, subtype)",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "{{subtype}}",
-              "refId": "A"
-            }
-          ],
-          "thresholds": [],
-          "timeFrom": null,
-          "timeRegions": [],
-          "timeShift": null,
-          "title": "L7 Flow Distribution",
-          "tooltip": {
-            "shared": true,
-            "sort": 2,
-            "value_type": "individual"
-          },
-          "type": "graph",
-          "xaxis": {
-            "buckets": null,
-            "mode": "time",
-            "name": null,
-            "show": true,
-            "values": []
-          },
-          "yaxes": [
-            {
-              "format": "ops",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            },
-            {
-              "format": "short",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            }
-          ],
-          "yaxis": {
-            "align": false,
-            "alignLevel": null
-          }
-        },
-        {
-          "aliasColors": {},
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
-          "datasource": "prometheus",
-          "fill": 1,
-          "gridPos": {
-            "h": 5,
-            "w": 12,
-            "x": 12,
-            "y": 6
-          },
-          "id": 60,
-          "legend": {
-            "avg": false,
-            "current": false,
-            "max": false,
-            "min": false,
-            "show": true,
-            "total": false,
-            "values": false
-          },
-          "lines": true,
-          "linewidth": 1,
-          "links": [],
-          "nullPointMode": "null",
-          "options": {},
-          "percentage": false,
-          "pointradius": 2,
-          "points": false,
-          "renderer": "flot",
-          "seriesOverrides": [],
-          "spaceLength": 10,
-          "stack": true,
-          "steppedLine": false,
-          "targets": [
-            {
-              "expr": "sum(rate(hubble_flows_processed_total{type=\"Trace\"}[1m])) by (pod, subtype)",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "{{subtype}}",
-              "refId": "A"
-            }
-          ],
-          "thresholds": [],
-          "timeFrom": null,
-          "timeRegions": [],
-          "timeShift": null,
-          "title": "Trace Flow Distribution",
-          "tooltip": {
[Diff truncated by flux-local]
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-l7-http-metrics-by-workload

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-l7-http-metrics-by-workload

@@ -11,13 +11,22 @@

     grafana_dashboard: '1'
   annotations:
     grafana_folder: Cilium
 data:
   hubble-l7-http-metrics-by-workload.json: |
     {
-      "__inputs": [],
+      "__inputs": [
+        {
+          "name": "DS_PROMETHEUS",
+          "label": "prometheus",
+          "description": "",
+          "type": "datasource",
+          "pluginId": "prometheus",
+          "pluginName": "Prometheus"
+        }
+      ],
       "__elements": {},
       "__requires": [
         {
           "type": "grafana",
           "id": "grafana",
           "name": "Grafana",
--- HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium

+++ HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium

@@ -106,14 +106,12 @@

   verbs:
   - get
   - update
 - apiGroups:
   - cilium.io
   resources:
-  - ciliumnetworkpolicies/status
-  - ciliumclusterwidenetworkpolicies/status
   - ciliumendpoints/status
   - ciliumendpoints
   - ciliuml2announcementpolicies/status
   - ciliumbgpnodeconfigs/status
   verbs:
   - patch
--- HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

+++ HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

@@ -170,12 +170,13 @@

   - ciliumpodippools.cilium.io
 - apiGroups:
   - cilium.io
   resources:
   - ciliumloadbalancerippools
   - ciliumpodippools
+  - ciliumbgppeeringpolicies
   - ciliumbgpclusterconfigs
   - ciliumbgpnodeconfigoverrides
   verbs:
   - get
   - list
   - watch
--- HelmRelease: kube-system/cilium Service: kube-system/cilium-agent

+++ HelmRelease: kube-system/cilium Service: kube-system/cilium-agent

@@ -15,11 +15,7 @@

     k8s-app: cilium
   ports:
   - name: metrics
     port: 9962
     protocol: TCP
     targetPort: prometheus
-  - name: envoy-metrics
-    port: 9964
-    protocol: TCP
-    targetPort: envoy-metrics
 
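The `envoy-metrics` port disappears from the agent Service because this chart version moves Envoy into its own `cilium-envoy` DaemonSet (rendered further down), which exposes :9964 itself. A hedged values sketch for keeping those metrics scraped, assuming the chart's usual toggle layout:

```yaml
# Assumption: key names follow the cilium chart's envoy.* section; verify
# against the chart's values.yaml before relying on them.
envoy:
  prometheus:
    enabled: true        # serve /metrics on :9964 from the cilium-envoy pods
    serviceMonitor:
      enabled: true      # assumed toggle for a Prometheus Operator ServiceMonitor
```
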
--- HelmRelease: kube-system/cilium Service: kube-system/hubble-relay

+++ HelmRelease: kube-system/cilium Service: kube-system/hubble-relay

@@ -12,8 +12,8 @@

   type: ClusterIP
   selector:
     k8s-app: hubble-relay
   ports:
   - protocol: TCP
     port: 80
-    targetPort: 4245
+    targetPort: grpc
 
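Switching `targetPort` from `4245` to the named port `grpc` pins the Service to the container port's name rather than its number, so a future renumbering on the pod side needs no Service change. A minimal generic illustration (not taken from this chart):

```yaml
# Generic Kubernetes pattern: the Service resolves targetPort by port name.
apiVersion: v1
kind: Pod
metadata:
  name: relay-example
  labels:
    app: relay-example
spec:
  containers:
  - name: relay
    image: registry.example.com/relay:latest   # placeholder image
    ports:
    - name: grpc
      containerPort: 4245
---
apiVersion: v1
kind: Service
metadata:
  name: relay-example
spec:
  selector:
    app: relay-example
  ports:
  - port: 80
    targetPort: grpc   # resolves to containerPort 4245 via the name
```
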
--- HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

+++ HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

@@ -16,24 +16,24 @@

     rollingUpdate:
       maxUnavailable: 2
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/cilium-configmap-checksum: 41b8349ddf5b1a139409de2a8330c31f5eaf532bba781527911e92555678a14a
+        cilium.io/cilium-configmap-checksum: 17190095812a9d665a81c116f5dbc0a4d1a819fc69d020aa2eed1a86b43aa125
       labels:
         k8s-app: cilium
         app.kubernetes.io/name: cilium-agent
         app.kubernetes.io/part-of: cilium
     spec:
       securityContext:
         appArmorProfile:
           type: Unconfined
       containers:
       - name: cilium-agent
-        image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         command:
         - cilium-agent
         args:
         - --config-dir=/tmp/cilium/config-map
         startupProbe:
@@ -133,16 +133,12 @@

           hostPort: 4244
           protocol: TCP
         - name: prometheus
           containerPort: 9962
           hostPort: 9962
           protocol: TCP
-        - name: envoy-metrics
-          containerPort: 9964
-          hostPort: 9964
-          protocol: TCP
         - name: hubble-metrics
           containerPort: 9965
           hostPort: 9965
           protocol: TCP
         securityContext:
           seLinuxOptions:
@@ -162,12 +158,15 @@

             - SETGID
             - SETUID
             drop:
             - ALL
         terminationMessagePolicy: FallbackToLogsOnError
         volumeMounts:
+        - name: envoy-sockets
+          mountPath: /var/run/cilium/envoy/sockets
+          readOnly: false
         - mountPath: /host/proc/sys/net
           name: host-proc-sys-net
         - mountPath: /host/proc/sys/kernel
           name: host-proc-sys-kernel
         - name: bpf-maps
           mountPath: /sys/fs/bpf
@@ -190,13 +189,13 @@

           mountPath: /var/lib/cilium/tls/hubble
           readOnly: true
         - name: tmp
           mountPath: /tmp
       initContainers:
       - name: config
-        image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         command:
         - cilium-dbg
         - build-config
         env:
         - name: K8S_NODE_NAME
@@ -215,13 +214,13 @@

           value: '6444'
         volumeMounts:
         - name: tmp
           mountPath: /tmp
         terminationMessagePolicy: FallbackToLogsOnError
       - name: mount-cgroup
-        image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         env:
         - name: CGROUP_ROOT
           value: /sys/fs/cgroup
         - name: BIN_PATH
           value: /opt/cni/bin
@@ -247,13 +246,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: apply-sysctl-overwrites
-        image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         env:
         - name: BIN_PATH
           value: /opt/cni/bin
         command:
         - sh
@@ -277,13 +276,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: mount-bpf-fs
-        image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         args:
         - mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
         command:
         - /bin/bash
         - -c
@@ -293,13 +292,13 @@

           privileged: true
         volumeMounts:
         - name: bpf-maps
           mountPath: /sys/fs/bpf
           mountPropagation: Bidirectional
       - name: clean-cilium-state
-        image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         command:
         - /init-container.sh
         env:
         - name: CILIUM_ALL_STATE
           valueFrom:
@@ -341,13 +340,13 @@

         - name: cilium-cgroup
           mountPath: /sys/fs/cgroup
           mountPropagation: HostToContainer
         - name: cilium-run
           mountPath: /var/run/cilium
       - name: install-cni-binaries
-        image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         command:
         - /install-plugin.sh
         resources:
           requests:
             cpu: 100m
@@ -362,13 +361,12 @@

         terminationMessagePolicy: FallbackToLogsOnError
         volumeMounts:
         - name: cni-path
           mountPath: /host/opt/cni/bin
       restartPolicy: Always
       priorityClassName: system-node-critical
-      serviceAccount: cilium
       serviceAccountName: cilium
       automountServiceAccountToken: true
       terminationGracePeriodSeconds: 1
       hostNetwork: true
       affinity:
         podAntiAffinity:
@@ -412,12 +410,16 @@

         hostPath:
           path: /lib/modules
       - name: xtables-lock
         hostPath:
           path: /run/xtables.lock
           type: FileOrCreate
+      - name: envoy-sockets
+        hostPath:
+          path: /var/run/cilium/envoy/sockets
+          type: DirectoryOrCreate
       - name: clustermesh-secrets
         projected:
           defaultMode: 256
           sources:
           - secret:
               name: cilium-clustermesh
@@ -429,12 +431,22 @@

               - key: tls.key
                 path: common-etcd-client.key
               - key: tls.crt
                 path: common-etcd-client.crt
               - key: ca.crt
                 path: common-etcd-client-ca.crt
+          - secret:
+              name: clustermesh-apiserver-local-cert
+              optional: true
+              items:
+              - key: tls.key
+                path: local-etcd-client.key
+              - key: tls.crt
+                path: local-etcd-client.crt
+              - key: ca.crt
+                path: local-etcd-client-ca.crt
       - name: host-proc-sys-net
         hostPath:
           path: /proc/sys/net
           type: Directory
       - name: host-proc-sys-kernel
         hostPath:
--- HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

+++ HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

@@ -20,22 +20,22 @@

       maxSurge: 25%
       maxUnavailable: 100%
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/cilium-configmap-checksum: 41b8349ddf5b1a139409de2a8330c31f5eaf532bba781527911e92555678a14a
+        cilium.io/cilium-configmap-checksum: 17190095812a9d665a81c116f5dbc0a4d1a819fc69d020aa2eed1a86b43aa125
       labels:
         io.cilium/app: operator
         name: cilium-operator
         app.kubernetes.io/part-of: cilium
         app.kubernetes.io/name: cilium-operator
     spec:
       containers:
       - name: cilium-operator
-        image: quay.io/cilium/operator-generic:v1.15.7@sha256:6840a6dde703b3e73dd31e03390327a9184fcb888efbad9d9d098d65b9035b54
+        image: quay.io/cilium/operator-generic:v1.16.1@sha256:3bc7e7a43bc4a4d8989cb7936c5d96675dd2d02c306adf925ce0a7c35aa27dc4
         imagePullPolicy: IfNotPresent
         command:
         - cilium-operator-generic
         args:
         - --config-dir=/tmp/cilium/config-map
         - --debug=$(CILIUM_DEBUG)
@@ -89,13 +89,12 @@

           mountPath: /tmp/cilium/config-map
           readOnly: true
         terminationMessagePolicy: FallbackToLogsOnError
       hostNetwork: true
       restartPolicy: Always
       priorityClassName: system-cluster-critical
-      serviceAccount: cilium-operator
       serviceAccountName: cilium-operator
       automountServiceAccountToken: true
       affinity:
         podAntiAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
           - labelSelector:
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay

+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay

@@ -17,13 +17,13 @@

     rollingUpdate:
       maxUnavailable: 1
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/hubble-relay-configmap-checksum: 9ff143e9d452090a95b3354affb34e15672c8bf2f87e5d5f667dfdb7ca16ee27
+        cilium.io/hubble-relay-configmap-checksum: 058d4aa45f038b89c2abca9819ce810326aeb9f8c6d1560d4a2070e0db250b02
       labels:
         k8s-app: hubble-relay
         app.kubernetes.io/name: hubble-relay
         app.kubernetes.io/part-of: cilium
     spec:
       securityContext:
@@ -34,13 +34,13 @@

           capabilities:
             drop:
             - ALL
           runAsGroup: 65532
           runAsNonRoot: true
           runAsUser: 65532
-        image: quay.io/cilium/hubble-relay:v1.15.7@sha256:12870e87ec6c105ca86885c4ee7c184ece6b706cc0f22f63d2a62a9a818fd68f
+        image: quay.io/cilium/hubble-relay:v1.16.1@sha256:2e1b4c739a676ae187d4c2bfc45c3e865bda2567cc0320a90cb666657fcfcc35
         imagePullPolicy: IfNotPresent
         command:
         - hubble-relay
         args:
         - serve
         ports:
@@ -50,30 +50,32 @@

           grpc:
             port: 4222
           timeoutSeconds: 3
         livenessProbe:
           grpc:
             port: 4222
-          timeoutSeconds: 3
+          timeoutSeconds: 10
+          initialDelaySeconds: 10
+          periodSeconds: 10
+          failureThreshold: 12
         startupProbe:
           grpc:
             port: 4222
-          timeoutSeconds: 3
+          initialDelaySeconds: 10
           failureThreshold: 20
           periodSeconds: 3
         volumeMounts:
         - name: config
           mountPath: /etc/hubble-relay
           readOnly: true
         - name: tls
           mountPath: /var/lib/hubble-relay/tls
           readOnly: true
         terminationMessagePolicy: FallbackToLogsOnError
       restartPolicy: Always
       priorityClassName: null
-      serviceAccount: hubble-relay
       serviceAccountName: hubble-relay
       automountServiceAccountToken: false
       terminationGracePeriodSeconds: 1
       affinity:
         podAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui

+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui

@@ -28,13 +28,12 @@

     spec:
       securityContext:
         fsGroup: 1001
         runAsGroup: 1001
         runAsUser: 1001
       priorityClassName: null
-      serviceAccount: hubble-ui
       serviceAccountName: hubble-ui
       automountServiceAccountToken: true
       containers:
       - name: frontend
         image: quay.io/cilium/hubble-ui:v0.13.1@sha256:e2e9313eb7caf64b0061d9da0efbdad59c6c461f6ca1752768942bfeda0796c6
         imagePullPolicy: IfNotPresent
--- HelmRelease: kube-system/cilium ServiceMonitor: kube-system/hubble

+++ HelmRelease: kube-system/cilium ServiceMonitor: kube-system/hubble

@@ -15,12 +15,13 @@

     - kube-system
   endpoints:
   - port: hubble-metrics
     interval: 10s
     honorLabels: true
     path: /metrics
+    scheme: http
     relabelings:
     - replacement: ${1}
       sourceLabels:
       - __meta_kubernetes_pod_node_name
       targetLabel: node
 
--- HelmRelease: kube-system/cilium ServiceAccount: kube-system/cilium-envoy

+++ HelmRelease: kube-system/cilium ServiceAccount: kube-system/cilium-envoy

@@ -0,0 +1,7 @@

+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: cilium-envoy
+  namespace: kube-system
+
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-envoy-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-envoy-config

@@ -0,0 +1,326 @@

+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cilium-envoy-config
+  namespace: kube-system
+data:
+  bootstrap-config.json: |
+    {
+      "node": {
+        "id": "host~127.0.0.1~no-id~localdomain",
+        "cluster": "ingress-cluster"
+      },
+      "staticResources": {
+        "listeners": [
+          {
+            "name": "envoy-prometheus-metrics-listener",
+            "address": {
+              "socket_address": {
+                "address": "0.0.0.0",
+                "port_value": 9964
+              }
+            },
+            "filter_chains": [
+              {
+                "filters": [
+                  {
+                    "name": "envoy.filters.network.http_connection_manager",
+                    "typed_config": {
+                      "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
+                      "stat_prefix": "envoy-prometheus-metrics-listener",
+                      "route_config": {
+                        "virtual_hosts": [
+                          {
+                            "name": "prometheus_metrics_route",
+                            "domains": [
+                              "*"
+                            ],
+                            "routes": [
+                              {
+                                "name": "prometheus_metrics_route",
+                                "match": {
+                                  "prefix": "/metrics"
+                                },
+                                "route": {
+                                  "cluster": "/envoy-admin",
+                                  "prefix_rewrite": "/stats/prometheus"
+                                }
+                              }
+                            ]
+                          }
+                        ]
+                      },
+                      "http_filters": [
+                        {
+                          "name": "envoy.filters.http.router",
+                          "typed_config": {
+                            "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
+                          }
+                        }
+                      ],
+                      "stream_idle_timeout": "0s"
+                    }
+                  }
+                ]
+              }
+            ]
+          },
+          {
+            "name": "envoy-health-listener",
+            "address": {
+              "socket_address": {
+                "address": "127.0.0.1",
+                "port_value": 9878
+              }
+            },
+            "filter_chains": [
+              {
+                "filters": [
+                  {
+                    "name": "envoy.filters.network.http_connection_manager",
+                    "typed_config": {
+                      "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
+                      "stat_prefix": "envoy-health-listener",
+                      "route_config": {
+                        "virtual_hosts": [
+                          {
+                            "name": "health",
+                            "domains": [
+                              "*"
+                            ],
+                            "routes": [
+                              {
+                                "name": "health",
+                                "match": {
+                                  "prefix": "/healthz"
+                                },
+                                "route": {
+                                  "cluster": "/envoy-admin",
+                                  "prefix_rewrite": "/ready"
+                                }
+                              }
+                            ]
+                          }
+                        ]
+                      },
+                      "http_filters": [
+                        {
+                          "name": "envoy.filters.http.router",
+                          "typed_config": {
+                            "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
+                          }
+                        }
+                      ],
+                      "stream_idle_timeout": "0s"
+                    }
+                  }
+                ]
+              }
+            ]
+          }
+        ],
+        "clusters": [
+          {
+            "name": "ingress-cluster",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s"
+          },
+          {
+            "name": "egress-cluster-tls",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "upstreamHttpProtocolOptions": {},
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s",
+            "transportSocket": {
+              "name": "cilium.tls_wrapper",
+              "typedConfig": {
+                "@type": "type.googleapis.com/cilium.UpstreamTlsWrapperContext"
+              }
+            }
+          },
+          {
+            "name": "egress-cluster",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s"
+          },
+          {
+            "name": "ingress-cluster-tls",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "upstreamHttpProtocolOptions": {},
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s",
+            "transportSocket": {
+              "name": "cilium.tls_wrapper",
+              "typedConfig": {
+                "@type": "type.googleapis.com/cilium.UpstreamTlsWrapperContext"
+              }
+            }
+          },
+          {
+            "name": "xds-grpc-cilium",
+            "type": "STATIC",
+            "connectTimeout": "2s",
+            "loadAssignment": {
+              "clusterName": "xds-grpc-cilium",
+              "endpoints": [
+                {
+                  "lbEndpoints": [
+                    {
+                      "endpoint": {
+                        "address": {
+                          "pipe": {
+                            "path": "/var/run/cilium/envoy/sockets/xds.sock"
+                          }
+                        }
+                      }
+                    }
+                  ]
+                }
+              ]
+            },
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "explicitHttpConfig": {
+                  "http2ProtocolOptions": {}
+                }
+              }
+            }
+          },
+          {
+            "name": "/envoy-admin",
+            "type": "STATIC",
+            "connectTimeout": "2s",
+            "loadAssignment": {
+              "clusterName": "/envoy-admin",
+              "endpoints": [
+                {
+                  "lbEndpoints": [
+                    {
+                      "endpoint": {
+                        "address": {
+                          "pipe": {
+                            "path": "/var/run/cilium/envoy/sockets/admin.sock"
+                          }
+                        }
+                      }
+                    }
+                  ]
+                }
+              ]
+            }
+          }
+        ]
+      },
+      "dynamicResources": {
+        "ldsConfig": {
+          "apiConfigSource": {
+            "apiType": "GRPC",
+            "transportApiVersion": "V3",
+            "grpcServices": [
+              {
+                "envoyGrpc": {
[Diff truncated by flux-local]
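
Even truncated, the bootstrap shows the two static listeners taking over duties the agent container used to carry: :9964/metrics is rewritten to `/stats/prometheus` on the `/envoy-admin` cluster (the `admin.sock` unix socket), :9878/healthz maps to `/ready`, and `dynamicResources` points LDS at the agent over `xds.sock`. A condensed YAML restatement of that routing, for orientation only:

```yaml
# Illustrative summary of the JSON above, not a usable bootstrap on its own.
static_resources:
  listeners:
  - name: envoy-prometheus-metrics-listener      # 0.0.0.0:9964
    # route: prefix /metrics -> cluster /envoy-admin, rewritten to /stats/prometheus
  - name: envoy-health-listener                  # 127.0.0.1:9878
    # route: prefix /healthz -> cluster /envoy-admin, rewritten to /ready
  clusters:
  - name: /envoy-admin
    # pipe endpoint: /var/run/cilium/envoy/sockets/admin.sock
  - name: xds-grpc-cilium
    # pipe endpoint: /var/run/cilium/envoy/sockets/xds.sock (agent-driven xDS)
```
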
--- HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium-envoy

+++ HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium-envoy

@@ -0,0 +1,171 @@

+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: cilium-envoy
+  namespace: kube-system
+  labels:
+    k8s-app: cilium-envoy
+    app.kubernetes.io/part-of: cilium
+    app.kubernetes.io/name: cilium-envoy
+    name: cilium-envoy
+spec:
+  selector:
+    matchLabels:
+      k8s-app: cilium-envoy
+  updateStrategy:
+    rollingUpdate:
+      maxUnavailable: 2
+    type: RollingUpdate
+  template:
+    metadata:
+      annotations:
+        prometheus.io/port: '9964'
+        prometheus.io/scrape: 'true'
+      labels:
+        k8s-app: cilium-envoy
+        name: cilium-envoy
+        app.kubernetes.io/name: cilium-envoy
+        app.kubernetes.io/part-of: cilium
+    spec:
+      securityContext:
+        appArmorProfile:
+          type: Unconfined
+      containers:
+      - name: cilium-envoy
+        image: quay.io/cilium/cilium-envoy:v1.29.7-39a2a56bbd5b3a591f69dbca51d3e30ef97e0e51@sha256:bd5ff8c66716080028f414ec1cb4f7dc66f40d2fb5a009fff187f4a9b90b566b
+        imagePullPolicy: IfNotPresent
+        command:
+        - /usr/bin/cilium-envoy-starter
+        args:
+        - --
+        - -c /var/run/cilium/envoy/bootstrap-config.json
+        - --base-id 0
+        - --log-level info
+        - --log-format [%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v
+        startupProbe:
+          httpGet:
+            host: 127.0.0.1
+            path: /healthz
+            port: 9878
+            scheme: HTTP
+          failureThreshold: 105
+          periodSeconds: 2
+          successThreshold: 1
+          initialDelaySeconds: 5
+        livenessProbe:
+          httpGet:
+            host: 127.0.0.1
+            path: /healthz
+            port: 9878
+            scheme: HTTP
+          periodSeconds: 30
+          successThreshold: 1
+          failureThreshold: 10
+          timeoutSeconds: 5
+        readinessProbe:
+          httpGet:
+            host: 127.0.0.1
+            path: /healthz
+            port: 9878
+            scheme: HTTP
+          periodSeconds: 30
+          successThreshold: 1
+          failureThreshold: 3
+          timeoutSeconds: 5
+        env:
+        - name: K8S_NODE_NAME
+          valueFrom:
+            fieldRef:
+              apiVersion: v1
+              fieldPath: spec.nodeName
+        - name: CILIUM_K8S_NAMESPACE
+          valueFrom:
+            fieldRef:
+              apiVersion: v1
+              fieldPath: metadata.namespace
+        - name: KUBERNETES_SERVICE_HOST
+          value: 127.0.0.1
+        - name: KUBERNETES_SERVICE_PORT
+          value: '6444'
+        ports:
+        - name: envoy-metrics
+          containerPort: 9964
+          hostPort: 9964
+          protocol: TCP
+        securityContext:
+          seLinuxOptions:
+            level: s0
+            type: spc_t
+          capabilities:
+            add:
+            - NET_ADMIN
+            - SYS_ADMIN
+            drop:
+            - ALL
+        terminationMessagePolicy: FallbackToLogsOnError
+        volumeMounts:
+        - name: envoy-sockets
+          mountPath: /var/run/cilium/envoy/sockets
+          readOnly: false
+        - name: envoy-artifacts
+          mountPath: /var/run/cilium/envoy/artifacts
+          readOnly: true
+        - name: envoy-config
+          mountPath: /var/run/cilium/envoy/
+          readOnly: true
+        - name: bpf-maps
+          mountPath: /sys/fs/bpf
+          mountPropagation: HostToContainer
+      restartPolicy: Always
+      priorityClassName: system-node-critical
+      serviceAccountName: cilium-envoy
+      automountServiceAccountToken: true
+      terminationGracePeriodSeconds: 1
+      hostNetwork: true
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+            - matchExpressions:
+              - key: cilium.io/no-schedule
+                operator: NotIn
+                values:
+                - 'true'
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchLabels:
+                k8s-app: cilium
+            topologyKey: kubernetes.io/hostname
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchLabels:
+                k8s-app: cilium-envoy
+            topologyKey: kubernetes.io/hostname
+      nodeSelector:
+        kubernetes.io/os: linux
+      tolerations:
+      - operator: Exists
+      volumes:
+      - name: envoy-sockets
+        hostPath:
+          path: /var/run/cilium/envoy/sockets
+          type: DirectoryOrCreate
+      - name: envoy-artifacts
+        hostPath:
+          path: /var/run/cilium/envoy/artifacts
+          type: DirectoryOrCreate
+      - name: envoy-config
+        configMap:
+          name: cilium-envoy-config
+          defaultMode: 256
+          items:
+          - key: bootstrap-config.json
+            path: bootstrap-config.json
+      - name: bpf-maps
+        hostPath:
+          path: /sys/fs/bpf
+          type: DirectoryOrCreate
+
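
Taken together, these new resources are the headline packaging change in this bump: with the 1.16 chart, Envoy runs as its own per-node DaemonSet sharing `/var/run/cilium/envoy/sockets` with the agent, instead of living inside the cilium-agent container. If the embedded proxy is preferred, the chart's documented opt-out is a single value; a minimal sketch:

```yaml
# Opt-out sketch: keeps L7 proxying inside the cilium-agent pod rather than
# the dedicated cilium-envoy DaemonSet. Verify against the 1.16 chart docs.
envoy:
  enabled: false
```
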

bot-akira[bot] commented on May 15 '24 11:05

🦙 MegaLinter status: ✅ SUCCESS

| Descriptor | Linter | Files | Fixed | Errors | Elapsed time |
|------------|--------|-------|-------|--------|--------------|

See the detailed report in the MegaLinter reports. Set `VALIDATE_ALL_CODEBASE: true` in mega-linter.yml to validate all sources, not only the diff.

MegaLinter is graciously provided by OX Security

axeII commented on May 15 '24 11:05