home-ops
feat(helm): update cilium to v1.16.1
This PR contains the following updates:
| Package | Update | Change |
|---|---|---|
| cilium (source) | minor | 1.15.4 -> 1.16.1 |
| cilium (source) | minor | 1.15.7 -> 1.16.1 |
> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
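In a Flux-managed repository like this one, the bump in the table above typically lands as a one-line chart version change in the cilium HelmRelease. A minimal sketch, assuming a conventional layout (the release name, namespace, and HelmRepository source below are assumptions, not taken from this repo):

```yaml
# Hypothetical HelmRelease for the cilium chart in a Flux home-ops repo;
# metadata names and the sourceRef are assumptions for illustration.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: cilium
  namespace: kube-system
spec:
  interval: 30m
  chart:
    spec:
      chart: cilium
      version: 1.16.1  # the line Renovate bumps from 1.15.4 / 1.15.7
      sourceRef:
        kind: HelmRepository
        name: cilium
        namespace: flux-system
```

Renovate only rewrites the `version:` field; everything else in the HelmRelease is left untouched.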
Release Notes
cilium/cilium (cilium)
v1.16.1: 1.16.1
Security Advisories
This release addresses the following security vulnerabilities:
- https://github.com/cilium/cilium/security/advisories/GHSA-vwf8-q6fw-4wcm
- https://github.com/cilium/cilium/security/advisories/GHSA-qcm3-7879-xcww
Summary of Changes
Minor Changes:
- Deprecate providing Hubble TLS secrets in helm values (Backport PR #34297, Upstream PR #34114, @chancez)
- gateway-api: Add required labels and annotations (Backport PR #34215, Upstream PR #33990, @sayboras)
- helm: add config for nat-map-stats-{interval, entries} config. (Backport PR #34158, Upstream PR #33847, @tommyp1ckles)
- Internal listener references are now properly qualified with namespace and CEC name. (Backport PR #34158, Upstream PR #34104, @jrajahalme)
- Support configuring imagePullSecrets for spire agent/server pods (Backport PR #34158, Upstream PR #33952, @chancez)
Bugfixes:
- auth: Fix data race in Upsert (Backport PR #34158, Upstream PR #33905, @chaunceyjiang)
- BGPv1 + BGPv2: Fix incorrect service reconciliation in setups with multiple BGP instances (virtual routers) (Backport PR #34297, Upstream PR #34177, @rastislavs)
- bgpv1: Fix data race in bgppSelection (Backport PR #34158, Upstream PR #33904, @chaunceyjiang)
- bgpv2: Avoid duplicate route policy naming (Backport PR #34158, Upstream PR #34031, @rastislavs)
- BGPv2: Fix `Service` advertisement selector: do not require matching `CiliumLoadBalancerIPPool` (Backport PR #34201, Upstream PR #34182, @rastislavs)
- Fix a nil dereference crash during cilium-agent initialization affecting setups with FQDN policies. The crash is triggered when a restored endpoint performs a DNS request at just the right time during early cilium-agent restoration. The problem is not expected to be persistent, and the agent should get past the problematic part of the initialization on restart. (Backport PR #34158, Upstream PR #34059, @joamaki)
- Fix appArmorProfile condition for CronJob helm template (Backport PR #34297, Upstream PR #34100, @sathieu)
- Fix bug causing etcd upsertion/deletion events to be potentially missed during the initial synchronization, when Cilium operates in KVStore mode, or Cluster Mesh is enabled. (Backport PR #34181, Upstream PR #34091, @giorio94)
- Fix issue in picking node IP addresses from the loopback device. This fixes a regression in v1.15 and v1.16 where VIPs assigned to the lo device were not considered by Cilium. Fix spurious updates to node addresses to avoid unnecessary datapath reinitializations. (Backport PR #34085, Upstream PR #34012, @joamaki)
- Fix possible connection disruption on agent restart with WireGuard + kvstore (Backport PR #34158, Upstream PR #34062, @giorio94)
- Fixes DNS proxy "connect: cannot assign requested address" errors in transparent mode, which were due to opening multiple TCP connections to the upstream DNS server. (Backport PR #34201, Upstream PR #33989, @bimmlerd)
- gateway-api: Add HTTP method condition in sortable routes (Backport PR #34158, Upstream PR #34109, @sayboras)
- gateway-api: Enqueue gateway for Reference Grant changes (Backport PR #34158, Upstream PR #34032, @sayboras)
- lbipam: fixed bug in sharing key logic (Backport PR #34158, Upstream PR #34106, @dylandreimerink)
- policy: Fix policy cache covers context lookup. (#34322, @nathanjsweet)
- service: Relax protocol matching for L7 Service (Backport PR #34195, Upstream PR #34131, @sayboras)
CI Changes:
- .github: ginkgo: remove duplicate datapath ipv4only test in f09/f21. (Backport PR #34297, Upstream PR #34071, @tommyp1ckles)
- bpf: egressgw: don't install allow-all policy in to-netdev tests (Backport PR #34201, Upstream PR #34143, @julianwiedmann)
- ci: multi pool run tests concurrently (Backport PR #34297, Upstream PR #33945, @viktor-kurchenko)
- Fix workflow telemetry in ci-ipsec-upgrade (Backport PR #34158, Upstream PR #34097, @chancez)
- gha: Add extended features in gateway profile run (Backport PR #34215, Upstream PR #34098, @sayboras)
- gha: Free up Github runner disk space (Backport PR #34297, Upstream PR #34247, @sayboras)
- gha: lint absence of trailing spaces in workflow files (Backport PR #34158, Upstream PR #33908, @giorio94)
- gha: simplify the call-backport-label-updater workflow (Backport PR #34158, Upstream PR #33934, @giorio94)
- ginkgo-ci: split f09 into two groups to reduce timeouts & flakes (Backport PR #34297, Upstream PR #34038, @tommyp1ckles)
- test: use cgr.dev/chainguard/busybox:latest instead of docker.io image. (Backport PR #34158, Upstream PR #34004, @tommyp1ckles)
- tests-clustermesh-upgrade: Don't hardcode test namespace (Backport PR #34158, Upstream PR #34121, @michi-covalent)
Misc Changes:
- [v1.16] docs: Add note for CNP empty slices semantic under v1.16 section (#34008, @pippolo84)
- Add source IP visibility info to Ingress and Gateway API docs (Backport PR #34297, Upstream PR #34137, @youngnick)
- bgpv1: Reconcile with retry in BGP Controller (Backport PR #34158, Upstream PR #33971, @rastislavs)
- bgpv2: deprecate local port setting in transport config (Backport PR #34209, Upstream PR #33438, @harsimran-pabla)
- bgpv2: use correct path key in path reconciler (Backport PR #34158, Upstream PR #33947, @harsimran-pabla)
- bitlpm: Avoid allocs in CIDR trie lookups (Backport PR #34158, Upstream PR #33518, @jrajahalme)
- bitlpm: Simplify matchPrefix() (Backport PR #34158, Upstream PR #33517, @jrajahalme)
- bugtool: dump cilium_skip_lb{4,6} (Backport PR #34158, Upstream PR #34017, @ysksuzuki)
- bugtool: dumping more Envoy information (Backport PR #34158, Upstream PR #34110, @mhofstetter)
- chore(deps): update all github action dependencies (v1.16) (#34166, @cilium-renovate[bot])
- chore(deps): update dependency protocolbuffers/protobuf to v27.3 (v1.16) (#34165, @cilium-renovate[bot])
- chore(deps): update gcr.io/etcd-development/etcd docker tag to v3.5.15 (v1.16) (#34049, @cilium-renovate[bot])
- Clean up documentation make targets for cases of nesting make builds inside container invocations (Backport PR #34297, Upstream PR #34151, @joestringer)
- doc: update slack channel reference (Backport PR #34158, Upstream PR #34044, @Huweicai)
- docs: Add warning on CRDs requirement for using the Gateway API (Backport PR #34297, Upstream PR #33974, @xtineskim)
- Documentation: Introduce support for redirects (Backport PR #34297, Upstream PR #34233, @chancez)
- Documentation: Update readthedocs configuration (Backport PR #34297, Upstream PR #34190, @joestringer)
- Fix two bugs in dnsproxy tcp conn reuse (Backport PR #34201, Upstream PR #34175, @bimmlerd)
- Improve documentation on configuring Hubble TLS (Backport PR #34297, Upstream PR #34115, @chancez)
- iptables: Support Envoy listener chaining (Backport PR #34297, Upstream PR #34105, @jrajahalme)
- Makefile: Fix docker flags for fast image targets (Backport PR #34297, Upstream PR #34132, @joestringer)
- policy: Sanitize DNS Rules to Disallow Port Ranges (Backport PR #34201, Upstream PR #34023, @nathanjsweet)
- Revert "fix: support validation of stringToString values in ConfigMap" (Backport PR #34305, Upstream PR #34277, @aanm)
- vendor: Bump StateDB to version v0.2.1 (Backport PR #34246, Upstream PR #33587, @joamaki)
Other Changes:
- install: Update image digests for v1.16.0 (#33994, @cilium-release-bot[bot])
- v1.16: Remove leftover backporter state file (#34210, @gandro)
Docker Manifests
cilium
quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
quay.io/cilium/cilium:stable@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
clustermesh-apiserver
quay.io/cilium/clustermesh-apiserver:v1.16.1@sha256:e9c77417cd474cc943b2303a76c5cf584ac7024dd513ebb8d608cb62fe28896f
quay.io/cilium/clustermesh-apiserver:stable@sha256:e9c77417cd474cc943b2303a76c5cf584ac7024dd513ebb8d608cb62fe28896f
docker-plugin
quay.io/cilium/docker-plugin:v1.16.1@sha256:243fd7759818d990a7f9b33df3eb685a9f250a12020e22f660547f9516b76320
quay.io/cilium/docker-plugin:stable@sha256:243fd7759818d990a7f9b33df3eb685a9f250a12020e22f660547f9516b76320
hubble-relay
quay.io/cilium/hubble-relay:v1.16.1@sha256:2e1b4c739a676ae187d4c2bfc45c3e865bda2567cc0320a90cb666657fcfcc35
quay.io/cilium/hubble-relay:stable@sha256:2e1b4c739a676ae187d4c2bfc45c3e865bda2567cc0320a90cb666657fcfcc35
operator-alibabacloud
quay.io/cilium/operator-alibabacloud:v1.16.1@sha256:4381adf48d76ec482551183947e537d44bcac9b6c31a635a9ac63f696d978804
quay.io/cilium/operator-alibabacloud:stable@sha256:4381adf48d76ec482551183947e537d44bcac9b6c31a635a9ac63f696d978804
operator-aws
quay.io/cilium/operator-aws:v1.16.1@sha256:e3876fcaf2d6ccc8d5b4aaaded7b1efa971f3f4175eaa2c8a499878d58c39df4
quay.io/cilium/operator-aws:stable@sha256:e3876fcaf2d6ccc8d5b4aaaded7b1efa971f3f4175eaa2c8a499878d58c39df4
operator-azure
quay.io/cilium/operator-azure:v1.16.1@sha256:e55c222654a44ceb52db7ade3a7b9e8ef05681ff84c14ad1d46fea34869a7a22
quay.io/cilium/operator-azure:stable@sha256:e55c222654a44ceb52db7ade3a7b9e8ef05681ff84c14ad1d46fea34869a7a22
operator-generic
quay.io/cilium/operator-generic:v1.16.1@sha256:3bc7e7a43bc4a4d8989cb7936c5d96675dd2d02c306adf925ce0a7c35aa27dc4
quay.io/cilium/operator-generic:stable@sha256:3bc7e7a43bc4a4d8989cb7936c5d96675dd2d02c306adf925ce0a7c35aa27dc4
operator
quay.io/cilium/operator:v1.16.1@sha256:258b28fefc9f3fe1cbcb21a3b2c4c96dcc72f6ee258eed0afebe9b0ac47f462b
quay.io/cilium/operator:stable@sha256:258b28fefc9f3fe1cbcb21a3b2c4c96dcc72f6ee258eed0afebe9b0ac47f462b
v1.16.0: 1.16.0
We are excited to announce the Cilium 1.16.0 release. A total of 2969 new commits have been contributed to this release by a growing community of over 750 developers and over 19300 GitHub stars! :star_struck:
To keep up to date with all the latest Cilium releases, join #release on Slack.
Here's what's new in v1.16.0:
- :mountain_cableway: Networking
- :speedboat: Cilium NetKit: container-network throughput and latency as fast as host-network.
- :globe_with_meridians: BGPv2: Fresh new API for Cilium's BGP feature.
- :loudspeaker: BGP ClusterIP Advertisement: BGP advertisements of ExternalIP and Cluster IP Services.
- :twisted_rightwards_arrows: Service Traffic Distribution: Kubernetes 1.30 Service Traffic Distribution can be enabled directly in the Service spec instead of using annotations.
- :arrows_counterclockwise: Local Redirect Policy promoted to Stable: Redirecting the traffic bound for services to the local backend, such as node-local DNS.
- :satellite: Multicast Datapath: Define multicast groups in Cilium.
- :label: Per-Pod Fixed MAC Address: Specify the MAC address used on a pod.
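Of the networking items above, Service Traffic Distribution maps to a plain Kubernetes field rather than a Cilium CRD. A minimal sketch, assuming a hypothetical Service (the name, selector, and ports are illustrative; the `trafficDistribution` field requires Kubernetes 1.30+ with the `ServiceTrafficDistribution` feature gate, where it was introduced as alpha):

```yaml
# Hypothetical Service using spec.trafficDistribution instead of the older
# topology annotations; name, labels, and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  trafficDistribution: PreferClose  # prefer topologically closer endpoints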
- :spider_web: Service Mesh & Ingress/Gateway API
- :compass: Gateway API GAMMA Support: East-west traffic management for the cluster via Gateway API.
- :shinto_shrine: Gateway API 1.1 Support: Cilium now supports Gateway API 1.1.
- :passport_control: ExternalTrafficPolicy support for Ingress/Gateway API: External traffic can now be routed to node-local or cluster-wide endpoints.
- :spider_web: L7 Envoy Proxy as dedicated DaemonSet: With a dedicated DaemonSet, Envoy and Cilium can have a separate life-cycle from each other. Now on by default for new installs.
- :card_index_dividers: NodeSelector support for CiliumEnvoyConfig: Instead of being applied on all nodes, it's now possible to select which nodes a particular CiliumEnvoyConfig should apply to.
- :guardswoman: Security
- :signal_strength: Port Range support in Network Policies: This long-awaited feature has been implemented into Cilium.
- :clipboard: Network Policy Validation Status: `kubectl describe cnp` will be able to tell if the Cilium Network Policy is valid or invalid.
- :no_entry: Control Cilium Network Policy Default Deny behavior: Policies usually enable default deny for the subject of the policies, but this can now be disabled on a per-policy basis.
- :busts_in_silhouette: CIDRGroups support for Egress and Deny rules: Add support for matching CiliumCIDRGroups in Egress policy rules.
- :floppy_disk: Load "default" Network Policies from Filesystem: In addition to reading policies from Kubernetes, Cilium can be configured to read policies locally.
- :card_index_dividers: Support to Select Nodes as Target of Cilium Network Policies: With new ToNodes/FromNodes selectors, traffic can be allowed or denied based on the labels of the target Node in the cluster.
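The per-policy default-deny control listed above is expressed via an `enableDefaultDeny` field on the policy spec. A minimal sketch, assuming hypothetical pod labels and a DNS-only egress rule (the metadata name, labels, and port are assumptions, not from the release notes):

```yaml
# Hypothetical CiliumNetworkPolicy allowing DNS egress without switching the
# selected pods into default-deny egress mode; labels/ports are assumptions.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-dns-no-default-deny
spec:
  endpointSelector:
    matchLabels:
      app: my-app
  enableDefaultDeny:
    egress: false  # do not flip the subject pods into default-deny egress
  egress:
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
```

Without `enableDefaultDeny.egress: false`, applying any egress rule to these pods would implicitly deny all other egress traffic.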
- :sunrise: Day 2 Operations and Scale
- :elf: New ELF Loader Logic: With this new loader logic, the median memory usage of Cilium was decreased by 24%.
- :rocket: Improved DNS-based network policy performance: DNS-based network policies had up to 5x reduction in tail latency.
- :spider_web: KVStoreMesh default option for ClusterMesh: Introduced in Cilium 1.14, and after a lot of adoption and feedback from the community, KVStoreMesh is now the default way to deploy ClusterMesh.
- :artificial_satellite: Hubble & Observability
- :speaking_head: CEL Filters Support: Hubble supports Common Expression Language (CEL), giving support for more complex conditions that cannot be expressed using the existing flow filters.
- :bar_chart: Improved HTTP metrics: There are additional metrics to count the HTTP requests and their duration.
- :straight_ruler: Improved BPF map pressure metrics: New metric to track BPF map pressure for the Connection Tracking BPF map.
- :eyes: Improvements for Egress Traffic Path Observability: Some metrics were added in this release to help troubleshoot Cilium Egress Routing.
- :microscope: K8S Event Generation on Packet Drop: Hubble is now able to generate a k8s event for a packet dropped from a pod, which can be verified with `kubectl get events`.
- :card_index_dividers: Filtering Hubble flows by node labels: Filter Hubble flows observed on nodes matching the given label.
- :houses: Community
- :heart: Many end-users have stepped forward to tell their stories of running Cilium in production. If your company wants to submit a case study, let us know. We would love to hear your feedback!
And finally, we would like to thank all contributors to Cilium who helped directly and indirectly with the project. The success of Cilium could not happen without all of you. :heart:
For a full summary of changes, see https://github.com/cilium/cilium/blob/v1.16.0/CHANGELOG.md.
Docker Manifests
cilium
quay.io/cilium/cilium:v1.16.0@sha256:46ffa4ef3cf6d8885dcc4af5963b0683f7d59daa90d49ed9fb68d3b1627fe058
quay.io/cilium/cilium:stable@sha256:46ffa4ef3cf6d8885dcc4af5963b0683f7d59daa90d49ed9fb68d3b1627fe058
clustermesh-apiserver
quay.io/cilium/clustermesh-apiserver:v1.16.0@sha256:a1597b7de97cfa03f1330e6b784df1721eb69494cd9efb0b3a6930680dfe7a8e
quay.io/cilium/clustermesh-apiserver:stable@sha256:a1597b7de97cfa03f1330e6b784df1721eb69494cd9efb0b3a6930680dfe7a8e
docker-plugin
quay.io/cilium/docker-plugin:v1.16.0@sha256:024a17aa8ec70d42f0ac1a4407ad9f8fd1411aa85fd8019938af582e20522efe
quay.io/cilium/docker-plugin:stable@sha256:024a17aa8ec70d42f0ac1a4407ad9f8fd1411aa85fd8019938af582e20522efe
hubble-relay
quay.io/cilium/hubble-relay:v1.16.0@sha256:33fca7776fc3d7b2abe08873319353806dc1c5e07e12011d7da4da05f836ce8d
quay.io/cilium/hubble-relay:stable@sha256:33fca7776fc3d7b2abe08873319353806dc1c5e07e12011d7da4da05f836ce8d
operator-alibabacloud
quay.io/cilium/operator-alibabacloud:v1.16.0@sha256:d2d9f450f2fc650d74d4b3935f4c05736e61145b9c6927520ea52e1ebcf4f3ea
quay.io/cilium/operator-alibabacloud:stable@sha256:d2d9f450f2fc650d74d4b3935f4c05736e61145b9c6927520ea52e1ebcf4f3ea
operator-aws
quay.io/cilium/operator-aws:v1.16.0@sha256:8dbe47a77ba8e1a5b111647a43db10c213d1c7dfc9f9aab5ef7279321ad21a2f
quay.io/cilium/operator-aws:stable@sha256:8dbe47a77ba8e1a5b111647a43db10c213d1c7dfc9f9aab5ef7279321ad21a2f
operator-azure
quay.io/cilium/operator-azure:v1.16.0@sha256:dd7562e20bc72b55c65e2110eb98dca1dd2bbf6688b7d8cea2bc0453992c121d
quay.io/cilium/operator-azure:stable@sha256:dd7562e20bc72b55c65e2110eb98dca1dd2bbf6688b7d8cea2bc0453992c121d
operator-generic
quay.io/cilium/operator-generic:v1.16.0@sha256:d6621c11c4e4943bf2998af7febe05be5ed6fdcf812b27ad4388f47022190316
quay.io/cilium/operator-generic:stable@sha256:d6621c11c4e4943bf2998af7febe05be5ed6fdcf812b27ad4388f47022190316
operator
quay.io/cilium/operator:v1.16.0@sha256:6aaa05737f21993ff51abe0ffe7ea4be88d518aa05266c3482364dce65643488
quay.io/cilium/operator:stable@sha256:6aaa05737f21993ff51abe0ffe7ea4be88d518aa05266c3482364dce65643488
v1.15.8: 1.15.8
Security Advisories
This release addresses the following security vulnerabilities:
- https://github.com/cilium/cilium/security/advisories/GHSA-vwf8-q6fw-4wcm
- https://github.com/cilium/cilium/security/advisories/GHSA-qcm3-7879-xcww
- https://github.com/cilium/cilium/security/advisories/GHSA-q7w8-72mr-vpgw
Summary of Changes
Minor Changes:
- helm: Add validation to prevent users from using deprecated values that have been removed (#34213, @chancez)
- helm: Cleanup old k8s version check and deprecated attributes (Backport PR #34157, Upstream PR #31940, @sayboras)
- Make hubble-relay more resilient to transient errors (Backport PR #34157, Upstream PR #33894, @chancez)
Bugfixes:
- add support for validation of stringToString values in ConfigMap (Backport PR #33962, Upstream PR #33779, @alex-berger)
- auth: Fix data race in Upsert (Backport PR #34157, Upstream PR #33905, @chaunceyjiang)
- auth: fix fatal error: concurrent map iteration and map write (Backport PR #33809, Upstream PR #33634, @chaunceyjiang)
- cert: Adding H2 Protocol Support when Get gRPC Config For Client (Backport PR #33809, Upstream PR #33616, @mrproliu)
- DNS Proxy: Allow SO_LINGER to be set to the socket to upstream (Backport PR #33809, Upstream PR #33592, @gandro)
- Fix an issue in updates to node addresses which may have caused missing NodePort frontend IP addresses. May have affected NodePort/LoadBalancer services for users running with runtime device detection enabled when a node's IP addresses were changed after Cilium had started. The Node IP as defined in the Kubernetes Node object is now preferred when selecting the NodePort frontend IPs. (Backport PR #33818, Upstream PR #33629, @joamaki)
- Fix bug causing etcd upsertion/deletion events to be potentially missed during the initial synchronization, when Cilium operates in KVStore mode, or Cluster Mesh is enabled. (Backport PR #34183, Upstream PR #34091, @giorio94)
- Fix issue in picking node IP addresses from the loopback device. This fixes a regression in v1.15 and v1.16 where VIPs assigned to the lo device were not considered by Cilium. Fix spurious updates to node addresses to avoid unnecessary datapath reinitializations. (Backport PR #34086, Upstream PR #34012, @joamaki)
- Fix rare race condition afflicting clustermesh while stopping the retrieval of the remote cluster configuration, possibly causing a deadlock (Backport PR #33809, Upstream PR #33735, @giorio94)
- Fixes a race condition during agent startup that causes the k8s node label updates to not get propagated to the host endpoint. (Backport PR #33663, Upstream PR #33511, @skmatti)
- gateway-api: Add HTTP method condition in sortable routes (Backport PR #34157, Upstream PR #34109, @sayboras)
- gateway-api: Enqueue gateway for Reference Grant changes (Backport PR #34157, Upstream PR #34032, @sayboras)
- helm: remove duplicate metrics for Envoy pod (Backport PR #34157, Upstream PR #33803, @mhofstetter)
- lbipam: fixed bug in sharing key logic (Backport PR #34157, Upstream PR #34106, @dylandreimerink)
- pkg/metrics: fix data race warning on metrics init hook. (Backport PR #33962, Upstream PR #33823, @tommyp1ckles)
- Reduce conntrack lifetime for closing service connections. (Backport PR #33962, Upstream PR #33907, @julianwiedmann)
- Skip regenerating host endpoint on k8s node labels update if identity labels are unchanged (Backport PR #33809, Upstream PR #33306, @skmatti)
- The cilium agent will now recover from stale nodeID mappings which could occur in clusters with high node churn, possibly manifesting itself in dropped IPsec traffic. (Backport PR #34157, Upstream PR #33666, @bimmlerd)
CI Changes:
- [v1.15] ci/ipsec: add missing config for patch-upgrade test with 6.6 kernel (#33736, @julianwiedmann)
- [v1.15] gh/e2e: fix up config 15 to not use bpf-next (#33738, @julianwiedmann)
- gha: Add http client timeout in Ingress (Backport PR #33809, Upstream PR #33683, @sayboras)
- gha: don't fail if all cloud provider matrix entries are filtered out (Backport PR #33962, Upstream PR #33819, @giorio94)
- gha: ensure that helm values.schema.json is not accidentally backported (#33845, @giorio94)
- gha: lint absence of trailing spaces in workflow files (Backport PR #34157, Upstream PR #33908, @giorio94)
- gha: simplify the call-backport-label-updater workflow (Backport PR #33962, Upstream PR #33934, @giorio94)
- test: use cgr.dev/chainguard/busybox:latest instead of docker.io image. (Backport PR #34157, Upstream PR #34004, @tommyp1ckles)
- tests-clustermesh-upgrade: Don't hardcode test namespace (Backport PR #34157, Upstream PR #34121, @michi-covalent)
- workflow: Use per-tunnel keys for the IPsec upgrade test (Backport PR #33809, Upstream PR #33769, @pchaigno)
Misc Changes:
- [v1.15] Update Docker dependency (#34196, @ferozsalam)
- bugtool: dumping more Envoy information (Backport PR #34157, Upstream PR #34110, @mhofstetter)
- chore(deps): update all github action dependencies (v1.15) (#34170, @cilium-renovate[bot])
- chore(deps): update all-dependencies (v1.15) (#33649, @cilium-renovate[bot])
- chore(deps): update all-dependencies (v1.15) (#34168, @cilium-renovate[bot])
- chore(deps): update cilium/little-vm-helper action to v0.0.19 (v1.15) (#33793, @cilium-renovate[bot])
- chore(deps): update dependency cilium/cilium-cli to v0.16.13 (v1.15) (#33794, @cilium-renovate[bot])
- chore(deps): update dependency cilium/hubble to v1 (v1.15) (#34051, @cilium-renovate[bot])
- chore(deps): update docker.io/library/golang:1.21.12 docker digest to `7e0e13a` (v1.15) (#33792, @cilium-renovate[bot])
- chore(deps): update go to v1.22.5 (v1.15) (#33857, @cilium-renovate[bot])
- chore(deps): update go to v1.22.6 (v1.15) (#34167, @cilium-renovate[bot])
- chore(deps): update stable lvh-images (v1.15) (patch) (#33798, @cilium-renovate[bot])
- daemon/ipam: don't swallow parse error of CIDR (Backport PR #33809, Upstream PR #33283, @bimmlerd)
- doc: update slack channel reference (Backport PR #34157, Upstream PR #34044, @Huweicai)
- docs,LRP: Add steps to restart agent and operator pods and update feature roadmap status (Backport PR #33809, Upstream PR #33655, @aditighag)
- docs: Add note about socketLB.hostNamespaceOnly to Kata page (Backport PR #33809, Upstream PR #33725, @brb)
- docs: Extend LRP guide with troubleshooting section (Backport PR #33809, Upstream PR #33373, @aditighag)
- docs: generalize version specific notes section (Backport PR #33962, Upstream PR #33888, @giorio94)
- docs: Remove CNCF graduation from the roadmap (Backport PR #33809, Upstream PR #33680, @joestringer)
- docs: remove mention of outdated clustermesh + L7 policies + tunnel limitation (Backport PR #33809, Upstream PR #33626, @giorio94)
- docs: Update LVH VM image pull instructions (Backport PR #33809, Upstream PR #33621, @brb)
- Documentation: Add --set cni.exclusive=false for Azure Chain Mode (Backport PR #33809, Upstream PR #33708, @Mais316)
- helm: Allow socket linger timeout to be set to zero (Backport PR #33962, Upstream PR #33887, @gandro)
- policy: Fix `mapstate.Diff()` used in tests (Backport PR #33809, Upstream PR #33449, @jrajahalme)
- Remove stable tags from v1.15 releases (#33985, @joestringer)
- renovate: onboard etcd image used in integration tests (Backport PR #33809, Upstream PR #33679, @giorio94)
- Revert "fix: support validation of stringToString values in ConfigMap" (Backport PR #34306, Upstream PR #34277, @aanm)
Other Changes:
- [v1.15] ci: use base and head SHAs from context in lint-build-commits workflow (#34267, @tklauser)
- [v1.15] Revert "docs: Update LRP feature status" (#34238, @ysksuzuki)
- Fix bug in Bandwidth Manager that caused it to not find native devices. (#33910, @joamaki)
- install: Update image digests for v1.15.7 (#33744, @cilium-release-bot[bot])
Docker Manifests
cilium
quay.io/cilium/cilium:v1.15.8@sha256:3b5b0477f696502c449eaddff30019a7d399f077b7814bcafabc636829d194c7
clustermesh-apiserver
quay.io/cilium/clustermesh-apiserver:v1.15.8@sha256:4c1f33aae2b76392b57e867820471b5472f0886f7358513d47ee80c09af15a0e
docker-plugin
quay.io/cilium/docker-plugin:v1.15.8@sha256:15b1b6e83e1c0eea97df179660c1898661c1d0da5d431c68f98c702581e29310
hubble-relay
quay.io/cilium/hubble-relay:v1.15.8@sha256:47e8a19f60d0d226ec3d2c675ec63908f1f2fb936a39897f2e3255b3bab01ad6
operator-alibabacloud
quay.io/cilium/operator-alibabacloud:v1.15.8@sha256:388ef72febd719bc9d16d5ee47fe6f846f73f0d8a6f9586ada04cb39eb2962d1
operator-aws
quay.io/cilium/operator-aws:v1.15.8@sha256:3807dd23c2b5f90489824ddd13dca6e84e714dc9eae44e5718acfe86c855b7a1
operator-azure
quay.io/cilium/operator-azure:v1.15.8@sha256:c517db3d12fcf038a9a4a81b88027a19672078bf8c2fcd6b2563f3eff9514d21
operator-generic
quay.io/cilium/operator-generic:v1.15.8@sha256:e77ae6fc8a978f98363cf74d3c883dfaa6454c6e23ec417a60952f29408e2f18
operator
quay.io/cilium/operator:v1.15.8@sha256:e9cf35fe3dc86933ccf3fdfdb7620d218c50aaca5f14e4ba5f422460ea4cb23c
v1.15.7: 1.15.7
Summary of Changes
We are pleased to release Cilium v1.15.7, which makes the load balancer class of the Clustermesh API server configurable and includes stability and bug fixes. Thanks to all contributors, reviewers, testers, and users!
Minor Changes:
- helm: loadBalancerClass for Cluster Mesh APIserver (Backport PR #33342, Upstream PR #33033, @PhilipSchmid)
- ui: v0.13.1 release (Backport PR #33223, Upstream PR #32852, @geakstr)
Bugfixes:
- bgpv1: reorder neighbor creation and deletion steps (Backport PR #33378, Upstream PR #33262, @harsimran-pabla)
- datapath: Fix redirect from L3 netdev to tunnel (Backport PR #33529, Upstream PR #33421, @brb)
- Datasource error fixed for Hubble DNS and Network dashboards (Backport PR #33631, Upstream PR #30580, @Pionerd)
- egress-gateway: Validate ep identity before fetching labels (Backport PR #33529, Upstream PR #33311, @pippolo84)
- envoy: Avoid short circuit backend filtering (Backport PR #33533, Upstream PR #33403, @sayboras)
- Fix #32587 concurrent hubble dynamic exporter stop and reload (Backport PR #33098, Upstream PR #33000, @marqc)
- Fix hubble metrics leak by using CiliumEndpoint watcher to remove stale metrics. (Backport PR #33529, Upstream PR #33260, @sgargan)
- Fix rare spurious double reconnection upon clustermesh configuration change for remote cluster (Backport PR #33378, Upstream PR #33248, @giorio94)
- Fix too many open Unix sockets (Backport PR #33631, Upstream PR #33569, @chaunceyjiang)
- gateway-api: Check for matching controller name (Backport PR #33223, Upstream PR #33050, @sayboras)
- Generate SBOM from the correct release image (#33052, @ferozsalam)
- helm: Decouple sysctlfix from cgroup.autoMount (Backport PR #33010, Upstream PR #32866, @YutaroHayakawa)
- ipsec: do not nil out EncryptInterface when using IPAM ENI on netlink… (Backport PR #33631, Upstream PR #33512, @jasonaliyetti)
- IPv6 and IPv4 '0.0.0.0/0' CIDR parsing in policy processing has been fixed (Backport PR #33529, Upstream PR #33448, @jrajahalme)
- Recreate CT entries for non-TCP to fix L7 proxy redirect failures. (Backport PR #33378, Upstream PR #33222, @ysksuzuki)
- Report the correct drop reason when a packet is dropped by the bpf_lxc program. (Backport PR #33631, Upstream PR #33551, @julianwiedmann)
- Revert PR #32244 which caused unintended side-effects that negatively impacted network performance. (Backport PR #33378, Upstream PR #33304, @learnitall)
- socketlb: tolerate cgroupv1 when detaching bpf programs (Backport PR #33631, Upstream PR #33599, @rgo3)
- Update IPsec to handle larger PSK values when using per-tunnel PSK (Backport PR #33631, Upstream PR #33472, @jasonaliyetti)
- When the Bandwidth Manager feature is enabled, don't apply Egress rate-limiting to "Port unreachable" ICMP replies by Cilium's North-South Loadbalancer. (Backport PR #33631, Upstream PR #33624, @julianwiedmann)
CI Changes:
- [v1.15] Disable release SBOM asset uploads (#33072, @ferozsalam)
- Bump CLI to v0.16.11 (Backport PR #33529, Upstream PR #33444, @brb)
- ci: Add IPsec leak detection for ci-ipsec-e2e (Backport PR #33047, Upstream PR #32930, @jschwinger233)
- ci: l4lb: Don't hang on gathering logs forever (Backport PR #33010, Upstream PR #32947, @joestringer)
- gh: ipsec: clarify check for leaked proxy traffic during key rotation (Backport PR #33631, Upstream PR #33509, @julianwiedmann)
- gha: Only retrieve IPv4 CIDR from docker network (Backport PR #33110, Upstream PR #33093, @sayboras)
- workflows: e2e-upgrade: fix EXTRA parameters (Backport PR #33223, Upstream PR #33150, @jibi)
Misc Changes:
- .github: add workflow for renovate to build base images (Backport PR #33346, Upstream PR #33326, @aanm)
- .github: fix cloud workflows for renovate (Backport PR #33321, Upstream PR #33320, @aanm)
- .github: fix worfklows used by renovate (Backport PR #33317, Upstream PR #33309, @aanm)
- [v1.15] remove tracking of backports with MLH (#33124, @aanm)
- Add auto-merge for renovate for trusted dependencies (Backport PR #33317, Upstream PR #33287, @aanm)
- bpf: ct: return actual error from CT lookup (Backport PR #33378, Upstream PR #33225, @julianwiedmann)
- bpf: encap: fix ifindex in TO_OVERLAY trace notification (Backport PR #33575, Upstream PR #33083, @julianwiedmann)
- bpf: lxc: fix ifindex in TO_ENDPOINT trace notification (Backport PR #33575, Upstream PR #33085, @julianwiedmann)
- bpf: lxc: prefer SECLABEL_IPV4 over SECLABEL in ipv4_policy() (Backport PR #33378, Upstream PR #33181, @julianwiedmann)
- build(deps): bump urllib3 from 2.0.7 to 2.2.2 in /Documentation (Backport PR #33378, Upstream PR #33218, @dependabot[bot])
- build-images-base: cancel github runs based on branch name (Backport PR #33378, Upstream PR #33353, @aanm)
- build-images-base: push to branch if pull request ref doesn't exist (Backport PR #33378, Upstream PR #33368, @aanm)
- build-images: fetch artifacts with specific pattern (Backport PR #33378, Upstream PR #33216, @aanm)
- chore(deps): update all github action dependencies (v1.15) (#33177, @cilium-renovate[bot])
- chore(deps): update all github action dependencies (v1.15) (#33338, @cilium-renovate[bot])
- chore(deps): update all github action dependencies (v1.15) (#33492, @cilium-renovate[bot])
- chore(deps): update all-dependencies (v1.15) (#33175, @cilium-renovate[bot])
- chore(deps): update all-dependencies (v1.15) (#33337, @cilium-renovate[bot])
- chore(deps): update all-dependencies (v1.15) (#33571, @cilium-renovate[bot])
- chore(deps): update cilium/cilium-cli action to v0.16.11 (v1.15) (#33650, @cilium-renovate[bot])
- chore(deps): update cilium/scale-tests-action digest to `511e3d9` (v1.15) (#33208, @cilium-renovate[bot])
- chore(deps): update dependency cilium/cilium-cli to v0.16.10 (v1.15) (#32990, @cilium-renovate[bot])
- chore(deps): update dependency eksctl-io/eksctl to v0.182.0 (v1.15) (#32991, @cilium-renovate[bot])
- chore(deps): update
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about these updates again.
- [ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
--- kubernetes/apps/kube-system/kube-vip/app Kustomization: flux-system/cluster-apps-kube-vip DaemonSet: kube-system/kube-vip
+++ kubernetes/apps/kube-system/kube-vip/app Kustomization: flux-system/cluster-apps-kube-vip DaemonSet: kube-system/kube-vip
@@ -57,13 +57,13 @@
- name: vip_renewdeadline
value: '10'
- name: vip_retryperiod
value: '2'
- name: prometheus_server
value: :2112
- image: ghcr.io/kube-vip/kube-vip:v0.8.2
+ image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
securityContext:
capabilities:
add:
- NET_ADMIN
--- kubernetes/apps/networking/cloudflared/app Kustomization: flux-system/cloudflared HelmRelease: networking/cloudflared
+++ kubernetes/apps/networking/cloudflared/app Kustomization: flux-system/cloudflared HelmRelease: networking/cloudflared
@@ -50,13 +50,13 @@
TUNNEL_METRICS: 0.0.0.0:2000
TUNNEL_ORIGIN_ENABLE_HTTP2: true
TUNNEL_POST_QUANTUM: true
TUNNEL_TRANSPORT_PROTOCOL: quic
image:
repository: docker.io/cloudflare/cloudflared
- tag: 2024.8.3@sha256:14d9c6b01b29d556569446b0cc5c9162dc129a92ce127afe27c3aae4534f8af1
+ tag: 2024.8.2@sha256:004f4b7b60bab652d478148c138843c24eae1feee4c58fddd435b9b79c953957
probes:
liveness:
custom: true
enabled: true
spec:
failureThreshold: 3
--- kubernetes/apps/monitoring/goldilocks/app Kustomization: flux-system/cluster-apps-goldilocks HelmRelease: monitoring/goldilocks
+++ kubernetes/apps/monitoring/goldilocks/app Kustomization: flux-system/cluster-apps-goldilocks HelmRelease: monitoring/goldilocks
@@ -13,13 +13,13 @@
chart: goldilocks
interval: 5m
sourceRef:
kind: HelmRepository
name: fairwinds
namespace: flux-system
- version: 9.0.0
+ version: 8.0.2
interval: 5m
values:
dashboard:
enabled: true
ingress:
annotations:
--- kubernetes/apps/kube-system/nvidia-device-plugin/app Kustomization: flux-system/cluster-apps-nvidia-plugin HelmRelease: kube-system/nvidia-device-plugin
+++ kubernetes/apps/kube-system/nvidia-device-plugin/app Kustomization: flux-system/cluster-apps-nvidia-plugin HelmRelease: kube-system/nvidia-device-plugin
@@ -13,16 +13,16 @@
chart: nvidia-device-plugin
interval: 15m
sourceRef:
kind: HelmRepository
name: nvidia-device-plugin
namespace: flux-system
- version: 0.16.2
+ version: 0.15.0
interval: 15m
values:
image:
repository: nvcr.io/nvidia/k8s-device-plugin
- tag: v0.16.2
+ tag: v0.15.0
nodeSelector:
feature.node.kubernetes.io/custom-nvidia-gpu: 'true'
runtimeClassName: nvidia
--- kubernetes/apps/kube-system/reloader/app Kustomization: flux-system/cluster-apps-reloader HelmRelease: kube-system/reloader
+++ kubernetes/apps/kube-system/reloader/app Kustomization: flux-system/cluster-apps-reloader HelmRelease: kube-system/reloader
@@ -12,13 +12,13 @@
spec:
chart: reloader
sourceRef:
kind: HelmRepository
name: stakater
namespace: flux-system
- version: 1.0.121
+ version: 1.0.115
install:
createNamespace: true
remediation:
retries: 3
interval: 15m
maxHistory: 3
--- kubernetes/apps/cert-manager/cert-manager/app Kustomization: flux-system/cluster-apps-cert-manager HelmRelease: cert-manager/cert-manager
+++ kubernetes/apps/cert-manager/cert-manager/app Kustomization: flux-system/cluster-apps-cert-manager HelmRelease: cert-manager/cert-manager
@@ -12,13 +12,13 @@
spec:
chart: cert-manager
sourceRef:
kind: HelmRepository
name: jetstack
namespace: flux-system
- version: v1.15.3
+ version: v1.15.2
install:
createNamespace: true
remediation:
retries: 3
interval: 5m
upgrade:
--- kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium
+++ kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium
@@ -13,13 +13,13 @@
spec:
chart: cilium
sourceRef:
kind: HelmRepository
name: cilium
namespace: flux-system
- version: 1.15.7
+ version: 1.16.1
install:
remediation:
retries: 3
interval: 30m
upgrade:
cleanupOnFail: true
--- kubernetes/apps/networking/echo-server/app Kustomization: flux-system/echo-server HelmRelease: networking/echo-server
+++ kubernetes/apps/networking/echo-server/app Kustomization: flux-system/echo-server HelmRelease: networking/echo-server
@@ -34,13 +34,13 @@
HTTP_PORT: 8080
LOG_IGNORE_PATH: /healthz
LOG_WITHOUT_NEWLINE: true
PROMETHEUS_ENABLED: true
image:
repository: ghcr.io/mendhak/http-https-echo
- tag: 34
+ tag: 33
probes:
liveness:
custom: true
enabled: true
spec:
failureThreshold: 3
--- kubernetes/apps/kube-system/nvidia-device-plugin/app Kustomization: flux-system/cluster-apps-nvidia HelmRelease: kube-system/nvidia-device-plugin
+++ kubernetes/apps/kube-system/nvidia-device-plugin/app Kustomization: flux-system/cluster-apps-nvidia HelmRelease: kube-system/nvidia-device-plugin
@@ -13,16 +13,16 @@
chart: nvidia-device-plugin
interval: 15m
sourceRef:
kind: HelmRepository
name: nvidia-device-plugin
namespace: flux-system
- version: 0.16.2
+ version: 0.15.0
interval: 15m
values:
image:
repository: nvcr.io/nvidia/k8s-device-plugin
- tag: v0.16.2
+ tag: v0.15.0
nodeSelector:
feature.node.kubernetes.io/custom-nvidia-gpu: 'true'
runtimeClassName: nvidia
--- kubernetes/apps/default/nitter/app Kustomization: flux-system/nitter HelmRelease: default/nitter
+++ kubernetes/apps/default/nitter/app Kustomization: flux-system/nitter HelmRelease: default/nitter
@@ -1,108 +0,0 @@
----
-apiVersion: helm.toolkit.fluxcd.io/v2beta2
-kind: HelmRelease
-metadata:
- labels:
- app.kubernetes.io/name: nitter
- kustomize.toolkit.fluxcd.io/name: nitter
- kustomize.toolkit.fluxcd.io/namespace: flux-system
- name: nitter
- namespace: default
-spec:
- chart:
- spec:
- chart: app-template
- sourceRef:
- kind: HelmRepository
- name: bjw-s-charts
- namespace: flux-system
- version: 3.2.1
- install:
- createNamespace: true
- remediation:
- retries: 5
- interval: 15m
- upgrade:
- remediation:
- retries: 5
- values:
- controllers:
- nitter:
- annotations:
- reloader.stakater.com/auto: 'true'
- containers:
- app:
- image:
- repository: registry.skysolutions.fi/library/nitter
- tag: guest-accounts
- probes:
- liveness:
- custom: true
- enabled: false
- spec:
- failureThreshold: 3
- httpGet:
- path: /settings
- port: 8080
- initialDelaySeconds: 0
- periodSeconds: 10
- timeoutSeconds: 1
- readiness:
- custom: true
- enabled: false
- spec:
- failureThreshold: 3
- httpGet:
- path: /settings
- port: 8080
- initialDelaySeconds: 0
- periodSeconds: 10
- timeoutSeconds: 1
- startup:
- enabled: false
- resources:
- limits:
- memory: 250Mi
- requests:
- memory: 50Mi
- replicas: 1
- strategy: RollingUpdate
- defaultPodOptions:
- topologySpreadConstraints:
- - labelSelector:
- matchLabels:
- app.kubernetes.io/name: nitter
- maxSkew: 1
- topologyKey: kubernetes.io/hostname
- whenUnsatisfiable: DoNotSchedule
- ingress:
- app:
- annotations:
- hajimari.io/icon: twitter
- className: internal
- hosts:
- - host: nitter...PLACEHOLDER..
- paths:
- - path: /
- pathType: Prefix
- service:
- identifier: app
- port: http
- tls:
- - hosts:
- - nitter...PLACEHOLDER..
- persistence:
- config:
- enabled: true
- mountPath: /src/nitter.conf
- name: nitter
- readOnly: false
- subPath: config.ini
- type: configMap
- service:
- app:
- controller: nitter
- ports:
- http:
- port: 8080
-
--- kubernetes/apps/default/nitter/app Kustomization: flux-system/nitter ExternalSecret: default/gatus
+++ kubernetes/apps/default/nitter/app Kustomization: flux-system/nitter ExternalSecret: default/gatus
@@ -1,25 +0,0 @@
----
-apiVersion: external-secrets.io/v1beta1
-kind: ExternalSecret
-metadata:
- labels:
- app.kubernetes.io/name: nitter
- kustomize.toolkit.fluxcd.io/name: nitter
- kustomize.toolkit.fluxcd.io/namespace: flux-system
- name: gatus
- namespace: default
-spec:
- dataFrom:
- - extract:
- key: gatus
- secretStoreRef:
- kind: ClusterSecretStore
- name: onepassword-connect
- target:
- name: gatus-secret
- template:
- data:
- CUSTOM_PUSHOVER_TOKEN: '{{ .GATUS_PUSHOVER_TOKEN }}'
- CUSTOM_PUSHOVER_USER_KEY: '{{ .PUSHOVER_USER_KEY }}'
- engineVersion: v2
-
--- kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/nitter
+++ kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/nitter
@@ -1,32 +0,0 @@
----
-apiVersion: kustomize.toolkit.fluxcd.io/v1
-kind: Kustomization
-metadata:
- labels:
- kustomize.toolkit.fluxcd.io/name: cluster-apps
- kustomize.toolkit.fluxcd.io/namespace: flux-system
- name: nitter
- namespace: flux-system
-spec:
- commonMetadata:
- labels:
- app.kubernetes.io/name: nitter
- decryption:
- provider: sops
- secretRef:
- name: sops-age
- interval: 10m
- path: ./kubernetes/apps/default/nitter/app
- postBuild:
- substituteFrom:
- - kind: ConfigMap
- name: cluster-settings
- - kind: Secret
- name: cluster-secrets
- prune: true
- sourceRef:
- kind: GitRepository
- name: home-kubernetes
- targetNamespace: default
- wait: false
-
--- kubernetes/apps/media/sonarr/app Kustomization: flux-system/cluster-apps-sonarr HelmRelease: media/sonarr
+++ kubernetes/apps/media/sonarr/app Kustomization: flux-system/cluster-apps-sonarr HelmRelease: media/sonarr
@@ -40,13 +40,13 @@
- secretRef:
name: sonarr
global:
nameOverride: sonarr
image:
repository: ghcr.io/onedr0p/sonarr-develop
- tag: 4.0.8.2223@sha256:f4d8a1203d2f0cf4f1ab69b9682896ef1e73eaf04021edb4ce2a479af961e420
+ tag: 4.0.6.1820@sha256:3418fb8cd12b30fd70c026531e14f5a1222c7b4499d9560aad9f31ddf064f4fb
ingress:
main:
annotations:
gatus.io/enabled: 'true'
gethomepage.dev/description: TV Downloads
gethomepage.dev/enabled: 'true'
--- kubernetes/apps/media/radarr/app Kustomization: flux-system/cluster-apps-radarr HelmRelease: media/radarr
+++ kubernetes/apps/media/radarr/app Kustomization: flux-system/cluster-apps-radarr HelmRelease: media/radarr
@@ -34,13 +34,13 @@
RADARR__INSTANCE_NAME: Radarr
RADARR__LOG_LEVEL: info
RADARR__PORT: 80
TZ: Europe/Prague
image:
repository: ghcr.io/onedr0p/radarr-develop
- tag: 5.10.0.9090@sha256:3802c38f08a3350637d6d9ba10a35a89b791afd95c2e4e7e7402e69c0910b50c
+ tag: 5.8.3.8933@sha256:da6094f6cc4dc95af194612a8a4d7db4fc27ff4a6e5748c2e6d5dd7be9ed69a7
ingress:
main:
annotations:
gatus.io/enabled: 'true'
gethomepage.dev/description: Movie Downloads
gethomepage.dev/enabled: 'true'
--- kubernetes/apps/media/plex/app Kustomization: flux-system/plex HelmRelease: media/plex
+++ kubernetes/apps/media/plex/app Kustomization: flux-system/plex HelmRelease: media/plex
@@ -34,13 +34,13 @@
ADVERTISE_IP: https://plex...PLACEHOLDER..,http://192.168.69.101:32400
NVIDIA_DRIVER_CAPABILITIES: all
NVIDIA_VISIBLE_DEVICES: all
TZ: Europe/Prague
image:
repository: ghcr.io/onedr0p/plex-beta
- tag: 1.41.0.8911-1bd569c5f@sha256:970272244b9c638b596e88591516d091be007b6a35e81236d9e95c5c9c24b681
+ tag: 1.40.5.8854-f36c552fd@sha256:483bb8b03110e6874b2eea984c15039b7423d01c9fd5b436807aa14fe46ba0f2
probes:
liveness:
custom: true
enabled: true
spec:
failureThreshold: 3
--- kubernetes/apps/media/sabnzbd/app Kustomization: flux-system/cluster-apps-sabnzbd HelmRelease: media/sabnzbd
+++ kubernetes/apps/media/sabnzbd/app Kustomization: flux-system/cluster-apps-sabnzbd HelmRelease: media/sabnzbd
@@ -35,13 +35,13 @@
SABNZBD__HOST_WHITELIST_ENTRIES: sabnzbd, sabnzbd.media, sabnzbd.media.svc,
sabnzbd.media.svc.cluster, sabnzbd.media.svc.cluster.local, sabnzbd...PLACEHOLDER..
SABNZBD__PORT: 80
TZ: Europe/Prague
image:
repository: ghcr.io/onedr0p/sabnzbd
- tag: 4.3.3@sha256:c8a03bbe260ba1646fd0e58f9b45dfda8f0a0c9f1f9b5f6f92440cd689cdf353
+ tag: 4.3.2@sha256:b23a4ecc680470e88fc04a6dc27097f4da68adcf9d1ad0d6407bab7010fefade
probes:
liveness:
custom: true
enabled: true
spec:
failureThreshold: 3
--- kubernetes/apps/networking/ingress-nginx/internal Kustomization: flux-system/ingress-nginx-internal HelmRelease: networking/ingress-nginx-internal
+++ kubernetes/apps/networking/ingress-nginx/internal Kustomization: flux-system/ingress-nginx-internal HelmRelease: networking/ingress-nginx-internal
@@ -13,13 +13,13 @@
spec:
chart: ingress-nginx
sourceRef:
kind: HelmRepository
name: ingress-nginx
namespace: flux-system
- version: 4.11.2
+ version: 4.11.1
install:
remediation:
retries: 3
interval: 30m
upgrade:
cleanupOnFail: true
@@ -77,9 +77,9 @@
- name: TEMPLATE_NAME
value: lost-in-space
- name: SHOW_DETAILS
value: 'false'
image:
repository: ghcr.io/tarampampam/error-pages
- tag: 3.3.0
+ tag: 2.27.0
fullnameOverride: ingress-nginx-internal
--- kubernetes/apps/networking/ingress-nginx/external Kustomization: flux-system/ingress-nginx-external HelmRelease: networking/ingress-nginx-external
+++ kubernetes/apps/networking/ingress-nginx/external Kustomization: flux-system/ingress-nginx-external HelmRelease: networking/ingress-nginx-external
@@ -13,13 +13,13 @@
spec:
chart: ingress-nginx
sourceRef:
kind: HelmRepository
name: ingress-nginx
namespace: flux-system
- version: 4.11.2
+ version: 4.11.1
dependsOn:
- name: cloudflared
namespace: networking
install:
remediation:
retries: 3
--- kubernetes/apps/monitoring/gatus/app Kustomization: flux-system/gatus HelmRelease: monitoring/gatus
+++ kubernetes/apps/monitoring/gatus/app Kustomization: flux-system/gatus HelmRelease: monitoring/gatus
@@ -86,13 +86,13 @@
METHOD: WATCH
NAMESPACE: ALL
RESOURCE: both
UNIQUE_FILENAMES: true
image:
repository: ghcr.io/kiwigrid/k8s-sidecar
- tag: 1.27.5@sha256:1fc88232e223a6977c626510372a74ca1f73af073e3c361719ccf02f223c8a12
+ tag: 1.27.4@sha256:f6ed71d0f9f1175df8c4b8c674b339a74785384d25fdad21b3c3dc0554109286
resources:
limits:
memory: 256Mi
requests:
cpu: 10m
restartPolicy: Always
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard
+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard
@@ -4703,27 +4703,27 @@
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
- "expr": "sum(rate(cilium_policy_l7_denied_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m]))",
+ "expr": "sum(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"denied\"}[1m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "denied",
"refId": "A"
},
{
- "expr": "sum(rate(cilium_policy_l7_forwarded_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m]))",
+ "expr": "sum(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"forwarded\"}[1m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "forwarded",
"refId": "B"
},
{
- "expr": "sum(rate(cilium_policy_l7_received_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m]))",
+ "expr": "sum(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"received\"}[1m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "received",
"refId": "C"
}
],
@@ -4869,13 +4869,13 @@
}
},
{
"aliasColors": {
"Max per node processingTime": "#e24d42",
"Max per node upstreamTime": "#58140c",
- "avg(cilium_policy_l7_parse_errors_total{pod=~\"cilium.*\"})": "#bf1b00",
+ "avg(cilium_policy_l7_total{pod=~\"cilium.*\", rule=\"parse_errors\"})": "#bf1b00",
"parse errors": "#bf1b00"
},
"bars": true,
"dashLength": 10,
"dashes": false,
"datasource": {
@@ -4928,13 +4928,13 @@
},
{
"alias": "Max per node upstreamTime",
"yaxis": 2
},
{
- "alias": "avg(cilium_policy_l7_parse_errors_total{pod=~\"cilium.*\"})",
+ "alias": "avg(cilium_policy_l7_total{pod=~\"cilium.*\", rule=\"parse_errors\"})",
"yaxis": 2
},
{
"alias": "parse errors",
"yaxis": 2
}
@@ -4949,13 +4949,13 @@
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{scope}}",
"refId": "A"
},
{
- "expr": "avg(cilium_policy_l7_parse_errors_total{k8s_app=\"cilium\", pod=~\"$pod\"}) by (pod)",
+ "expr": "avg(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"parse_errors\"}) by (pod)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "parse errors",
"refId": "B"
}
],
@@ -5307,13 +5307,13 @@
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Max {{scope}}",
"refId": "B"
},
{
- "expr": "max(rate(cilium_policy_l7_parse_errors_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m])) by (pod)",
+ "expr": "max(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"parse_errors\"}[1m])) by (pod)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "parse errors",
"refId": "A"
}
],
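The dashboard hunks above all follow one pattern: the separate `cilium_policy_l7_denied_total`, `_forwarded_total`, `_received_total`, and `_parse_errors_total` counters are replaced by a single `cilium_policy_l7_total` metric with a `rule` label. If you maintain custom dashboards outside this chart, the same rewrite can be applied mechanically. A minimal sketch, where the name-to-label mapping is inferred from the diff above rather than taken from Cilium documentation:

```python
# Sketch: rewrite legacy Cilium L7 policy metric names into the consolidated
# cilium_policy_l7_total{rule="..."} form seen in the dashboard diff above.
# The LEGACY mapping is inferred from this diff, not from official docs.
import re

LEGACY = {
    "cilium_policy_l7_denied_total": "denied",
    "cilium_policy_l7_forwarded_total": "forwarded",
    "cilium_policy_l7_received_total": "received",
    "cilium_policy_l7_parse_errors_total": "parse_errors",
}

def rewrite_query(expr: str) -> str:
    """Replace each legacy metric selector with the labeled form."""
    for old, rule in LEGACY.items():
        # metric{labels} -> cilium_policy_l7_total{labels, rule="..."}
        expr = re.sub(
            rf"{old}\{{([^}}]*)\}}",
            lambda m, r=rule: f'cilium_policy_l7_total{{{m.group(1)}, rule="{r}"}}',
            expr,
        )
        # bare metric name without a label selector
        expr = re.sub(rf"{old}\b", f'cilium_policy_l7_total{{rule="{rule}"}}', expr)
    return expr
```

Running this over the old panel expressions reproduces the `+` lines of the hunk, e.g. the `denied` target above.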
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config
+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config
@@ -7,20 +7,18 @@
data:
identity-allocation-mode: crd
identity-heartbeat-timeout: 30m0s
identity-gc-interval: 15m0s
cilium-endpoint-gc-interval: 5m0s
nodes-gc-interval: 5m0s
- skip-cnp-status-startup-clean: 'false'
debug: 'false'
debug-verbose: ''
enable-policy: default
policy-cidr-match-mode: ''
prometheus-serve-addr: :9962
controller-group-metrics: write-cni-file sync-host-ips sync-lb-maps-with-k8s-services
- proxy-prometheus-port: '9964'
operator-prometheus-serve-addr: :9963
enable-metrics: 'true'
enable-ipv4: 'true'
enable-ipv6: 'false'
custom-cni-conf: 'false'
enable-bpf-clock-probe: 'false'
@@ -28,58 +26,69 @@
monitor-aggregation-interval: 5s
monitor-aggregation-flags: all
bpf-map-dynamic-size-ratio: '0.0025'
bpf-policy-map-max: '16384'
bpf-lb-map-max: '65536'
bpf-lb-external-clusterip: 'false'
+ bpf-events-drop-enabled: 'true'
+ bpf-events-policy-verdict-enabled: 'true'
+ bpf-events-trace-enabled: 'true'
preallocate-bpf-maps: 'false'
- sidecar-istio-proxy-image: cilium/istio_proxy
cluster-name: home-kubernetes
cluster-id: '1'
routing-mode: native
service-no-backend-response: reject
enable-l7-proxy: 'true'
enable-ipv4-masquerade: 'true'
enable-ipv4-big-tcp: 'false'
enable-ipv6-big-tcp: 'false'
enable-ipv6-masquerade: 'true'
+ enable-tcx: 'true'
+ datapath-mode: veth
enable-bpf-masquerade: 'false'
enable-masquerade-to-route-source: 'false'
enable-xt-socket-fallback: 'true'
install-no-conntrack-iptables-rules: 'false'
auto-direct-node-routes: 'true'
+ direct-routing-skip-unreachable: 'false'
enable-local-redirect-policy: 'true'
ipv4-native-routing-cidr: 10.69.0.0/16
devices: eno+ enp6s+ bond+
+ enable-runtime-device-detection: 'true'
kube-proxy-replacement: 'true'
kube-proxy-replacement-healthz-bind-address: 0.0.0.0:10256
bpf-lb-sock: 'false'
+ bpf-lb-sock-terminate-pod-connections: 'false'
+ nodeport-addresses: ''
enable-health-check-nodeport: 'true'
enable-health-check-loadbalancer-ip: 'false'
node-port-bind-protection: 'true'
enable-auto-protect-node-port-range: 'true'
bpf-lb-mode: dsr
bpf-lb-algorithm: maglev
bpf-lb-acceleration: disabled
enable-svc-source-range-check: 'true'
enable-l2-neigh-discovery: 'true'
arping-refresh-period: 30s
+ k8s-require-ipv4-pod-cidr: 'false'
+ k8s-require-ipv6-pod-cidr: 'false'
enable-endpoint-routes: 'true'
enable-k8s-networkpolicy: 'true'
write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
cni-exclusive: 'true'
cni-log-file: /var/run/cilium/cilium-cni.log
enable-endpoint-health-checking: 'true'
enable-health-checking: 'true'
enable-well-known-identities: 'false'
- enable-remote-node-identity: 'true'
+ enable-node-selector-labels: 'false'
synchronize-k8s-nodes: 'true'
operator-api-serve-addr: 127.0.0.1:9234
enable-hubble: 'true'
hubble-socket-path: /var/run/cilium/hubble.sock
hubble-metrics-server: :9965
+ hubble-metrics-server-enable-tls: 'false'
hubble-metrics: dns:query drop tcp flow port-distribution icmp http
enable-hubble-open-metrics: 'false'
hubble-export-file-max-size-mb: '10'
hubble-export-file-max-backups: '5'
hubble-listen-address: :4244
hubble-disable-tls: 'false'
@@ -106,12 +115,13 @@
k8s-client-burst: '20'
remove-cilium-node-taints: 'true'
set-cilium-node-taints: 'true'
set-cilium-is-up-condition: 'true'
unmanaged-pod-watcher-interval: '15'
dnsproxy-enable-transparent-mode: 'true'
+ dnsproxy-socket-linger-timeout: '10'
tofqdns-dns-reject-response-code: refused
tofqdns-enable-dns-compression: 'true'
tofqdns-endpoint-max-ip-per-hostname: '50'
tofqdns-idle-connection-grace-period: 0s
tofqdns-max-deferred-connection-deletes: '10000'
tofqdns-proxy-response-max-delay: 100ms
@@ -123,9 +133,15 @@
proxy-xff-num-trusted-hops-ingress: '0'
proxy-xff-num-trusted-hops-egress: '0'
proxy-connect-timeout: '2'
proxy-max-requests-per-connection: '0'
proxy-max-connection-duration-seconds: '0'
proxy-idle-timeout-seconds: '60'
- external-envoy-proxy: 'false'
+ external-envoy-proxy: 'true'
+ envoy-base-id: '0'
+ envoy-keep-cap-netbindservice: 'false'
max-connected-clusters: '255'
+ clustermesh-enable-endpoint-sync: 'false'
+ clustermesh-enable-mcs-api: 'false'
+ nat-map-stats-entries: '32'
+ nat-map-stats-interval: 30s
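The cilium-config hunks above reduce to a key-level diff of the ConfigMap `data` mapping: a few 1.15-era keys are dropped (e.g. `proxy-prometheus-port`, `enable-remote-node-identity`), new 1.16 keys appear (e.g. `enable-tcx`, `nat-map-stats-interval`), and `external-envoy-proxy` flips value. A small sketch of that comparison, using an illustrative subset of the keys from this diff rather than the full ConfigMap:

```python
# Sketch: key-level diff of two Kubernetes ConfigMap `data` mappings,
# similar in spirit to the flux-local output above. The sample keys are
# a small subset taken from the cilium-config hunks in this PR.

def configmap_diff(old: dict, new: dict) -> dict:
    """Return removed, added, and changed keys between two data maps."""
    removed = sorted(old.keys() - new.keys())
    added = sorted(new.keys() - old.keys())
    changed = sorted(k for k in old.keys() & new.keys() if old[k] != new[k])
    return {"removed": removed, "added": added, "changed": changed}

old = {
    "proxy-prometheus-port": "9964",
    "external-envoy-proxy": "false",
    "enable-remote-node-identity": "true",
}
new = {
    "external-envoy-proxy": "true",
    "enable-tcx": "true",
    "nat-map-stats-interval": "30s",
}
```

Note that ConfigMap `data` values are always strings, which is why booleans and numbers appear quoted throughout the diff.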
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard
+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard
@@ -11,17 +11,30 @@
grafana_dashboard: '1'
annotations:
grafana_folder: Cilium
data:
cilium-operator-dashboard.json: |
{
+ "__inputs": [
+ {
+ "name": "DS_PROMETHEUS",
+ "label": "prometheus",
+ "description": "",
+ "type": "datasource",
+ "pluginId": "prometheus",
+ "pluginName": "Prometheus"
+ }
+ ],
"annotations": {
"list": [
{
"builtIn": 1,
- "datasource": "-- Grafana --",
+ "datasource": {
+ "type": "datasource",
+ "uid": "grafana"
+ },
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
@@ -37,13 +50,16 @@
"aliasColors": {
"avg": "#cffaff"
},
"bars": false,
"dashLength": 10,
"dashes": false,
- "datasource": "prometheus",
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
@@ -163,13 +179,16 @@
"aliasColors": {
"MAX_resident_memory_bytes_max": "#e5ac0e"
},
"bars": false,
"dashLength": 10,
"dashes": false,
- "datasource": "prometheus",
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
@@ -293,13 +312,16 @@
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
- "datasource": "prometheus",
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
@@ -390,13 +412,16 @@
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
- "datasource": "prometheus",
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
@@ -487,13 +512,16 @@
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
- "datasource": "prometheus",
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
@@ -584,13 +612,16 @@
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
- "datasource": "prometheus",
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
@@ -681,13 +712,16 @@
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
- "datasource": "prometheus",
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
@@ -778,13 +812,16 @@
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
- "datasource": "prometheus",
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
@@ -875,13 +912,16 @@
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
- "datasource": "prometheus",
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config
+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config
@@ -6,9 +6,9 @@
namespace: kube-system
data:
config.yaml: "cluster-name: home-kubernetes\npeer-service: \"hubble-peer.kube-system.svc.cluster.local:443\"\
\nlisten-address: :4245\ngops: true\ngops-port: \"9893\"\ndial-timeout: \nretry-timeout:\
\ \nsort-buffer-len-max: \nsort-buffer-drain-timeout: \ntls-hubble-client-cert-file:\
\ /var/lib/hubble-relay/tls/client.crt\ntls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key\n\
- tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt\ndisable-server-tls:\
- \ true\n"
+ tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt\n\n\
+ disable-server-tls: true\n"
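The escaped YAML flow scalar above is hard to read in diff form; unescaping it by hand, the new `config.yaml` amounts to roughly the following (a reconstruction from the `+` lines of this hunk, so treat it as an approximation rather than the chart's authoritative output):

```yaml
cluster-name: home-kubernetes
peer-service: "hubble-peer.kube-system.svc.cluster.local:443"
listen-address: :4245
gops: true
gops-port: "9893"
dial-timeout:
retry-timeout:
sort-buffer-len-max:
sort-buffer-drain-timeout:
tls-hubble-client-cert-file: /var/lib/hubble-relay/tls/client.crt
tls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key
tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt

disable-server-tls: true
```

The only semantic content of the change is the extra blank line before `disable-server-tls`; the rendered configuration is otherwise identical.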
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-dashboard
+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-dashboard
@@ -9,3256 +9,1059 @@
app.kubernetes.io/name: hubble
app.kubernetes.io/part-of: cilium
grafana_dashboard: '1'
annotations:
grafana_folder: Cilium
data:
- hubble-dashboard.json: |
- {
- "annotations": {
- "list": [
- {
- "builtIn": 1,
- "datasource": "-- Grafana --",
- "enable": true,
- "hide": true,
- "iconColor": "rgba(0, 211, 255, 1)",
- "name": "Annotations & Alerts",
- "type": "dashboard"
- }
- ]
- },
- "editable": true,
- "gnetId": null,
- "graphTooltip": 0,
- "id": 3,
- "links": [],
- "panels": [
- {
- "collapsed": false,
- "gridPos": {
- "h": 1,
- "w": 24,
- "x": 0,
- "y": 0
- },
- "id": 14,
- "panels": [],
- "title": "General Processing",
- "type": "row"
- },
- {
- "aliasColors": {},
- "bars": false,
- "dashLength": 10,
- "dashes": false,
- "datasource": "prometheus",
- "fill": 1,
- "gridPos": {
- "h": 5,
- "w": 12,
- "x": 0,
- "y": 1
- },
- "id": 12,
- "legend": {
- "avg": false,
- "current": false,
- "max": false,
- "min": false,
- "show": true,
- "total": false,
- "values": false
- },
- "lines": true,
- "linewidth": 1,
- "links": [],
- "nullPointMode": "null",
- "options": {},
- "percentage": false,
- "pointradius": 2,
- "points": false,
- "renderer": "flot",
- "seriesOverrides": [
- {
- "alias": "max",
- "fillBelowTo": "avg",
- "lines": false
- },
- {
- "alias": "avg",
- "fill": 0,
- "fillBelowTo": "min"
- },
- {
- "alias": "min",
- "lines": false
- }
- ],
- "spaceLength": 10,
- "stack": false,
- "steppedLine": false,
- "targets": [
- {
- "expr": "avg(sum(rate(hubble_flows_processed_total[1m])) by (pod))",
- "format": "time_series",
- "intervalFactor": 1,
- "legendFormat": "avg",
- "refId": "A"
- },
- {
- "expr": "min(sum(rate(hubble_flows_processed_total[1m])) by (pod))",
- "format": "time_series",
- "intervalFactor": 1,
- "legendFormat": "min",
- "refId": "B"
- },
- {
- "expr": "max(sum(rate(hubble_flows_processed_total[1m])) by (pod))",
- "format": "time_series",
- "intervalFactor": 1,
- "legendFormat": "max",
- "refId": "C"
- }
- ],
- "thresholds": [],
- "timeFrom": null,
- "timeRegions": [],
- "timeShift": null,
- "title": "Flows processed Per Node",
- "tooltip": {
- "shared": true,
- "sort": 1,
- "value_type": "individual"
- },
- "type": "graph",
- "xaxis": {
- "buckets": null,
- "mode": "time",
- "name": null,
- "show": true,
- "values": []
- },
- "yaxes": [
- {
- "format": "ops",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- },
- {
- "format": "short",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- }
- ],
- "yaxis": {
- "align": false,
- "alignLevel": null
- }
- },
- {
- "aliasColors": {},
- "bars": false,
- "dashLength": 10,
- "dashes": false,
- "datasource": "prometheus",
- "fill": 1,
- "gridPos": {
- "h": 5,
- "w": 12,
- "x": 12,
- "y": 1
- },
- "id": 32,
- "legend": {
- "avg": false,
- "current": false,
- "max": false,
- "min": false,
- "show": true,
- "total": false,
- "values": false
- },
- "lines": true,
- "linewidth": 1,
- "links": [],
- "nullPointMode": "null",
- "options": {},
- "percentage": false,
- "pointradius": 2,
- "points": false,
- "renderer": "flot",
- "seriesOverrides": [],
- "spaceLength": 10,
- "stack": true,
- "steppedLine": false,
- "targets": [
- {
- "expr": "sum(rate(hubble_flows_processed_total[1m])) by (pod, type)",
- "format": "time_series",
- "intervalFactor": 1,
- "legendFormat": "{{type}}",
- "refId": "A"
- }
- ],
- "thresholds": [],
- "timeFrom": null,
- "timeRegions": [],
- "timeShift": null,
- "title": "Flows Types",
- "tooltip": {
- "shared": true,
- "sort": 2,
- "value_type": "individual"
- },
- "type": "graph",
- "xaxis": {
- "buckets": null,
- "mode": "time",
- "name": null,
- "show": true,
- "values": []
- },
- "yaxes": [
- {
- "format": "ops",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- },
- {
- "format": "short",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- }
- ],
- "yaxis": {
- "align": false,
- "alignLevel": null
- }
- },
- {
- "aliasColors": {},
- "bars": false,
- "dashLength": 10,
- "dashes": false,
- "datasource": "prometheus",
- "fill": 1,
- "gridPos": {
- "h": 5,
- "w": 12,
- "x": 0,
- "y": 6
- },
- "id": 59,
- "legend": {
- "avg": false,
- "current": false,
- "max": false,
- "min": false,
- "show": true,
- "total": false,
- "values": false
- },
- "lines": true,
- "linewidth": 1,
- "links": [],
- "nullPointMode": "null",
- "options": {},
- "percentage": false,
- "pointradius": 2,
- "points": false,
- "renderer": "flot",
- "seriesOverrides": [],
- "spaceLength": 10,
- "stack": true,
- "steppedLine": false,
- "targets": [
- {
- "expr": "sum(rate(hubble_flows_processed_total{type=\"L7\"}[1m])) by (pod, subtype)",
- "format": "time_series",
- "intervalFactor": 1,
- "legendFormat": "{{subtype}}",
- "refId": "A"
- }
- ],
- "thresholds": [],
- "timeFrom": null,
- "timeRegions": [],
- "timeShift": null,
- "title": "L7 Flow Distribution",
- "tooltip": {
- "shared": true,
- "sort": 2,
- "value_type": "individual"
- },
- "type": "graph",
- "xaxis": {
- "buckets": null,
- "mode": "time",
- "name": null,
- "show": true,
- "values": []
- },
- "yaxes": [
- {
- "format": "ops",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- },
- {
- "format": "short",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- }
- ],
- "yaxis": {
- "align": false,
- "alignLevel": null
- }
- },
- {
- "aliasColors": {},
- "bars": false,
- "dashLength": 10,
- "dashes": false,
- "datasource": "prometheus",
- "fill": 1,
- "gridPos": {
- "h": 5,
- "w": 12,
- "x": 12,
- "y": 6
- },
- "id": 60,
- "legend": {
- "avg": false,
- "current": false,
- "max": false,
- "min": false,
- "show": true,
- "total": false,
- "values": false
- },
- "lines": true,
- "linewidth": 1,
- "links": [],
- "nullPointMode": "null",
- "options": {},
- "percentage": false,
- "pointradius": 2,
- "points": false,
- "renderer": "flot",
- "seriesOverrides": [],
- "spaceLength": 10,
- "stack": true,
- "steppedLine": false,
- "targets": [
- {
- "expr": "sum(rate(hubble_flows_processed_total{type=\"Trace\"}[1m])) by (pod, subtype)",
- "format": "time_series",
- "intervalFactor": 1,
- "legendFormat": "{{subtype}}",
- "refId": "A"
- }
- ],
- "thresholds": [],
- "timeFrom": null,
- "timeRegions": [],
- "timeShift": null,
- "title": "Trace Flow Distribution",
- "tooltip": {
[Diff truncated by flux-local]
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-l7-http-metrics-by-workload
+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-l7-http-metrics-by-workload
@@ -11,13 +11,22 @@
grafana_dashboard: '1'
annotations:
grafana_folder: Cilium
data:
hubble-l7-http-metrics-by-workload.json: |
{
- "__inputs": [],
+ "__inputs": [
+ {
+ "name": "DS_PROMETHEUS",
+ "label": "prometheus",
+ "description": "",
+ "type": "datasource",
+ "pluginId": "prometheus",
+ "pluginName": "Prometheus"
+ }
+ ],
"__elements": {},
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
--- HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium
+++ HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium
@@ -106,14 +106,12 @@
verbs:
- get
- update
- apiGroups:
- cilium.io
resources:
- - ciliumnetworkpolicies/status
- - ciliumclusterwidenetworkpolicies/status
- ciliumendpoints/status
- ciliumendpoints
- ciliuml2announcementpolicies/status
- ciliumbgpnodeconfigs/status
verbs:
- patch
--- HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator
+++ HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator
@@ -170,12 +170,13 @@
- ciliumpodippools.cilium.io
- apiGroups:
- cilium.io
resources:
- ciliumloadbalancerippools
- ciliumpodippools
+ - ciliumbgppeeringpolicies
- ciliumbgpclusterconfigs
- ciliumbgpnodeconfigoverrides
verbs:
- get
- list
- watch
--- HelmRelease: kube-system/cilium Service: kube-system/cilium-agent
+++ HelmRelease: kube-system/cilium Service: kube-system/cilium-agent
@@ -15,11 +15,7 @@
k8s-app: cilium
ports:
- name: metrics
port: 9962
protocol: TCP
targetPort: prometheus
- - name: envoy-metrics
- port: 9964
- protocol: TCP
- targetPort: envoy-metrics
--- HelmRelease: kube-system/cilium Service: kube-system/hubble-relay
+++ HelmRelease: kube-system/cilium Service: kube-system/hubble-relay
@@ -12,8 +12,8 @@
type: ClusterIP
selector:
k8s-app: hubble-relay
ports:
- protocol: TCP
port: 80
- targetPort: 4245
+ targetPort: grpc
--- HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium
+++ HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium
@@ -16,24 +16,24 @@
rollingUpdate:
maxUnavailable: 2
type: RollingUpdate
template:
metadata:
annotations:
- cilium.io/cilium-configmap-checksum: 41b8349ddf5b1a139409de2a8330c31f5eaf532bba781527911e92555678a14a
+ cilium.io/cilium-configmap-checksum: 17190095812a9d665a81c116f5dbc0a4d1a819fc69d020aa2eed1a86b43aa125
labels:
k8s-app: cilium
app.kubernetes.io/name: cilium-agent
app.kubernetes.io/part-of: cilium
spec:
securityContext:
appArmorProfile:
type: Unconfined
containers:
- name: cilium-agent
- image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+ image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
imagePullPolicy: IfNotPresent
command:
- cilium-agent
args:
- --config-dir=/tmp/cilium/config-map
startupProbe:
@@ -133,16 +133,12 @@
hostPort: 4244
protocol: TCP
- name: prometheus
containerPort: 9962
hostPort: 9962
protocol: TCP
- - name: envoy-metrics
- containerPort: 9964
- hostPort: 9964
- protocol: TCP
- name: hubble-metrics
containerPort: 9965
hostPort: 9965
protocol: TCP
securityContext:
seLinuxOptions:
@@ -162,12 +158,15 @@
- SETGID
- SETUID
drop:
- ALL
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
+ - name: envoy-sockets
+ mountPath: /var/run/cilium/envoy/sockets
+ readOnly: false
- mountPath: /host/proc/sys/net
name: host-proc-sys-net
- mountPath: /host/proc/sys/kernel
name: host-proc-sys-kernel
- name: bpf-maps
mountPath: /sys/fs/bpf
@@ -190,13 +189,13 @@
mountPath: /var/lib/cilium/tls/hubble
readOnly: true
- name: tmp
mountPath: /tmp
initContainers:
- name: config
- image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+ image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
imagePullPolicy: IfNotPresent
command:
- cilium-dbg
- build-config
env:
- name: K8S_NODE_NAME
@@ -215,13 +214,13 @@
value: '6444'
volumeMounts:
- name: tmp
mountPath: /tmp
terminationMessagePolicy: FallbackToLogsOnError
- name: mount-cgroup
- image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+ image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
imagePullPolicy: IfNotPresent
env:
- name: CGROUP_ROOT
value: /sys/fs/cgroup
- name: BIN_PATH
value: /opt/cni/bin
@@ -247,13 +246,13 @@
- SYS_ADMIN
- SYS_CHROOT
- SYS_PTRACE
drop:
- ALL
- name: apply-sysctl-overwrites
- image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+ image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
imagePullPolicy: IfNotPresent
env:
- name: BIN_PATH
value: /opt/cni/bin
command:
- sh
@@ -277,13 +276,13 @@
- SYS_ADMIN
- SYS_CHROOT
- SYS_PTRACE
drop:
- ALL
- name: mount-bpf-fs
- image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+ image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
imagePullPolicy: IfNotPresent
args:
- mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
command:
- /bin/bash
- -c
@@ -293,13 +292,13 @@
privileged: true
volumeMounts:
- name: bpf-maps
mountPath: /sys/fs/bpf
mountPropagation: Bidirectional
- name: clean-cilium-state
- image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+ image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
imagePullPolicy: IfNotPresent
command:
- /init-container.sh
env:
- name: CILIUM_ALL_STATE
valueFrom:
@@ -341,13 +340,13 @@
- name: cilium-cgroup
mountPath: /sys/fs/cgroup
mountPropagation: HostToContainer
- name: cilium-run
mountPath: /var/run/cilium
- name: install-cni-binaries
- image: quay.io/cilium/cilium:v1.15.7@sha256:2e432bf6879feb8b891c497d6fd784b13e53456017d2b8e4ea734145f0282ef0
+ image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
imagePullPolicy: IfNotPresent
command:
- /install-plugin.sh
resources:
requests:
cpu: 100m
@@ -362,13 +361,12 @@
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- name: cni-path
mountPath: /host/opt/cni/bin
restartPolicy: Always
priorityClassName: system-node-critical
- serviceAccount: cilium
serviceAccountName: cilium
automountServiceAccountToken: true
terminationGracePeriodSeconds: 1
hostNetwork: true
affinity:
podAntiAffinity:
@@ -412,12 +410,16 @@
hostPath:
path: /lib/modules
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
+ - name: envoy-sockets
+ hostPath:
+ path: /var/run/cilium/envoy/sockets
+ type: DirectoryOrCreate
- name: clustermesh-secrets
projected:
defaultMode: 256
sources:
- secret:
name: cilium-clustermesh
@@ -429,12 +431,22 @@
- key: tls.key
path: common-etcd-client.key
- key: tls.crt
path: common-etcd-client.crt
- key: ca.crt
path: common-etcd-client-ca.crt
+ - secret:
+ name: clustermesh-apiserver-local-cert
+ optional: true
+ items:
+ - key: tls.key
+ path: local-etcd-client.key
+ - key: tls.crt
+ path: local-etcd-client.crt
+ - key: ca.crt
+ path: local-etcd-client-ca.crt
- name: host-proc-sys-net
hostPath:
path: /proc/sys/net
type: Directory
- name: host-proc-sys-kernel
hostPath:
--- HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator
+++ HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator
@@ -20,22 +20,22 @@
maxSurge: 25%
maxUnavailable: 100%
type: RollingUpdate
template:
metadata:
annotations:
- cilium.io/cilium-configmap-checksum: 41b8349ddf5b1a139409de2a8330c31f5eaf532bba781527911e92555678a14a
+ cilium.io/cilium-configmap-checksum: 17190095812a9d665a81c116f5dbc0a4d1a819fc69d020aa2eed1a86b43aa125
labels:
io.cilium/app: operator
name: cilium-operator
app.kubernetes.io/part-of: cilium
app.kubernetes.io/name: cilium-operator
spec:
containers:
- name: cilium-operator
- image: quay.io/cilium/operator-generic:v1.15.7@sha256:6840a6dde703b3e73dd31e03390327a9184fcb888efbad9d9d098d65b9035b54
+ image: quay.io/cilium/operator-generic:v1.16.1@sha256:3bc7e7a43bc4a4d8989cb7936c5d96675dd2d02c306adf925ce0a7c35aa27dc4
imagePullPolicy: IfNotPresent
command:
- cilium-operator-generic
args:
- --config-dir=/tmp/cilium/config-map
- --debug=$(CILIUM_DEBUG)
@@ -89,13 +89,12 @@
mountPath: /tmp/cilium/config-map
readOnly: true
terminationMessagePolicy: FallbackToLogsOnError
hostNetwork: true
restartPolicy: Always
priorityClassName: system-cluster-critical
- serviceAccount: cilium-operator
serviceAccountName: cilium-operator
automountServiceAccountToken: true
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay
+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay
@@ -17,13 +17,13 @@
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
- cilium.io/hubble-relay-configmap-checksum: 9ff143e9d452090a95b3354affb34e15672c8bf2f87e5d5f667dfdb7ca16ee27
+ cilium.io/hubble-relay-configmap-checksum: 058d4aa45f038b89c2abca9819ce810326aeb9f8c6d1560d4a2070e0db250b02
labels:
k8s-app: hubble-relay
app.kubernetes.io/name: hubble-relay
app.kubernetes.io/part-of: cilium
spec:
securityContext:
@@ -34,13 +34,13 @@
capabilities:
drop:
- ALL
runAsGroup: 65532
runAsNonRoot: true
runAsUser: 65532
- image: quay.io/cilium/hubble-relay:v1.15.7@sha256:12870e87ec6c105ca86885c4ee7c184ece6b706cc0f22f63d2a62a9a818fd68f
+ image: quay.io/cilium/hubble-relay:v1.16.1@sha256:2e1b4c739a676ae187d4c2bfc45c3e865bda2567cc0320a90cb666657fcfcc35
imagePullPolicy: IfNotPresent
command:
- hubble-relay
args:
- serve
ports:
@@ -50,30 +50,32 @@
grpc:
port: 4222
timeoutSeconds: 3
livenessProbe:
grpc:
port: 4222
- timeoutSeconds: 3
+ timeoutSeconds: 10
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ failureThreshold: 12
startupProbe:
grpc:
port: 4222
- timeoutSeconds: 3
+ initialDelaySeconds: 10
failureThreshold: 20
periodSeconds: 3
volumeMounts:
- name: config
mountPath: /etc/hubble-relay
readOnly: true
- name: tls
mountPath: /var/lib/hubble-relay/tls
readOnly: true
terminationMessagePolicy: FallbackToLogsOnError
restartPolicy: Always
priorityClassName: null
- serviceAccount: hubble-relay
serviceAccountName: hubble-relay
automountServiceAccountToken: false
terminationGracePeriodSeconds: 1
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui
+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui
@@ -28,13 +28,12 @@
spec:
securityContext:
fsGroup: 1001
runAsGroup: 1001
runAsUser: 1001
priorityClassName: null
- serviceAccount: hubble-ui
serviceAccountName: hubble-ui
automountServiceAccountToken: true
containers:
- name: frontend
image: quay.io/cilium/hubble-ui:v0.13.1@sha256:e2e9313eb7caf64b0061d9da0efbdad59c6c461f6ca1752768942bfeda0796c6
imagePullPolicy: IfNotPresent
--- HelmRelease: kube-system/cilium ServiceMonitor: kube-system/hubble
+++ HelmRelease: kube-system/cilium ServiceMonitor: kube-system/hubble
@@ -15,12 +15,13 @@
- kube-system
endpoints:
- port: hubble-metrics
interval: 10s
honorLabels: true
path: /metrics
+ scheme: http
relabelings:
- replacement: ${1}
sourceLabels:
- __meta_kubernetes_pod_node_name
targetLabel: node
--- HelmRelease: kube-system/cilium ServiceAccount: kube-system/cilium-envoy
+++ HelmRelease: kube-system/cilium ServiceAccount: kube-system/cilium-envoy
@@ -0,0 +1,7 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: cilium-envoy
+ namespace: kube-system
+
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-envoy-config
+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-envoy-config
@@ -0,0 +1,326 @@
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: cilium-envoy-config
+ namespace: kube-system
+data:
+ bootstrap-config.json: |
+ {
+ "node": {
+ "id": "host~127.0.0.1~no-id~localdomain",
+ "cluster": "ingress-cluster"
+ },
+ "staticResources": {
+ "listeners": [
+ {
+ "name": "envoy-prometheus-metrics-listener",
+ "address": {
+ "socket_address": {
+ "address": "0.0.0.0",
+ "port_value": 9964
+ }
+ },
+ "filter_chains": [
+ {
+ "filters": [
+ {
+ "name": "envoy.filters.network.http_connection_manager",
+ "typed_config": {
+ "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
+ "stat_prefix": "envoy-prometheus-metrics-listener",
+ "route_config": {
+ "virtual_hosts": [
+ {
+ "name": "prometheus_metrics_route",
+ "domains": [
+ "*"
+ ],
+ "routes": [
+ {
+ "name": "prometheus_metrics_route",
+ "match": {
+ "prefix": "/metrics"
+ },
+ "route": {
+ "cluster": "/envoy-admin",
+ "prefix_rewrite": "/stats/prometheus"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "http_filters": [
+ {
+ "name": "envoy.filters.http.router",
+ "typed_config": {
+ "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
+ }
+ }
+ ],
+ "stream_idle_timeout": "0s"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "name": "envoy-health-listener",
+ "address": {
+ "socket_address": {
+ "address": "127.0.0.1",
+ "port_value": 9878
+ }
+ },
+ "filter_chains": [
+ {
+ "filters": [
+ {
+ "name": "envoy.filters.network.http_connection_manager",
+ "typed_config": {
+ "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
+ "stat_prefix": "envoy-health-listener",
+ "route_config": {
+ "virtual_hosts": [
+ {
+ "name": "health",
+ "domains": [
+ "*"
+ ],
+ "routes": [
+ {
+ "name": "health",
+ "match": {
+ "prefix": "/healthz"
+ },
+ "route": {
+ "cluster": "/envoy-admin",
+ "prefix_rewrite": "/ready"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "http_filters": [
+ {
+ "name": "envoy.filters.http.router",
+ "typed_config": {
+ "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
+ }
+ }
+ ],
+ "stream_idle_timeout": "0s"
+ }
+ }
+ ]
+ }
+ ]
+ }
+ ],
+ "clusters": [
+ {
+ "name": "ingress-cluster",
+ "type": "ORIGINAL_DST",
+ "connectTimeout": "2s",
+ "lbPolicy": "CLUSTER_PROVIDED",
+ "typedExtensionProtocolOptions": {
+ "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+ "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+ "commonHttpProtocolOptions": {
+ "idleTimeout": "60s",
+ "maxConnectionDuration": "0s",
+ "maxRequestsPerConnection": 0
+ },
+ "useDownstreamProtocolConfig": {}
+ }
+ },
+ "cleanupInterval": "2.500s"
+ },
+ {
+ "name": "egress-cluster-tls",
+ "type": "ORIGINAL_DST",
+ "connectTimeout": "2s",
+ "lbPolicy": "CLUSTER_PROVIDED",
+ "typedExtensionProtocolOptions": {
+ "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+ "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+ "commonHttpProtocolOptions": {
+ "idleTimeout": "60s",
+ "maxConnectionDuration": "0s",
+ "maxRequestsPerConnection": 0
+ },
+ "upstreamHttpProtocolOptions": {},
+ "useDownstreamProtocolConfig": {}
+ }
+ },
+ "cleanupInterval": "2.500s",
+ "transportSocket": {
+ "name": "cilium.tls_wrapper",
+ "typedConfig": {
+ "@type": "type.googleapis.com/cilium.UpstreamTlsWrapperContext"
+ }
+ }
+ },
+ {
+ "name": "egress-cluster",
+ "type": "ORIGINAL_DST",
+ "connectTimeout": "2s",
+ "lbPolicy": "CLUSTER_PROVIDED",
+ "typedExtensionProtocolOptions": {
+ "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+ "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+ "commonHttpProtocolOptions": {
+ "idleTimeout": "60s",
+ "maxConnectionDuration": "0s",
+ "maxRequestsPerConnection": 0
+ },
+ "useDownstreamProtocolConfig": {}
+ }
+ },
+ "cleanupInterval": "2.500s"
+ },
+ {
+ "name": "ingress-cluster-tls",
+ "type": "ORIGINAL_DST",
+ "connectTimeout": "2s",
+ "lbPolicy": "CLUSTER_PROVIDED",
+ "typedExtensionProtocolOptions": {
+ "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+ "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+ "commonHttpProtocolOptions": {
+ "idleTimeout": "60s",
+ "maxConnectionDuration": "0s",
+ "maxRequestsPerConnection": 0
+ },
+ "upstreamHttpProtocolOptions": {},
+ "useDownstreamProtocolConfig": {}
+ }
+ },
+ "cleanupInterval": "2.500s",
+ "transportSocket": {
+ "name": "cilium.tls_wrapper",
+ "typedConfig": {
+ "@type": "type.googleapis.com/cilium.UpstreamTlsWrapperContext"
+ }
+ }
+ },
+ {
+ "name": "xds-grpc-cilium",
+ "type": "STATIC",
+ "connectTimeout": "2s",
+ "loadAssignment": {
+ "clusterName": "xds-grpc-cilium",
+ "endpoints": [
+ {
+ "lbEndpoints": [
+ {
+ "endpoint": {
+ "address": {
+ "pipe": {
+ "path": "/var/run/cilium/envoy/sockets/xds.sock"
+ }
+ }
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "typedExtensionProtocolOptions": {
+ "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+ "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+ "explicitHttpConfig": {
+ "http2ProtocolOptions": {}
+ }
+ }
+ }
+ },
+ {
+ "name": "/envoy-admin",
+ "type": "STATIC",
+ "connectTimeout": "2s",
+ "loadAssignment": {
+ "clusterName": "/envoy-admin",
+ "endpoints": [
+ {
+ "lbEndpoints": [
+ {
+ "endpoint": {
+ "address": {
+ "pipe": {
+ "path": "/var/run/cilium/envoy/sockets/admin.sock"
+ }
+ }
+ }
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ },
+ "dynamicResources": {
+ "ldsConfig": {
+ "apiConfigSource": {
+ "apiType": "GRPC",
+ "transportApiVersion": "V3",
+ "grpcServices": [
+ {
+ "envoyGrpc": {
[Diff truncated by flux-local]
--- HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium-envoy
+++ HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium-envoy
@@ -0,0 +1,171 @@
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: cilium-envoy
+ namespace: kube-system
+ labels:
+ k8s-app: cilium-envoy
+ app.kubernetes.io/part-of: cilium
+ app.kubernetes.io/name: cilium-envoy
+ name: cilium-envoy
+spec:
+ selector:
+ matchLabels:
+ k8s-app: cilium-envoy
+ updateStrategy:
+ rollingUpdate:
+ maxUnavailable: 2
+ type: RollingUpdate
+ template:
+ metadata:
+ annotations:
+ prometheus.io/port: '9964'
+ prometheus.io/scrape: 'true'
+ labels:
+ k8s-app: cilium-envoy
+ name: cilium-envoy
+ app.kubernetes.io/name: cilium-envoy
+ app.kubernetes.io/part-of: cilium
+ spec:
+ securityContext:
+ appArmorProfile:
+ type: Unconfined
+ containers:
+ - name: cilium-envoy
+ image: quay.io/cilium/cilium-envoy:v1.29.7-39a2a56bbd5b3a591f69dbca51d3e30ef97e0e51@sha256:bd5ff8c66716080028f414ec1cb4f7dc66f40d2fb5a009fff187f4a9b90b566b
+ imagePullPolicy: IfNotPresent
+ command:
+ - /usr/bin/cilium-envoy-starter
+ args:
+ - --
+ - -c /var/run/cilium/envoy/bootstrap-config.json
+ - --base-id 0
+ - --log-level info
+ - --log-format [%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v
+ startupProbe:
+ httpGet:
+ host: 127.0.0.1
+ path: /healthz
+ port: 9878
+ scheme: HTTP
+ failureThreshold: 105
+ periodSeconds: 2
+ successThreshold: 1
+ initialDelaySeconds: 5
+ livenessProbe:
+ httpGet:
+ host: 127.0.0.1
+ path: /healthz
+ port: 9878
+ scheme: HTTP
+ periodSeconds: 30
+ successThreshold: 1
+ failureThreshold: 10
+ timeoutSeconds: 5
+ readinessProbe:
+ httpGet:
+ host: 127.0.0.1
+ path: /healthz
+ port: 9878
+ scheme: HTTP
+ periodSeconds: 30
+ successThreshold: 1
+ failureThreshold: 3
+ timeoutSeconds: 5
+ env:
+ - name: K8S_NODE_NAME
+ valueFrom:
+ fieldRef:
+ apiVersion: v1
+ fieldPath: spec.nodeName
+ - name: CILIUM_K8S_NAMESPACE
+ valueFrom:
+ fieldRef:
+ apiVersion: v1
+ fieldPath: metadata.namespace
+ - name: KUBERNETES_SERVICE_HOST
+ value: 127.0.0.1
+ - name: KUBERNETES_SERVICE_PORT
+ value: '6444'
+ ports:
+ - name: envoy-metrics
+ containerPort: 9964
+ hostPort: 9964
+ protocol: TCP
+ securityContext:
+ seLinuxOptions:
+ level: s0
+ type: spc_t
+ capabilities:
+ add:
+ - NET_ADMIN
+ - SYS_ADMIN
+ drop:
+ - ALL
+ terminationMessagePolicy: FallbackToLogsOnError
+ volumeMounts:
+ - name: envoy-sockets
+ mountPath: /var/run/cilium/envoy/sockets
+ readOnly: false
+ - name: envoy-artifacts
+ mountPath: /var/run/cilium/envoy/artifacts
+ readOnly: true
+ - name: envoy-config
+ mountPath: /var/run/cilium/envoy/
+ readOnly: true
+ - name: bpf-maps
+ mountPath: /sys/fs/bpf
+ mountPropagation: HostToContainer
+ restartPolicy: Always
+ priorityClassName: system-node-critical
+ serviceAccountName: cilium-envoy
+ automountServiceAccountToken: true
+ terminationGracePeriodSeconds: 1
+ hostNetwork: true
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: cilium.io/no-schedule
+ operator: NotIn
+ values:
+ - 'true'
+ podAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ k8s-app: cilium
+ topologyKey: kubernetes.io/hostname
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ k8s-app: cilium-envoy
+ topologyKey: kubernetes.io/hostname
+ nodeSelector:
+ kubernetes.io/os: linux
+ tolerations:
+ - operator: Exists
+ volumes:
+ - name: envoy-sockets
+ hostPath:
+ path: /var/run/cilium/envoy/sockets
+ type: DirectoryOrCreate
+ - name: envoy-artifacts
+ hostPath:
+ path: /var/run/cilium/envoy/artifacts
+ type: DirectoryOrCreate
+ - name: envoy-config
+ configMap:
+ name: cilium-envoy-config
+ defaultMode: 256
+ items:
+ - key: bootstrap-config.json
+ path: bootstrap-config.json
+ - name: bpf-maps
+ hostPath:
+ path: /sys/fs/bpf
+ type: DirectoryOrCreate
+
🦙 MegaLinter status: ✅ SUCCESS
| Descriptor | Linter | Files | Fixed | Errors | Elapsed time |
|---|---|---|---|---|---|
See detailed report in MegaLinter reports
Set VALIDATE_ALL_CODEBASE: true in mega-linter.yml to validate all sources, not only the diff