home-ops
feat(helm): update chart rook-ceph-cluster (v1.16.6 → v1.17.1)
This PR contains the following updates:
| Package | Update | Change |
|---|---|---|
| rook-ceph-cluster | minor | v1.16.6 -> v1.17.1 |
Release Notes
rook/rook (rook-ceph-cluster)
v1.17.1
Improvements
Rook v1.17.1 is a patch release limited in scope and focused on feature additions and bug fixes to the Ceph operator.
- cluster: Specify sensitive ceph config in the CephCluster CR via secrets (#15696, @patrostkowski)
- object: Lower retry log verbosity in notification OBC controller (#15764, @BlaineEXE)
- object: Log all reconcile errors during object store creation (#15747, @travisn)
- docs: Update Prometheus Operator to v0.82.0 (#15750, @OdedViner)
- build: Set correct helm version tag for the release (#15748, @travisn)
- build: Stop publishing release artifacts for non-released builds (#15742, @travisn)
v1.17.0
Upgrade Guide
To upgrade from previous versions of Rook, see the Rook upgrade guide.
Breaking Changes
- Kubernetes v1.28 is now the minimum version supported by Rook, with support through the upcoming K8s v1.33 release.
- Several ObjectBucketClaim options were added previously in Rook v1.16 that allowed more control over buckets. These controls allow users to self-serve their own S3 policies. Administrators may consider this flexibility a risk, depending on their environment. Rook now disables these options by default to ensure the safest off-the-shelf configurations. To enable the full range of OBC configurations, the new setting ROOK_OBC_ALLOW_ADDITIONAL_CONFIG_FIELDS must be set to enable users to set all of these options. For more details, see the OBC additionalConfig documentation.
- First-class credential management has been added to CephObjectStoreUser resources, allowing multiple credentials and declarative credential rotation. For more details, see Managing User S3 Credentials. As a result, existing S3 users provisioned via CephObjectStoreUser resources no longer allow multiple credentials to exist on the underlying S3 user unless explicitly managed by Rook; Rook will purge all but one of the undeclared credentials. This could be an observable regression for administrators who manually edited or rotated S3 user credentials for CephObjectStoreUsers; affected users can adopt the new credential management feature as an alternative.
- Kafka notifications configured via CephBucketTopic resources will now default to setting the Kafka authentication mechanism to PLAIN. Previously, no auth mechanism was specified by default. It was possible to set the auth mechanism via CephBucketTopic.spec.endpoint.kafka.opaqueData. However, setting &mechanism=<auth type> via opaqueData is no longer possible. If any auth mechanism other than PLAIN is in use, modification to CephBucketTopic resources is required.
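Re-enabling the full range of OBC options is done through the Rook operator's settings. A minimal sketch follows, using the standard rook-ceph-operator-config ConfigMap; the value shown (a comma-separated list of additionalConfig field names to allow) is an assumption here, so verify the accepted format and field names against the OBC additionalConfig documentation before applying:

```yaml
# Sketch only: allowing additional OBC config fields cluster-wide.
# The value format (comma-separated field names) and the specific field
# names listed are assumptions — confirm them in the Rook OBC docs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config   # read by the Rook operator at startup
  namespace: rook-ceph
data:
  ROOK_OBC_ALLOW_ADDITIONAL_CONFIG_FIELDS: "bucketPolicy,bucketLifecycle"
```

After changing operator settings, the operator typically needs to pick up the new configuration before OBC reconciliation honors the additional fields.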
Features
- The name of a pre-existing Ceph RGW user account can be set as the bucket owner on an ObjectBucketClaim (OBC), rather than a unique RGW user being created for every bucket. A CephObjectStoreUser resource may be used to create the Ceph RGW user account which will be specified on the OBC. If the bucket owner is set on a bucket that already exists and is owned by a different user, the bucket will be re-linked to the specified user.
- The Ceph CSI 3.14 release has a number of features and improvements for RBD and CephFS volumes, volume snapshots, and many more areas. See the Ceph CSI 3.14 release notes for more details.
- External mons: In some two-datacenter clusters, there is no option to start an arbiter mon on an independent K8s node to configure a proper stretch cluster. External mons now allow a mon to be configured outside the Kubernetes cluster, while Rook manages everything else inside the cluster. For more details, see the External Mon documentation. This feature is currently experimental.
- DNS resolution for mons: Allows clients outside the K8s cluster to resolve mon endpoints via DNS without requiring manual updates to the list of mon endpoints. This helps in scenarios such as virtual machine live migration. The Ceph client can connect to rook-ceph-active-mons.<namespace>.svc.cluster.local to dynamically resolve mon endpoints and receive automatic updates when mon IPs change. To configure this DNS resolution, see Tracking Mon Endpoints.
- Node-specific ceph.conf overrides: The ceph.conf overrides can now be customized per node, which may be helpful for ceph.conf settings that need to be unique per node depending on the hardware. This can be configured by creating a node-specific configmap that will be loaded for all OSDs and OSD prepare jobs on that node, instead of the default settings loaded from the rook-config-override configmap.
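The bucket-owner feature above can be sketched as an ObjectBucketClaim referencing a pre-existing RGW user. The additionalConfig.bucketOwner field name is inferred from the release notes, and the namespace, StorageClass, and user names here are hypothetical; confirm the exact field against the Rook ObjectBucketClaim documentation:

```yaml
# Sketch: assigning a pre-existing RGW user as the owner of an OBC bucket,
# instead of Rook creating a unique RGW user per bucket.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: shared-bucket
  namespace: my-app                 # hypothetical namespace
spec:
  generateBucketName: shared-bucket
  storageClassName: ceph-bucket     # hypothetical object-store StorageClass
  additionalConfig:
    bucketOwner: analytics-user     # RGW user, e.g. from a CephObjectStoreUser
```

Per the release notes, if the bucket already exists and is owned by a different user, it is re-linked to the specified owner.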
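A per-node ceph.conf override might look like the following. The data key "config" matches the well-known rook-config-override ConfigMap shape, but the node-specific ConfigMap name below is a placeholder assumption; check the Rook advanced-configuration docs for the exact convention that ties a ConfigMap to a node:

```yaml
# Sketch: node-specific ceph.conf overrides for OSDs on one node.
# The ConfigMap name is a placeholder — see the Rook docs for the real
# node-to-ConfigMap naming convention introduced in v1.17.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override-node-a   # placeholder name
  namespace: rook-ceph
data:
  config: |
    [osd]
    osd_memory_target = 4294967296   # example: tune per node hardware
```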
v1.16.7
Improvements
Rook v1.16.7 is a patch release limited in scope and focused on feature additions and bug fixes to the Ceph operator.
- core: Set default Ceph version to v19.2.2 (#15704, @travisn)
- mon: Ensure mon canary pods are cleaned up for multicluster service (#15718, @sp98)
- core: Print correct OSD ID in key rotation logs (#15727, @sp98)
- helm: Add labels to ingress resource (#15719, @chkpwd)
- core: Update cephstatus fsmap gid type to uint64 (#15690, @BlaineEXE)
- pool: Retry status update on fail (#15593, @prazumovsky)
- helm: Allow specifying an ingress object store port to override default (#15669, @travisn)
- rbdmirror: Fix the rados namespace health checkup (#15677, @parth-gr)
- osd: Stabilize oscillating maxUnavailable in pdbs in case of node drain (#15634, @sp98)
- ci: Update x/net version to fix snyk report (#15659, @subhamkrai)
- nfs: Set allow_set_io_flusher_fail=true in config (#15652, @BlaineEXE)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- [ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
--- kubernetes/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster
+++ kubernetes/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster
@@ -13,13 +13,13 @@
spec:
chart: rook-ceph-cluster
sourceRef:
kind: HelmRepository
name: rook-ceph
namespace: flux-system
- version: v1.16.6
+ version: v1.17.1
dependsOn:
- name: rook-ceph-operator
namespace: rook-ceph
- name: snapshot-controller
namespace: volsync-system
install:
--- HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools
+++ HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools
@@ -17,13 +17,13 @@
app: rook-ceph-tools
spec:
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
containers:
- name: rook-ceph-tools
- image: quay.io/ceph/ceph:v19.2.1
+ image: quay.io/ceph/ceph:v19.2.2
command:
- /bin/bash
- -c
- |
# Replicate the script from toolbox.sh inline so the ceph image
# can be run directly, instead of requiring the rook toolbox
--- HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph
+++ HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph
@@ -6,13 +6,13 @@
namespace: rook-ceph
spec:
monitoring:
enabled: true
cephVersion:
allowUnsupported: false
- image: quay.io/ceph/ceph:v19.2.1
+ image: quay.io/ceph/ceph:v19.2.2
cleanupPolicy:
allowUninstallWithVolumes: false
confirmation: ''
sanitizeDisks:
dataSource: zero
iteration: 1