Prevent cron snapshot saves on replicas
Currently, the dragonfly-operator configures snapshotting on all instances (both master and replicas), which can lead to redundant and potentially conflicting SAVE operations.
Our read-only services connect exclusively to the replicas, and we want to prevent any latency spikes on these instances. To achieve this, we need to isolate all snapshot operations to the master instance only. This approach would also help mitigate the snapshot timeout issue for us by ensuring it only affects a single, non-latency-critical instance.
Proposed solution
We’d like to request a new configuration option in the operator that allows snapshotting to be enabled exclusively on the master instance, or ideally, makes this the default behavior.
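As a rough illustration only (the field name below is hypothetical, not an existing API), the option could be expressed in the snapshot spec like this; the Example that follows then shows the current behavior such an option would change:

# Hypothetical sketch; "onlyOnMaster" is an illustrative name, not an existing field
spec:
  snapshot:
    cron: "* * * * *"
    onlyOnMaster: true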
Example
# setup.yaml
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: test-dragonfly
spec:
  replicas: 3
  snapshot:
    cron: "* * * * *"
    persistentVolumeClaimSpec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
$ kubectl apply -f setup.yaml
dragonfly.dragonflydb.io/test-dragonfly created
$ kubectl logs test-dragonfly-0 | grep Saving
I20250827 11:24:00.005296 11 save_stages_controller.cc:346] Saving "/dragonfly/snapshots/dump-2025-08-27T11:23:59-summary.dfs" finished after 1 s
$ kubectl logs test-dragonfly-1 | grep Saving
I20250827 11:24:00.006690 11 save_stages_controller.cc:346] Saving "/dragonfly/snapshots/dump-2025-08-27T11:23:59-summary.dfs" finished after 1 s
$ kubectl logs test-dragonfly-2 | grep Saving
I20250827 11:24:00.006232 11 save_stages_controller.cc:346] Saving "/dragonfly/snapshots/dump-2025-08-27T11:23:59-summary.dfs" finished after 1 s
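The logs above show all three pods performing the SAVE at the same time. For context, the pod that should keep saving can be identified through the role label the operator maintains on each pod (assuming the default app/role labels; output omitted here):

# List the pods with their current role; under the requested behavior,
# only the pod labelled "master" would run the cron SAVE.
$ kubectl get pods -l app=test-dragonfly -L role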
Hi, do you have any idea in mind about how to implement this feature, given that pods can change their role (e.g. in the case of a failover)?
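One possible direction, as a sketch only: the operator already updates each pod's role label on failover, so it could re-apply the snapshot schedule whenever roles change, setting the cron only on the current master. This assumes snapshot_cron can be changed at runtime via CONFIG SET (which would need to be confirmed) and that a Redis-compatible CLI such as redis-cli is available to the operator or the pod:

# Sketch under the assumptions above: after a failover, enable the cron on the
# newly promoted master and clear it on the demoted pod.
$ kubectl exec test-dragonfly-1 -- redis-cli CONFIG SET snapshot_cron "* * * * *"   # new master
$ kubectl exec test-dragonfly-0 -- redis-cli CONFIG SET snapshot_cron ""            # now a replica

Alternatively, the operator could restart the demoted/promoted pods with different startup flags, at the cost of a restart on every role change.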
I think this was implemented in https://github.com/dragonflydb/dragonfly-operator/pull/367, so this might be closed?