redis-operator
Stopped operator during master change initiated by sentinels causes data loss once operator comes back up
What version of redis operator are you using? helm.sh/chart: redis-operator-0.14.3
kubectl logs -n redis-system -f deployment/redis-operator
I0524 17:19:50.020961 1 request.go:665] Waited for 1.015485872s due to client-side throttling, not priority and fairness, request: GET:https://172.16.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s
{"level":"info","ts":1684948790.024166,"logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1684948790.0243936,"logger":"setup","msg":"starting manager"}
I0524 17:19:50.024675 1 leaderelection.go:248] attempting to acquire leader lease redis-system/6cab913b.redis.opstreelabs.in...
{"level":"info","ts":1684948790.0246747,"msg":"Starting server","kind":"health probe","addr":"[::]:8081"}
{"level":"info","ts":1684948790.0246751,"msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:8080"}
I0524 17:20:06.465701 1 leaderelection.go:258] successfully acquired lease redis-system/6cab913b.redis.opstreelabs.in
{"level":"info","ts":1684948806.4658704,"logger":"controller.rediscluster","msg":"Starting EventSource","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisCluster","source":"kind source: *v1beta1.RedisCluster"}
{"level":"info","ts":1684948806.4658704,"logger":"controller.redis","msg":"Starting EventSource","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"Redis","source":"kind source: *v1beta1.Redis"}
{"level":"info","ts":1684948806.4659147,"logger":"controller.redis","msg":"Starting Controller","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"Redis"}
{"level":"info","ts":1684948806.4659088,"logger":"controller.rediscluster","msg":"Starting Controller","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisCluster"}
{"level":"info","ts":1684948806.4659746,"logger":"controller.redissentinel","msg":"Starting EventSource","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisSentinel","source":"kind source: *v1beta1.RedisSentinel"}
{"level":"info","ts":1684948806.4659986,"logger":"controller.redissentinel","msg":"Starting Controller","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisSentinel"}
{"level":"info","ts":1684948806.4660645,"logger":"controller.redisreplication","msg":"Starting EventSource","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisReplication","source":"kind source: *v1beta1.RedisReplication"}
{"level":"info","ts":1684948806.4660878,"logger":"controller.redisreplication","msg":"Starting Controller","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisReplication"}
{"level":"info","ts":1684948806.8669167,"logger":"controller.redisreplication","msg":"Starting workers","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisReplication","worker count":1}
{"level":"info","ts":1684948806.8669422,"logger":"controller.rediscluster","msg":"Starting workers","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisCluster","worker count":1}
{"level":"info","ts":1684948806.8670082,"logger":"controllers.RedisReplication","msg":"Reconciling opstree redis replication controller","Request.Namespace":"default","Request.Name":"redis-replication"}
{"level":"info","ts":1684948806.8680391,"logger":"controller.redis","msg":"Starting workers","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"Redis","worker count":1}
{"level":"info","ts":1684948806.868066,"logger":"controller.redissentinel","msg":"Starting workers","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisSentinel","worker count":1}
{"level":"info","ts":1684948806.8681178,"logger":"controllers.RedisSentinel","msg":"Reconciling opstree redis controller","Request.Namespace":"default","Request.Name":"redis-sentinel"}
{"level":"info","ts":1684948806.875349,"logger":"controller_redis","msg":"Successfully Execute the Get Request","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-sentinel","replication name":"redis-replication","namespace":"default"}
{"level":"info","ts":1684948806.8775253,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948806.8864691,"logger":"controller_redis","msg":"Reconciliation Complete, no Changes required.","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948806.8865402,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948806.8913488,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-0","ip":"10.50.97.38"}
{"level":"info","ts":1684948806.8923306,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication-headless"}
{"level":"info","ts":1684948806.8937864,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication-headless"}
{"level":"info","ts":1684948806.8958993,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-1","ip":"10.50.97.58"}
{"level":"info","ts":1684948806.8977225,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication"}
{"level":"info","ts":1684948806.8988414,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication"}
{"level":"info","ts":1684948806.9002051,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-0","ip":"10.50.97.38"}
{"level":"info","ts":1684948806.9031112,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication-additional"}
{"level":"info","ts":1684948806.9041228,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication-additional"}
{"level":"info","ts":1684948806.904899,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-1","ip":"10.50.97.58"}
{"level":"error","ts":1684948806.9054615,"logger":"controller_redis","msg":"Error in getting redis pod IP","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"","error":"resource name may not be empty","stacktrace":"github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.getRedisReplicationMasterIP\n\t/workspace/k8sutils/redis-sentinel.go:309\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.getSentinelEnvVariable\n\t/workspace/k8sutils/redis-sentinel.go:239\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.generateRedisSentinelContainerParams\n\t/workspace/k8sutils/redis-sentinel.go:145\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.RedisSentinelSTS.CreateRedisSentinelSetup\n\t/workspace/k8sutils/redis-sentinel.go:72\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CreateRedisSentinel\n\t/workspace/k8sutils/redis-sentinel.go:45\ngithub.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisSentinelReconciler).Reconcile\n\t/workspace/controllers/redissentinel_controller.go:50\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"}
{"level":"info","ts":1684948806.9055116,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"","ip":""}
{"level":"info","ts":1684948806.912395,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948806.9124076,"logger":"controllers.RedisReplication","msg":"Creating redis replication by executing replication creation commands","Request.Namespace":"default","Request.Name":"redis-replication","Replication.Ready":"2"}
{"level":"info","ts":1684948806.9181445,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-sentinel-sentinel"}
{"level":"info","ts":1684948806.9227648,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948806.9231458,"logger":"controller_redis","msg":"Changes in statefulset Detected, Updating...","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-sentinel-sentinel","patch":"{\"spec\":{\"template\":{\"spec\":{\"$setElementOrder/containers\":[{\"name\":\"redis-sentinel-sentinel\"}],\"containers\":[{\"$setElementOrder/env\":[{\"name\":\"REDIS_ADDR\"},{\"name\":\"SERVER_MODE\"},{\"name\":\"SETUP_MODE\"},{\"name\":\"MASTER_GROUP_NAME\"},{\"name\":\"IP\"},{\"name\":\"PORT\"},{\"name\":\"QUORUM\"},{\"name\":\"DOWN_AFTER_MILLISECONDS\"},{\"name\":\"PARALLEL_SYNCS\"},{\"name\":\"FAILOVER_TIMEOUT\"}],\"env\":[{\"name\":\"IP\",\"value\":null}],\"name\":\"redis-sentinel-sentinel\"}]}}}}"}
{"level":"info","ts":1684948806.9277086,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-0","ip":"10.50.97.38"}
{"level":"info","ts":1684948806.932499,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-1","ip":"10.50.97.58"}
{"level":"info","ts":1684948806.9358606,"logger":"controller_redis","msg":"Redis statefulset successfully updated ","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-sentinel-sentinel"}
{"level":"info","ts":1684948806.9405348,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948806.9411027,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel-headless"}
{"level":"info","ts":1684948806.9421308,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel-headless"}
{"level":"info","ts":1684948806.9445078,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-0","ip":"10.50.97.38"}
{"level":"info","ts":1684948806.9467404,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel"}
{"level":"info","ts":1684948806.9483047,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel"}
{"level":"info","ts":1684948806.9493086,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-1","ip":"10.50.97.58"}
{"level":"info","ts":1684948806.9540164,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel-additional"}
{"level":"info","ts":1684948806.9550095,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel-additional"}
{"level":"info","ts":1684948806.9550214,"logger":"controllers.RedisSentinel","msg":"Will reconcile redis operator in again 10 seconds","Request.Namespace":"default","Request.Name":"redis-sentinel"}
{"level":"info","ts":1684948806.9571483,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948806.961373,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-0","ip":"10.50.97.38"}
{"level":"info","ts":1684948806.9662833,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-1","ip":"10.50.97.58"}
{"level":"info","ts":1684948806.9711354,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-0","ip":"10.50.97.38"}
{"level":"info","ts":1684948806.9754708,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-1","ip":"10.50.97.58"}
# this should not happen
{"level":"info","ts":1684948806.9757428,"logger":"controller_redis","msg":"No Master Node Found with attached slave promoting the following pod to master","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication","pod":"redis-replication-0"}
{"level":"info","ts":1684948806.9757674,"logger":"controller_redis","msg":"Redis Master Node is set to","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication","pod":"redis-replication-0"}
{"level":"info","ts":1684948806.9800692,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-0","ip":"10.50.97.38"}
{"level":"info","ts":1684948806.9838839,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-1","ip":"10.50.97.58"}
{"level":"info","ts":1684948806.9839015,"logger":"controller_redis","msg":"Setting the","pod":"redis-replication-1","to slave of":"redis-replication-0"}
{"level":"info","ts":1684948806.9844625,"logger":"controllers.RedisReplication","msg":"Will reconcile redis operator in again 10 seconds","Request.Namespace":"default","Request.Name":"redis-replication"}
{"level":"info","ts":1684948816.9557374,"logger":"controllers.RedisSentinel","msg":"Reconciling opstree redis controller","Request.Namespace":"default","Request.Name":"redis-sentinel"}
{"level":"info","ts":1684948816.961441,"logger":"controller_redis","msg":"Successfully Execute the Get Request","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-sentinel","replication name":"redis-replication","namespace":"default"}
{"level":"info","ts":1684948816.9681842,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948816.9775763,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-0","ip":"10.50.97.38"}
{"level":"info","ts":1684948816.9826503,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-1","ip":"10.50.97.58"}
{"level":"info","ts":1684948816.985519,"logger":"controllers.RedisReplication","msg":"Reconciling opstree redis replication controller","Request.Namespace":"default","Request.Name":"redis-replication"}
{"level":"info","ts":1684948816.9885128,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-0","ip":"10.50.97.38"}
{"level":"info","ts":1684948816.9922287,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948816.9968574,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-0","ip":"10.50.97.38"}
{"level":"info","ts":1684948816.997594,"logger":"controller_redis","msg":"Reconciliation Complete, no Changes required.","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948817.0047696,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-sentinel-sentinel"}
{"level":"info","ts":1684948817.0077994,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication-headless"}
{"level":"info","ts":1684948817.0083134,"logger":"controller_redis","msg":"Changes in statefulset Detected, Updating...","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-sentinel-sentinel","patch":"{\"spec\":{\"template\":{\"spec\":{\"$setElementOrder/containers\":[{\"name\":\"redis-sentinel-sentinel\"}],\"containers\":[{\"$setElementOrder/env\":[{\"name\":\"REDIS_ADDR\"},{\"name\":\"SERVER_MODE\"},{\"name\":\"SETUP_MODE\"},{\"name\":\"MASTER_GROUP_NAME\"},{\"name\":\"IP\"},{\"name\":\"PORT\"},{\"name\":\"QUORUM\"},{\"name\":\"DOWN_AFTER_MILLISECONDS\"},{\"name\":\"PARALLEL_SYNCS\"},{\"name\":\"FAILOVER_TIMEOUT\"}],\"env\":[{\"name\":\"IP\",\"value\":\"10.50.97.38\"}],\"name\":\"redis-sentinel-sentinel\"}]}}}}"}
{"level":"info","ts":1684948817.009041,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication-headless"}
{"level":"info","ts":1684948817.0155904,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication"}
{"level":"info","ts":1684948817.0168974,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication"}
{"level":"info","ts":1684948817.0205452,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication-additional"}
{"level":"info","ts":1684948817.0215333,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-replication-additional"}
{"level":"info","ts":1684948817.0238953,"logger":"controller_redis","msg":"Redis statefulset successfully updated ","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-sentinel-sentinel"}
{"level":"info","ts":1684948817.0276859,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel-headless"}
{"level":"info","ts":1684948817.028174,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948817.0281866,"logger":"controllers.RedisReplication","msg":"Creating redis replication by executing replication creation commands","Request.Namespace":"default","Request.Name":"redis-replication","Replication.Ready":"2"}
{"level":"info","ts":1684948817.0289798,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel-headless"}
{"level":"info","ts":1684948817.034016,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel"}
{"level":"info","ts":1684948817.0348027,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-replication"}
{"level":"info","ts":1684948817.0352018,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel"}
{"level":"info","ts":1684948817.0394216,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-0","ip":"10.50.97.38"}
{"level":"info","ts":1684948817.0397985,"logger":"controller_redis","msg":"Redis service get action is successful","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel-additional"}
{"level":"info","ts":1684948817.0405955,"logger":"controller_redis","msg":"Redis service is already in-sync","Request.Service.Namespace":"default","Request.Service.Name":"redis-sentinel-sentinel-additional"}
{"level":"info","ts":1684948817.0406015,"logger":"controllers.RedisSentinel","msg":"Will reconcile redis operator in again 10 seconds","Request.Namespace":"default","Request.Name":"redis-sentinel"}
{"level":"info","ts":1684948817.0440912,"logger":"controller_redis","msg":"Successfully got the ip for redis","Request.RedisManager.Namespace":"default","Request.RedisManager.Name":"redis-replication-1","ip":"10.50.97.58"}
{"level":"info","ts":1684948817.0443451,"logger":"controllers.RedisReplication","msg":"Will reconcile redis operator in again 10 seconds","Request.Namespace":"default","Request.Name":"redis-replication"}
redis-operator version: 0.14
Does this issue reproduce with the latest release? I think I'm using the latest version.
What operating system and processor architecture are you using (kubectl version)?
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6", GitCommit:"dc2f9dc64421983f0f7839da8ab4ab6d4673daad", GitTreeState:"clean", BuildDate:"2023-04-08T13:29:19Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
(azure kubernetes service 1.25.6)
What did you do?
I set up a redis-replication with 2 instances and a sentinel setup with 3 nodes. I basically aim to achieve a small HA-ish setup (no sharding).
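For reference, the operator itself was installed from the helm chart mentioned above, roughly like the sketch below (the chart repository URL and the release/namespace names are assumptions based on the project's helm-charts, adjust to your environment):
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
helm repo update
helm install redis-operator ot-helm/redis-operator --namespace redis-system --create-namespace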
For this purpose, I deploy the following manifests after installing the operator:
apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: RedisSentinel
metadata:
  name: redis-sentinel
spec:
  clusterSize: 3
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  redisSentinelConfig:
    redisReplicationName: redis-replication
  nodeSelector:
    kubernetes.io/os: linux
  kubernetesConfig:
    image: quay.io/opstree/redis-sentinel:v7.0.7
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 101m
        memory: 128Mi
      limits:
        cpu: 101m
        memory: 128Mi
and
apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: RedisReplication
metadata:
  name: redis-replication
spec:
  clusterSize: 2
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  # redisConfig:
  #   additionalRedisConfig: redis-external-config
  nodeSelector:
    kubernetes.io/os: linux
  kubernetesConfig:
    image: quay.io/opstree/redis:v7.0.5
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 101m
        memory: 128Mi
      limits:
        cpu: 101m
        memory: 128Mi
  redisExporter:
    enabled: false
    image: quay.io/opstree/redis-exporter:v1.44.0
    imagePullPolicy: Always
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 128Mi
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: picturepark-picturepark-rediscore-storage
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
What did you expect to see?
I checked the initial setup; it seems to configure the sentinels as expected. Running kubectl exec -it redis-sentinel-sentinel-1 -- redis-cli -p 26379 sentinel master myMaster against any of the sentinels shows that they all agree on the master and report the correct num-other-sentinels. For example:
PS D:\temp\PP9-18675-redis-sentinel\opstree> kubectl exec -it redis-sentinel-sentinel-0 -- redis-cli -p 26379 sentinel master myMaster
1) "name"
2) "myMaster"
3) "ip"
4) "10.50.96.51"
5) "port"
6) "6379"
7) "runid"
8) "3f8231cdc7c62531b60a7b9dd7974acd6f3574e0"
9) "flags"
10) "master"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "815"
19) "last-ping-reply"
20) "815"
21) "down-after-milliseconds"
22) "30000"
23) "info-refresh"
24) "8974"
25) "role-reported"
26) "master"
27) "role-reported-time"
28) "8994"
29) "config-epoch"
30) "0"
31) "num-slaves"
32) "1"
33) "num-other-sentinels"
34) "2"
35) "quorum"
36) "2"
37) "failover-timeout"
38) "180000"
39) "parallel-syncs"
40) "1"
This is also the case after deleting some of the replication and/or sentinel pods (although it takes quite a while until the sentinels are done restarting).
I can also see that each sentinel has the other sentinels mentioned in their config:
PS D:\temp\PP9-18675-redis-sentinel\opstree> kubectl exec redis-sentinel-sentinel-0 -- cat /etc/redis/sentinel.conf
protected-mode no
port 26379
daemonize no
pidfile "/var/run/redis-sentinel.pid"
logfile ""
dir "/tmp"
acllog-max-len 128
# sentinel monitor mymaster 127.0.0.1 6379 2
# sentinel down-after-milliseconds mymaster 30000
# sentinel parallel-syncs mymaster 1
# sentinel failover-timeout mymaster 180000
sentinel deny-scripts-reconfig yes
sentinel resolve-hostnames no
sentinel announce-hostnames no
# SENTINEL master-reboot-down-after-period mymaster 0
sentinel monitor myMaster 10.50.96.51 6379 2
# Generated by CONFIG REWRITE
latency-tracking-info-percentiles 50 99 99.9
user default on nopass ~* &* +@all
sentinel myid 0712c4552206592e6bb8e99054f09998fb7efc45
sentinel config-epoch myMaster 0
sentinel leader-epoch myMaster 0
sentinel current-epoch 0
sentinel known-replica myMaster 10.50.96.70 6379
sentinel known-sentinel myMaster 10.50.97.16 26379 df66da358e8977c1d99c82a1e4069016ad547016
sentinel known-sentinel myMaster 10.50.97.37 26379 b26d18f00807791caf1a12cfc4c78a880d5104bd
PS D:\temp\PP9-18675-redis-sentinel\opstree> kubectl exec redis-sentinel-sentinel-1 -- cat /etc/redis/sentinel.conf
# brevity...
sentinel known-replica myMaster 10.50.96.70 6379
sentinel known-sentinel myMaster 10.50.96.49 26379 0712c4552206592e6bb8e99054f09998fb7efc45
sentinel known-sentinel myMaster 10.50.97.37 26379 b26d18f00807791caf1a12cfc4c78a880d5104bd
PS D:\temp\PP9-18675-redis-sentinel\opstree> kubectl exec redis-sentinel-sentinel-2 -- cat /etc/redis/sentinel.conf
# brevity...
sentinel known-replica myMaster 10.50.96.70 6379
sentinel known-sentinel myMaster 10.50.96.49 26379 0712c4552206592e6bb8e99054f09998fb7efc45
sentinel known-sentinel myMaster 10.50.97.16 26379 df66da358e8977c1d99c82a1e4069016ad547016
In an incident situation I assume the sentinels should be able to handle the problem on their own, without help from the operator. So I expect the following steps not to lose any data (a rough reproduction sketch follows after this list):
- Start the whole setup and write some data to redis (I ensured the data is readable identically from both instances)
- Scale the operator deployment down to 0
- Delete the current redis-replication master pod - it will come back up, but with a different IP; it still has its data and thinks it is master
- The sentinels will lose the connection to that pod and make the other redis-replication pod master - so far everything seems OK, and all sentinels respond in the same way
- Write some data to the new redis master (as reported by the sentinels; that is what the application would do)
- Once the operator comes back online, I expect it to reconfigure the previously deleted/respawned redis-replication pod and the sentinels. It does, however...
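To make the steps above concrete, here is a minimal reproduction sketch; the namespace and resource names (redis-system, redis-operator, redis-replication-0/-1, redis-sentinel-sentinel-0) are taken from this setup, and the canary key is just an example:
# stop the operator so only the sentinels react to the failure
kubectl -n redis-system scale deployment/redis-operator --replicas=0
# ask a sentinel which pod is currently master
kubectl exec redis-sentinel-sentinel-0 -- redis-cli -p 26379 sentinel get-master-addr-by-name myMaster
# delete the master pod (redis-replication-0 in this run); it respawns with a new IP but keeps its data
kubectl delete pod redis-replication-0
# after the sentinels fail over, write to the new master (redis-replication-1 here)
kubectl exec redis-replication-1 -- redis-cli set canary after-failover
# bring the operator back and check whether the key survives the forced resync
kubectl -n redis-system scale deployment/redis-operator --replicas=1
kubectl exec redis-replication-1 -- redis-cli get canary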
What did you see instead? The redis instances are reconfigured in such a way that the newly written data is overwritten by stale data from the deleted/respawned redis pod.
Logs from respawned old master:
Redis is running without password which is not recommended
Setting up redis in standalone mode
Running without TLS mode
Starting redis service in standalone mode.....
8:C 24 May 2023 17:17:39.051 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8:C 24 May 2023 17:17:39.051 # Redis version=7.0.5, bits=64, commit=00000000, modified=0, pid=8, just started
8:C 24 May 2023 17:17:39.051 # Configuration loaded
8:M 24 May 2023 17:17:39.051 * monotonic clock: POSIX clock_gettime
8:M 24 May 2023 17:17:39.051 * Running mode=standalone, port=6379.
8:M 24 May 2023 17:17:39.051 # Server initialized
8:M 24 May 2023 17:17:39.052 * Reading RDB base file on AOF loading...
8:M 24 May 2023 17:17:39.052 * Loading RDB produced by version 7.0.5
8:M 24 May 2023 17:17:39.052 * RDB age 287 seconds
8:M 24 May 2023 17:17:39.052 * RDB memory usage when created 0.82 Mb
8:M 24 May 2023 17:17:39.052 * RDB is base AOF
8:M 24 May 2023 17:17:39.052 * Done loading RDB, keys loaded: 0, keys expired: 0.
8:M 24 May 2023 17:17:39.052 * DB loaded from base file appendonly.aof.1.base.rdb: 0.000 seconds
8:M 24 May 2023 17:17:39.052 * DB loaded from incr file appendonly.aof.1.incr.aof: 0.000 seconds
8:M 24 May 2023 17:17:39.052 * DB loaded from append only file: 0.000 seconds
8:M 24 May 2023 17:17:39.052 * Opening AOF incr file appendonly.aof.1.incr.aof on server start
8:M 24 May 2023 17:17:39.052 * Ready to accept connections
# yes, it thinks it is master, but the rest of the cluster does not know its new IP; the sentinels re-elect and promote the previous follower to master
# everything is fine, until the operator is restarted:
8:M 24 May 2023 17:20:06.984 * Replica 10.50.97.58:6379 asks for synchronization
8:M 24 May 2023 17:20:06.984 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for '9a0527bc6be1e78e4b987f201259c8c5c4825aae', my replication IDs are '03418a85136264274f44ebe4d57d0161b97d2d28' and '0000000000000000000000000000000000000000')
8:M 24 May 2023 17:20:06.984 * Replication backlog created, my new replication IDs are 'f532b79b37105f1faf12727f13d9e27c3eef93e3' and '0000000000000000000000000000000000000000'
8:M 24 May 2023 17:20:06.984 * Delay next BGSAVE for diskless SYNC
8:M 24 May 2023 17:20:11.327 * Starting BGSAVE for SYNC with target: replicas sockets
8:M 24 May 2023 17:20:11.328 * Background RDB transfer started by pid 281
281:C 24 May 2023 17:20:11.328 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
8:M 24 May 2023 17:20:11.328 # Diskless rdb transfer, done reading from pipe, 1 replicas still up.
8:M 24 May 2023 17:20:11.334 * Background RDB transfer terminated with success
8:M 24 May 2023 17:20:11.334 * Streamed RDB transfer with replica 10.50.97.58:6379 succeeded (socket). Waiting for REPLCONF ACK from slave to enable streaming
8:M 24 May 2023 17:20:11.334 * Synchronization with replica 10.50.97.58:6379 succeeded
8:M 24 May 2023 17:22:11.434 # CONFIG REWRITE executed with success.
and on the other instance
# previous master is down / unreachable (because it got a new IP)
8:S 24 May 2023 17:17:57.971 # Error condition on socket for SYNC: Host is unreachable
8:S 24 May 2023 17:17:58.970 * Connecting to MASTER 10.50.96.230:6379
8:S 24 May 2023 17:17:58.970 * MASTER <-> REPLICA sync started
8:S 24 May 2023 17:17:58.975 # Error condition on socket for SYNC: Host is unreachable
8:S 24 May 2023 17:17:59.975 * Connecting to MASTER 10.50.96.230:6379
8:S 24 May 2023 17:17:59.975 * MASTER <-> REPLICA sync started
8:S 24 May 2023 17:17:59.983 # Error condition on socket for SYNC: Host is unreachable
8:S 24 May 2023 17:18:00.981 * Connecting to MASTER 10.50.96.230:6379
8:S 24 May 2023 17:18:00.981 * MASTER <-> REPLICA sync started
8:S 24 May 2023 17:18:00.987 # Error condition on socket for SYNC: Host is unreachable
8:S 24 May 2023 17:18:01.985 * Connecting to MASTER 10.50.96.230:6379
8:S 24 May 2023 17:18:01.985 * MASTER <-> REPLICA sync started
8:S 24 May 2023 17:18:01.991 # Error condition on socket for SYNC: Host is unreachable
8:S 24 May 2023 17:18:02.990 * Connecting to MASTER 10.50.96.230:6379
8:S 24 May 2023 17:18:02.990 * MASTER <-> REPLICA sync started
8:S 24 May 2023 17:18:02.996 # Error condition on socket for SYNC: Host is unreachable
8:S 24 May 2023 17:18:03.993 * Connecting to MASTER 10.50.96.230:6379
8:S 24 May 2023 17:18:03.993 * MASTER <-> REPLICA sync started
8:S 24 May 2023 17:18:03.999 # Error condition on socket for SYNC: Host is unreachable
8:S 24 May 2023 17:18:04.998 * Connecting to MASTER 10.50.96.230:6379
8:S 24 May 2023 17:18:04.998 * MASTER <-> REPLICA sync started
8:S 24 May 2023 17:18:05.004 # Error condition on socket for SYNC: Host is unreachable
8:S 24 May 2023 17:18:06.001 * Connecting to MASTER 10.50.96.230:6379
8:S 24 May 2023 17:18:06.001 * MASTER <-> REPLICA sync started
8:S 24 May 2023 17:18:06.007 # Error condition on socket for SYNC: Host is unreachable
# sentinel does its job
8:M 24 May 2023 17:18:06.642 * Discarding previously cached master state.
8:M 24 May 2023 17:18:06.642 # Setting secondary replication ID to 548ad044b02ec01cc1548ac08ead8337806a8266, valid up to offset: 38645. New replication ID is 9a0527bc6be1e78e4b987f201259c8c5c4825aae
8:M 24 May 2023 17:18:06.642 * MASTER MODE enabled (user request from 'id=82 addr=10.50.97.74:51240 laddr=10.50.97.58:6379 fd=14 name=sentinel-26080748-cmd age=100 idle=0 flags=x db=0 sub=0 psub=0 ssub=0 multi=4 qbuf=188 qbuf-free=20286 argv-mem=4 multi-mem=169 rbs=8192 rbp=5423 obl=45 oll=0 omem=0 tot-mem=29733 events=r cmd=exec user=default redir=-1 resp=2')
8:M 24 May 2023 17:18:06.649 # CONFIG REWRITE executed with success.
# during this time, new data is written to this instance
# ... and then operator comes back online
8:S 24 May 2023 17:20:06.984 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
8:S 24 May 2023 17:20:06.984 * Connecting to MASTER 10.50.97.38:6379
8:S 24 May 2023 17:20:06.984 * MASTER <-> REPLICA sync started
8:S 24 May 2023 17:20:06.984 * REPLICAOF 10.50.97.38:6379 enabled (user request from 'id=149 addr=10.50.97.13:36276 laddr=10.50.97.58:6379 fd=15 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=45 qbuf-free=20429 argv-mem=22 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=37678 events=r cmd=slaveof user=default redir=-1 resp=2')
8:S 24 May 2023 17:20:06.984 * Non blocking connect for SYNC fired the event.
8:S 24 May 2023 17:20:06.984 * Master replied to PING, replication can continue...
8:S 24 May 2023 17:20:06.984 * Trying a partial resynchronization (request 9a0527bc6be1e78e4b987f201259c8c5c4825aae:63118).
8:S 24 May 2023 17:20:11.327 * Full resync from master: f532b79b37105f1faf12727f13d9e27c3eef93e3:14
8:S 24 May 2023 17:20:11.328 * MASTER <-> REPLICA sync: receiving streamed RDB from master with EOF to disk
8:S 24 May 2023 17:20:11.328 * Discarding previously cached master state.
# bam, data written to new master as selected by sentinel before is lost at this point
8:S 24 May 2023 17:20:11.328 * MASTER <-> REPLICA sync: Flushing old data
8:S 24 May 2023 17:20:11.328 * MASTER <-> REPLICA sync: Loading DB in memory
8:S 24 May 2023 17:20:11.333 * Loading RDB produced by version 7.0.5
8:S 24 May 2023 17:20:11.333 * RDB age 0 seconds
8:S 24 May 2023 17:20:11.333 * RDB memory usage when created 0.99 Mb
8:S 24 May 2023 17:20:11.333 * Done loading RDB, keys loaded: 1, keys expired: 0.
8:S 24 May 2023 17:20:11.333 * MASTER <-> REPLICA sync: Finished with success
This still happens with the 0.16 version of the operator. I'm thinking about that Replication ID mismatch error. Could this problem be solved by simply persisting the replication configuration, so that when a pod restarts it remembers what role it had and defaults to that until the sentinels reconfigure it?
We have implemented a custom helm chart to spin up a sentinel & redis data cluster; storing the configs of the individual pods in persistent storage was one of the measures we took.
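For illustration only, the general idea is roughly the following sketch (the entrypoint and paths are assumptions, not what the operator's images actually do): start Redis from a config file that lives on the persistent volume, so that directives written back by CONFIG REWRITE, including the replicaof line set during a sentinel-driven failover, survive a pod restart.
# sketch of a pod entrypoint: keep the live redis.conf on the PVC (mounted at /data here)
# so the role written by CONFIG REWRITE is remembered across restarts
if [ ! -f /data/redis.conf ]; then
  cp /etc/redis/redis.conf /data/redis.conf   # seed the config once from the image default
fi
exec redis-server /data/redis.conf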
I reproduced this issue after removing all the data from each Redis replica (via an init container) to start from scratch:
1:S 11 Jun 2024 14:54:03.933 # Error condition on socket for SYNC: Host is unreachable
1:S 11 Jun 2024 14:54:04.870 * Connecting to MASTER 10.42.2.219:6379
1:S 11 Jun 2024 14:54:04.870 * MASTER <-> REPLICA sync started
1:S 11 Jun 2024 14:54:07.933 # Error condition on socket for SYNC: Host is unreachable
1:S 11 Jun 2024 14:54:08.883 * Connecting to MASTER 10.42.2.219:6379
1:S 11 Jun 2024 14:54:08.883 * MASTER <-> REPLICA sync started
1:S 11 Jun 2024 14:54:11.933 # Error condition on socket for SYNC: Host is unreachable
1:S 11 Jun 2024 14:54:12.896 * Connecting to MASTER 10.42.2.219:6379
1:S 11 Jun 2024 14:54:12.896 * MASTER <-> REPLICA sync started
The workaround was to delete each replica pod; after that, the issue was resolved.
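In practice that workaround amounts to something like the following (pod names assumed from the RedisReplication setup in this issue; delete the replicas one at a time so the surviving master keeps serving traffic):
# each respawned pod performs a full resync from the current master
kubectl delete pod redis-replication-0
# wait for it to come back and finish syncing before deleting the next one
kubectl delete pod redis-replication-1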