
Operator crashes with segfault on K8s 1.30.1

Open · silenium-dev opened this issue on Jun 22 '24 · 11 comments

What version of redis operator are you using? redis-operator version: 0.17.0

Does this issue reproduce with the latest release? Yes

What operating system and processor architecture are you using (kubectl version)?

kubectl version output:
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.1
Architecture: amd64 Vendor: Talos

What did you do? Create a simple Redis resource:

apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: Redis
metadata:
  name: test
spec:
  kubernetesConfig:
    image: quay.io/redis/redis:v7.2.3

What did you expect to see? Redis instance is successfully deployed and running.

What did you see instead? Operator crashes with the following logs:

I0622 09:51:13.831039       1 leaderelection.go:250] attempting to acquire leader lease redis-operator/6cab913b.redis.opstreelabs.in...
I0622 09:51:29.430462       1 leaderelection.go:260] successfully acquired lease redis-operator/6cab913b.redis.opstreelabs.in
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting EventSource","controller":"redis","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"Redis","source":"kind source: *v1beta2.Redis"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting Controller","controller":"redis","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"Redis"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting EventSource","controller":"redisreplication","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisReplication","source":"kind source: *v1beta2.RedisReplication"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting Controller","controller":"redisreplication","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisReplication"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting EventSource","controller":"rediscluster","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisCluster","source":"kind source: *v1beta2.RedisCluster"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting Controller","controller":"rediscluster","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisCluster"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting EventSource","controller":"redissentinel","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisSentinel","source":"kind source: *v1beta2.RedisSentinel"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting EventSource","controller":"redissentinel","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisSentinel","source":"kind source: *v1beta2.RedisReplication"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting Controller","controller":"redissentinel","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisSentinel"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting workers","controller":"redisreplication","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisReplication","worker count":1}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting workers","controller":"rediscluster","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisCluster","worker count":1}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting workers","controller":"redissentinel","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisSentinel","worker count":1}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting workers","controller":"redis","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"Redis","worker count":1}
{"level":"info","ts":"2024-06-22T09:51:29Z","logger":"controllers.Redis","msg":"Reconciling opstree redis controller","Request.Namespace":"default","Request.Name":"test"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference","controller":"redis","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"Redis","Redis":{"name":"test","namespace":"default"},"namespace":"default","name":"test","reconcileID":"e8277a10-9345-4793-ad38-6613217ecd24"}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x17a70e8]

goroutine 230 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:116 +0x1e5
panic({0x19cf900?, 0x2cc0ca0?})
	/usr/local/go/src/runtime/panic.go:914 +0x21f
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.getProbeInfo(0x0, 0x0?, 0x0, 0x0)
	/workspace/k8sutils/statefulset.go:617 +0x3e8
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.generateContainerDef({_, _}, {{0xc000524260, 0x1a}, {0x0, 0x0}, 0x0, 0x0, {0x0, 0x0}, ...}, ...)
	/workspace/k8sutils/statefulset.go:369 +0x159
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.generateStatefulSetsDef({{0xc0009101b8, 0x4}, {0x0, 0x0}, {0xc0009101c0, 0x7}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...)
	/workspace/k8sutils/statefulset.go:234 +0x467
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CreateOrUpdateStateFul({_, _}, {{_, _}, _}, {_, _}, {{0xc0009101b8, 0x4}, {0x0, ...}, ...}, ...)
	/workspace/k8sutils/statefulset.go:100 +0x1a5
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CreateStandaloneRedis(0xc0004cf680, {0x1f12bd0, 0xc000103380})
	/workspace/k8sutils/redis-standalone.go:59 +0x853
github.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisReconciler).Reconcile(0xc0005ab4a0, {0x0?, 0x0?}, {{{0xc0009101c0?, 0x5?}, {0xc0009101b8?, 0xc00080fd08?}}})
	/workspace/controllers/redis_controller.go:67 +0x346
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x1efc1d0?, {0x1ef8ed0?, 0xc0006ff8c0?}, {{{0xc0009101c0?, 0xb?}, {0xc0009101b8?, 0x0?}}})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000129b80, {0x1ef8f08, 0xc0002139a0}, {0x1a86860?, 0xc000622800?})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316 +0x3cc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000129b80, {0x1ef8f08, 0xc0002139a0})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266 +0x1af
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 96
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:223 +0x565

— silenium-dev, Jun 22 '24

The issue should be fixed on master, but is still present on v0.17.0
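For context, the getProbeInfo(0x0, ...) frame in the trace shows the function being called with a nil probe pointer, which it then dereferences. A minimal sketch of the kind of nil-guard that would prevent this class of panic (the function name and default values are illustrative, not the operator's actual code):

package k8sutils

import (
	corev1 "k8s.io/api/core/v1"
)

// getProbeOrDefault is an illustrative nil-guard: it returns the probe
// from the CR spec when one is set, and falls back to safe defaults
// otherwise, instead of dereferencing a nil pointer as in the panic above.
func getProbeOrDefault(spec *corev1.Probe) *corev1.Probe {
	if spec == nil {
		return &corev1.Probe{
			InitialDelaySeconds: 1,
			PeriodSeconds:       15,
			TimeoutSeconds:      5,
			SuccessThreshold:    1,
			FailureThreshold:    5,
		}
	}
	return spec
}

Any code path that trusts optional CRD fields to be non-nil would otherwise hit the same SIGSEGV.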

— silenium-dev, Jun 22 '24

@silenium-dev, you can work around it temporarily by following https://github.com/OT-CONTAINER-KIT/redis-operator/issues/1002#issuecomment-2182250798.

— drivebyer, Jun 22 '24

Facing the same issue:

apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisCluster
metadata:
  name: redis-cluster
spec:
  clusterSize: 3
  clusterVersion: v7
  persistenceEnabled: true
  podSecurityContext:
    runAsUser: 1000
    fsGroup: 1000
  readinessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3
  livenessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3
  kubernetesConfig:
    image: quay.io/opstree/redis:v7.0.12
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 101m
        memory: 128Mi
      limits:
        cpu: 101m
        memory: 128Mi
    # redisSecret:
    #   name: redis-secret
    #   key: password
    # imagePullSecrets:
    #   - name: regcred
  redisExporter:
    enabled: false
    image: quay.io/opstree/redis-exporter:v1.44.0
    imagePullPolicy: Always
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 128Mi
    # Environment Variables for Redis Exporter
    # env:
    # - name: REDIS_EXPORTER_INCL_SYSTEM_METRICS
    #   value: "true"
    # - name: UI_PROPERTIES_FILE_NAME
    #   valueFrom:
    #     configMapKeyRef:
    #       name: game-demo
    #       key: ui_properties_file_name
    # - name: SECRET_USERNAME
    #   valueFrom:
    #     secretKeyRef:
    #       name: mysecret
    #       key: username
  # redisLeader:
  #   redisConfig:
  #     additionalRedisConfig: redis-external-config
  # redisFollower:
  #   redisConfig:
  #     additionalRedisConfig: redis-external-config
  storage:
    volumeClaimTemplate:
      spec:
        # storageClassName: standard
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
    nodeConfVolume: true
    nodeConfVolumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  # nodeSelector:
  #   kubernetes.io/hostname: minikube
  # priorityClassName:
  # affinity:
  # tolerations: []

This is my configuration. Any workarounds?

Logs:

{"level":"error","ts":"2024-06-25T05:20:37Z","logger":"controllers.RedisCluster","msg":"Error in getting Redis pod IP","namespace":"redis-cluster","podName":"redis-cluster-leader-0","error":"pods \"redis-cluster-leader-0\" not found","stacktrace":"github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.getRedisServerIP\n\t/workspace/k8sutils/redis.go:34\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.getRedisServerAddress\n\t/workspace/k8sutils/redis.go:57\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.configureRedisClient\n\t/workspace/k8sutils/redis.go:389\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CheckRedisNodeCount\n\t/workspace/k8sutils/redis.go:297\ngithub.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisClusterReconciler).Reconcile\n\t/workspace/controllers/rediscluster_controller.go:77\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"}
{"level":"error","ts":"2024-06-25T05:20:37Z","logger":"controllers.RedisCluster","msg":"Error in getting Redis cluster nodes","error":"dial tcp :6379: connect: connection refused","stacktrace":"github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.checkRedisCluster\n\t/workspace/k8sutils/redis.go:232\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CheckRedisNodeCount\n\t/workspace/k8sutils/redis.go:300\ngithub.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisClusterReconciler).Reconcile\n\t/workspace/controllers/rediscluster_controller.go:77\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"}
{"level":"info","ts":"2024-06-25T05:20:37Z","msg":"Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference","controller":"rediscluster","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisCluster","RedisCluster":{"name":"redis-cluster","namespace":"redis-cluster"},"namespace":"redis-cluster","name":"redis-cluster","reconcileID":"4dd3baab-afe9-45e5-ad47-729ef915b65e"}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x17a70e8]

goroutine 181 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:116 +0x1e5
panic({0x19cf900?, 0x2cc0ca0?})
	/usr/local/go/src/runtime/panic.go:914 +0x21f
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.getProbeInfo(0x0, 0xe0?, 0x0, 0x0)
	/workspace/k8sutils/statefulset.go:617 +0x3e8
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.generateContainerDef({_, _}, {{0xc00032b100, 0x1d}, {0xc0003255d0, 0xc}, 0xc0006fe570, 0x0, {0xc0006a8f60, 0x26}, ...}, ...)
	/workspace/k8sutils/statefulset.go:369 +0x159
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.generateStatefulSetsDef({{0xc00030e798, 0x14}, {0x0, 0x0}, {0xc0003255b0, 0xd}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...)
	/workspace/k8sutils/statefulset.go:234 +0x467
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CreateOrUpdateStateFul({_, _}, {{_, _}, _}, {_, _}, {{0xc00030e798, 0x14}, {0x0, ...}, ...}, ...)
	/workspace/k8sutils/statefulset.go:100 +0x1a5
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.RedisClusterSTS.CreateRedisClusterSetup({{0x1c573b8, 0x6}, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, ...)
	/workspace/k8sutils/redis-cluster.go:270 +0x9b3
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CreateRedisLeader(0x1c573b8?, {0x1f12bd0?, 0xc0003049c0?})
	/workspace/k8sutils/redis-cluster.go:222 +0xf8
github.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisClusterReconciler).Reconcile(0xc00042fe00, {0x1ef8ed0, 0xc0006fe450}, {{{0xc0003255b0?, 0x5?}, {0xc0003255a0?, 0xc0001add08?}}})
	/workspace/controllers/rediscluster_controller.go:117 +0x646
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x1efc1d0?, {0x1ef8ed0?, 0xc0006fe450?}, {{{0xc0003255b0?, 0xb?}, {0xc0003255a0?, 0x0?}}})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00069bea0, {0x1ef8f08, 0xc00031f310}, {0x1a86860?, 0xc000326980?})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316 +0x3cc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00069bea0, {0x1ef8f08, 0xc00031f310})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266 +0x1af
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 32
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:223 +0x565

Update: tried the workaround from #1002, but the issue persists.

— Dev-Destructor, Jun 25 '24

@Dev-Destructor try completely removing the redisExporter block, that worked for me.
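For reference, a trimmed version of the manifest above with the whole redisExporter block removed (abridged; the probe, resource, and nodeConf settings would stay as in the original):

apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisCluster
metadata:
  name: redis-cluster
spec:
  clusterSize: 3
  clusterVersion: v7
  persistenceEnabled: true
  kubernetesConfig:
    image: quay.io/opstree/redis:v7.0.12
    imagePullPolicy: IfNotPresent
  # redisExporter: block removed entirely as a workaround
  storage:
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi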

— silenium-dev, Jun 25 '24

@Dev-Destructor try completely removing the redisExporter block, that worked for me.

That seems to fix the crash, but no pods are getting created:

Logs:

{"level":"error","ts":"2024-06-25T06:55:43Z","logger":"controllers.RedisCluster","msg":"Error in getting Redis pod IP","namespace":"redis-cluster","podName":"redis-cluster-leader-0","error":"pods \"redis-cluster-leader-0\" not found","stacktrace":"github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.getRedisServerIP\n\t/workspace/k8sutils/redis.go:34\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.getRedisServerAddress\n\t/workspace/k8sutils/redis.go:57\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.configureRedisClient\n\t/workspace/k8sutils/redis.go:382\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CheckRedisNodeCount\n\t/workspace/k8sutils/redis.go:297\ngithub.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisClusterReconciler).Reconcile\n\t/workspace/controllers/rediscluster_controller.go:77\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"}
{"level":"error","ts":"2024-06-25T06:55:43Z","logger":"controllers.RedisCluster","msg":"Error in getting Redis cluster nodes","error":"dial tcp :6379: connect: connection refused","stacktrace":"github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.checkRedisCluster\n\t/workspace/k8sutils/redis.go:232\ngithub.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CheckRedisNodeCount\n\t/workspace/k8sutils/redis.go:300\ngithub.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisClusterReconciler).Reconcile\n\t/workspace/controllers/rediscluster_controller.go:77\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"}
{"level":"info","ts":"2024-06-25T06:55:43Z","msg":"Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference","controller":"rediscluster","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisCluster","RedisCluster":{"name":"redis-cluster","namespace":"redis-cluster"},"namespace":"redis-cluster","name":"redis-cluster","reconcileID":"e203352d-2c36-4d19-b3ed-9cc2d332ee76"}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x17a70e8]

goroutine 175 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:116 +0x1e5
panic({0x19cf900?, 0x2cc0ca0?})
	/usr/local/go/src/runtime/panic.go:914 +0x21f
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.getProbeInfo(0x0, 0x30?, 0x0, 0x1)
	/workspace/k8sutils/statefulset.go:617 +0x3e8
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.generateContainerDef({_, _}, {{0xc000120ec0, 0x1c}, {0xc0005c4e20, 0xc}, 0xc000820cc0, 0x0, {0x0, 0x0}, ...}, ...)
	/workspace/k8sutils/statefulset.go:369 +0x159
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.generateStatefulSetsDef({{0xc000620ab0, 0x14}, {0x0, 0x0}, {0xc0005c4e00, 0xd}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...)
	/workspace/k8sutils/statefulset.go:234 +0x467
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CreateOrUpdateStateFul({_, _}, {{_, _}, _}, {_, _}, {{0xc000620ab0, 0x14}, {0x0, ...}, ...}, ...)
	/workspace/k8sutils/statefulset.go:100 +0x1a5
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.RedisClusterSTS.CreateRedisClusterSetup({{0x1c573b8, 0x6}, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, ...)
	/workspace/k8sutils/redis-cluster.go:270 +0x9b3
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CreateRedisLeader(0x1c573b8?, {0x1f12bd0?, 0xc00023cea0?})
	/workspace/k8sutils/redis-cluster.go:222 +0xf8
github.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisClusterReconciler).Reconcile(0xc0001b8f50, {0x1ef8ed0, 0xc000820ba0}, {{{0xc0005c4e00?, 0x5?}, {0xc0005c4df0?, 0xc0006d1d08?}}})
	/workspace/controllers/rediscluster_controller.go:117 +0x646
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x1efc1d0?, {0x1ef8ed0?, 0xc000820ba0?}, {{{0xc0005c4e00?, 0xb?}, {0xc0005c4df0?, 0x0?}}})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00013b400, {0x1ef8f08, 0xc00012ca50}, {0x1a86860?, 0xc000050da0?})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316 +0x3cc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00013b400, {0x1ef8f08, 0xc00012ca50})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266 +0x1af
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 71
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:223 +0x565
$ kubectl describe rediscluster redis-cluster -n redis-cluster
Name:         redis-cluster
Namespace:    redis-cluster
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: redis-cluster
              meta.helm.sh/release-namespace: redis-cluster
API Version:  redis.redis.opstreelabs.in/v1beta2
Kind:         RedisCluster
Metadata:
  Creation Timestamp:  2024-06-25T06:54:05Z
  Finalizers:
    redisClusterFinalizer
  Generation:        2
  Resource Version:  18698
  UID:               6386916b-e04e-4cea-ba71-7504cc669a9b
Spec:
  Cluster Size:     3
  Cluster Version:  v7
  Kubernetes Config:
    Image:              quay.io/opstree/redis:v7.0.5
    Image Pull Policy:  IfNotPresent
    Redis Secret:
      Key:   redis-password
      Name:  redis-secret
    Resources:
      Limits:
        Cpu:     101m
        Memory:  128Mi
      Requests:
        Cpu:     101m
        Memory:  128Mi
    Update Strategy:
  Persistence Enabled:  true
  Port:                 6379
  Redis Follower:
  Redis Leader:
  Storage:
    Node Conf Volume:  false
    Node Conf Volume Claim Template:
      Metadata:
      Spec:
        Resources:
      Status:
    Volume Claim Template:
      Metadata:
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:  1Gi
      Status:
    Volume Mount:
Status:
  Ready Follower Replicas:  0
  Ready Leader Replicas:    0
  Reason:                   RedisCluster is initializing leaders
  State:                    Initializing
Events:                     <none>

$ kubectl get pods -n redis-cluster
No resources found in redis-cluster namespace.

$ kubectl get statefulsets -n redis-cluster
No resources found in redis-cluster namespace.

Cluster manifest:

apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: RedisCluster
metadata:
  name: redis-cluster
spec:
  clusterSize: 3
  clusterVersion: v7
  persistenceEnabled: true
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  readinessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3
  livenessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3
  kubernetesConfig:
    image: quay.io/opstree/redis:v7.0.5
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 101m
        memory: 128Mi
      limits:
        cpu: 101m
        memory: 128Mi
    redisSecret:
      name: redis-secret
      key: redis-password
    # imagePullSecrets:
    #   - name: regcred
  storage:
    volumeClaimTemplate:
      spec:
        # storageClassName: standard
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  # nodeSelector:
  #   kubernetes.io/hostname: minikube
  # priorityClassName:
  # affinity:
  # tolerations: []

— Dev-Destructor, Jun 25 '24

The issue should be fixed on master, but is still present on v0.17.0

Would you have any idea when we could expect a new release for this? Thanks :)

— igoooor, Jun 25 '24

Same issue when installing redis-sentinel with Helm.

— rubyon, Jun 26 '24

From what I understand, the new release has been delayed, but the updated release date will be shared by the end of the day. You can keep an eye on it in the Slack channel.

— drivebyer, Jul 1 '24

Same issue.

— melnikovn, Jul 5 '24

Any update here? Seems to still happen with the latest chart and k8s v1.30.3

Any update here? Seems to still happen with the latest chart and k8s v1.30.3

Which chart version are you using, @bitfactory-sem-denbroeder?

— drivebyer, Jul 25 '24

Please try 0.20.3.
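If the operator was installed via Helm, the upgrade would look roughly like this (the ot-helm repo alias, release name, and namespace are assumptions, and 0.20.3 is taken to be the chart version; adjust to your installation):

$ helm repo update
$ helm upgrade redis-operator ot-helm/redis-operator --version 0.20.3 -n ot-operators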

— drivebyer, Jun 18 '25