
Non-zero downtime during master-replica failover

nikatar opened this issue 1 year ago · 2 comments

How to Reproduce

  1. Deploy Dragonfly in a master-replica configuration using the operator with the following configuration:

    apiVersion: dragonflydb.io/v1alpha1
    kind: Dragonfly
    metadata:
      name: dragonfly-clicker
      namespace: redis
      labels:
        app.kubernetes.io/name: dragonfly
        app.kubernetes.io/instance: dragonfly-clicker
        app.kubernetes.io/part-of: dragonfly-operator
        app.kubernetes.io/managed-by: kustomize
        app.kubernetes.io/created-by: dragonfly-operator
    spec:
      image: docker.dragonflydb.io/dragonflydb/dragonfly:v1.25.5
      replicas: 2
      resources:
        requests:
          cpu: 600m
          memory: 4Gi
        limits:
          memory: 6Gi
      args: ["--dbfilename=backup"]
      snapshot:
        cron: "*/5 * * * *"
        persistentVolumeClaimSpec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
    
  2. Connect an application or a simple script to Dragonfly that regularly accesses it (via the service, for example dragonfly-clicker.redis.svc.cluster.local:6379), such as the probe sketched below.
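
A minimal sketch of such a probe, assuming Go with the go-redis v9 client (the error log below suggests a Go application); the key name and interval are arbitrary:

    package main

    import (
        "context"
        "log"
        "time"

        "github.com/redis/go-redis/v9"
    )

    func main() {
        rdb := redis.NewClient(&redis.Options{
            Addr: "dragonfly-clicker.redis.svc.cluster.local:6379",
        })
        ctx := context.Background()
        for {
            // SET once a second; failures make the failover window visible.
            if err := rdb.Set(ctx, "probe", time.Now().String(), 0).Err(); err != nil {
                log.Printf("click worker: %v", err)
            }
            time.Sleep(time.Second)
        }
    }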

  3. Delete the pod with the role of "master".
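
For example, assuming the operator labels the pods of this resource with app=dragonfly-clicker and marks the current master with role=master:

    kubectl -n redis delete pod -l app=dragonfly-clicker,role=master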

After this, for some time, you will see errors like:

"level":"ERROR","ts":"2024-12-13T12:33:19.406Z","caller":"click/service.go:137","msg":"click worker","error":"connection to redis: dial tcp 172.27.233.230:6379

Current Behavior

  • The pod with the role of "master" is terminated.
  • A new pod is started.
  • The new pod becomes a replica.
  • The old replica is promoted to "master".
  • Dragonfly becomes available only after the master and replica have synchronized (i.e. the endpoint has been updated).
  • Until this process completes (which can take a long time with large data volumes; with 10GB of cache, for example, it took a minute or more), Dragonfly is unavailable to the application because the endpoint has not yet been updated.
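
The endpoint lag can be observed directly while reproducing; assuming the operator exposes a Service named after the Dragonfly resource, watching its endpoints shows when traffic actually switches to the new master:

    kubectl -n redis get endpoints dragonfly-clicker -w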

Expected Behavior

  • When the master pod fails, the remaining replica immediately becomes the "master".
  • The endpoint updates instantly.
  • Downtime is either nonexistent or minimal.
  • A new pod is started after the switch, connects as a replica to the master, and synchronizes without causing any downtime in servicing requests.

Environment

  • EKS 1.29
  • dragonfly-operator v1.1.18
  • dragonfly v1.25.5

nikatar · Dec 13 '24

The behaviour you describe is indeed expected. There will be downtime, as you mentioned, because Kubernetes service updates are eventually consistent, and it takes a bit before all the traffic is routed to the "new" master (i.e. the old replica).

That said, I don't think the service update currently waits for all the replicas to be synchronized. What makes you sure that this is the case?

Thanks

Pothulapati · Jan 08 '25

On my side, the failover is immediate, but my application that uses the db crashes and restarts as soon as an HTTP request is made. In my specific use case this is an application problem, for which I need to set a timeout. In other cases it may be a question of liveness probes or health checks.
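
For reference, a sketch of the kind of client-side timeouts and retries meant here, again using go-redis v9; the values are illustrative, not a recommendation:

    package main

    import (
        "context"
        "fmt"
        "time"

        "github.com/redis/go-redis/v9"
    )

    func main() {
        rdb := redis.NewClient(&redis.Options{
            Addr:            "dragonfly-clicker.redis.svc.cluster.local:6379",
            DialTimeout:     2 * time.Second, // fail fast instead of hanging on a dead master
            ReadTimeout:     2 * time.Second,
            WriteTimeout:    2 * time.Second,
            MaxRetries:      5, // retry inside the client before surfacing an error
            MinRetryBackoff: 100 * time.Millisecond,
            MaxRetryBackoff: 2 * time.Second,
        })
        fmt.Println(rdb.Ping(context.Background()).Err())
    }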

Maybe that's where the problem is, @nikatar?

K-MeLeOn · Jan 10 '25

Hi, can you please try our latest version and get back to us if you still face the same issues?

Abhra303 · Aug 13 '25