Separating the redis and sentinel containers from a single pod
Name and Version
bitnami/redis, latest
What is the problem this feature will solve?
At present, if I enable sentinel in my Helm chart, two containers are created inside a single pod. Since I need a minimum of 3 sentinels, I have to spawn 3 pods, which requires more CPU and memory. Instead, if we separated the containers into different pods and could customise the number of redis servers and the number of sentinel pods independently, more resources could be saved.
What is the feature you are proposing to solve the problem?
Resource consumption would be reduced. If I only need 2 redis servers, one master and one replica, I still have to create 3 pods to fulfil the sentinel quorum.
What alternatives have you considered?
No response
I think your feature request makes sense. You are using this cluster topology: Master-Replicas with Sentinel, right? Can you share the values of architecture and replicas you are using?
@andresbono, currently my architecture is amd64 and my replica count is 3, as I need a minimum of 3 sentinels. Yes, I am using master-slave with sentinel.
Redis was configured using IP addresses before useHostnames was implemented. This can be one of the reasons why the sentinels were tied to the workload containers (master/replicas).
As you will need 3 sentinels for a robust deployment, I would suggest using replicaCount: 3 even though you only need 1 master and 1 replica. Basically, what you are currently doing.
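For reference, the suggested workaround might look like this in the chart's values (a sketch; double-check the exact keys against the chart version you deploy):

```yaml
architecture: replication
replica:
  replicaCount: 3   # 3 pods -> 3 sentinel containers, even though only 1 replica is strictly needed
sentinel:
  enabled: true
  quorum: 2         # majority of the 3 sentinels
```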
Nevertheless, we will try to review the chart as a whole looking for possible improvements, taking into account your feature request.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Besides resource consumption, the pattern of having 3 decoupled sentinels might help ensure quorum.
I want to migrate away from an operator that doesn't work well in cases of failovers
I'm currently testing failover on this chart with replicaCount set to 3. If I lose 2 sentinel nodes (in the current coupled setup), the final failover never occurs; my application, which relies on streams, hangs forever or at best hits a "can't write against a read-only replica" type of error.
Any updates on this?
Besides resource consumption, the pattern of having 3 decoupled sentinels might help ensure quorum.
Yes, this is much needed. Currently this can only be achieved with 3 pod replicas, i.e. 3 sentinels and 3 redis (1 master, 2 slaves). However, we don't need 2 slaves for redis; 1 master and 1 slave is enough. It doesn't hurt, but it uses up resources.
I want to migrate away from an operator that doesn't work well in cases of failovers
We also just dropped the redis-operator (I think it is the only one that is currently maintained), which we had been using, because it was unreliable, and started using this chart instead.
I'm currently testing failover on this chart with replicaCount set to 3. If I lose 2 sentinel nodes (in the current coupled setup), the final failover never occurs; my application, which relies on streams, hangs forever or at best hits a "can't write against a read-only replica" type of error.
What do you mean by that? If you lose 2 sentinels out of 3, you lose the majority of the quorum, so the sole remaining sentinel cannot decide which node is the master or a slave, even if all the master and slave redis containers are working fine and only the sentinel containers are down for some reason. That is expected, I think: you need a majority (> 50%) of votes in a quorum to be decisive, so 2 out of 3, 3 out of 5, and so on.
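The majority rule described above can be sketched in a few lines (an illustration of the arithmetic, not Redis source code; the function name is mine):

```python
# A sentinel failover requires a strict majority of ALL known sentinels
# to elect a leader, independent of the configured quorum value.

def failover_possible(total_sentinels: int, alive_sentinels: int) -> bool:
    majority = total_sentinels // 2 + 1  # strict majority of the full set
    return alive_sentinels >= majority

print(failover_possible(3, 2))  # True: 2 of 3 can still elect a leader
print(failover_possible(3, 1))  # False: the scenario described above
print(failover_possible(5, 3))  # True: 3 of 5 is still a majority
```

This is why losing 2 of 3 sentinels blocks the failover even when every redis container is healthy.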