redis-operator
feat: Added redis-role selector for consistent access to 'master' or 'slave'
Description
Redis replication requires that a Redis client be able to reach the master node without going through a Sentinel client. Sentinel tracks the Redis master, but the endpoint it reports is the pod's private IP inside the Kubernetes cluster, so access to the Redis service from outside the cluster is restricted.
The service is reachable through the operator's existing functionality, but that functionality does not guarantee access to the redis-server holding the master role.
Therefore, for the replication topology, this PR adds the ability to label each pod to indicate which one is the leader (i.e. the master).
That label is then used as the selector for a Service that guarantees access to a specific role (leader or follower).
In addition, I thought it would be useful to provide a read-replica endpoint, so that function was added as well.
Summary
Added a redis-role label to the redis-replication pods, and created Services that guarantee access to a specific role.
Related issues: #765, #629, #453
Associated with a previous PR: #777
Type of change New feature (non-breaking change which adds functionality)
Checklist
- [x] Tests have been added/modified and all tests pass.
- [x] Functionality/bugs have been confirmed to be unchanged or fixed.
- [x] I have performed a self-review of my own code.
- [ ] Documentation has been updated or added where necessary.
Additional Context
$ kubectl get pods -l redis-role=master
NAME READY STATUS RESTARTS AGE
redis-replication-1 2/2 Running 0 24h
$ kubectl get pods -l redis-role=slave
NAME READY STATUS RESTARTS AGE
redis-replication-0 2/2 Running 0 24h
redis-replication-2 2/2 Running 0 24h
The pods are labeled: the redis-role key carries either master or slave as its value.
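For reference, the same lookup can be done programmatically. A rough client-go sketch (in-cluster config and the default namespace are assumed):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs inside the cluster; outside it, a kubeconfig loader would be used.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of `kubectl get pods -l redis-role=master` in the default namespace.
	pods, err := cs.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{
		LabelSelector: "redis-role=master",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name) // e.g. redis-replication-1
	}
}
```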
$ kubectl get svc -l redis-role=master
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-replication-leader ClusterIP 10.99.100.47 <none> 6379/TCP 4m38s
$ kubectl exec -it service/redis-replication-leader -- redis-cli ROLE
Defaulted container "redis-replication" out of: redis-replication, redis-exporter
1) "master"
2) (integer) 18697180
3) 1) 1) "192.168.43.242"
2) "6379"
3) "18697180"
2) 1) "192.168.251.50"
2) "6379"
3) "18697037"
A Service is also created using the label value as its selector, so it forwards requests only to the pods it selects.
In this case (redis-replication-leader), requests are forwarded to the master.
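For illustration, a minimal sketch of what such a role-scoped Service could look like when built with client-go (the actual object the operator creates may differ; the name and port are taken from the output above, the namespace is a parameter):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// leaderService selects only the pod currently labeled redis-role=master,
// so its ClusterIP always resolves to the current leader.
func leaderService(namespace string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "redis-replication-leader",
			Namespace: namespace,
			Labels:    map[string]string{"redis-role": "master"},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeClusterIP,
			Selector: map[string]string{"redis-role": "master"},
			Ports: []corev1.ServicePort{{
				Name:       "redis-client",
				Port:       6379,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}
}
```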
$ kubectl get svc -l redis-role=slave
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-replication-follower ClusterIP 10.96.227.135 <none> 6379/TCP 4m55s
$ kubectl exec -it service/redis-replication-follower -- redis-cli ROLE
Defaulted container "redis-replication" out of: redis-replication, redis-exporter
1) "slave"
2) "192.168.52.228"
3) (integer) 6379
4) "connected"
5) (integer) 18698767
Considering load distribution in production environments, I also wanted to expose the read replicas as a Service (a read-replica endpoint).
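Purely as an illustration of how an application could then split traffic between the two Services, here is a rough go-redis sketch (the in-cluster service DNS names and the key are assumptions):

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// Writes always hit the current master via the leader Service.
	writer := redis.NewClient(&redis.Options{Addr: "redis-replication-leader:6379"})
	// Reads can be spread across the replicas via the follower Service.
	// Note: replication is asynchronous, so reads may briefly lag behind writes.
	reader := redis.NewClient(&redis.Options{Addr: "redis-replication-follower:6379"})

	if err := writer.Set(ctx, "greeting", "hello", 0).Err(); err != nil {
		panic(err)
	}
	val, err := reader.Get(ctx, "greeting").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(val)
}
```

The labeling logic added on the operator side is shown below.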
func updatePodLabel(ctx context.Context, cl kubernetes.Interface, logger logr.Logger, cr *redisv1beta2.RedisReplication, role string, nodes []string) error {
	for _, node := range nodes {
		pod, err := cl.CoreV1().Pods(cr.Namespace).Get(ctx, node, metav1.GetOptions{})
		if err != nil {
			logger.Error(err, "Cannot get redis replication pod")
			return err
		}
		// set the redis-role label on the pod
		metav1.SetMetaDataLabel(&pod.ObjectMeta, "redis-role", role)
		// persist the updated label
		_, err = cl.CoreV1().Pods(cr.Namespace).Update(ctx, pod, metav1.UpdateOptions{})
		if err != nil {
			logger.Error(err, "Cannot update redis replication pod")
			return err
		}
	}
	return nil
}
func UpdateRoleLabelPod(ctx context.Context, cl kubernetes.Interface, logger logr.Logger, cr *redisv1beta2.RedisReplication) error {
	// find the real master and label that pod with 'redis-role=master'
	role := "master"
	masterPods := GetRedisNodesByRole(ctx, cl, logger, cr, role)
	realMaster := checkAttachedSlave(ctx, cl, logger, cr, masterPods)
	err := updatePodLabel(ctx, cl, logger, cr, role, []string{realMaster})
	if err != nil {
		return err
	}
	// special case: a replication of size one has no attached slaves,
	// so label the single master pod directly
	if cr.Spec.Size != nil && *cr.Spec.Size == 1 {
		err = updatePodLabel(ctx, cl, logger, cr, role, masterPods)
		if err != nil {
			return err
		}
	}
	role = "slave"
	err = updatePodLabel(ctx, cl, logger, cr, role, GetRedisNodesByRole(ctx, cl, logger, cr, role))
	if err != nil {
		return err
	}
	return nil
}
Only the real master, found via GetRedisNodesByRole() -> checkAttachedSlave(), is labeled 'redis-role': 'master'.
When the replication size is 1, an exception is made, since a single node has no attached replicas.
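As a rough idea of how the labeling can be exercised against the fake clientset (assuming the test lives in the same package as updatePodLabel; the API import path and pod name are assumptions, and the real tests in this PR may differ):

```go
package k8sutils // assumed: same package as updatePodLabel

import (
	"context"
	"testing"

	"github.com/go-logr/logr"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"

	// assumed import path for the v1beta2 API types
	redisv1beta2 "github.com/OT-CONTAINER-KIT/redis-operator/api/v1beta2"
)

func TestUpdatePodLabel(t *testing.T) {
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "redis-replication-1", Namespace: "default"}}
	cl := fake.NewSimpleClientset(pod)
	cr := &redisv1beta2.RedisReplication{ObjectMeta: metav1.ObjectMeta{Name: "redis-replication", Namespace: "default"}}

	// label the pod as master, then verify the label landed on the object
	if err := updatePodLabel(context.Background(), cl, logr.Discard(), cr, "master", []string{"redis-replication-1"}); err != nil {
		t.Fatal(err)
	}
	got, err := cl.CoreV1().Pods("default").Get(context.Background(), "redis-replication-1", metav1.GetOptions{})
	if err != nil {
		t.Fatal(err)
	}
	if got.Labels["redis-role"] != "master" {
		t.Fatalf("expected redis-role=master, got %q", got.Labels["redis-role"])
	}
}
```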
Codecov Report
Attention: Patch coverage is 11.42857% with 62 lines in your changes missing coverage. Please review.
Project coverage is 38.93%. Comparing base (d121d86) to head (b99997d). Report is 45 commits behind head on master.
Additional details and impacted files
@@ Coverage Diff @@
## master #785 +/- ##
==========================================
+ Coverage 35.20% 38.93% +3.73%
==========================================
Files 19 19
Lines 3213 2738 -475
==========================================
- Hits 1131 1066 -65
+ Misses 2015 1596 -419
- Partials 67 76 +9
@iamabhishek-dubey @shubham-cmyk @drivebyer Can I get a review?
Could you please rebase from the master branch and address the failing checks? @wkd-woo.
I did :)
The DCO check is failing, please fix it.
@drivebyer I missed it. I took action. I'm sorry.
Can you please check it out?
The DCO issue has been resolved. There are some review comments you should look at, and please resolve the conflicts.
@drivebyer Hello, hope you're well. Is there anything else I need to prepare to get a review?
Actually, you should resolve the issues with the GitHub Actions checks and the Git conflicts. Additionally, I have already provided some review suggestions, which you may want to take a look at.
@drivebyer @shubham-cmyk
Hello there :) Can we talk about this PR?
please rebase on the master && see the review