elasticache: ReplicationGroup should include NumCacheClusters in its API
**Is your feature request related to a problem?**
It is not currently possible to create a Redis (cluster mode disabled) cluster with MultiAZ/AutomaticFailover support, because NumCacheClusters cannot be set (it defaults to 1, and automatic failover requires 2 or more).

**Describe the solution you'd like**
ReplicationGroup should include NumCacheClusters in its spec so that a Redis (cluster mode disabled) cluster can be created with MultiAZ/AutomaticFailover support.
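For illustration, a spec with the requested field might look like this (a hypothetical sketch; numCacheClusters does not exist on the CRD today, which is exactly what this request would add):

```yaml
# Hypothetical: numCacheClusters is the proposed field, not part of the CRD yet.
spec:
  engine: redis
  numCacheClusters: 2            # 1 primary + 1 replica; failover needs >= 2
  automaticFailoverEnabled: true
  multiAZEnabled: true
```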
**Describe alternatives you've considered**
Looks like the workaround here is to use replicasPerNodeGroup: a single node group with N replicas is equivalent to numCacheClusters: N + 1. For example:

```yaml
# 1 primary + 1 replica (equivalent to numCacheClusters: 2)
numNodeGroups: 1
replicasPerNodeGroup: 1
---
# 1 primary + 5 replicas (equivalent to numCacheClusters: 6)
numNodeGroups: 1
replicasPerNodeGroup: 5
```
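Put together, a full manifest using the workaround might look like the sketch below. Field names follow the ACK camelCase convention, and the resource name and node type are placeholders, so verify the fields against your installed CRD version:

```yaml
apiVersion: elasticache.services.k8s.aws/v1alpha1
kind: ReplicationGroup
metadata:
  name: example-redis            # placeholder name
spec:
  replicationGroupID: example-redis
  description: "Redis (cluster mode disabled) with automatic failover"
  engine: redis
  cacheNodeType: cache.t3.micro  # placeholder node type
  numNodeGroups: 1               # single shard = cluster mode disabled
  replicasPerNodeGroup: 1        # 1 primary + 1 replica (numCacheClusters: 2)
  automaticFailoverEnabled: true
  multiAZEnabled: true
```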
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/lifecycle frozen
/remove-lifecycle frozen
/lifecycle stale
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/close
@ack-bot: Closing this issue.
In response to this:

> Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Provide feedback via https://github.com/aws-controllers-k8s/community. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.