
elasticache: ReplicationGroup should include NumCacheClusters in its API

Open rbranche opened this issue 4 years ago • 8 comments

Is your feature request related to a problem? It is not currently possible to create a Redis (cluster mode disabled) replication group with MultiAZ/AutomaticFailover support, because you can't set NumCacheClusters (defaults to 1; MultiAZ/AutomaticFailover requires 2 or more).

Describe the solution you'd like ReplicationGroup should contain NumCacheClusters so it is possible to create a Redis (cluster mode disabled) cluster with MultiAZ/AutomaticFailover support.
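For illustration, a minimal sketch of what the requested spec might look like if the controller exposed this field. The numCacheClusters field is hypothetical (it is the field this issue asks for and does not exist in the current CRD); the other field names are assumed to follow the controller's camelCase mapping of the ElastiCache CreateReplicationGroup API, and values such as the node type are placeholders.

```yaml
# Hypothetical manifest: numCacheClusters is the field requested by this issue
# and is NOT part of the current ReplicationGroup CRD.
apiVersion: elasticache.services.k8s.aws/v1alpha1
kind: ReplicationGroup
metadata:
  name: example-redis
spec:
  replicationGroupID: example-redis
  description: "Redis (cluster mode disabled) with automatic failover"
  engine: redis
  cacheNodeType: cache.t3.micro          # placeholder node type
  automaticFailoverEnabled: true
  multiAZEnabled: true
  numCacheClusters: 2                    # requested field: one primary + one replica
```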

Describe alternatives you've considered

rbranche avatar Aug 26 '21 19:08 rbranche

Looks like the workaround here is to use numNodeGroups together with replicasPerNodeGroup:

numNodeGroups: 1
replicasPerNodeGroup: 1  # equivalent to numCacheClusters: 2

numNodeGroups: 1
replicasPerNodeGroup: 5  # equivalent to numCacheClusters: 6
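To make the workaround concrete, here is a hedged sketch of a full ReplicationGroup manifest using that mapping. Field names are assumed to mirror the CreateReplicationGroup API in camelCase, and the API group/version and node type are placeholders to verify against your installed CRDs.

```yaml
apiVersion: elasticache.services.k8s.aws/v1alpha1
kind: ReplicationGroup
metadata:
  name: example-redis
spec:
  replicationGroupID: example-redis
  description: "Redis (cluster mode disabled) with automatic failover"
  engine: redis
  cacheNodeType: cache.t3.micro   # placeholder node type
  automaticFailoverEnabled: true
  multiAZEnabled: true
  numNodeGroups: 1                # cluster mode disabled: a single shard
  replicasPerNodeGroup: 1         # one primary + one replica, i.e. numCacheClusters: 2
```

With numNodeGroups: 1 the shard count stays at one (cluster mode disabled), and replicasPerNodeGroup controls how many replicas back the primary, which is what MultiAZ/AutomaticFailover needs.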

rbranche avatar Aug 27 '21 19:08 rbranche

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

ack-bot avatar Nov 25 '21 23:11 ack-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle rotten

ack-bot avatar Dec 25 '21 23:12 ack-bot

/remove-lifecycle rotten

a-hilaly avatar Dec 27 '21 16:12 a-hilaly

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

ack-bot avatar Mar 27 '22 17:03 ack-bot

/lifecycle frozen

vijtrip2 avatar Mar 28 '22 14:03 vijtrip2

/remove-lifecycle frozen

RedbackThomson avatar Jul 08 '22 16:07 RedbackThomson

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

ack-bot avatar Oct 06 '22 17:10 ack-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle rotten

ack-bot avatar Nov 05 '22 17:11 ack-bot

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Provide feedback via https://github.com/aws-controllers-k8s/community. /close

ack-bot avatar Dec 05 '22 17:12 ack-bot

@ack-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Provide feedback via https://github.com/aws-controllers-k8s/community. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

ack-bot avatar Dec 05 '22 17:12 ack-bot