Support clustered Redis
Important Details
How are you running Sentry?
- [x] On-Premise docker [Version 8.21]
- [ ] Saas (sentry.io)
- [ ] Other [briefly describe your environment]
Description
I'm trying to use Sentry with Redis provided by AWS ElastiCache. I get a ton of errors in the logs, and similar output when I run `sentry queues list`.
Sep 24 21:56:49 ecs-cluster-production-reserved-on-demand134 docker/sentry/log/f6e3b6c8feb5[4501]: ResponseError: MOVED 5945 172.31.103.250:6379
Sep 24 21:56:49 ecs-cluster-production-reserved-on-demand134 docker/sentry/log/f6e3b6c8feb5[4501]: 21:56:49 [ERROR] sentry.errors: Unable to incr internal metric
Sep 24 21:56:49 ecs-cluster-production-reserved-on-demand134 docker/sentry/log/f6e3b6c8feb5[4501]: ResponseError: MOVED 7359 172.31.103.250:6379
Sep 24 21:56:49 ecs-cluster-production-reserved-on-demand134 docker/sentry/log/f6e3b6c8feb5[4501]: 21:56:49 [ERROR] sentry.errors: Unable to incr internal metric
Sep 24 21:57:19 ecs-cluster-production-reserved-on-demand134 docker/sentry/log/f6e3b6c8feb5[4501]: ResponseError: MOVED 5945 172.31.103.250:6379
Sep 24 21:57:19 ecs-cluster-production-reserved-on-demand134 docker/sentry/log/f6e3b6c8feb5[4501]: 21:57:19 [ERROR] sentry.errors: Unable to incr internal metric
Sep 24 21:57:19 ecs-cluster-production-reserved-on-demand134 docker/sentry/log/f6e3b6c8feb5[4501]: ResponseError: MOVED 3474 172.31.103.59:6379
Sep 24 21:57:19 ecs-cluster-production-reserved-on-demand134 docker/sentry/log/f6e3b6c8feb5[4501]: 21:57:19 [ERROR] sentry.errors: Unable to incr internal metric
Sep 24 21:57:19 ecs-cluster-production-reserved-on-demand134 docker/sentry/log/f6e3b6c8feb5[4501]: ResponseError: MOVED 3474 172.31.103.59:6379
Sep 24 21:57:19 ecs-cluster-production-reserved-on-demand134 docker/sentry/log/f6e3b6c8feb5[4501]: 21:57:19 [ERROR] sentry.errors: Unable to incr internal metric
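For context, the Redis settings in play look roughly like the sketch below in `sentry.conf.py` (the hostname is a placeholder, not taken from this report). Sentry 8.x routes these hosts through `rb`, which shards client-side and does not follow Redis Cluster `MOVED` redirects, which matches the errors above.

```python
# sentry.conf.py fragment (sketch; hostname is a placeholder).
# Sentry 8.x hands these hosts to rb, which shards client-side and does not
# speak the Redis Cluster protocol, so a cluster-mode-enabled ElastiCache
# endpoint answers with MOVED redirects that surface as the errors above.
SENTRY_OPTIONS['redis.clusters'] = {
    'default': {
        'hosts': {
            0: {
                'host': 'my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com',
                'port': 6379,
            },
        },
    },
}
```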
Steps to Reproduce
- Create Redis in ElastiCache without encryption but with cluster mode enabled.
- Run Sentry in an ECS task.
- Look at the logs (a minimal reproduction sketch follows the notes below).
Also:
- The ECS task is listening:
root@ecs-cluster-production-reserved-on-demand134:/mnt/var/log/docker/sentry# docker ps | grep sentry
f6e3b6c8feb5 docker-registry.blueshift.vpc/sentry:8.21 "/entrypoint.sh ru..." 15 minutes ago Up 15 minutes 0.0.0.0:44225->9000/tcp ecs-sentry-5-sentry-d085b6d5cbffc9c75400
root@ecs-cluster-production-reserved-on-demand134:/mnt/var/log/docker/sentry# curl localhost:44225/_health/ ; echo
ok
- I tried `sentry repair` and it didn't seem to help.
- I see getsentry/sentry#3745, but since that was three years ago, opinions and code could have changed.
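A minimal sketch of reproducing the same failure outside of Sentry, assuming a cluster-mode-enabled ElastiCache endpoint (the hostname below is a placeholder) and the plain, non-cluster-aware redis-py client:

```python
# Reproduction sketch, independent of Sentry (endpoint is a placeholder).
# A non-cluster-aware client connects to a single node of the cluster; any
# key whose hash slot lives on another node comes back as a MOVED redirect,
# which redis-py surfaces as a ResponseError instead of following it.
import redis

r = redis.StrictRedis(
    host="my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com",
    port=6379,
)

try:
    r.incr("some-counter")
except redis.ResponseError as exc:
    print(exc)  # e.g. "MOVED 5945 172.31.103.250:6379"
```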
What you expected to happen
Sentry would start and talk to redis.
Possible Solution
I'm guessing that moving to non-clustered Redis might fix it, but then we lose reliability and scalability.
The most recent versions of Sentry have upgraded the relevant dependencies and should work with clustered Redis without issues.
Closing this issue due to staleness. Feel free to comment here if you think we should still work on this.
That's not true. Some components work with Redis Cluster, and others still only work with `rb`. This is still a work in progress and likely will be for a while.
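For anyone following along, a rough sketch of that distinction (placeholder hosts, not Sentry's actual wiring): `rb` shards keys client-side across independent Redis servers and never speaks the Redis Cluster protocol, while a cluster-aware client such as redis-py's `RedisCluster` discovers the slot map and follows `MOVED` redirects.

```python
# Sketch with placeholder hosts; this is not Sentry's actual wiring.
from rb import Cluster as RbCluster       # getsentry/rb, "routing blaster"
from redis.cluster import RedisCluster    # cluster-aware client in recent redis-py

# rb shards keys client-side across *independent* Redis servers. It never
# speaks the Redis Cluster protocol, so MOVED redirects from a real cluster
# surface as errors (as in the logs above).
rb_cluster = RbCluster(hosts={
    0: {'host': 'redis-a.internal', 'port': 6379},
    1: {'host': 'redis-b.internal', 'port': 6379},
})
with rb_cluster.map() as client:
    client.incr('counter')

# A cluster-aware client discovers the slot map and follows MOVED/ASK
# redirects, which is what a cluster-mode-enabled ElastiCache expects.
rc = RedisCluster(host='my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com',
                  port=6379)
rc.incr('counter')
```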
Reopening per @mattrobenolt's latest comment.
Should Status=On-Hold be added as an exclusion for the bot?
I'm no longer trying to use Sentry, but people who are would probably like it to be backed by a more reliable Redis setup.
Yeah, sorry for the bot noise @chicks-net. :-/
Thanks @chadwhitacre for taming the bots and cleaning up their mess. 🍀
Is there any progress or a plan to support Redis Cluster?
@nttdocomo this work is not planned for the current or the next quarter. If anyone is interested in giving this a shot, we may be able to assist, though.
@untitaker Will https://github.com/getsentry/sentry/pull/43436 close this?
It will work towards this goal, but there are more services to convert.