Unhandled error event: Error: All sentinels are unreachable. Retrying from scratch after 10ms.

Open IgorKuznetsov93 opened this issue 3 years ago • 13 comments

Hello. Can you help me with this issue? It is already spamming my logs. My connection options:

```js
{
  lazyConnect: true,
  sentinels: [
    { host: "xx.xx.x.41", port: 26379 },
    { host: "xx.xx.x.42", port: 26379 },
    { host: "xx.xx.x.43", port: 26379 },
  ],
  name: "mymaster",
}
```
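
For context, the `[ioredis] Unhandled error event:` prefix in the logs below means no `error` listener is registered on the client; ioredis prints that warning instead of crashing the process. A minimal sketch of creating the client with a listener attached (the handler body is illustrative):

```js
const Redis = require("ioredis");

const redis = new Redis({
  lazyConnect: true,
  sentinels: [
    { host: "xx.xx.x.41", port: 26379 },
    { host: "xx.xx.x.42", port: 26379 },
    { host: "xx.xx.x.43", port: 26379 },
  ],
  name: "mymaster",
});

// Without an "error" listener, ioredis logs "[ioredis] Unhandled error event".
redis.on("error", (err) => {
  console.error("redis error:", err.message);
});
```

Note that handling the event only silences the warning; the underlying sentinel reconnect loop is unchanged.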

Version: "ioredis": "4.27.6", Logs:

```
2021-07-25 17:44:07.988: 2021-07-25T14:44:07.987Z ioredis:redis status[xx.xx.x.41:6379]: ready -> close
2021-07-25 17:44:07.988: 2021-07-25T14:44:07.988Z ioredis:connection reconnect in 50ms
2021-07-25 17:44:07.988: 2021-07-25T14:44:07.988Z ioredis:redis status[xx.xx.x.41:6379]: close -> reconnecting
2021-07-25 17:44:07.988: 2021-07-25T14:44:07.988Z ioredis:redis status[xx.xx.x.41:6379]: ready -> close
2021-07-25 17:44:07.988: 2021-07-25T14:44:07.988Z ioredis:connection reconnect in 50ms
2021-07-25 17:44:07.988: 2021-07-25T14:44:07.988Z ioredis:redis status[xx.xx.x.41:6379]: close -> reconnecting
2021-07-25 17:44:08.038: 2021-07-25T14:44:08.038Z ioredis:redis status[xx.xx.x.41:6379]: reconnecting -> connecting
2021-07-25 17:44:08.038: 2021-07-25T14:44:08.038Z ioredis:SentinelConnector All sentinels are unreachable. Retrying from scratch after 10ms.
2021-07-25 17:44:08.039: [ioredis] Unhandled error event: Error: All sentinels are unreachable. Retrying from scratch after 10ms.
2021-07-25 17:44:08.039:     at SentinelConnector.<anonymous> (/opt/www/company/node_modules/ioredis/built/connectors/SentinelConnector/index.js:73:31)
2021-07-25 17:44:08.039:     at Generator.next (<anonymous>:null:null)
2021-07-25 17:44:08.039:     at /opt/www/company/node_modules/ioredis/built/connectors/SentinelConnector/index.js:8:71
2021-07-25 17:44:08.039:     at new Promise (<anonymous>:null:null)
2021-07-25 17:44:08.039:     at __awaiter (/opt/www/company/node_modules/ioredis/built/connectors/SentinelConnector/index.js:4:12)
2021-07-25 17:44:08.039:     at connectToNext (/opt/www/company/node_modules/ioredis/built/connectors/SentinelConnector/index.js:59:37)
2021-07-25 17:44:08.039:     at SentinelConnector.connect (/opt/www/company/node_modules/ioredis/built/connectors/SentinelConnector/index.js:128:16)
2021-07-25 17:44:08.039:     at /opt/www/company/node_modules/ioredis/built/redis/index.js:282:55
2021-07-25 17:44:08.039:     at new Promise (<anonymous>:null:null)
2021-07-25 17:44:08.039:     at Redis.connect (/opt/www/company/node_modules/ioredis/built/redis/index.js:258:21)
2021-07-25 17:44:08.039:     at Timeout._onTimeout (/opt/www/company/node_modules/ioredis/built/redis/event_handler.js:165:18)
2021-07-25 17:44:08.039:     at listOnTimeout (internal/timers.js:554:17)
2021-07-25 17:44:08.039:     at processTimers (internal/timers.js:497:7)
2021-07-25 17:44:08.039:
2021-07-25 17:44:08.040: 2021-07-25T14:44:08.040Z ioredis:redis status[xx.xx.x.41:6379]: reconnecting -> connecting
2021-07-25 17:44:08.040: 2021-07-25T14:44:08.040Z ioredis:SentinelConnector All sentinels are unreachable. Retrying from scratch after 10ms.
2021-07-25 17:44:08.041: [ioredis] Unhandled error event: Error: All sentinels are unreachable. Retrying from scratch after 10ms.
2021-07-25 17:44:08.041:     at SentinelConnector.<anonymous> (/opt/www/company/node_modules/ioredis/built/connectors/SentinelConnector/index.js:73:31)
2021-07-25 17:44:08.041:     at Generator.next (<anonymous>:null:null)
2021-07-25 17:44:08.041:     at /opt/www/company/node_modules/ioredis/built/connectors/SentinelConnector/index.js:8:71
2021-07-25 17:44:08.041:     at new Promise (<anonymous>:null:null)
2021-07-25 17:44:08.041:     at __awaiter (/opt/www/company/node_modules/ioredis/built/connectors/SentinelConnector/index.js:4:12)
2021-07-25 17:44:08.041:     at connectToNext (/opt/www/company/node_modules/ioredis/built/connectors/SentinelConnector/index.js:59:37)
2021-07-25 17:44:08.041:     at SentinelConnector.connect (/opt/www/company/node_modules/ioredis/built/connectors/SentinelConnector/index.js:128:16)
2021-07-25 17:44:08.041:     at /opt/www/company/node_modules/ioredis/built/redis/index.js:282:55
2021-07-25 17:44:08.041:     at new Promise (<anonymous>:null:null)
2021-07-25 17:44:08.041:     at Redis.connect (/opt/www/company/node_modules/ioredis/built/redis/index.js:258:21)
2021-07-25 17:44:08.041:     at Timeout._onTimeout (/opt/www/company/node_modules/ioredis/built/redis/event_handler.js:165:18)
2021-07-25 17:44:08.041:     at listOnTimeout (internal/timers.js:554:17)
2021-07-25 17:44:08.041:     at processTimers (internal/timers.js:497:7)
2021-07-25 17:44:08.041:
2021-07-25 17:44:08.050: 2021-07-25T14:44:08.049Z ioredis:redis status[xx.xx.x.41:26379]: [empty] -> connecting
2021-07-25 17:44:08.050: 2021-07-25T14:44:08.050Z ioredis:redis queue command[xx.xx.x.41:26379]: 0 -> sentinel([ 'get-master-addr-by-name', 'mymaster' ])
2021-07-25 17:44:08.050: 2021-07-25T14:44:08.050Z ioredis:redis status[xx.xx.x.41:26379]: [empty] -> connecting
2021-07-25 17:44:08.051: 2021-07-25T14:44:08.051Z ioredis:redis queue command[xx.xx.x.41:26379]: 0 -> sentinel([ 'get-master-addr-by-name', 'mymaster' ])
2021-07-25 17:44:08.051: 2021-07-25T14:44:08.051Z ioredis:redis status[xx.xx.x.41:26379]: connecting -> connect
2021-07-25 17:44:08.051: 2021-07-25T14:44:08.051Z ioredis:redis status[xx.xx.x.41:26379]: connect -> ready
2021-07-25 17:44:08.051: 2021-07-25T14:44:08.051Z ioredis:connection send 1 commands in offline queue
2021-07-25 17:44:08.052: 2021-07-25T14:44:08.051Z ioredis:redis write command[xx.xx.x.41:26379]: 0 -> sentinel([ 'get-master-addr-by-name', 'mymaster' ])
2021-07-25 17:44:08.052: 2021-07-25T14:44:08.052Z ioredis:redis status[xx.xx.x.41:26379]: connecting -> connect
2021-07-25 17:44:08.052: 2021-07-25T14:44:08.052Z ioredis:redis status[xx.xx.x.41:26379]: connect -> ready
2021-07-25 17:44:08.052: 2021-07-25T14:44:08.052Z ioredis:connection send 1 commands in offline queue
2021-07-25 17:44:08.052: 2021-07-25T14:44:08.052Z ioredis:redis write command[xx.xx.x.41:26379]: 0 -> sentinel([ 'get-master-addr-by-name', 'mymaster' ])
2021-07-25 17:44:08.052: 2021-07-25T14:44:08.052Z ioredis:redis write command[xx.xx.x.41:26379]: 0 -> sentinel([ 'sentinels', 'mymaster' ])
2021-07-25 17:44:08.053: 2021-07-25T14:44:08.053Z ioredis:redis write command[xx.xx.x.41:26379]: 0 -> sentinel([ 'sentinels', 'mymaster' ])
2021-07-25 17:44:08.053: 2021-07-25T14:44:08.053Z ioredis:SentinelConnector Updated internal sentinels: [{"host":"xx.xx.x.41","port":26379},{"host":"xx.xx.x.42","port":26379},{"host":"xx.xx.x.43","port":26379}] @1
2021-07-25 17:44:08.053: 2021-07-25T14:44:08.053Z ioredis:SentinelConnector resolved: xx.xx.x.41:6379 from sentinel xx.xx.x.41:26379
2021-07-25 17:44:08.054: 2021-07-25T14:44:08.054Z ioredis:SentinelConnector Updated internal sentinels: [{"host":"xx.xx.x.41","port":26379},{"host":"xx.xx.x.42","port":26379},{"host":"xx.xx.x.43","port":26379}] @1
2021-07-25 17:44:08.054: 2021-07-25T14:44:08.054Z ioredis:SentinelConnector resolved: xx.xx.x.41:6379 from sentinel xx.xx.x.41:26379
2021-07-25 17:44:08.055: 2021-07-25T14:44:08.055Z ioredis:redis status[xx.xx.x.41:6379]: connecting -> connect
2021-07-25 17:44:08.055: 2021-07-25T14:44:08.055Z ioredis:redis write command[xx.xx.x.41:6379]: 0 -> info([])
2021-07-25 17:44:08.055: 2021-07-25T14:44:08.055Z ioredis:redis status[xx.xx.x.41:26379]: ready -> close
2021-07-25 17:44:08.055: 2021-07-25T14:44:08.055Z ioredis:connection skip reconnecting since the connection is manually closed.
2021-07-25 17:44:08.055: 2021-07-25T14:44:08.055Z ioredis:redis status[xx.xx.x.41:26379]: close -> end
2021-07-25 17:44:08.056: 2021-07-25T14:44:08.056Z ioredis:redis status[xx.xx.x.41:6379]: connect -> ready
2021-07-25 17:44:08.057: 2021-07-25T14:44:08.057Z ioredis:redis status[xx.xx.x.41:6379]: connecting -> connect
2021-07-25 17:44:08.057: 2021-07-25T14:44:08.057Z ioredis:redis write command[xx.xx.x.41:6379]: 0 -> info([])
2021-07-25 17:44:08.057: 2021-07-25T14:44:08.057Z ioredis:redis status[xx.xx.x.41:26379]: ready -> close
2021-07-25 17:44:08.057: 2021-07-25T14:44:08.057Z ioredis:connection skip reconnecting since the connection is manually closed.
2021-07-25 17:44:08.058: 2021-07-25T14:44:08.057Z ioredis:redis status[xx.xx.x.41:26379]: close -> end
2021-07-25 17:44:08.059: 2021-07-25T14:44:08.059Z ioredis:redis status[xx.xx.x.41:6379]: connect -> ready
```
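
As far as I can tell, the "after 10ms" in the message comes from the default `sentinelRetryStrategy`, which is roughly `Math.min(times * 10, 1000)` milliseconds. A sketch of overriding it to back off more slowly (the delays are illustrative, not recommendations):

```js
const Redis = require("ioredis");

const redis = new Redis({
  lazyConnect: true,
  sentinels: [
    { host: "xx.xx.x.41", port: 26379 },
    { host: "xx.xx.x.42", port: 26379 },
    { host: "xx.xx.x.43", port: 26379 },
  ],
  name: "mymaster",
  // Called after every sentinel in the list has been tried and failed;
  // returning a number retries the whole list after that many milliseconds.
  sentinelRetryStrategy(times) {
    return Math.min(times * 500, 5000);
  },
});
```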

IgorKuznetsov93 avatar Jul 25 '21 15:07 IgorKuznetsov93

Hi @IgorKuznetsov93,

I'm running into the same issue as you; how did you fix it? My ioredis version is 4.27.4.

Thanks a lot

XieShangxu avatar Nov 22 '21 07:11 XieShangxu

> Hi @IgorKuznetsov93,
>
> I'm running into the same issue as you; how did you fix it? My ioredis version is 4.27.4.
>
> Thanks a lot

Hi! I thought I had solved the problem, but it turned out my logger had just broken. Unfortunately, the problem still remains. Reopening the issue.

IgorKuznetsov93 avatar Dec 14 '21 09:12 IgorKuznetsov93

Getting the same issue on the same ioredis version. Any update?

FilippGorbunov avatar Jan 05 '22 00:01 FilippGorbunov

I have the same issue. Please fix it.

holooloo avatar Feb 24 '22 17:02 holooloo

ioredis version: 4.28.5. Error line: node_modules\ioredis\built\redis\index.js:327:37

AmazingDevTeam avatar Mar 15 '22 15:03 AmazingDevTeam

I have the same problem.

zry754331875 avatar May 25 '22 03:05 zry754331875

Hi, has anyone fixed this issue yet? @zry754331875

Alsaheem avatar Jun 22 '22 14:06 Alsaheem

We're also seeing this issue, and quite a few years have gone by now. Any plans to fix it? We're using version 5.3.0.

olasundell avatar Jan 27 '23 13:01 olasundell

The error is expected if all your sentinels were unavailable at that time. If that's not your case, more information would help us debug it further.
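
For example, attaching listeners to the client's lifecycle events is usually enough to correlate the error with what the sockets were doing (a minimal sketch; the logging is illustrative):

```js
const Redis = require("ioredis");

const redis = new Redis({
  sentinels: [{ host: "127.0.0.1", port: 26379 }],
  name: "mymaster",
});

// Standard ioredis client events; logging each transition makes it easier
// to see what happened right before "All sentinels are unreachable".
for (const event of ["connect", "ready", "close", "reconnecting", "end"]) {
  redis.on(event, (...args) => console.log(`redis ${event}`, ...args));
}
redis.on("error", (err) => console.error("redis error:", err));
```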

luin avatar Jan 27 '23 14:01 luin

Our applications hit the same problem every 40 minutes, when we get that kind of error. We are using the Bitnami Helm chart with a 3-node Redis/Sentinel setup, and all sentinels are online and reachable via an internal Google load-balancer service. If you need more info, please let us know. Is there some kind of task that runs every 40 minutes, a connection reset, or something else? Thanks in advance.

ChrisNoSim avatar Feb 02 '23 16:02 ChrisNoSim

Our application is not even able to establish the connection the very first time. The setup is pretty simple: I am just testing on my local system with only one sentinel and one Redis instance (both running in Docker):

```
ioredis:redis status[127.0.0.1:26379]: wait -> connecting
ioredis:redis queue command[127.0.0.1:26379]: 0 -> sentinel([ 'get-master-addr-by-name', 'redis-queue' ])
ioredis:redis status[127.0.0.1:26379]: connecting -> connect
ioredis:redis status[127.0.0.1:26379]: connect -> ready
ioredis:connection send 1 commands in offline queue
ioredis:redis write command[127.0.0.1:26379]: 0 -> sentinel([ 'get-master-addr-by-name', 'redis-queue' ])
ioredis:redis write command[127.0.0.1:26379]: 0 -> sentinel([ 'sentinels', 'redis-queue' ])
ioredis:SentinelConnector Updated internal sentinels: [{"host":"127.0.0.1","port":26379}] @1
ioredis:SentinelConnector resolved: 192.168.80.2:6379 from sentinel 127.0.0.1:26379
ioredis:redis status[127.0.0.1:26379]: ready -> close
ioredis:connection skip reconnecting since the connection is manually closed.
ioredis:redis status[127.0.0.1:26379]: close -> end
ioredis:connection error: Error: connect ETIMEDOUT
    at Socket. ioredis/built/Redis.js:170:41)
    at Object.onceWrapper (node:events:627:28)
    at Socket.emit (node:events:513:28)
    at Socket._onTimeout (node:net:565:8)
    at listOnTimeout (node:internal/timers:564:17)
    at processTimers (node:internal/timers:507:7) {
  errorno: 'ETIMEDOUT',
  code: 'ETIMEDOUT',
  syscall: 'connect'
}
redis connection error!: Error: connect ETIMEDOUT
    at Socket. (node_modules/ioredis/built/Redis.js:170:41)
    at Object.onceWrapper (node:events:627:28)
    at Socket.emit (node:events:513:28)
    at Socket._onTimeout (node:net:565:8)
    at listOnTimeout (node:internal/timers:564:17)
    at processTimers (node:internal/timers:507:7) {
  errorno: 'ETIMEDOUT',
  code: 'ETIMEDOUT',
  syscall: 'connect'
}
```
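
One thing worth checking: the sentinel resolved the master to its container-internal address (192.168.80.2:6379 in the log above), which is typically not reachable from the host, so the connect to the master times out even though the sentinel itself responds. If that is the case here, ioredis's natMap option can rewrite the reported address. A minimal sketch — the host-side mapping is an assumption about how the master's port is published by Docker, and it's worth verifying that natMap applies to the sentinel connector in your ioredis version:

```js
const Redis = require("ioredis");

const redis = new Redis({
  sentinels: [{ host: "127.0.0.1", port: 26379 }],
  name: "redis-queue",
  // Rewrite the container-internal address the sentinel reports into an
  // address reachable from the host. Assumes the master's port 6379 is
  // published on localhost.
  natMap: {
    "192.168.80.2:6379": { host: "127.0.0.1", port: 6379 },
  },
});
```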

mahesh-av-qp avatar Jul 13 '23 14:07 mahesh-av-qp

I also observe random timeouts towards Redis, with a mix of "All sentinels are unreachable" and generic timeout errors being printed. The sentinels appear to be completely healthy.

The application performs around 1-2k queries per second and the timing of the error events seems completely random, but there are usually around 500-1000 such errors per day.

I have currently disabled retries via maxRetriesPerRequest: null, but will check whether enabling retries gets rid of the errors. I also tried enabling enableAutoPipelining, but it had no effect.
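
For anyone tuning the same knobs, a minimal sketch of the retry-related options mentioned above, with illustrative values rather than recommendations:

```js
const Redis = require("ioredis");

const redis = new Redis({
  sentinels: [{ host: "127.0.0.1", port: 26379 }],
  name: "mymaster",
  // null disables per-command retries entirely; a number caps how many
  // times a command is re-queued across reconnects before it fails.
  maxRetriesPerRequest: null,
  // Delay before reconnecting after a lost connection; returning null
  // would stop reconnecting altogether.
  retryStrategy(times) {
    return Math.min(times * 50, 2000);
  },
  enableAutoPipelining: true,
});
```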

silverwind avatar Jul 13 '23 16:07 silverwind

Problem still exists...

victor-develop avatar Feb 29 '24 09:02 victor-develop