Cluster scan command cannot remove the failed node
Hi @mp911de, I'm using the Lettuce (version 5.3.1) cluster SCAN command with ReadFrom.REPLICA_PREFERRED. The cluster has one master with one replica. When the replica fails, the SCAN command still uses the failed replica node.
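For context, the connection is set up roughly like this; it is only a minimal sketch, and the seed address and class name are placeholders, not the actual configuration:

import io.lettuce.core.ReadFrom;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands;

public class ClusterScanSetup {
    public static void main(String[] args) {
        // Seed node address is a placeholder; use your own cluster endpoint.
        RedisClusterClient clusterClient = RedisClusterClient.create("redis://127.0.0.1:6370");
        StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
        // Prefer replicas for read-only commands such as SCAN.
        connection.setReadFrom(ReadFrom.REPLICA_PREFERRED);
        RedisAdvancedClusterAsyncCommands<String, String> redisClusterAsyncCmds = connection.async();
        // ... run the SCAN loop here (see the snippet below) ...
        connection.close();
        clusterClient.shutdown();
    }
}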
// Scan loop running in a background thread; logger and redisClusterAsyncCmds are fields of the surrounding class.
new Thread(() -> {
    try {
        RedisFuture<KeyScanCursor<String>> scanCursor = null;
        while (true) {
            logger.info("do scan");
            try {
                ScanArgs args = ScanArgs.Builder.limit(500).match("*");
                if (scanCursor == null) {
                    // Start a new SCAN iteration.
                    scanCursor = redisClusterAsyncCmds.scan(args);
                } else {
                    // Continue from the previously returned cursor.
                    scanCursor = redisClusterAsyncCmds.scan(scanCursor.get(), args);
                }
                KeyScanCursor<String> cursor = scanCursor.get(1, TimeUnit.SECONDS);
                if (cursor.isFinished()) {
                    logger.info("do scan finish");
                    // Restart from the beginning after a short pause.
                    scanCursor = null;
                    Thread.sleep(500);
                }
            } catch (Throwable e) {
                scanCursor = null;
                logger.error("scan error", e);
            }
        }
    } catch (Exception e) {
        logger.error("xxxx", e);
    }
}).start();
2021-12-17 16:49:30,168 [Thread-180] [RedisChannelHandler.java:171] [DEBUG RedisChannelHandler] - dispatching command AsyncCommand [type=SCAN, output=KeyScanOutput [output=io.lettuce.core.KeyScanCursor@59474ac2, error='null'], commandType=io.lettuce.core.protocol.Command]
2021-12-17 16:49:30,168 [Thread-180] [DefaultEndpoint.java:288] [DEBUG DefaultEndpoint] - [channel=0x64315a40, /127.0.0.1:1422 -> /127.0.0.1:6370, epid=0x9] writeToDisconnectedBuffer() buffering (disconnected) command AsyncCommand [type=SCAN, output=KeyScanOutput [output=io.lettuce.core.KeyScanCursor@59474ac2, error='null'], commandType=io.lettuce.core.protocol.Command]
2021-12-17 16:49:30,168 [Thread-180] [DefaultEndpoint.java:158] [DEBUG DefaultEndpoint] - [channel=0x64315a40, /127.0.0.1:1422 -> /127.0.0.1:6370, epid=0x9] write() done
dbc17543fa76d9126e921b5fec0a12edbcf860a8 127.0.0.1:6375 slave e91ffc893e238f5894c0a352dda0d553f52a7c93 0 1639731114602 6 connected
e91ffc893e238f5894c0a352dda0d553f52a7c93 127.0.0.1:6372 master - 0 1639731116782 3 connected 10923-16383
ca645527f02d863c9e373a9bec27305580209ead 127.0.0.1:6370 slave,fail 04126108b170f2c5c41d8f0dcfe7a1a33b68907e 1639730924503 1639730918296 9 connected
cc267305a17e6a20c68affbce55dec33f5e2a779 127.0.0.1:6374 slave ac6ea4313e2e0fa1e4ad368a49b1aebb1d76b094 0 1639731113511 5 connected
04126108b170f2c5c41d8f0dcfe7a1a33b68907e 127.0.0.1:6373 master - 0 1639731115692 9 connected 0-5460
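The same fail flag can be observed from the client side by inspecting the cluster partitions; a small sketch (the seed address is a placeholder):

import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.models.partitions.RedisClusterNode;

public class PartitionsDump {
    public static void main(String[] args) {
        // Seed node address is a placeholder; use your own cluster endpoint.
        RedisClusterClient clusterClient = RedisClusterClient.create("redis://127.0.0.1:6370");
        // Each partition entry carries the node flags from the cluster topology,
        // including FAIL / EVENTUAL_FAIL for nodes marked as failed.
        clusterClient.getPartitions().forEach(node -> {
            boolean failed = node.is(RedisClusterNode.NodeFlag.FAIL)
                    || node.is(RedisClusterNode.NodeFlag.EVENTUAL_FAIL);
            System.out.println(node.getNodeId() + " " + node.getUri()
                    + " role=" + node.getRole() + " failed=" + failed);
        });
        clusterClient.shutdown();
    }
}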
You can solve this once 6.1.6 is released via the solution in #1942, by filtering out failed nodes.
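For reference, a rough sketch of what that filtering could look like once 6.1.6 is available, assuming the node-filter option on ClusterClientOptions introduced for #1942 (check the exact API against the release notes):

import java.util.function.Predicate;
import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.models.partitions.RedisClusterNode;

public class FilteredClusterSetup {
    public static void main(String[] args) {
        // Seed node address is a placeholder; use your own cluster endpoint.
        RedisClusterClient clusterClient = RedisClusterClient.create("redis://127.0.0.1:6370");

        // Exclude nodes that the topology marks as failed or unreachable,
        // so reads such as SCAN are no longer routed to them.
        Predicate<RedisClusterNode> healthyNodes = node ->
                !(node.is(RedisClusterNode.NodeFlag.FAIL)
                        || node.is(RedisClusterNode.NodeFlag.EVENTUAL_FAIL)
                        || node.is(RedisClusterNode.NodeFlag.NOADDR));

        clusterClient.setOptions(ClusterClientOptions.builder()
                .nodeFilter(healthyNodes)
                .build());
    }
}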
OK, thank you.