KeyDelete not working in StackExchange.Redis
I am getting the below error when trying to delete keys on one Redis server, but it works as expected on another.
var keys = Server.Keys(0, pattern: keynamespace + "*", pageSize: PageSize, flags: CommandFlags.PreferSlave);
Database.CacheManager.KeyDelete(keys.ToArray(), flags: CommandFlags.DemandMaster);
StackExchange.Redis.RedisTimeoutException: Timeout performing UNLINK (10000ms), inst: 0, qs: 1, in: 0, serverEndpoint: Unspecified/MyServer:12320, mgr: 10 of 10 available, clientName: XXXXX, IOCP: (Busy=0,Free=1000,Min=4,Max=1000), WORKER: (Busy=1,Free=2046,Min=4,Max=2047), v: 2.0.505.18761 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)
at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
at StackExchange.Redis.RedisBase.ExecuteSync[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
at StackExchange.Redis.RedisDatabase.KeyDelete(RedisKey[] keys, CommandFlags flags)
at MyRepository.DeleteByNameSpace(Int32 db, String keynamespace) in Y:\BuildAgent\work\6aef9a28d000eb28\MyRepository.cs:line 43
I have tried changing the PageSize to different values like 500, 1000, and 10000, but it still throws the exception.
A few things:
- You'll want to be on a newer version of the library; there are a lot of goodies and better diagnostic data in an upgrade.
- We can't see what's happening here because `Database.CacheManager.KeyDelete` isn't our code. It looks like you're running KEYS or SCAN a lot (depending on the Redis server version), and those are among the most brutal commands, visible in SLOWLOG on the server.
Which version of Redis are you on? (We can tell if it's KEYS or not.) Can you check SLOWLOG for server stalls, and can you upgrade the library to a much newer release (preferably the latest)?
@NickCraver Sorry, that was a typo when posting the question. It is the `IDatabase.KeyDelete` method.
I am using version 2.0.505.
Which version of Redis server? In any case, you'll want to upgrade the client - there have been significant improvements in the last 4 years.
@NickCraver This code is used in multiple places, so it's a bit risky for me to upgrade the client at the moment. Is there a way to find the root cause of this issue, and can it be fixed with a code change?
How do I check the Redis server version? Sorry, I don't know how to do this.
I have got this information from the Infrastructure team.

@NickCraver fetching keys is very slow. Is there any better way to select keys by pattern?
If the library detects v5 (from the screenshot), it should be using SCAN, which is the best you can do for iterating the keyspace. Does the server expose SLOWLOG (perhaps on the UI) to see whether anything might have been stalling at that time?
@viveknuna - since you're using redis enterprise there should be a tab you can just look at the SLOWLOG from redis, as Marc said you can also just run SLOWLOG GET from the CLI to see what's going on.
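For reference, pulling the slow log from the CLI looks like this (the host/port below are just the endpoint from the error message above; the trailing count is how many recent entries to return, and timings are reported in microseconds):

```
redis-cli -h MyServer -p 12320 SLOWLOG GET 10
```

`SLOWLOG RESET` clears the log if you want a clean window around the failing delete.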
If I had to take a shot in the dark @NickCraver & @mgravell - `Database.CacheManager` is an `IDatabase`, so when you call the variadic version of UNLINK, you stumble into an O(N) operation. If the number of keys you want to delete is very large, I could easily see this becoming an expensive operation, even though the actual freeing of the memory is done in the background. Keep in mind all the keys have already been enumerated; it's choking on the delete.
Maybe try paginating over the keys to delete?
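A minimal sketch of that pagination, assuming `db` is the `IDatabase` from the question and `keys` is the already-enumerated key collection (`BatchSize` is a made-up tuning knob, not part of the library):

```csharp
using System.Linq;
using StackExchange.Redis;

const int BatchSize = 1000; // hypothetical page size; tune for your key sizes

RedisKey[] allKeys = keys.ToArray();
for (int i = 0; i < allKeys.Length; i += BatchSize)
{
    // One UNLINK per page instead of a single giant variadic call,
    // so no single command monopolizes the connection for 10 seconds.
    RedisKey[] page = allKeys.Skip(i).Take(BatchSize).ToArray();
    db.KeyDelete(page, CommandFlags.DemandMaster);
}
```

Each page round-trips independently, so a server stall only affects that page's timeout rather than one monolithic command.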
@slorello89 @mgravell @NickCraver
I was able to solve the issue by first fetching all keys without a pattern into a list and then filtering for keys that StartsWith the given pattern, so it no longer fails while fetching the records. There are around 100,000 keys matching this pattern.
For deleting, I delete in batches of 10,000 at a time, so there are 10 batches in total. This also works as expected.
Please let me know if there is any issue with this approach, or a more efficient way to do this.
I'd still probably apply a filter to the scan. I doubt the scan itself is having any negative impact on Redis; all you're doing is dramatically increasing the load on your app, which has to receive and parse all the key names, and on Redis, which has to write all those extra key names to the socket, lol. Given that it's only 100k keys you're deleting, the slowdown is probably physically getting that entire bulky UNLINK command to Redis, so paginating over the deletion is the way to go, with filtering in the scan :)
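Putting that together, a hedged sketch (assuming the same `Server` and `Database.CacheManager` objects as the question; `DeleteBatchSize` is an illustrative value, not from the original code): keep the MATCH pattern on the server-side SCAN so only matching names cross the wire, and buffer into modest delete batches:

```csharp
using System.Collections.Generic;
using StackExchange.Redis;

const int DeleteBatchSize = 1000; // illustrative, not from the original code

var buffer = new List<RedisKey>(DeleteBatchSize);
// On Redis >= 2.8 the client implements Keys() via SCAN ... MATCH,
// so the pattern filtering happens server-side, cursor page by page.
foreach (RedisKey key in Server.Keys(0, pattern: keynamespace + "*", pageSize: 1000))
{
    buffer.Add(key);
    if (buffer.Count == DeleteBatchSize)
    {
        Database.CacheManager.KeyDelete(buffer.ToArray(), CommandFlags.DemandMaster);
        buffer.Clear();
    }
}
if (buffer.Count > 0) // flush whatever is left in the final partial batch
{
    Database.CacheManager.KeyDelete(buffer.ToArray(), CommandFlags.DemandMaster);
}
```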
@slorello89 Filtering with a pattern was also giving a timeout exception, so I had to do it this way.
On this one, we strongly advise upgrading to a later version of the library which has some failover semantics for a few of these things to help, but key scans are going to remain inherently unfriendly to the data store you're trying to use here...that's just not the design of the platform. A new client will do better and have some better error handling, but the core issue is the data storage architecture that'd need to change to get over the root issues here.
Without more info it's hard to advise here - the advice above remains: this is an architecture problem manifesting as load, and it needs a restructure to avoid hitting scaling limits here.