
Exception: Timeout performing GET (5000ms)

Open jkishore443 opened this issue 2 years ago • 2 comments

Most of the time we are getting the error below. Does anybody have a solution?

Exception: Timeout performing GET (5000ms), next: GET ABCD, inst: 12, qu: 0, qs: 0, aw: False, rs: ReadAsync, ws: Idle, in: 22649, serverEndpoint: abcd, mc: 1/1/0, mgr: 10 of 10 available, clientName: asdssdsa, IOCP: (Busy=0,Free=1000,Min=1,Max=1000), WORKER: (Busy=8,Free=32759,Min=1,Max=32767), v: 2.2.88.56325 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)

jkishore443 avatar Apr 05 '22 11:04 jkishore443

How big is the key you're fetching in this case - do you have the size handy? The error stats lead me to believe something sizable is on the pipe (rather than worker overload).

NickCraver avatar Apr 05 '22 12:04 NickCraver

@jkishore443 This is just my educated guess, but based on my recent experience with this sort of timeout, I'll point out a few things:

-> ws: Idle, in: 22649, rs: ReadAsync — There is data from Redis available in the buffer that needs to be processed, and the library is in the process of reading it, but there's a large backlog of data waiting. It's not waiting on Redis; it's waiting on your system to process the data.

-> inst: 12, qu: 0, qs: 0 — 12 commands have been issued since the last heartbeat (inst), but there are no commands that have been sent and are awaiting a response from Redis (qs), and no commands queued waiting to be written to Redis (qu).

There are no busy IOCP threads: (Busy=0,Free=1000,Min=1,Max=1000)

There are currently 8 active worker threads on the .NET global worker thread pool: (Busy=8,Free=32759,Min=1,Max=32767)

-> Based on the Min=1, I'm assuming this is a very small machine with 1 CPU/vCPU? One potential problem is that every time .NET needs to create a new thread in the global .NET thread pool (beyond the minimum value), there is typically a delay. You're currently at 8 busy worker threads, so a lot of worker threads are vying for the CPU, and 7 of them each had to wait up to 500ms before even starting.

From the Timeout page:

For these .NET-provided global thread pools: once the number of existing (busy) threads hits the “minimum” number of threads, the ThreadPool will throttle the rate at which it injects new threads to one thread per 500 milliseconds. This means that if your system gets a burst of work needing an IOCP thread, it will process that work very quickly. However, if the burst of work is more than the configured “Minimum” setting, there will be some delay in processing some of the work as the ThreadPool waits for one of two things to happen: 1. An existing thread becomes free to process the work 2. No existing thread becomes free for 500ms, so a new thread is created.

I would check out the CPU utilization of the system, and if you're running a multi-core system, I'd look into why your minimum number of worker threads is 1.
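If raising the minimum turns out to be the right fix, one way to do it is at application startup via `ThreadPool.SetMinThreads`. A minimal sketch (the values 16/16 are purely illustrative; size them to your workload and core count):

```csharp
using System;
using System.Threading;

class ThreadPoolTuning
{
    static void Main()
    {
        // Inspect the current minimums.
        ThreadPool.GetMinThreads(out int minWorker, out int minIocp);
        Console.WriteLine($"Before: minWorker={minWorker}, minIocp={minIocp}");

        // Raise the minimums so a burst of work doesn't pay the
        // ~500ms-per-new-thread injection delay described above.
        ThreadPool.SetMinThreads(workerThreads: 16, completionPortThreads: 16);

        ThreadPool.GetMinThreads(out minWorker, out minIocp);
        Console.WriteLine($"After: minWorker={minWorker}, minIocp={minIocp}");
    }
}
```

The same can also be set without code via `<ThreadPoolMinThreads>` in the project file or the `DOTNET_ThreadPool_MinThreads` environment variable on newer runtimes.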

dusty9023 avatar Apr 06 '22 00:04 dusty9023

Need more data to help here - closing out to cleanup but happy to chat more with some details.

NickCraver avatar Aug 21 '22 14:08 NickCraver

Hi. I am facing the same issue, and it's quite sporadic. Our key is a composite with a prefix of up to ~10 chars + an MD5 hash, e.g. ca:v1:w:1:c318022751904ba296ccdeb39bd6a9e6. The value, though, can be bigger; I can't see into it unfortunately, but let's say it can vary from 1-10 KB up to tens of MB. It can take time to send it down the wire to the service sometimes.

StackExchange.Redis.RedisTimeoutException: Timeout awaiting response (outbound=247KiB, inbound=0KiB, 6264ms elapsed, timeout is 5000ms), inst: 0, qu: 0, qs: 5, aw: False, rs: ReadAsync, ws: Idle, in: 80, in-pipe: 0, out-pipe: 0, serverEndpoint: Unspecified/[REDACTED].com:xxxx, mgr: 10 of 10 available, clientName: [REDACTED SERVICE NAME], IOCP: (Busy=0,Free=1000,Min=4,Max=1000), WORKER: (Busy=7,Free=32760,Min=4,Max=32767), v: 2.0.593.37019 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)
  ?, in async Task<RedisValue[]> RedisExtensions.HashMemberGetAsync(IDatabase cache, string key, params string[] members)
  ?, in async Task<byte[]> RedisCache.GetAndRefreshAsync(string key, bool getData, CancellationToken token)
  ?, in async Task<byte[]> RedisCache.GetAsync(string key, CancellationToken token)
  ?, in async Task<string> DistributedCacheExtensions.GetStringAsync(IDistributedCache cache, string key, CancellationToken token) ... [REDACTED APP STACK]

Any help?

In our logs we noticed that the timeout duration goes up to 7 seconds, never over that. I was wondering (can't find it via Google!?) if we can increase this timeout to e.g. 7s, which would cut down our error rate immensely?

vmandic avatar Nov 02 '22 14:11 vmandic

Any update? This is a serious issue, and it is not related to performance: even when the client and server are not busy, we get this error.

SH2015 avatar Feb 06 '23 03:02 SH2015

@vmandic In your case that seems like massive keys (given the outbound key and your description), likely overwhelming outbound bandwidth in spikes.

@SH2015 There isn't one single cause of timeouts; it's almost always environmental, and most cases are unique. If you're having trouble, please open an issue with your error message, which intentionally contains lots of numbers and details to help diagnose the problem.

NickCraver avatar Feb 12 '23 14:02 NickCraver

Yup @NickCraver, thx. We also do some rate limiting with our instance, so it's heavily trafficked. We opted to increase the timeout to 8s in our case, which dropped these errors a lot, but we are still facing them. An important note: we should upgrade the library to the latest version possible.

vmandic avatar Mar 09 '23 11:03 vmandic

> Yup @NickCraver thx, we do some rate-limiting also with our instance so its trafficked heavy, we opted to increasing the timeout to 8s in our case that dropped these alot, but we are still facing them, and important note, we should upg the lib to latest possible.

How did you increase the Redis timeout?

par3 avatar Feb 06 '24 08:02 par3

@par3 In the Redis connection string, depending on the client library. For ServiceStack: SendTimeout=8000&ReceiveTimeout=8000. For StackExchange.Redis: syncTimeout=8000.
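For StackExchange.Redis, the same setting can also be applied programmatically via `ConfigurationOptions` instead of the connection string. A sketch (the endpoint name is a placeholder, and the 8000ms values are just the ones discussed above):

```csharp
using StackExchange.Redis;

// Raise the operation timeouts from the 5000ms default to 8000ms.
var options = new ConfigurationOptions
{
    EndPoints = { "myredis.example.com:6379" }, // placeholder endpoint
    SyncTimeout = 8000,   // ms; applies to synchronous operations
    AsyncTimeout = 8000,  // ms; applies to async operations (2.x)
};
var muxer = ConnectionMultiplexer.Connect(options);

// Equivalent connection-string form (StackExchange.Redis uses commas):
// "myredis.example.com:6379,syncTimeout=8000,asyncTimeout=8000"
```

Note that raising the timeout only hides the symptom; it's worth also checking payload sizes, bandwidth, and thread-pool settings as discussed earlier in this thread.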

vmandic avatar Feb 15 '24 17:02 vmandic