Pipelines.Sockets.Unofficial

High percentage of threads blocked on DedicatedThreadPoolPipeScheduler.RunWorkLoop

Open robertmujica opened this issue 5 years ago • 52 comments

Hi,

We are having an issue in prod where we see a high percentage / number of threads in a waiting state; when looking at the full .NET call stack, it points to "Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.RunWorkLoop()+99".

We were using version 1.0.9, and I came across this article, https://blog.marcgravell.com/2019/02/fun-with-spiral-of-death.html, where you explain an issue you fixed in 1.1.*, which was very similar to what we were seeing in PROD. So I went ahead, raised a PR upgrading this NuGet to 2.0.7, and at the same time updated StackExchange.Redis from 2.0.519 to 2.0.571.

However, after deploying this new version and doing more testing, we are still seeing a high number of blocked threads - far fewer than before, of course, but it is still happening.

These are the call stack details; our app is running on .NET Framework 4.7.2. If you need further details, let me know.

[[GCFrame]]
[[HelperMethodFrame_1OBJ] (System.Threading.Monitor.ObjWait)]
System.Threading.Monitor.ObjWait(Boolean, Int32, System.Object)
Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.RunWorkLoop()+99
mscorlib_ni!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)+163
mscorlib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)+14
mscorlib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)+52
mscorlib_ni!System.Threading.ThreadHelper.ThreadStart(System.Object)+5c
[[GCFrame]]
[[DebuggerU2MCatchHandlerFrame]]
[[ContextTransitionFrame]]
[[DebuggerU2MCatchHandlerFrame]]

ntdll!NtWaitForMultipleObjects+14
KERNELBASE!WaitForMultipleObjectsEx+f9
clr!WaitForMultipleObjectsEx_SO_TOLERANT+62
clr!Thread::DoAppropriateWaitWorker+1e4
clr!Thread::DoAppropriateWait+7d
clr!CLREventBase::WaitEx+c4
clr!Thread::Block+27
clr!SyncBlock::Wait+19d
[[GCFrame]]
clr!ObjectNative::WaitTimeout+e1
[[HelperMethodFrame_1OBJ] (System.Threading.Monitor.ObjWait)]
System.Threading.Monitor.ObjWait(Boolean, Int32, System.Object)
Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.RunWorkLoop()+99
mscorlib_ni!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)+163
mscorlib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)+14
mscorlib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)+52
mscorlib_ni!System.Threading.ThreadHelper.ThreadStart(System.Object)+5c
clr!CallDescrWorkerInternal+83
clr!CallDescrWorkerWithHandler+4e
clr!MethodDescCallSite::CallTargetWorker+f8
clr!ThreadNative::KickOffThread_Worker+109
[[GCFrame]]
clr!ManagedThreadBase_DispatchInner+39
clr!ManagedThreadBase_DispatchMiddle+6c
clr!ManagedThreadBase_DispatchOuter+75
[[DebuggerU2MCatchHandlerFrame]]
clr!ManagedThreadBase_DispatchInCorrectAD+15
clr!Thread::DoADCallBack+278
[[ContextTransitionFrame]]
clr!ManagedThreadBase_DispatchInner+2fc3
clr!ManagedThreadBase_DispatchMiddle+6c
clr!ManagedThreadBase_DispatchOuter+75
[[DebuggerU2MCatchHandlerFrame]]
clr!ManagedThreadBase_FullTransitionWithAD+2f
clr!ThreadNative::KickOffThread+db
clr!Thread::intermediateThreadProc+86
kernel32!BaseThreadInitThunk+14
ntdll!RtlUserThreadStart+21


robertmujica avatar Mar 20 '19 20:03 robertmujica

This does not necessarily indicate a problem. Basically, to prevent thread-pool starvation issues from impacting the IO cycle, we use a dedicated thread pool whose threads are standing ready to process IO via the pipelines API. If there's nothing for those threads to do at the time, they'll sit in a Wait, waiting for a pulse indicating that there's something for them to do.

The default number of threads we spin up here is: 10

So: are you actually seeing a problem here? Or are you just seeing these threads sat quietly, minding their own business?

Note: this isn't 10 per multiplexer/connection/etc; they are shared.

mgravell avatar Mar 20 '19 20:03 mgravell

BTW: an option here is to pass in a specific SocketManager; by default it uses a shared SocketManager, which uses 10 threads; but an individual SocketManager only holds 5 (the shared instance assumes it may see more activity). You should not, however, create too many SocketManager instances.
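For reference, a minimal sketch of that option (illustrative only; the class and names here are placeholders, and it assumes SE.Redis 2.x, where ConfigurationOptions exposes a SocketManager property):

```csharp
using StackExchange.Redis;

public static class RedisConnection
{
    // One dedicated SocketManager for the whole app; do NOT create one per connection.
    private static readonly SocketManager Manager = new SocketManager("MyAppRedis");

    public static ConnectionMultiplexer Connect(string configuration)
    {
        var options = ConfigurationOptions.Parse(configuration);
        options.SocketManager = Manager; // otherwise the shared (10-thread) manager is used
        return ConnectionMultiplexer.Connect(options);
    }
}
```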

mgravell avatar Mar 20 '19 20:03 mgravell

Yes, that is what seems to be happening: these threads are waiting in Monitor.Wait to re-acquire a lock they released, and it seems Monitor.Pulse or Monitor.PulseAll is never called. When I looked at the call stacks, they all point to this call:

Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.RunWorkLoop()

So it seems that at some point the server runs out of threads and starts seeing request errors, slow response times, etc.

Regarding your suggestion above about SocketManager, could you point me to some sample code I can look at, to see if this is an option for us?

robertmujica avatar Mar 20 '19 21:03 robertmujica

The pulse should be invoked whenever at least one worker is waiting, which is the case here. I do not believe the server is running out of threads, but I'm more than happy to investigate - can you please share what symptoms you are seeing that make you think this is related? There being worker threads waiting for work does not mean anything - that is expected. It looks like I don't currently expose the queue length, but I expect it is zero; I've added a tweak (see linked commit above) to help me expose this, but it'll need a new client build to show it.
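To illustrate why idle workers show up as "blocked" in a dump, here is a rough sketch (not the actual library source, just the general wait/pulse shape of a dedicated work loop):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class TinyWorkLoop
{
    private readonly object _gate = new object();
    private readonly Queue<Action> _queue = new Queue<Action>();

    public void Enqueue(Action work)
    {
        lock (_gate)
        {
            _queue.Enqueue(work);
            Monitor.Pulse(_gate); // wake one waiting worker
        }
    }

    public void RunWorkLoop() // each dedicated thread runs this
    {
        while (true)
        {
            Action next;
            lock (_gate)
            {
                while (_queue.Count == 0)
                {
                    Monitor.Wait(_gate); // idle: this is the frame the dumps show
                }
                next = _queue.Dequeue();
            }
            next();
        }
    }
}
```

When the queue is empty, every worker sits in Monitor.Wait; that by itself is not a fault.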

mgravell avatar Mar 20 '19 21:03 mgravell

Thanks for the quick response on this issue. I will have new dump files tomorrow to do further analysis, so I will share more details. But so far I share your view that this, by itself, should not have a big impact on overall server performance. Yes, I can see your commit.

robertmujica avatar Mar 20 '19 23:03 robertmujica

I also observed similar behavior. In one of the posts above you mentioned that the default thread count is 10. I see in the process dump that around 23 threads are blocked here and in another dump 14 threads. Is it expected that the thread count is increased like this?

bbijesh avatar Mar 21 '19 14:03 bbijesh

That is the exact issue we are having. We don't see it as normal.

We are doing more testing / analysis today with and without this NuGet.


robertmujica avatar Mar 21 '19 14:03 robertmujica

Without this NuGet I don't expect that kind of issue, as the dedicated thread pool itself wouldn't exist. Anyway, please keep this thread updated. Thanks!

bbijesh avatar Mar 21 '19 14:03 bbijesh

Just to add another data point to this: we just upgraded from SE.Redis 1.2.6 to 2.0.571. All was fine in our tests and in our QA env, but when it hit prod, our worker threads, which use Redis a lot and thus place it under fairly high load, randomly stop processing anything until the process is restarted, i.e. the CPU on the boxes drops to virtually 0 and they just sit there not doing anything.

There were no other code changes other than the Redis client upgrade. Taking a memory dump when it happens, I see the same 10 threads just waiting in RunWorkLoop. I understand that is normal when there is nothing to be done, but the question is why does work suddenly cease? Is there a deadlock happening somewhere?

The dump shows no other abnormalities, so I'm really scratching my head here. We are seeing a handful of RedisTimeoutExceptions since the upgrade, but I recall @mgravell saying that's likely because the 2.x branch now knows how to report on timeouts in async scenarios, whereas 1.x didn't, so those just may be a red herring.

mscrivo avatar Mar 21 '19 16:03 mscrivo

One more detail regarding our implementation that we believe may be causing this issue of leaving many threads in a waiting state: we are subscribing to 8 channels (one per app).

We also have two tasks: one running every 15 seconds in each app to save hash values to the Redis cache, and a second running every 30 seconds to sync the hashes across apps, again using the Redis cache.

Do you think this sort of logic may be the cause of this issue? We are also thinking about simplifying the logic to use one channel and include the application name as part of the message, so it makes better use of thread resources (a rough sketch of that idea is below).
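A rough sketch of that idea (the channel name and payload format are hypothetical, just to show the shape):

```csharp
using StackExchange.Redis;

public static class HashSyncPubSub
{
    private const string Channel = "hash-sync"; // hypothetical shared channel name

    public static void Start(ConnectionMultiplexer muxer, string appName)
    {
        var sub = muxer.GetSubscriber();
        sub.Subscribe(Channel, (channel, message) =>
        {
            // payload format: "<appName>|<hash>"
            var parts = ((string)message).Split('|');
            if (parts[0] != appName)
            {
                // apply the hash update published by another app
            }
        });
    }

    public static void PublishHash(ConnectionMultiplexer muxer, string appName, string hash)
    {
        muxer.GetSubscriber().Publish(Channel, $"{appName}|{hash}");
    }
}
```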

Any thoughts?

robertmujica avatar Mar 21 '19 17:03 robertmujica

The number of channels etc shouldn't be a factor here. If folks are saying there is something odd happening, then I will of course investigate, but: I would really like to focus on the symptoms rather than the investigations. So: I am really really interested in this, but rather than talking about people seeing waiting threads (which does not mean much): I'd really love to hear more about the performance problems and scenarios that you're seeing. Does that make sense?

mgravell avatar Mar 21 '19 18:03 mgravell

Basically:

  • Over time, response times increase; e.g. our website should respond within 2 seconds, but it starts taking 5, 9, or more seconds.

  • Eventually it stops processing requests altogether and the web process even crashes; at that point we have taken a manual dump snapshot of the server and found that high number of threads in a waiting state.

The following is an extract of what we're seeing in those dump files when the issue happens:

[screenshot: dump analysis summary showing blocked threads]

Notice that 10.64% of threads are blocked; clicking on any of them shows a .NET call stack like this:

[screenshot: .NET call stack ending in DedicatedThreadPoolPipeScheduler.RunWorkLoop()]

Note that this was taken when we still had version 1.0.9 in PROD.

robertmujica avatar Mar 21 '19 19:03 robertmujica

Essentially the same symptoms here: things just stop processing after a while, but the memory dumps don't indicate why, other than the waiting threads @robertmujica points out. We have some sync-over-async code in the Redis path, so we're working on resolving that. Will report back tomorrow if it makes any difference. Open to suggestions on how to dig deeper here.

For reference, our code is running on .NET Core 2.2 on Windows Server 2019 VMs against a 3-node Redis cluster on version 5.0.3.
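For anyone following along, this is the kind of sync-over-async pattern being referred to (a hypothetical example, not our actual code), versus the fully async alternative:

```csharp
using System.Threading.Tasks;
using StackExchange.Redis;

static class CacheLookup
{
    // Antipattern: blocks a thread-pool thread while the async call completes.
    public static string GetSync(IDatabase db, string key)
        => db.StringGetAsync(key).Result;

    // Preferred: stay async end-to-end.
    public static async Task<string> GetAsync(IDatabase db, string key)
        => await db.StringGetAsync(key).ConfigureAwait(false);
}
```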

mscrivo avatar Mar 21 '19 20:03 mscrivo

[screenshot: https://user-images.githubusercontent.com/19475646/54797611-da145f00-4c7a-11e9-9091-11213b750624.png]

31% of threads blocked is quite high. We are running on an Azure Web App, .NET Framework 4.7.2.

bbijesh avatar Mar 22 '19 02:03 bbijesh

Agreed. I would expect no more than 10, and 10 would not indicate a problem. I will need to investigate, which we will do.


mgravell avatar Mar 22 '19 02:03 mgravell

We took more dumps and see that 20 threads are waiting at Pipelines_Sockets_Unofficial!Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.RunWorkLoop()+84. It looks like 2 thread pools are getting created, then.

bbijesh avatar Mar 22 '19 04:03 bbijesh

Analyzing some of the recent logs (from 22nd of March), I can see the following entries:

Type: System.InvalidOperationException
Message: Reading is not allowed after reader was completed.
Stack:
[HelperMethodFrame]
System.IO.Pipelines.ThrowHelper.ThrowInvalidOperationException_NoReadingAllowed()
System.IO.Pipelines.Pipe.AdvanceReader(System.SequencePosition ByRef, System.SequencePosition ByRef)
System.IO.Pipelines.Pipe+DefaultPipeReader.AdvanceTo(System.SequencePosition, System.SequencePosition)
StackExchange.Redis.PhysicalConnection+<ReadFromPipe>d__108.MoveNext()
System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.Run()
Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.Execute(System.Action`1<System.Object>, System.Object)
Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.RunWorkLoop()
Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler+<>c.<.cctor>b__41_0(System.Object)
System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
System.Threading.ThreadHelper.ThreadStart(System.Object)
[GCFrame]
[DebuggerU2MCatchHandlerFrame]
[ContextTransitionFrame]
[DebuggerU2MCatchHandlerFrame]

Now I'm wondering if this is a side effect of network connectivity issues?

robertmujica avatar Mar 24 '19 22:03 robertmujica

Reporting back as I said I would. Removing any remaining mixture of sync/async code in this path didn't seem to fix the issue we're having. I'm considering moving back to 1.2.6 just to verify the symptoms we're seeing are in fact caused by the new version.

Looking closer at our logs to try to establish any patterns, what I'm seeing is that right before a server goes dead (i.e. stops doing any work), I see a handful of RedisTimeoutExceptions like these:

Exception="StackExchange.Redis.RedisTimeoutException" ExceptionDetail="StackExchange.Redis.RedisTimeoutException: Timeout awaiting response (outbound=0KiB, inbound=0KiB, 5235ms elapsed, timeout is 5000ms), inst: 0, qu: 0, qs: 1, in: 0, serverEndpoint: , mgr: 7 of 10 available, clientName: ASWorker9, IOCP: (Busy=5,Free=995,Min=1,Max=1000), WORKER: (Busy=10,Free=32757,Min=1,Max=32767), v: 2.0.571.20511 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)
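(Purely as a hedged aside, not advice from this thread: the Timeouts article linked in that message suggests comparing the Busy and Min values it reports - here WORKER shows Busy=10 with Min=1 - because once Busy exceeds Min, the thread pool injects new threads slowly. A sketch of inspecting, and optionally raising, the minimums:)

```csharp
using System.Threading;

static class ThreadPoolDiagnostics
{
    public static void DumpAndRaiseMinimums()
    {
        ThreadPool.GetMinThreads(out int minWorker, out int minIocp);
        ThreadPool.GetAvailableThreads(out int availWorker, out int availIocp);
        System.Console.WriteLine($"min worker={minWorker}, min IOCP={minIocp}, " +
                                 $"available worker={availWorker}, available IOCP={availIocp}");

        // Illustrative values only; the right minimums depend entirely on the workload.
        ThreadPool.SetMinThreads(workerThreads: 200, completionPortThreads: minIocp);
    }
}
```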

mscrivo avatar Mar 25 '19 13:03 mscrivo

We have switched back to 1.2.6 and the behavior is much better.

bbijesh avatar Mar 25 '19 15:03 bbijesh

@robertmujica curious if you tried upgrading to 2.0.588? We went back to 1.2.6 for a while after trying 571 (and it totally made the problem go away), then tried 588, and immediately upon deployment last night the problem of dead worker processes came back. Dumps are showing that 43% of threads are blocked, with one in particular that uses SE.Redis sitting in a SpinWait.SpinOnce, which I don't recall seeing before.

mscrivo avatar Apr 04 '19 14:04 mscrivo

@mscrivo if you're seeing a significant amount of SpinOnce - presumably because you're using a lot of "sync" paths in very high congestion: that changes in 593, which I've pushed to nuget (note to self: go do a release note).

the presence of a small number of threads waiting by themselves does not indicate a problem; those aren't pool threads - they are reserve threads intended to isolate the IO code from the peaks and troughs of the regular thread-pool.

mgravell avatar Apr 04 '19 15:04 mgravell

(you can also get the SpinOnce fix simply by taking pipelines.sockets.unofficial 2.0.20; all SE.Redis does here is take the later pipelines build)

mgravell avatar Apr 04 '19 15:04 mgravell

The SpinOnce is coming from .NET's BlockingCollection, which uses a SemaphoreSlim ... we use BlockingCollections quite a bit in our workflow. Could that be a problem?

Happy to share a mem dump or the debug diag report with you if you'd like to take a look? I'm admittedly not great at understanding .NET's locking mechanisms.

mscrivo avatar Apr 04 '19 15:04 mscrivo

ok will try that new build, thanks!

mscrivo avatar Apr 04 '19 15:04 mscrivo

Neither SE.Redis nor Pipelines.Sockets.Unofficial uses BlockingCollection - however, Pipelines.Sockets.Unofficial does use SpinOnce() directly in the MutexSlim code; so - if the stalls you're seeing definitely have BlockingCollection in them, then I think that's coming from your code.

As for "Could that be a problem?" - all I can say is "hell yeah" - SemaphoreSlim has a landmine embedded in it: https://blog.marcgravell.com/2019/02/fun-with-spiral-of-death.html
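A deliberately simplified illustration of that landmine (an assumed example, not code from this thread): sync waiters block thread-pool threads, while handing the semaphore to an async waiter still needs a free thread-pool thread to run its continuation - under starvation the two starve each other.

```csharp
using System.Threading;
using System.Threading.Tasks;

static class GatedResource
{
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1);

    // Sync path: blocks a thread-pool thread while it waits for the gate.
    public static void UseBlocking()
    {
        Gate.Wait();
        try { /* touch the shared resource */ }
        finally { Gate.Release(); }
    }

    // Async path: once Release() grants the gate to this waiter, the continuation
    // after the await still needs a free thread-pool thread to run on.
    public static async Task UseAsync()
    {
        await Gate.WaitAsync().ConfigureAwait(false);
        try { /* touch the shared resource */ }
        finally { Gate.Release(); }
    }
}
```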

mgravell avatar Apr 04 '19 15:04 mgravell

@mscrivo no, we kept 2.0.571. After further analysis I found a StackOverflowException and an OutOfMemoryException in the dump file, which we believe are the actual root cause of the issue we have. After I fixed the offending lines and performance testing was done, everything seemed to be working OK, so next week we are going live with this change (that is the real test). I will update this thread once I have some news from PROD.

robertmujica avatar Apr 04 '19 15:04 robertmujica

One more thing we did try was to minimize the number of channels we use for pub/sub, but after some analysis we found no difference between listening on 8 channels or two; the number of threads was pretty much the same, so we discarded that spike.

robertmujica avatar Apr 04 '19 15:04 robertmujica

thanks @mgravell. I did read your blog post when you posted it. It was very informative. I guess we have to look for an alternative to using BlockingCollections given we can't control what locking mechanism it uses.

mscrivo avatar Apr 04 '19 15:04 mscrivo

Ah, right - yeah, there's an important update here that I had shared on the SE.Redis repos but hadn't posted here:

there was a critical race condition in MutexSlim - occasionally it screwed up and failed to notify the next competitor, which could result in perceived locks and timeouts. This is now addressed, and was the big change in SE.Redis 588; there is also a more recent 593, which makes the spin-lock a little less aggressive when contested by more than one competitor.

mgravell avatar Apr 04 '19 15:04 mgravell

@mscrivo well, I wouldn't get too excited there without good reason, but... if you're sure you're seeing blocks from BlockingCollection then yeah. I don't know your use-case, but in most cases I've found that less is more - i.e. simple primitives like lock can be very effective in the majority of cases (where you aren't doing exotic things).

mgravell avatar Apr 04 '19 15:04 mgravell