
Goroutines may be prioritized incorrectly

eytan-avisror opened this issue 4 years ago · 2 comments

We noticed that during a huge spike of terminating instances (150 instances, followed by another 150 a few minutes later), we may not have enough goroutines available, or we do not prioritize some goroutines efficiently.

The result was that an event that had been received only started processing 5-6 minutes later, and by that time the heartbeat had already expired.

It's possible that the newWorker goroutine is trying to spawn, but there are so many other goroutines running that it cannot get scheduled in time.

We should load test and evaluate:

  • Giving more priority to the newWorker and sendHeartbeat goroutines.
  • Reducing the overall number of goroutines we spin up.

The goal is to make sure we can handle large spikes of scale-downs without starvation.
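To illustrate the kind of change we could load test, here is a minimal sketch (hypothetical names like `sendHeartbeat` and `waitForDeregistration`, not lifecycle-manager's actual code) where heartbeats are always spawned immediately, while the expensive deregistration waiters are gated behind a bounded semaphore so a spike of terminations cannot delay heartbeats by minutes:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type event struct{ instanceID string }

// Placeholder work functions for the sketch only.
func sendHeartbeat(e event)         { fmt.Println("heartbeat:", e.instanceID) }
func waitForDeregistration(e event) { time.Sleep(50 * time.Millisecond) }

func main() {
	const maxWaiters = 10 // cap on concurrently running deregistration waiters
	sem := make(chan struct{}, maxWaiters)
	var wg sync.WaitGroup

	for i := 0; i < 150; i++ { // simulate a spike of terminating instances
		e := event{instanceID: fmt.Sprintf("i-%03d", i)}

		// Time-critical heartbeat: spawned immediately, never gated by the semaphore.
		wg.Add(1)
		go func(e event) {
			defer wg.Done()
			sendHeartbeat(e)
		}(e)

		// Expensive waiter: at most maxWaiters run at once; the rest block on
		// the channel and stay off the scheduler's run queue.
		wg.Add(1)
		go func(e event) {
			defer wg.Done()
			sem <- struct{}{}
			defer func() { <-sem }()
			waitForDeregistration(e)
		}(e)
	}
	wg.Wait()
}
```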

eytan-avisror commented on Apr 10 '20

One improvement that may help is refactoring server.go. This block should not be called if the message already exists; we should check for that first:

https://github.com/keikoproj/lifecycle-manager/blob/48c3bc778f5a04df01b792994ae1406b371e8e29/pkg/service/server.go#L90-L93

We should move these checks up: https://github.com/keikoproj/lifecycle-manager/blob/48c3bc778f5a04df01b792994ae1406b371e8e29/pkg/service/server.go#L164-L182
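For illustration, a hedged sketch of the reordering (the `message` type, the in-flight map, and `newWorker` here are assumptions, not the real server.go code): run the cheap duplicate and validation checks first, and only spawn a worker once the message is known to be new and valid:

```go
package main

import "log"

type message struct{ id, body string }

type server struct {
	inFlight map[string]bool // messages already being processed
}

func (s *server) validate(m message) bool { return m.body != "" }

func (s *server) newWorker(m message) { /* heartbeats, waiters, etc. */ }

// handleMessage decides whether a message deserves a worker at all.
func (s *server) handleMessage(m message) {
	// Duplicate check first: if a worker already exists for this message,
	// do not start heartbeats or waiters again.
	if s.inFlight[m.id] {
		log.Printf("message %s already in flight, skipping", m.id)
		return
	}
	// Validation next: invalid messages never reach the expensive path.
	if !s.validate(m) {
		log.Printf("message %s is invalid, dropping", m.id)
		return
	}
	s.inFlight[m.id] = true // a real implementation would guard this with a mutex
	go s.newWorker(m)
}

func main() {
	s := &server{inFlight: map[string]bool{}}
	s.handleMessage(message{id: "1", body: "terminate i-abc"})
	s.handleMessage(message{id: "1", body: "terminate i-abc"}) // skipped as duplicate
}
```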

eytan-avisror commented on Apr 25 '20

Message validation has been refactored so that we no longer spin up worker goroutines for invalid or duplicate messages, which should help a bit. But the biggest improvement we can make here is probably around the deregistration waiters, which spin up roughly (cluster target groups) * (terminating instances) goroutines and account for ~99% of the goroutines we create. We should come up with an approach to do this differently if possible.
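One possible direction, sketched roughly below (the names and polling logic are assumptions, not an agreed design): keep a single waiter per target group that tracks all of its pending instances in one loop, so the goroutine count scales with the number of target groups instead of target groups times terminating instances:

```go
package main

import (
	"fmt"
	"time"
)

// deregistered stands in for the ELB API; a real implementation could call
// DescribeTargetHealth once per target group and check the whole batch.
func deregistered(targetGroup, instance string) bool { return true }

// waitForTargetGroup owns every pending instance for one target group.
func waitForTargetGroup(tg string, instances []string, done chan<- string) {
	pending := map[string]bool{}
	for _, i := range instances {
		pending[i] = true
	}
	for len(pending) > 0 {
		for i := range pending {
			if deregistered(tg, i) {
				delete(pending, i)
				done <- tg + "/" + i
			}
		}
		if len(pending) > 0 {
			time.Sleep(time.Second) // poll interval
		}
	}
}

func main() {
	targetGroups := []string{"tg-a", "tg-b"}
	instances := []string{"i-1", "i-2", "i-3"}

	done := make(chan string, len(targetGroups)*len(instances))
	for _, tg := range targetGroups {
		go waitForTargetGroup(tg, instances, done) // one goroutine per target group
	}
	for i := 0; i < len(targetGroups)*len(instances); i++ {
		fmt.Println("deregistered:", <-done)
	}
}
```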

eytan-avisror commented on Apr 28 '20