Kangal 1.6.1 / K8s 1.24: race condition between the JMeter master and worker pods
I am bumping up against a race condition between the JMeter master and worker pods. It used to be that most Kangal runs configured themselves and succeeded: the workers would be `running` before the master, and the master was able to register all of the workers. However, roughly 1 in 4 masters would come up before all of the workers and register only a portion of them.

Recently I have found the masters always running before the workers, even when I only have 1 worker. I used to see the workers in a `pending` state with the master pod going `pending` 5-10 seconds later, but lately I've noticed the workers still in `init` while the master is already `running`.

This leads to 0 workers being detected, and I have to manually delete the master pod so it registers the worker pods. The backoff limit of the master Job is hard-coded to 1, so I can only kill the master pod once before the Job is considered a failure.
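For anyone trying to reproduce this, the startup ordering and the Job's backoff limit can be observed with plain kubectl; the namespace below is a placeholder for whatever namespace Kangal creates for the test:

```bash
# Watch the master and worker pods transition through Pending/Init/Running.
# "loadtest-my-test" is a placeholder; use the namespace Kangal created for your test.
kubectl -n loadtest-my-test get pods -o wide --watch

# Confirm the hard-coded backoffLimit on the master Job (reported as 1 above).
kubectl -n loadtest-my-test get jobs \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.backoffLimit}{"\n"}{end}'
```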
Configuration
Our JMeter pods tolerate a taint so they run on nodes dedicated to load testing; Karpenter 0.33.1 provisions those nodes for us. I am also using custom data.
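To sanity-check that setup, the node taints and pod placement can be verified directly (the namespace is again a placeholder):

```bash
# List each node's taint keys to confirm the Karpenter-provisioned,
# dedicated load-test nodes are tainted as expected.
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'

# Check which node each JMeter pod was scheduled onto.
kubectl -n loadtest-my-test get pods -o wide
```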
Solution?
It seems like the master Job shouldn't be scheduled until all of the workers are `running`.
Workaround
To work around this, a script submits the Kangal load test and waits for the master Job to appear. The Job is immediately patched to be suspended. The script then waits until all workers are `running`, and only then unsuspends the master Job (sketched below). This consistently prevents the JMeter master from registering too few workers.
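A minimal sketch of that script, assuming the namespace, Job name, worker label selector, and expected worker count are supplied by the caller (all of them placeholders here, not names Kangal is guaranteed to use):

```bash
#!/usr/bin/env bash
# Sketch of the suspend/unsuspend workaround described above.
# All names here are placeholders supplied by the caller; Kangal's actual
# namespace, Job name, and pod labels depend on your version and test name.
set -euo pipefail

NAMESPACE="$1"          # e.g. the loadtest-* namespace Kangal created
MASTER_JOB="$2"         # name of the JMeter master Job
WORKER_SELECTOR="$3"    # label selector matching the worker pods
EXPECTED_WORKERS="$4"   # should match the distributedPods count of the test

# 1. Wait for the master Job to exist, then suspend it immediately so the
#    scheduler never runs its pod ahead of the workers.
until kubectl -n "$NAMESPACE" get job "$MASTER_JOB" >/dev/null 2>&1; do
  sleep 1
done
kubectl -n "$NAMESPACE" patch job "$MASTER_JOB" --type=merge \
  -p '{"spec":{"suspend":true}}'

# 2. Poll until every worker pod reports Ready.
while true; do
  ready=$(kubectl -n "$NAMESPACE" get pods -l "$WORKER_SELECTOR" \
    -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' \
    | grep -c True || true)
  if [ "$ready" -ge "$EXPECTED_WORKERS" ]; then
    break
  fi
  sleep 5
done

# 3. Unsuspend the master Job; its pod is only created now, after all
#    workers are running, so registration sees the full set.
kubectl -n "$NAMESPACE" patch job "$MASTER_JOB" --type=merge \
  -p '{"spec":{"suspend":false}}'
```

Suspending the Job as soon as it exists keeps the scheduler from placing the master pod; flipping `spec.suspend` back to `false` only after the workers are Ready gives the ordering the controller would ideally enforce itself.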
Conclusion
Is anyone running into this? If so, how are you dealing with it? If it is widespread, can we look to the kangal controller for relief?
I was running into this (https://github.com/hellofresh/kangal/issues/228) and worked around the issue inelegantly by pulling a custom JMeter master artifact with a sleep in launcher.sh. Glad to know I am not the only one who has experienced it.
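(For illustration, a minimal sketch of that kind of change, assuming a custom image whose entrypoint wraps the stock launcher script; the launcher path and the delay variable are assumptions for this sketch, not part of the official Kangal JMeter image:)

```bash
#!/bin/sh
# Hypothetical entrypoint wrapper for a custom JMeter master image: delay the
# stock launcher so the workers have time to reach Running before registration.
# WORKER_STARTUP_DELAY and the launcher path are placeholders.
sleep "${WORKER_STARTUP_DELAY:-120}"
exec /jmeter/launcher.sh "$@"
```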
Also having this issue. @jasonbgilroy how long a sleep are you adding into launcher.sh? I was experimenting with small values (like 10 to 20 seconds), but it seems that's still not enough.
@hattivatt I have found that it takes about 1 minute for 1 JMeter worker to transition to `running`. I think it generally takes closer to 2 minutes for more than 1 worker to transition.