OpenShift config
Thank you for the article about Locust and Kubernetes.
Following your YAML files, I put together a similar setup, but for OpenShift (I also tested it in Minishift).
https://github.com/emilorol/locust-openshift
One thing keeps bugging me. I added the option to autoscale up to 10 slaves, but I noticed that every time a new slave is added, Locust resets all the other slaves to redistribute the load. That leaves me with only the manual option: allocating the number of slaves up front, before running the test. Any ideas?
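For reference, the autoscaling is wired up roughly like this (a simplified sketch, not the exact manifest from my repo; names and thresholds here are illustrative):

```yaml
# Sketch: OpenShift HorizontalPodAutoscaler targeting the slave DeploymentConfig.
# "locust-slave" and the 80% trigger are assumptions for illustration.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: locust-slave
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: locust-slave
  minReplicas: 1
  maxReplicas: 10
  # Scale out when average CPU usage across slave pods crosses this threshold.
  targetCPUUtilizationPercentage: 80
```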
Emil! Good job on setting up OpenShift config.
From what I've noticed (on k8s), Locust does not do well at all when a dynamic change to the setup (the number of workers) comes into play. I haven't seen the stats reset myself, although I have seen miscalculated numbers of workers and users...
I think we should start reporting these things to the Locust team and try to resolve them.
For now it looks like the problem of "dynamic" scaling is a somewhat neglected one.
Yes, I also saw that new slaves are registered automatically, but when they got destroyed, the counter still showed the old number of slaves.
I believe there is a business opportunity they are missing out on, as distributed load testing is here to stay.
On a side note: have you been able to determine a golden ratio between CPU and memory for the slave containers? I started with 0.5 CPU and 512 MB, spinning up a new slave as soon as CPU hit 80%, but in a couple of cases that was too late and the container would crash. I played with the numbers and ended up with 0.5 CPU and 1 GB, with the CPU threshold at 70% before scaling up.
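In manifest form, the numbers I settled on look roughly like this (a sketch only; the container and image names are placeholders, not what's actually in the repo):

```yaml
# Sketch: slave container resources after tuning.
# 0.5 CPU / 1 GiB, paired with an autoscaler threshold of 70% instead of 80%.
spec:
  containers:
    - name: locust-slave        # placeholder name
      image: locustio/locust    # placeholder image
      resources:
        requests:
          cpu: "500m"
          memory: "1Gi"
        limits:
          cpu: "500m"
          memory: "1Gi"
```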
Issue reported: https://github.com/locustio/locust/issues/1100
Check out the response from the locustio team.