1279 Locust instances make the master run at 100% CPU continuously
Prerequisites
- [x] I am using the latest version of Locust
- [x] I am reporting a bug, not asking a question
Description
I run some tests where I issue REST calls to a REST API server for RonDB. It works fine, but when I start 20 Locust servers with 64 CPUs each, for a total of 1279 Locust instances, the master runs at 100% CPU after all instances have started. If I decrease the number of instances to 1000 it works fine, so I gather the problem is some constant in the code. With 1000 instances the master uses 9% CPU. The test is running on an AWS Graviton 4 VM.
Command line
Starting a script with a Python client, one per instance
Locustfile contents
a
Python version
Ubuntu 24.04 std
Locust version
Ubuntu 24.04 std
Operating system
Ubuntu 24.04
Can you add your logs?
There are no logs to add; Locust simply went quietly to 100% CPU usage and I could not even start up the Locust web UI.
Well there must be some logs. Locust will log stuff even before the workers connect... Send the output from the workers too.
If you can, add your locustfile too (not that it should matter).
Try setting --loglevel DEBUG for more info too.
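For reference, a distributed run with debug logging captured to files might look something like this (the hostname, worker count, and file names are placeholders to adapt to your setup):

```shell
# On the master node: write DEBUG-level logs to master.log
locust --master --headless --expect-workers 1279 \
       --loglevel DEBUG --logfile master.log -f locustfile.py

# On each worker node (MASTER_HOST is a placeholder for the master's address):
locust --worker --master-host "$MASTER_HOST" \
       --loglevel DEBUG --logfile worker.log -f locustfile.py
```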
I found the issue: I needed to increase the maximum number of open files in Linux by editing /etc/security/limits.conf with two lines (soft and hard):

```
ubuntu soft nofile 8192
ubuntu hard nofile 8192
```
Found an error message in master.log once I figured out how to find the log. The behaviour is still a bit weird, though: when misconfigured like this, the master runs at 100% CPU without delivering anything.
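For anyone hitting the same wall, a quick way to check the open-file limit the workers are actually running under (the `ulimit` builtin is standard; the limits.conf change above only takes effect after a fresh login session):

```shell
# Show the current soft limit on open file descriptors for this session
ulimit -n

# The soft limit can also be raised up to the hard limit for the current
# session, without editing any config file:
ulimit -Sn "$(ulimit -Hn)"
ulimit -n
```

With 1279 instances each holding sockets open, a default soft limit of 1024 is easily exceeded, which matches the ~1000-instance threshold observed here.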