Autoscaling based on per-core CPU utilization for single-process crawlers?
Currently the autoscaled pool tries to scale up when CPU utilization is low. This becomes a problem when, for example, an HTTP-based crawler (essentially a single-process crawler) runs in an environment with multiple CPUs. The crawler saturates one core while the other cores stay underutilized, so the average utilization reported to the autoscaled pool remains low and the pool may try to scale up, even though the only core relevant to the crawler is already fully utilized.
This is probably less of a problem for browser-based crawlers, since browsers run in their own processes and can be scheduled on different cores.
Mentioned here: https://github.com/apify/apify-sdk-python/pull/447#issuecomment-2757356744
Maybe we need more detailed utilization information so that each crawler can decide what is relevant for it. (Or possibly make Crawlee in general capable of scaling across multiple CPUs?)
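To illustrate the idea, here is a hedged sketch of what a per-core-aware scaling decision could look like. The function name, the `single_process` flag, and the threshold are hypothetical, not existing Crawlee API:

```python
def should_scale_up(
    per_core_utilization: list[float],
    *,
    single_process: bool,
    threshold: float = 0.9,
) -> bool:
    """Decide whether there is CPU headroom to scale up.

    For a single-process crawler only the busiest core matters, because the
    crawler cannot use the idle ones; for multi-process workloads the mean
    across cores is a reasonable signal.
    """
    if single_process:
        relevant = max(per_core_utilization)
    else:
        relevant = sum(per_core_utilization) / len(per_core_utilization)
    return relevant < threshold


# One saturated core and three idle ones: the mean (~0.28) suggests
# headroom, but a single-process crawler actually has none.
cores = [0.98, 0.05, 0.05, 0.05]
print(should_scale_up(cores, single_process=False))  # True
print(should_scale_up(cores, single_process=True))   # False
```

This is just to show why the averaged signal and the per-core signal can disagree for the exact scenario described above.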
Some additional context: when running locally, we consider the overall CPU utilization, not just what the Crawlee process uses. In contrast, for memory we only consider the current process and its children.
In the JS version, the local implementation also considers the overall system CPU load over all CPUs.
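For reference, all three signals can be read with `psutil` (assuming it is available, as is common for this kind of monitoring). This is only a sketch of the distinction between the system-wide, per-core, and per-process measurements, not the actual implementation:

```python
import time

import psutil


def system_cpu_percent(interval: float = 0.5) -> float:
    # Overall load averaged across all cores -- roughly what the local
    # implementations currently report.
    return psutil.cpu_percent(interval=interval)


def per_core_cpu_percent(interval: float = 0.5) -> list[float]:
    # Per-core utilization; a single-process crawler could look at the
    # max of this list instead of the average.
    return psutil.cpu_percent(interval=interval, percpu=True)


def process_cpu_percent(interval: float = 0.5) -> float:
    # CPU used by the current process and its children, analogous to how
    # memory is measured today. Can exceed 100 on multi-core machines.
    procs = [psutil.Process(), *psutil.Process().children(recursive=True)]
    for p in procs:
        p.cpu_percent(None)  # first call primes the internal counters
    time.sleep(interval)
    total = 0.0
    for p in procs:
        try:
            total += p.cpu_percent(None)
        except psutil.NoSuchProcess:
            pass  # a child exited during the sampling window
    return total
```

Exposing something like the per-core and per-process values alongside the existing average would let each crawler pick the signal that is relevant for it.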