The default value of `availableMemoryRatio` is too low
In an Apify Actor with 4 GB of available memory, the AutoscaledPool refuses to scale up because it only sees ~1 GB of free memory.
This slows down scrapers and might cause higher costs for users of Crawlee-based Actors (users are billed per second of available memory, not memory actually used).
This seems to be caused by the following snippet (line 178):
https://github.com/apify/crawlee/blob/6f2e6b0ccb404ae66be372e87d762eed67c053bb/packages/core/src/autoscaling/snapshotter.ts#L176-L180
`availableMemoryRatio` defaults to 0.25, which matches our observations. This is probably alright for non-Apify users, but a bit dumb for Actors on Apify, where we should utilize all the available resources.
In such cases, this can be remedied by overriding the default with the `CRAWLEE_AVAILABLE_MEMORY_RATIO` env var or by passing a customized `Configuration` instance to the crawler:
```ts
import { Configuration, PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler(
    {},
    new Configuration({
        availableMemoryRatio: 1,
    }),
);
```
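The env var route is equivalent. A minimal sketch (assuming the variable is set before Crawlee's `Configuration` is created, e.g. at the very top of the entrypoint):

```typescript
// Setting the env var before Crawlee reads its configuration has the same
// effect as passing availableMemoryRatio: 1 to Configuration explicitly.
process.env.CRAWLEE_AVAILABLE_MEMORY_RATIO = '1';
```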
Humph, it'd make sense to me if `Actor.init` set the ratio to 1. Or can we set the default value of the env var for all runs on the platform without forcing everybody to update their dependencies?
Perhaps we can set the default value of `availableMemoryRatio` in the Apify SDK's `Configuration` (here)?
We might use the `APIFY_IS_AT_HOME` env var to switch between the original default and a new default ratio (~0.9?)
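That switch could look roughly like this. A hypothetical sketch, not actual SDK code; it assumes the platform sets `APIFY_IS_AT_HOME` to `'1'` inside Actor runs, and the 0.9 value is just the proposed ballpark:

```typescript
// Pick a default availableMemoryRatio based on where the process runs.
function resolveDefaultMemoryRatio(env: Record<string, string | undefined>): number {
    // On the platform the Actor gets dedicated memory, so a high ratio
    // (~0.9) is safe; locally, keep Crawlee's conservative 0.25 default.
    return env.APIFY_IS_AT_HOME === '1' ? 0.9 : 0.25;
}

console.log(resolveDefaultMemoryRatio(process.env));
```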
I'd say let's set it at the base image level, with the Cheerio and plain Node images getting a higher ratio than the browser images - but what do you think would be better?
I'm probably missing important info here - if I start a new Crawlee project, I get a Dockerfile based on one of the base images, correct?
If I then change the crawler type in my code (a perfectly legit thing IMO), won't the configuration done in the base image just stick? That seems hard to track down...
This is true, but you should also update the image in that case... I guess this is a rough thing to fix... Maybe we can find a middle ground? Expose an env variable from the base images that specifies the image type, and have `Actor.init` decide on the default ratio based on it?
Or maybe I'm just high and there's a better solution! I'm just throwing ideas here :D
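The middle-ground idea above could be sketched like this. Everything here is hypothetical: the `ACTOR_BASE_IMAGE` env var name, the image labels, and the ratio values are all made up for illustration:

```typescript
// Map a base-image flavor (as reported by a hypothetical env var exported
// from the Docker base images) to a default availableMemoryRatio.
function ratioForBaseImage(image: string | undefined): number {
    switch (image) {
        case 'node':            // plain Node / Cheerio images: no browser overhead
            return 0.9;
        case 'node-playwright': // browser images: leave headroom for the
        case 'node-puppeteer':  // browser processes themselves
            return 0.7;
        default:                // unknown image: keep Crawlee's default
            return 0.25;
    }
}

console.log(ratioForBaseImage(process.env.ACTOR_BASE_IMAGE));
```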
I realized I haven't commented on this anywhere - we only discussed it with Jindra on Thursday. So here is the thing: we already set this value to 1 on the platform, and it worked just fine until recently. It's done in the SDK, in `Actor.init`, here:
https://github.com/apify/apify-sdk-js/blob/master/packages/apify/src/actor.ts#L203
What I think might have happened is that the wrong config is resolved via `AsyncLocalStorage` (by default, all places use the global config, which resolves to a scoped one via ALS). If that's the case, it could be caused by #2371.
Could you elaborate how? That ALS is not even in place when you're not working with `AdaptivePlaywrightCrawler`. Or is this just a hunch that two supposedly independent instances of `AsyncLocalStorage` may interfere in weird ways?
Yes, it's a hunch, based on years of experience working with ALS, seeing all the weird edge cases myself (been using it before it became stable).
What I am sure about:
- we were setting the ratio to 1 (inside `Actor.init`) since inception, and it was working just fine for a very long time (since the initial 3.0 release)
- the config resolution depends on ALS
- only recently we started adding more ALS usage
It could be as well about some other refactoring, but that particular PR sounds like the ideal first candidate to check.
I haven't tried to reproduce this yet - not sure if it surfaces every time or if it was just a fluke. If it happens consistently, I would first try to revert that PR via patch-package to see if it helps.
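To illustrate the suspected failure mode, here is a minimal self-contained reproduction of the resolution pattern (the `Config` class and `getConfig` helper are illustrative, not Crawlee's actual internals):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Stand-in for a configuration object carrying availableMemoryRatio.
class Config {
    constructor(public readonly availableMemoryRatio: number) {}
}

const als = new AsyncLocalStorage<Config>();
const globalConfig = new Config(1); // what Actor.init sets on the platform

// Resolution that prefers an ALS-scoped config when one is active.
function getConfig(): Config {
    return als.getStore() ?? globalConfig;
}

// Outside any ALS scope, the global config is returned as expected.
console.log(getConfig().availableMemoryRatio); // 1

// But if some other code opened an ALS scope with its own defaults,
// the very same call silently returns the scoped config instead.
als.run(new Config(0.25), () => {
    console.log(getConfig().availableMemoryRatio); // 0.25
});
```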
Next time let's please at least add a link to slack discussions to the OP for more context.
I will close this since it's no longer surfacing in the current version, and I haven't been able to confirm my hunch from above either (also, on a second look, the PR in question seems safe - it shouldn't affect anything beyond the adaptive crawler even if it were the culprit).