azure-functions-host
Allow node worker to use maximum amount of memory available in the dedicated sku
Right now in production, the node process uses the built-in default for memory. With the defaults, node does not take advantage of the maximum memory available in the SKU. We should override the default memory limit set by the node process to match the amount of memory available on the machine.
Known workarounds
Set the following app setting:
`"languageWorkers:node:arguments": "--max-old-space-size=<New_Limit_In_MB>"`
@mhoeger - any ideas on whether we can remove the limit or set it to the maximum on the node worker? We cannot hardcode the limit, as it is SKU-specific.
One thing we could do is add that value at premium-SKU app creation when "node" is chosen?
So far we only have that guidance documented here: https://docs.microsoft.com/en-us/azure/azure-functions/functions-premium-plan#memory-utilization-considerations
The documentation does not have guidance on what value should be set for a given SKU. If this is a fairly common scenario and there's no harm in setting a higher number, we should set it by default in the node worker.
@mhoeger - Assigning this issue to you to figure out next steps. If it's not actionable, please close this.
@pragnagopa - sorry, that doc is specifically for the Premium SKU, and the guidance depends on which machines within the Premium SKU are selected. I think it's actionable, but not from our end (from Portal/tooling).
cc: @anthonychu and @btardif
This is a good candidate to explore how we can achieve this from the runtime itself.
The Python worker uses a script to launch the worker process - cc @ankitkumarr @Hazhzeng for more details.
It would be good to explore a similar approach for the node worker, where the script figures out what max-old-space-size should be and sets the startup argument. This would avoid updating any tools.
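As a rough illustration (not the actual worker code - the entry point name and the 20% headroom are assumptions), a Node launcher could size the heap from the machine's physical memory before starting the worker:

```typescript
import * as os from "os";
import { spawn } from "child_process";

// Derive the heap cap from physical memory, leaving headroom (assumed
// ~20%) for the host process and non-heap allocations.
const totalMb = Math.floor(os.totalmem() / (1024 * 1024));
const heapMb = Math.floor(totalMb * 0.8);

// Re-launch the real worker entry point ("./worker.js" is a placeholder)
// with the computed --max-old-space-size.
const child = spawn(
  process.execPath,
  [`--max-old-space-size=${heapMb}`, "./worker.js"],
  { stdio: "inherit" }
);
child.on("exit", (code) => process.exit(code ?? 0));
```

A launcher like this would also track instance size automatically, so changing to a larger SKU would not require updating any setting.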
I agree with @pragnagopa - I'm not sure having an app setting is the right way to go. Remember #appsettingsburgeaddress 🙅🏽♂️? (cc: @ammanifold)
Available memory in the instance should be something the runtime provides.
Keep in mind there are scenarios where you can change from a small to a large instance. Is the customer expected to know that they then need to go update this magical app setting? Otherwise there is no point in changing the instance size, since the process would still be limited to the smaller SKU's memory footprint.
Definitely fair! This isn't a requirement to run but an enhancement (node has one default max memory footprint), which is why I thought it would be better as a visible suggestion to the user rather than something magical that happens and changes node (although I'm struggling to think of when the magic would be bad :)). The runtime approach sounds good - I want to make sure that a user can still control this setting, though.
Another point - on a premium plan you can have both multiple function apps deployed to the same premium plan and FUNCTIONS_WORKER_COUNT configured so I'm wondering if it'll be hard to find the right balance of how much to increase memory by. @pragnagopa - any suggestions?
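To make the balance question concrete, here is a hypothetical back-of-the-envelope split (the ~14 GB EP3 figure and the 20% host headroom are assumptions, not numbers from this thread):

```typescript
// Hypothetical arithmetic only: splitting instance memory across workers.
// Assumes an EP3 instance (~14 GB) and that FUNCTIONS_WORKER_COUNT workers
// share it evenly, with ~20% headroom reserved for the host process.
const instanceMb = 14 * 1024;
const workerCount = Number(process.env.FUNCTIONS_WORKER_COUNT ?? "1");
const perWorkerHeapMb = Math.floor((instanceMb * 0.8) / workerCount);
console.log(`--max-old-space-size=${perWorkerHeapMb}`);
```

With multiple apps on the same plan, the divisor only grows, which is why a single fixed default is hard to pick.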
There is also the concept of site limits that Functions inherits from Web App. Here you can set limits in config for how much memory/CPU/other metrics that app will use.
The scenario is for when you have multiple apps on the same App Service plan and are attempting to do some controlled density.
So there are a lot of knobs that end up interacting. I think that in general the process should strive to use as much memory as possible, and have ways to throttle it back when needed.
@mhoeger - We can start with the simplest case, where FUNCTIONS_WORKER_COUNT is not set.
As @btardif mentioned above, in general we should figure out if there is a way to remove this limit or set a maximum and let the thresholds on the platform kick in if a process does end up consuming more than its limit.
Hey folks. I'm running into a similar problem with a node function app on hardware that refuses to use all its memory (7GB App Service plan instances, keeps crashing with OOM errors at 1.5GB). For the guidance here, where am I supposed to be setting `languageWorkers:node:arguments`? Azure Portal application settings don't allow `:` characters, and setting `"languageWorkers:node:arguments": "--max-old-space-size=6000"` in settings.json doesn't seem to be respected by the hosted function (it still errors out at 1.5GB).
https://github.com/MicrosoftDocs/azure-docs/issues/68614#issuecomment-760789826 - this seems likely to be my problem, for others seeing this. I haven't tested this solution yet, but according to that thread, for Linux function apps (like I'm using) you need to replace `:` in `languageWorkers:node:arguments` with `__`.
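So, illustratively, the Linux-safe form of the same setting would be `languageWorkers__node__arguments=--max-old-space-size=6000` (double underscores in place of colons).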
In the Premium plan, even when using EP3, setting `languageWorkers:node:arguments` above 4096 doesn't change anything; threads still crash when reaching 1-1.5GB. It looks like the Azure Functions host is still using the default memory limit.
> MicrosoftDocs/azure-docs#68614 (comment) - this seems likely to be my problem, for others seeing this. I haven't tested this solution yet, but according to that thread, for Linux function apps (like I'm using) you need to replace `:` in `languageWorkers:node:arguments` with `__`.

Did this work for you?
Reassigning to @AnatoliB and @ejizba for follow up.
> In the Premium plan, even when using EP3, setting `languageWorkers:node:arguments` above 4096 doesn't change anything; threads still crash when reaching 1-1.5GB. It looks like the Azure Functions host is still using the default memory limit.
I'm experiencing the same issue
I am experiencing the same issue. Setting `languageWorkers__node__arguments=--max-old-space-size=8192` on a Linux Premium v3 P3V3 app plan with Node 16 LTS still crashes from a node OOM at 1.8GB.