doc: clarify default heap size formula since node 12
📗 API Reference Docs Problem
- Version: 12+
- Platform: all
- Subsystem: n/a
Location
Section of the site where the content exists
Affected URL(s):
- https://nodejs.org/api/cli.html#cli_max_old_space_size_size_in_megabytes could be a nice place to mention this
Description
Concise explanation of the problem
Despite the explanation here: https://github.com/nodejs/node/pull/25576#issuecomment-455737693
I am not clear on the default heap size for my instance. Given a formula or calculator, a user could then know for sure the default value based on the GB of memory available. It would be great to have this in the docs.
- [ ] I would like to work on this issue and submit a pull request.
I want to work on it, please assign it to me.
Hey @gtp-m3 I am not a maintainer of node and so cannot assign you. Hopefully one will see this and assign you to it! Thanks.
@gtp-m3
Thanks for your help. It is yours.
BTW, you can work on any issue without prior assignment. Good luck.
@Ayase-252 ok thanks
@naseemkullah can you explain what I should add to the doc?
@Ayase-252 @naseemkullah should I add a calculator that will show memory consumption for different machines?
Would a paragraph mentioning the default heap size be sufficient? IMO, a calculator seems too "heavy" to maintain.
Yes, just a sentence on how it's calculated exactly. Maybe with 2 or 3 examples with different amounts of available memory, for clarity.
References:
- https://github.com/nodejs/node/blob/d65514bc29ff4e91ae560884d10483e33012e095/deps/v8/src/heap/heap.cc#L239-L254
- https://github.com/nodejs/node/blob/d65514bc29ff4e91ae560884d10483e33012e095/deps/v8/test/unittests/heap/heap-unittest.cc#L77-L97
Is this pertinent?
https://github.com/nodejs/node/blob/d65514bc29ff4e91ae560884d10483e33012e095/deps/v8/src/heap/heap.cc#L300-L312
I'm currently trying to sort out what the maximum possible setting is, and at which breakpoints Node chooses which settings.
I want to max out my performance for a CI run :)
Hey @naseemkullah @targos @Ayase-252
Any idea if someone is working on this? It'd be nice if this could be clarified, please.
I found this issue after digging around what happens if you don't specify `--max-old-space-size`. And since the code linked above is a bit hard for me to understand, I simply ran some tests with Docker.

I spun up a container with `docker run -it node:18 node` and changed the memory limit in Docker Desktop between the runs. I got the heap size with `Math.round(require('v8').getHeapStatistics().total_available_size / (1024 * 1024))`. I also double-checked the total memory amount with `Math.round(require("os").totalmem() / 1024 / 1024)`.
| Docker memory | totalmem() | heap size |
| --- | --- | --- |
| 1 GB | 915 MB | 466 MB |
| 2 GB | 1919 MB | 980 MB |
| 4 GB | 3928 MB | 2007 MB |
| 6 GB | 5934 MB | 2092 MB |
| 8 GB | 7946 MB | 2091 MB |
| 12 GB | 11958 MB | 2092 MB |
| 15.5 GB | 15479 MB | 2091 MB |
| 16 GB | 15982 MB | 4140 MB |
So the behaviour is a bit weird after 4 GB, I would say. The 2 GB limit is quite low, especially for servers with just below 16 GB of RAM (like Heroku's performance-l dyno with 14 GB of RAM). And the jump at 16 GB is also strange. I don't have easy access to a computer with more than 16 GB of RAM, but I suspect it will stay at 4 GB, looking at the code above.
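For what it's worth, the measurements can be approximated with a simple function. This is only my own rough inference from the table above, not the actual V8 heuristic (which lives in the heap.cc code linked earlier), and the breakpoints are guesses fitted to these data points:

```javascript
// Rough approximation of the observed default heap size, inferred
// from the Docker measurements above. NOT the real V8 formula.
function approxDefaultHeapMB(physicalMB) {
  // Jump to ~4 GB observed once physical memory reaches ~16 GB.
  if (physicalMB >= 15500) return 4096;
  // Below that: roughly half of physical memory, capped near 2 GB.
  return Math.min(Math.round(physicalMB / 2), 2048);
}

console.log(approxDefaultHeapMB(915));   // 458  (measured: 466)
console.log(approxDefaultHeapMB(3928));  // 1964 (measured: 2007)
console.log(approxDefaultHeapMB(15982)); // 4096 (measured: 4140)
```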
I also ran `docker run -it node:18 node --max-old-space-size=5120` to verify the heap size, and I got 5164 MB in that case.
I'm probably not the one to document this, but I wanted to share it regardless if anyone finds it useful!