esm, loader: move to own thread
What is the problem this feature will solve?
- limit contamination
- facilitate synchronous `import.meta.resolve()`
What is the feature you are proposing to solve the problem?
Move loaders off-thread
What alternatives have you considered?
No response
Per direction from TSC
Following in the vein of Babel: https://github.com/babel/babel/pull/14025 and https://github.com/bmeck/using-node-workshop/tree/main/steps/6_async_and_blocking
- [ ] notify package authors of change (please comment to be added if not already included—sorry I know only so many)
- esmock
- jest?
- mocha (Gil Tayar)
- ts-node (Andrew Bradley)
- yarn (Maël Nison)
@bmeck @MylesBorins could either of you speak to the impetus for this?
Related: https://github.com/nodejs/node/pull/31229
> @bmeck @MylesBorins could either of you speak to the impetus for this?
Pros and cons here: https://github.com/nodejs/modules/issues/351#issuecomment-634347749
Can add to the list of package authors: esmock
I wonder, does it make sense for the loaders team to have a thread somewhere which can be subscribed to for notifications of breaking changes? All relevant discussion can happen elsewhere, the thread would be like an RSS feed. Package maintainers can opt-in to notifications of breaking or potentially exciting/disruptive changes by subscribing to that thread.
Might scale better than us hoping we know a comprehensive list of all loaders.
Since Node.js doesn't maintain anything of the sort, that sounds like a better alternative to "surprise!". It's not foolproof, though.
We have a working Proof of Concept: https://github.com/JakobJingleheimer/worker-atomics
Hi, guys. What is the status here? Coming from blocked https://github.com/nodejs/node/pull/43772 😢
Hi! Is there any estimated date for completing this? This thread is blocking #43772, which is needed for https://github.com/yarnpkg/berry/discussions/4044#discussioncomment-2740697. Asking because it looks like a long-running thread (currently 4 months).
Hiya! Please look right above your post to find the active WIP PR link (in the fancy GitHub callout), which was updated recently.
I was chatting with Anna earlier today, and she mentioned the CPU and memory cost of the 2nd thread is non-trivial—effectively doubling node's basic footprint.
Is that something we've already considered? Do we know of any way to mitigate that?
If you use `--loader` on a single-threaded application that is somewhat true, but it shouldn't be 2x. Plenty of stuff, like V8 intrinsics, C++ code, etc., is not recreated. If the loader is shared amongst worker threads, it can actually be a net gain. Overall, the ability to intercept CJS while being opt-in seems fine. If we need to save more, there are ways to do so that haven't been tried, since currently the worker thread implementation does spin up a full Node env.
The way this issue is framed implies that only import loaders are moved off-thread; in other words, that only users deliberately opting in would be using a different thread. However, this is not the direction the work went, and all ESM loading logic is being moved. I spoke to quite a few people over time about this, and it was not brought up before today.
This is a significant blocker for me, as Node.js is often used in environments that are constrained by memory. At the bare minimum we should investigate:
- how much more memory is needed?
- how much more latency this will add?
- can we avoid it, i.e. only move user-provided code off thread?
- will this memory cost disappear or will the thread be kept around?
I agree that we should find answers to those questions. I would hold off on adding it to the TSC agenda until we have those answers.
Edit: I believe it was always our intention to get those numbers before landing.
> how much more memory is needed?
This does not seem to have a straightforward answer. I can't truly measure it until the implementation is runnable (currently blocked by a V8 error). Preliminary result data from our experiment via `/usr/bin/time -l` are:
metric | baseline | off-threaded |
---|---|---|
maximum resident set size | 39174144 | 54509568 |
average shared memory size | 0 | 0 |
average unshared data size | 0 | 0 |
average unshared stack size | 0 | 0 |
page reclaims | 2581 | 3517 |
page faults | 0 | 0 |
swaps | 0 | 0 |
block input operations | 0 | 0 |
block output operations | 0 | 0 |
messages sent | 0 | 0 |
messages received | 0 | 0 |
signals received | 0 | 0 |
voluntary context switches | 1 | |
involuntary context switches | 67 | 134 |
instructions retired | 280244491 | 550971942 |
cycles elapsed | 101541186 | 203115216 |
peak memory footprint | 15487232 | 28354496 |
This very rough comparison suggests additional memory consumption at peak of 12,867,264 (~83% increase).
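(For reference, the arithmetic behind those figures, using the peak memory footprint row of the table above:)

```javascript
// Derive the quoted delta and "~83% increase" from the table's
// "peak memory footprint" row.
const baseline = 15_487_232;    // peak memory footprint, baseline
const offThreaded = 28_354_496; // peak memory footprint, off-threaded

const delta = offThreaded - baseline;
const pctIncrease = Math.round((delta / baseline) * 100);

console.log(delta);       // 12867264
console.log(pctIncrease); // 83
```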
> how much more latency this will add?
Preliminary results suggest there's a small initial cost when the worker initialises, but in terms of latency incurred for the work off-loaded, there's effectively none (nanoseconds).
> can we avoid it, i.e. only move user-provided code off thread?
I think this is contrary to the goals we're attempting to achieve with a separate thread (but maybe we only care about custom loaders).
> will this memory cost disappear or will the thread be kept around?
The thread gracefully terminates after the last call to the "public" `ESMLoader`. However, in the tests done so far, that coincides with node itself gracefully terminating. We could specifically test keeping node alive for a while after all the ESM/loader work to see whether the thread terminates. I'm not sure how best to go about that: `worker.once('exit')` does not trigger (and also, I fear that if it did work, it would prevent/undo the `worker.unref()`, leading to false results).
There's also a potential benefit in moving non-custom loading off-thread as it would protect internals from prototype pollution (I think). That would argue that we should make this same refactor for CommonJS too.
> This is a significant blocker for me, as Node.js is often used in environments that are constrained by memory. At the bare minimum we should investigate:
> - how much more memory is needed?
> - how much more latency this will add?
> - can we avoid it, i.e. only move user-provided code off thread?
> - will this memory cost disappear or will the thread be kept around?
@mcollina I'm not convinced we can answer those questions before we have a fully working implementation; without data, we can only make assumptions, and I wouldn't want us to draw a conclusion over possibly baseless assumptions.
Absolutely! I'm concerned about the addition to our startup memory footprint, as this matters for some of our use cases.
I'm flagging that this might be problematic, and I was surprised because it was not mentioned in the main text of the issue.
A few more questions:
- if a new Worker thread is spawned in the lifetime of the application, will this create another thread to load ESM?
- what about dynamic import? Will it need re-spawning the thread?
> A few more questions:
> - if a new Worker thread is spawned in the lifetime of the application, will this create another thread to load ESM?
> - what about dynamic import? Will it need re-spawning the thread?
In both cases, (in the current design/implementation) only if the "loaders" worker is not around (otherwise it will be re-used).
> In both cases, (in the current design/implementation) only if the "loaders" worker is not around (otherwise it will be re-used).
So there is only one loaders worker for all worker threads created by Node.js?
No, there's a separate loaders worker for each worker thread.
I assume it should be possible to configure loader hooks per worker thread, so this may complicate things.
I'm also a bit skeptical about having a single loader thread for all workers, as this seems to allow "leaking" data between workers, which should be isolated. Also, this single loader worker would be a parallelism bottleneck.
The comments from @targos and @JakobJingleheimer seem contradictory?
There are 3 different threading models being considered that I'm aware of.
Sorry, I think https://github.com/nodejs/node/issues/43658#issuecomment-1288637199 is the intended behaviour (the current PR may not achieve that yet). The rationale being that workers are intended to be isolated, so their loaders' state should also be isolated/fresh.
Quick update from our recent team meeting: we invited Gil Tayar (author/maintainer of several pertinent libraries) to discuss spawning a dedicated loaders thread per user-land thread, and he noted that it would add enormous complexity for library authors (on top of the extra complexity on node's side). We decided that for an initial implementation, it would be better to use a single loaders thread shared by all user-land threads and add caveats to the relevant sections of the docs. If there is sufficient appetite, we can subsequently add a configuration option (perhaps on the `Worker` constructor) to spawn dedicated loaders threads.
Was there any progress on not spawning any thread if no custom loaders are defined?
Yep! https://github.com/nodejs/loaders/pull/118#issuecomment-1314595194 and I believe this should be logistically/technically possible.