
esm, loader: move to own thread

Open JakobJingleheimer opened this issue 1 year ago • 6 comments

What is the problem this feature will solve?

  • limit contamination
  • facilitate synchronous import.meta.resolve()

What is the feature you are proposing to solve the problem?

Move loaders off-thread

What alternatives have you considered?

No response


Per direction from TSC

Following in the vein of Babel: https://github.com/babel/babel/pull/14025 and https://github.com/bmeck/using-node-workshop/tree/main/steps/6_async_and_blocking


  • [ ] notify package authors of change (please comment to be added if not already included—sorry I know only so many)
    • esmock
    • jest?
    • mocha (Gil Tayar)
    • ts-node (Andrew Bradley)
    • yarn (Maël Nison)

JakobJingleheimer avatar Jul 02 '22 19:07 JakobJingleheimer

@bmeck @MylesBorins could either of you speak to the impetus for this?

JakobJingleheimer avatar Jul 02 '22 19:07 JakobJingleheimer

Related: https://github.com/nodejs/node/pull/31229

aduh95 avatar Jul 05 '22 09:07 aduh95

@bmeck @MylesBorins could either of you speak to the impetus for this?

Pros and cons here: https://github.com/nodejs/modules/issues/351#issuecomment-634347749

GeoffreyBooth avatar Jul 20 '22 05:07 GeoffreyBooth

Can add to the list of package authors: esmock

I wonder, does it make sense for the loaders team to have a thread somewhere which can be subscribed to for notifications of breaking changes? All relevant discussion can happen elsewhere, the thread would be like an RSS feed. Package maintainers can opt-in to notifications of breaking or potentially exciting/disruptive changes by subscribing to that thread.

Might scale better than us hoping we know a comprehensive list of all loaders.

cspotcode avatar Jul 20 '22 11:07 cspotcode

Since Node.js doesn't maintain anything of the sort, that sounds like a better alternative to "surprise!". It's not foolproof, though.

JakobJingleheimer avatar Jul 20 '22 21:07 JakobJingleheimer

We have a working Proof of Concept: https://github.com/JakobJingleheimer/worker-atomics

JakobJingleheimer avatar Sep 17 '22 13:09 JakobJingleheimer

Hi, guys. What is the status here? Coming from blocked https://github.com/nodejs/node/pull/43772 😢

loynoir avatar Oct 05 '22 01:10 loynoir

Hi! Is there an estimated date for completing this work? This thread is blocking #43772, which is needed for https://github.com/yarnpkg/berry/discussions/4044#discussioncomment-2740697 Asking because this looks like a long-running thread (currently 4 months old).

VulgarnyKarlson avatar Oct 19 '22 21:10 VulgarnyKarlson

Hi! Is there an estimated date for completing this work?

This thread is blocking #43772, which is needed for

https://github.com/yarnpkg/berry/discussions/4044#discussioncomment-2740697

Asking because this looks like a long-running thread (currently 4 months old).

Hiya! Please look right above your post to find the active WIP PR link (in the fancy GitHub callout), which was updated recently.

JakobJingleheimer avatar Oct 19 '22 21:10 JakobJingleheimer

I was chatting with Anna earlier today, and she mentioned the CPU and memory cost of the 2nd thread is non-trivial—effectively doubling node's basic footprint.

Is that something we've already considered? Do we know of any way to mitigate that?

JakobJingleheimer avatar Oct 21 '22 20:10 JakobJingleheimer

If you use --loader on a single-threaded application that is somewhat true, but it shouldn't be 2x. Plenty of things like V8 intrinsics, C++ code, etc. are not recreated. If the loader is shared amongst worker threads, it can actually be a net gain. Overall, the ability to intercept CJS while being opt-in seems fine. If we need to save more, there are ways to do so that haven't been pursued, since the current worker thread implementation does spin up a full Node environment.


bmeck avatar Oct 21 '22 21:10 bmeck

The way this issue is framed implies that only import loaders are moved off-thread; in other words, only users deliberately opting in would use a different thread. However, this is not the direction the work took, and all ESM loading logic is being moved. I spoke to quite a few people about this over time, and this was not brought up before today.

This is a significant blocker for me, as Node.js is often used in environments that are constrained by memory. At the bare minimum we should investigate:

  1. how much more memory is needed?
  2. how much more latency will this add?
  3. can we avoid it, i.e. only move user-provided code off thread?
  4. will this memory cost disappear or will the thread be kept around?

mcollina avatar Oct 23 '22 18:10 mcollina

I agree that we should find answers to those questions. I would hold off on adding it to the TSC agenda until we have those answers.

GeoffreyBooth avatar Oct 23 '22 20:10 GeoffreyBooth

Edit: I believe it was always our intention to get those numbers before landing.


  1. how much more memory is needed?

This does not seem to be a straightforward answer. I can't truly measure it until the implementation is runnable (currently blocked by a V8 error). Preliminary result data from our experiment via /usr/bin/time -l are:

| metric | baseline | off-threaded |
| --- | ---: | ---: |
| maximum resident set size | 39174144 | 54509568 |
| average shared memory size | 0 | 0 |
| average unshared data size | 0 | 0 |
| average unshared stack size | 0 | 0 |
| page reclaims | 2581 | 3517 |
| page faults | 0 | 0 |
| swaps | 0 | 0 |
| block input operations | 0 | 0 |
| block output operations | 0 | 0 |
| messages sent | 0 | 0 |
| messages received | 0 | 0 |
| signals received | 0 | 0 |
| voluntary context switches | 1 | |
| involuntary context switches | 67 | 134 |
| instructions retired | 280244491 | 550971942 |
| cycles elapsed | 101541186 | 203115216 |
| peak memory footprint | 15487232 | 28354496 |

This very basic rough comparison suggests additional memory consumption at peak of 12,867,264 bytes (~83% increase).
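The numbers above come from `/usr/bin/time -l` (a macOS rusage dump). As a rough, portable in-process analogue, one could sample `process.memoryUsage()` under each configuration and compare the results, e.g.:

```javascript
// Rough in-process memory sampling: run this once under a baseline node and
// once with the off-thread loader configuration, then compare the figures.
// (Not the same measurement as /usr/bin/time -l, just an approximation.)
const { rss, heapUsed, external } = process.memoryUsage();

const toMiB = (bytes) => (bytes / 1024 / 1024).toFixed(1);
console.log(`rss:      ${toMiB(rss)} MiB`);
console.log(`heapUsed: ${toMiB(heapUsed)} MiB`);
console.log(`external: ${toMiB(external)} MiB`);
```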

  2. how much more latency will this add?

Preliminary results suggest there's a small initial cost when the worker initialises, but in terms of latency incurred for the work off-loaded, there's effectively none (nanoseconds).

  3. can we avoid it, i.e. only move user-provided code off thread?

I think this is contrary to the goals we're attempting to achieve with a separate thread (but maybe we only care about custom loaders).

  4. will this memory cost disappear or will the thread be kept around?

The thread gracefully terminates after the last call to "public" ESMLoader. However, in the tests done so far, that coincides with node itself gracefully terminating. We could specifically test keeping node around for a while after all the ESM / loader stuff to see if the thread terminates. I'm not sure how best to go about that: worker.once('exit') does not trigger (and also, I fear if it did work, it would prevent/undo the worker.unref(), leading to false results).

JakobJingleheimer avatar Oct 23 '22 20:10 JakobJingleheimer

There's also a potential benefit in moving non-custom loading off-thread as it would protect internals from prototype pollution (I think). That would argue that we should make this same refactor for CommonJS too.

GeoffreyBooth avatar Oct 23 '22 21:10 GeoffreyBooth

This is a significant blocker for me as Node.js is often used in environment that are constrained by memory. At the bare minimum we should investigate:

  1. how much more memory is needed?
  2. how much more latency will this add?
  3. can we avoid it, i.e. only move user-provided code off thread?
  4. will this memory cost disappear or will the thread be kept around?

@mcollina I'm not convinced we can answer those questions before we have a fully working implementation; without data, we can only make assumptions, and I wouldn't want us to draw a conclusion over possibly baseless assumptions.

aduh95 avatar Oct 23 '22 23:10 aduh95

Absolutely! I'm concerned about the addition to our startup memory footprint, as this matters for some of our use cases.

I'm flagging that this might be problematic and I was surprised because it was not mentioned in the main text of the issue.

A few more questions:

  1. if a new Worker thread is spawned in the lifetime of the application, will this create another thread to load ESM?
  2. what about dynamic import? Will it require re-spawning the thread?

mcollina avatar Oct 24 '22 07:10 mcollina

A few more questions:

  1. if a new Worker thread is spawned in the lifetime of the application, will this create another thread to load ESM?
  2. what about dynamic import? Will it require re-spawning the thread?

In both cases (in the current design/implementation), only if the "loaders" worker is not already around (otherwise it will be reused).

JakobJingleheimer avatar Oct 24 '22 07:10 JakobJingleheimer

In both cases (in the current design/implementation), only if the "loaders" worker is not already around (otherwise it will be reused).

So there is only one loaders worker for all worker threads created by Node.js?

mcollina avatar Oct 24 '22 08:10 mcollina

No, there's a separate loaders worker for each worker thread.

targos avatar Oct 24 '22 08:10 targos

I assume it should be possible to configure loader hooks per worker thread, which may complicate things.

I'm also a bit skeptical about having a single loader thread for all workers, as this seems to allow "leaking" data between workers which should be isolated. Also, this single loader worker would be a parallelism blocker.

Flarna avatar Oct 24 '22 08:10 Flarna

The comments from @targos and @JakobJingleheimer seem contradictory?

mhdawson avatar Nov 01 '22 16:11 mhdawson

There are 3 different threading models being considered, that I'm aware of.

cspotcode avatar Nov 01 '22 19:11 cspotcode

Sorry, I think https://github.com/nodejs/node/issues/43658#issuecomment-1288637199 is the intended behaviour (the current PR may not achieve that yet). The rationale being that workers are intended to be isolated, so their loaders' state should also be isolated/fresh.

JakobJingleheimer avatar Nov 01 '22 22:11 JakobJingleheimer