
Process does not exit when using `new Realm.App` on Node

tomduncalf opened this issue · 4 comments

To reproduce:

  1. Create a new Node project with the following code in index.js:

    const Realm = require("realm");
    const app = new Realm.App({ id: "myapp-abcde" });
    console.log(`app id: ${app.id}`);
    // process.exit() <- need to call this to actually finish running
    
  2. Observe that `node index.js` never exits.

This issue seems to have started happening in [email protected]. Possibly related: https://github.com/realm/realm-js/issues/3525.

May be the real fix for https://github.com/realm/realm-js/issues/4530, in which case we can revert that change.

tomduncalf · Apr 28 '22 09:04

We have investigated it but haven't reached a conclusion. Attaching a debugger to the node process reveals no Realm-related stack traces:

(lldb) t 2
* thread #2
    frame #0: 0x00007ff80ae7b34e libsystem_kernel.dylib`kevent + 10
libsystem_kernel.dylib`kevent:
->  0x7ff80ae7b34e <+10>: jae    0x7ff80ae7b358            ; <+20>
    0x7ff80ae7b350 <+12>: movq   %rax, %rdi
    0x7ff80ae7b353 <+15>: jmp    0x7ff80ae771c5            ; cerror_nocancel
    0x7ff80ae7b358 <+20>: retq
(lldb) bt
* thread #2
  * frame #0: 0x00007ff80ae7b34e libsystem_kernel.dylib`kevent + 10
    frame #1: 0x000000010e852964 node`uv__io_poll + 948
    frame #2: 0x000000010e83f561 node`uv_run + 433
    frame #3: 0x000000010dedd839 node`node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Run() + 361
    frame #4: 0x00007ff80aeb34e1 libsystem_pthread.dylib`_pthread_start + 125
    frame #5: 0x00007ff80aeaef6b libsystem_pthread.dylib`thread_start + 15
(lldb) t 3
* thread #3
    frame #0: 0x00007ff80ae793ea libsystem_kernel.dylib`__psynch_cvwait + 10
libsystem_kernel.dylib`__psynch_cvwait:
->  0x7ff80ae793ea <+10>: jae    0x7ff80ae793f4            ; <+20>
    0x7ff80ae793ec <+12>: movq   %rax, %rdi
    0x7ff80ae793ef <+15>: jmp    0x7ff80ae771c5            ; cerror_nocancel
    0x7ff80ae793f4 <+20>: retq
(lldb) bt
* thread #3
  * frame #0: 0x00007ff80ae793ea libsystem_kernel.dylib`__psynch_cvwait + 10
    frame #1: 0x00007ff80aeb3a6f libsystem_pthread.dylib`_pthread_cond_wait + 1249
    frame #2: 0x000000010e84d229 node`uv_cond_wait + 9
    frame #3: 0x000000010dedda58 node`node::TaskQueue<v8::Task>::BlockingPop() + 72
    frame #4: 0x000000010deda9fb node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 379
    frame #5: 0x00007ff80aeb34e1 libsystem_pthread.dylib`_pthread_start + 125
    frame #6: 0x00007ff80aeaef6b libsystem_pthread.dylib`thread_start + 15
(lldb) t 4
* thread #4
    frame #0: 0x00007ff80ae793ea libsystem_kernel.dylib`__psynch_cvwait + 10
libsystem_kernel.dylib`__psynch_cvwait:
->  0x7ff80ae793ea <+10>: jae    0x7ff80ae793f4            ; <+20>
    0x7ff80ae793ec <+12>: movq   %rax, %rdi
    0x7ff80ae793ef <+15>: jmp    0x7ff80ae771c5            ; cerror_nocancel
    0x7ff80ae793f4 <+20>: retq
(lldb) bt
* thread #4
  * frame #0: 0x00007ff80ae793ea libsystem_kernel.dylib`__psynch_cvwait + 10
    frame #1: 0x00007ff80aeb3a6f libsystem_pthread.dylib`_pthread_cond_wait + 1249
    frame #2: 0x000000010e84d229 node`uv_cond_wait + 9
    frame #3: 0x000000010dedda58 node`node::TaskQueue<v8::Task>::BlockingPop() + 72
    frame #4: 0x000000010deda9fb node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 379
    frame #5: 0x00007ff80aeb34e1 libsystem_pthread.dylib`_pthread_start + 125
    frame #6: 0x00007ff80aeaef6b libsystem_pthread.dylib`thread_start + 15
(lldb) t 5
* thread #5
    frame #0: 0x00007ff80ae793ea libsystem_kernel.dylib`__psynch_cvwait + 10
libsystem_kernel.dylib`__psynch_cvwait:
->  0x7ff80ae793ea <+10>: jae    0x7ff80ae793f4            ; <+20>
    0x7ff80ae793ec <+12>: movq   %rax, %rdi
    0x7ff80ae793ef <+15>: jmp    0x7ff80ae771c5            ; cerror_nocancel
    0x7ff80ae793f4 <+20>: retq
(lldb) bt
* thread #5
  * frame #0: 0x00007ff80ae793ea libsystem_kernel.dylib`__psynch_cvwait + 10
    frame #1: 0x00007ff80aeb3a6f libsystem_pthread.dylib`_pthread_cond_wait + 1249
    frame #2: 0x000000010e84d229 node`uv_cond_wait + 9
    frame #3: 0x000000010dedda58 node`node::TaskQueue<v8::Task>::BlockingPop() + 72
    frame #4: 0x000000010deda9fb node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 379
    frame #5: 0x00007ff80aeb34e1 libsystem_pthread.dylib`_pthread_start + 125
    frame #6: 0x00007ff80aeaef6b libsystem_pthread.dylib`thread_start + 15
(lldb) t 6
* thread #6
    frame #0: 0x00007ff80ae793ea libsystem_kernel.dylib`__psynch_cvwait + 10
libsystem_kernel.dylib`__psynch_cvwait:
->  0x7ff80ae793ea <+10>: jae    0x7ff80ae793f4            ; <+20>
    0x7ff80ae793ec <+12>: movq   %rax, %rdi
    0x7ff80ae793ef <+15>: jmp    0x7ff80ae771c5            ; cerror_nocancel
    0x7ff80ae793f4 <+20>: retq
(lldb) bt
* thread #6
  * frame #0: 0x00007ff80ae793ea libsystem_kernel.dylib`__psynch_cvwait + 10
    frame #1: 0x00007ff80aeb3a6f libsystem_pthread.dylib`_pthread_cond_wait + 1249
    frame #2: 0x000000010e84d229 node`uv_cond_wait + 9
    frame #3: 0x000000010dedda58 node`node::TaskQueue<v8::Task>::BlockingPop() + 72
    frame #4: 0x000000010deda9fb node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 379
    frame #5: 0x00007ff80aeb34e1 libsystem_pthread.dylib`_pthread_start + 125
    frame #6: 0x00007ff80aeaef6b libsystem_pthread.dylib`thread_start + 15
(lldb) t 7
* thread #7
    frame #0: 0x00007ff80ae769b6 libsystem_kernel.dylib`semaphore_wait_trap + 10
libsystem_kernel.dylib`semaphore_wait_trap:
->  0x7ff80ae769b6 <+10>: retq
    0x7ff80ae769b7 <+11>: nop

libsystem_kernel.dylib`semaphore_wait_signal_trap:
    0x7ff80ae769b8 <+0>:  movq   %rcx, %r10
    0x7ff80ae769bb <+3>:  movl   $0x1000025, %eax          ; imm = 0x1000025
(lldb) bt
* thread #7
  * frame #0: 0x00007ff80ae769b6 libsystem_kernel.dylib`semaphore_wait_trap + 10
    frame #1: 0x000000010e84d830 node`uv_sem_wait + 16
    frame #2: 0x000000010df4cb63 node`node::inspector::(anonymous namespace)::StartIoThreadMain(void*) + 19
    frame #3: 0x00007ff80aeb34e1 libsystem_pthread.dylib`_pthread_start + 125
    frame #4: 0x00007ff80aeaef6b libsystem_pthread.dylib`thread_start + 15

Moreover, we have tried removing the initialization of the sync manager (in the core App constructor), but it has no effect.

We haven't been able to find any blocking resources using wtfnode or process._getActiveHandles()/process._getActiveRequests().
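For reference, this is roughly what such a handle probe looks like. This standalone sketch uses Node's undocumented process._getActiveHandles()/process._getActiveRequests() internals (output may vary between Node versions), with a plain timer standing in as a visible handle; per this issue, Realm's native threads never appear in these lists, which is why the probe finds nothing:

```javascript
// Probe what keeps the event loop alive via Node's undocumented internals.
// A plain timer is used here as a stand-in for a blocking resource.
const timer = setTimeout(() => {}, 60_000);

const handleNames = process._getActiveHandles().map((h) => h.constructor.name);
console.log("handles:", handleNames); // includes "Timeout" for the timer above
console.log("requests:", process._getActiveRequests().length);

clearTimeout(timer); // release the timer so the process can exit normally
```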

However, calling Realm.clearTestState() might tear things down:

const Realm = require("realm");

let app = new Realm.App("smurf");
Realm.clearTestState();

We tried adding:

process.on("beforeExit", () => console.log("before exit"));
process.on("exit", () => console.log("exit"));

and neither handler fires in this case.

kneth · May 05 '22 09:05

We also created a (skipped) failing test case for this, which should be enabled when this is fixed: https://github.com/realm/realm-js/pull/4556

tomduncalf · May 05 '22 10:05

Has this issue been confirmed to be resolved by #4556?

fronck · Jul 01 '22 12:07

> Has this issue been confirmed to be resolved by https://github.com/realm/realm-js/pull/4556?

The test is still failing, so it doesn't look like it has been resolved.

We are still able to reproduce the issue; see the test in https://github.com/realm/realm-js/pull/4556.

kneth · Sep 14 '22 12:09

How are people using Realm with this issue? I can't even get code to continue to execute after opening Realm.

    // within initialization of a nodejs script
    await self.open('appdevdevenv', appId);
    // code that does useful stuff but never executes
    // ...

    this.open = async (envName, appId) => {
      app = new Realm.App({ id: 'starter-niraa' });
      self.mongodb = app.currentUser.mongoClient('mongodb-atlas');

      self.sdkModels = await self.mongodb.db('appdev').collection('sdkModel').find({});
      console.log(self.sdkModels); // nodejs outputs this, but no further code executes
    };

Edit: the above code now continues execution, although the process still remains open at the end. The only thing I did differently, briefly, was to use Realm.open instead of the mongoClient()/db() style. While using Realm.open, the output noted that a client reset was needed and performed a local discard. I then reverted to the original code (above), and it now continues execution properly. Perhaps there is a bug where execution hangs with new Realm.App and the mongoClient approach if a client reset is needed. Noting this for the next person.

ceumgmt · Nov 27 '22 05:11

> output noted that a client reset was needed

Did it say why a client reset was needed?

kneth · Nov 28 '22 08:11

Is there any progress on this issue? The process doesn't exit after importing the Realm object.

KorigamiK · Oct 31 '23 21:10

Unfortunately not. For our tests we have a brutal method, Realm.clearTestState(), that kills all threads and frees all resources, but using it can lead to corruption of your database. process.exit() seems to be a gentler approach.

kneth · Nov 02 '23 17:11