
Nodemon does not wait for process exit before starting a new process when Node's --inspect is used

Open toverux opened this issue 1 year ago • 7 comments

  • Versions: node@…, npm@…
  • nodemon -v: 2.0.19
  • Operating system/terminal environment (powershell, gitshell, etc): Ubuntu & ZSH
  • Command you ran: nodemon --inspect=9231 test.js

Normal behavior

According to #1508, from version 2.0.0 onwards nodemon waits for the process and its child processes to exit before attempting to restart, whether after a file change or after entering rs in the console.

I have an asynchronous graceful-shutdown routine that calls process.exit() when it's done, but nodemon starts a new process before the current process actually exits.

Here is a repro where things work correctly when running without --inspect:

test.js

process.on('SIGUSR2', () => {
	console.log('Recv. SIGUSR2');

	setTimeout(() => (console.log('Exiting now.'), process.exit(0)), 5000);
});

process.stdin.resume(); // for the script to not finish

The output shows it's fine, nodemon waits (rs was entered):

╭─ ~
╰─ nodemon test.js 
[nodemon] 2.0.19
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node test.js`
rs
Recv. SIGUSR2
Recv. SIGUSR2
Exiting now.
[nodemon] starting `node test.js`

(However, why do I receive SIGUSR2 twice? I had to use process.on instead of process.once because of that; with process.once the script was terminated the second time.)

pstree:

└─zsh
    └─nodemon test.js
        ├─node /home/morgan/.volta/tools/image/packages/nodemon/bin/nodemon test.js
        │   ├─node test.js
        │   │   └─6*[{node}]
        │   └─10*[{node}]
        └─{nodemon}

Actual behavior

When running with --inspect, nodemon does not wait for the process to exit:

test.js: unchanged

The output shows the two processes running at the same time when restarting, with their output mingled:

╭─ ~
╰─ nodemon --inspect=9231 test.js
[nodemon] 2.0.19
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node --inspect=9231 test.js`
Debugger listening on ws://127.0.0.1:9231/55e13b92-4aea-4634-a41d-33ad6488d80b
For help, see: https://nodejs.org/en/docs/inspector
rs
Recv. SIGUSR2
[nodemon] starting `node --inspect=9231 test.js`
Starting inspector on 127.0.0.1:9231 failed: address already in use
Exiting now.

(Also, this time we get SIGUSR2 only once ??)

pstree (a new child process appeared):

└─zsh
    └─nodemon --inspect=9231 test.js
        ├─node /home/morgan/.volta/tools/image/packages/nodemon/bin/nodemon --inspect=9231 ...
        │   ├─sh -c node --inspect=9231 test.js
        │   │   └─node --inspect=9231 test.js
        │   │       └─7*[{node}]
        │   └─10*[{node}]
        └─{nodemon}

It would be nice if this use case could be supported. Having a debugger server always ready to listen is quite practical.

I don't know if it's possible, because it seems that the debugger process wrapping the script is the one killed by SIGUSR2, which triggers the restart immediately while the script process is still running. But then I don't know why Starting inspector on 127.0.0.1:9231 failed: address already in use is thrown by the inspector only from time to time. Using nodemon's --signal SIGINT flag does show a different behavior, but then the script doesn't receive the signal, only the inspector does, which for some reason is not the same as when I press CTRL+C.

Also, I would like to know why SIGUSR2 is sent twice (in a short interval) in the normal case.

(Edit: I just made a test with SIGINT instead of SIGUSR2. This time I receive it three times instead of once. Running with just node test.js does not show this behavior.)
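
For reference, the --signal SIGINT variant mentioned above looks roughly like this (a sketch only; the handler body is the same delayed exit as in test.js, only the signal name changes):

// run with: nodemon --signal SIGINT --inspect=9231 test.js
process.on('SIGINT', () => {
	console.log('Recv. SIGINT');

	setTimeout(() => (console.log('Exiting now.'), process.exit(0)), 5000);
});

process.stdin.resume(); // for the script to not finish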

toverux avatar Aug 13 '22 15:08 toverux

So, you've got a number of variables going on here.

There's a long answer here, but the problem is all the moving parts - plus I'm on a Mac, which doesn't replicate what you're seeing.

It mostly boils down to your script trying to handle the exit process. If you change this for a simple server, do you see the same issue?

require('http').createServer((req, res) => res.end('ok')).listen(8000);

remy avatar Aug 16 '22 08:08 remy

This issue has been automatically marked as idle and stale because it hasn't had any recent activity. It will be automatically closed if no further activity occurs. If you think this is wrong, or the problem still persists, just pop a reply in the comments and @remy will (try!) to follow up. Thank you for contributing <3

github-actions[bot] avatar Aug 30 '22 09:08 github-actions[bot]

I ran into this issue for quite some time and, while I'm sure there's some helpful documentation somewhere, it didn't click for me until I started reading the code. I'm not sure if my issue is the same as yours, but hopefully my trials and tribulations may benefit you.

TL;DR: sh is your enemy. It captures kill signals and gets in the way of signal delivery. Adjust your entrypoint(s) so that nodemon uses forking instead of spawning and, ultimately, no instances of sh appear in the part of your process tree concerning your node app. This ensures there is no intermediary between nodemon and your target entrypoint, which enables nodemon to properly wait for your code to exit.

IN MY CASE, the issue came down to how nodemon starts the target process: forking vs. spawning. You can see which method nodemon chooses by enabling verbose output; when verbose is enabled, nodemon emits a message right after "starting `<your script/process>`" stating either "forking" or "spawning". With spawning, nodemon invokes sh to execute your target; with forking, nodemon executes the target directly and the child process becomes a direct descendant of nodemon.

During my tests, the existence of sh processes in the process tree unequivocally caused issues with kill signals getting from point A all the way to point B; I was unable to get nodemon to wait for my node app to exit in any circumstance where an instance of sh existed in the (relevant) process tree. My belief is that when nodemon spawns, it ends up tracking the status of sh rather than node (node being a child of sh rather than of nodemon); when nodemon sends a restart signal, sh seems to forward the signal to its child and then exit immediately without waiting on that child. Nodemon appears to get the feedback it's waiting for as soon as it sees sh exit, and continues with the restart completely independently of the actual node process exiting. When nodemon forks node, there is no intermediary sh process to gum up the works: nodemon tracks node directly, which lets it properly wait for node's exit.
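
If you want to check which situation you're in from inside the app itself, something like the rough sketch below works on Linux/macOS (the file name parent-check.js is just an example, not anything nodemon provides; it only shells out to ps to print the command of the parent process):

// parent-check.js: a rough sketch that asks the OS what this process's parent is.
// With forking, the parent should be nodemon's own node process ("node");
// with spawning, it will be "sh".
const { execSync } = require('child_process');

const parentCmd = execSync(`ps -o comm= -p ${process.ppid}`).toString().trim();
console.log(`pid ${process.pid} was started by: ${parentCmd}`);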

The interaction of processes certainly isn't my expertise, so forgive me for "stating the obvious" or having an unnecessarily "hacky" approach to solving the problem. There's probably a simpler way to do this. Here's what worked for me, though:

I run my Node app as a Docker container using Docker Compose. As many of the guides online suggest, my initial Docker config used npm as the entrypoint. It worked fine, until I started running into restart issues with nodemon. I found that minimizing arguments and calling nodemon's binary directly got rid of the sh instances. Adjusting my docker-compose.yml to use command: [ "node_modules/.bin/nodemon", "./dist/index.js" ] (as opposed to npm start debug with nodemon ./dist/index.js as the scripts entry in package.json) resulted in forking and the behavior I expected. For all of the nodemon options, I created a nodemon.json file. I control the node options using the NODE_OPTIONS environment variable, which I declare in nodemon.json.

docker-compose.yml

api:
    image: "node:16"
    user: "node"
    working_dir: /home/node/app
    env_file:
        - ../global.env
    environment:
        NODE_ENV: production
        NETWORK_ACCESS: "internal"
    volumes:
        - ./assets:/home/node/assets
        - ./conf:/home/node/conf
        - ./api:/home/node/app
        - ./lib:/home/node/lib
    ports:
        - "9230:9230"
    command: 
        - "node_modules/.bin/nodemon"
        - "./dist/index.js"
    restart: unless-stopped

nodemon.json

{
  "restartable": "false",
  "delay": 2500,
  "verbose": true,
  "watch": [".env", "package.json", "dist"],
  "watchOptions": {},
  "env": {
    "NODE_OPTIONS": "--inspect-brk=0.0.0.0:9230"
  },
  "ext": "js,json"
}

Note: I'm running node in a container that has multiple layers of security between it and the public internet. I wouldn't recommend using --inspect-brk=0.0.0.0 in production, and certainly not on anything exposed to the internet.

With this setup, not only does nodemon now properly wait for my node app to exit, but I've also gained the ability to send kill signals (docker-compose kill -s <signal>) to my container and restart node without having to restart the entire container.
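
For example (the api service name comes from the compose file above; which signal makes sense depends on what your app and nodemon are set up to handle, so treat this as illustrative only):

docker-compose kill -s SIGUSR2 api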

Hope this helps.

dever23b avatar Aug 31 '22 21:08 dever23b

I ran into the same issue with Nodemon 2.0.19. When I saved my changes, it gave me the error "Error: listen EADDRINUSE: address already in use".

Then I found that files named localhost: followed by a port number had been generated, one for each process (in my case I am using Express as my backend router), with the different port numbers the backend router was listening on. So I just deleted these files, restarted nodemon, and it worked fine.


R4JxF60 avatar Sep 12 '22 07:09 R4JxF60

I'm having the exact same issue with the --inspect flag, and it started out of nowhere a few days ago.

  • Operating system/terminal: Ubuntu 20.04.5 LTS & bash (also tried zsh)
  • node -v: v16.17.0
  • nodemon -v: 2.0.19

package.json script: "dev": "nodemon --inspect ./main/index.js"

Starting server:

[0] [nodemon] 2.0.19
[0] [nodemon] to restart at any time, enter `rs`
[0] [nodemon] watching path(s): *.*
[0] [nodemon] watching extensions: js,mjs,json
[0] [nodemon] starting `node --inspect ./main/index.js`
[0] Debugger listening on ws://127.0.0.1:9229/3a258cc7-734f-438b-9f89-7fb7af1bee94

Whenever I update any file:

[0] [nodemon] restarting due to changes...
[0] [nodemon] starting `node --inspect ./main/index.js`
[0] Starting inspector on 127.0.0.1:9229 failed: address already in use

As a result, my codebase is not updated until I shut the server down and restart it manually.

Edit: After adding a nodemon.json file to my project with some hacks, it SOMETIMES (maybe 20% of the time) restarts the application as expected. I'm guessing some kind of race condition makes the debugger die before the app starts up again, since this is completely random.

{
  "verbose": true,
  "events": {
    "restart": "sh -c 'lsof -i :${PORT:-9229} -t | xargs kill'"
  }
}

Lipemenezes avatar Sep 13 '22 15:09 Lipemenezes

@R4JxF60 just as clarification - nodemon doesn't create any files beyond the config file created in ~/.config/configstore - something in your pipeline is making these files.

@Lipemenezes that nodemon.json file is useful, I'm certain it's a race condition too - nodemon tries hard to be agnostic, so when there's so many layers working at once, I can see how it's getting stuck.

If I can find the time, I'll add this to the FAQ (though not super sure how many people look at it - I'll have to fix that somehow). Otherwise you're welcome to send a PR to add this to the FAQ, I'm sure people will find it useful.

remy avatar Sep 16 '22 12:09 remy

This issue has been automatically marked as idle and stale because it hasn't had any recent activity. It will be automatically closed if no further activity occurs. If you think this is wrong, or the problem still persists, just pop a reply in the comments and @remy will (try!) to follow up. Thank you for contributing <3

github-actions[bot] avatar Oct 05 '22 15:10 github-actions[bot]

Automatically closing this issue due to lack of activity

github-actions[bot] avatar Nov 04 '22 16:11 github-actions[bot]

Definitely not a fix, but for others who are still struggling to get this to work: you can also add a sleep to the configuration, which gives the process time to actually exit before starting up again.

{ "verbose": true, "events": { "restart" : "sleep 1 && echo 'Restarting...'" } }

Not the most reliable way, but it's fixed the majority of the issues I've had with this race condition.

thebigbluebox avatar Mar 07 '23 22:03 thebigbluebox

Executing a sleep on the restart event doesn't seem to do the trick; it looks like the restart hook doesn't block the actual restart. However, I was able to get a serviceable workaround by prefixing the --exec command with sleep 1;.

E.g.,

{
  "verbose": true,
  "ext": "ts,json",
  "execMap": {
    "ts": "sleep 1; node --inspect-brk --require ts-node/register"
  }
}

bheiskell avatar Sep 03 '23 17:09 bheiskell