increment_var does not increment when restarting/reloading app
What's going wrong?
The increment_var option works correctly only when starting the app for the first time. When restarting or reloading, it does not work, so all instances get the same initial value in the env variable.
How could we reproduce this issue?
- Set a custom env variable with initial value 1.
- Set increment_var to the name of the custom env variable.
- Make sure the exec_mode is cluster.
- Log the value of the custom env variable in the application.
- Start the application. Each instance should log a different value.
- Restart or reload the application. Each instance will log the same initial value.
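For reference, a minimal setup along these lines reproduces the steps above (the app name, script, and variable name are just illustrative, not taken from the report below):

```js
// ecosystem.config.js: minimal repro sketch; names and values are illustrative
module.exports = {
  apps: [
    {
      name: 'increment-repro',
      script: './index.js',         // index.js only logs the variable, e.g.
                                    // console.log('MY_COUNTER =', process.env.MY_COUNTER);
      instances: 4,
      exec_mode: 'cluster',         // the report below uses cluster mode
      increment_var: 'MY_COUNTER',  // custom env variable that should be incremented per instance
      env: {
        MY_COUNTER: 1               // initial value
      }
    }
  ]
};
```

On pm2 start ecosystem.config.js each instance should log a different value; after pm2 restart (or pm2 reload) they all log the initial value again.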
Supporting information
--- PM2 report ----------------------------------------------------------------
Date : Wed Nov 13 2019 12:55:41 GMT+0000 (Coordinated Universal Time)
===============================================================================
--- Daemon -------------------------------------------------
pm2d version : 4.1.2
node version : 12.13.0
node path : not found
argv : /usr/bin/node,/usr/lib/node_modules/pm2/lib/Daemon.js
argv0 : node
user : stimme
uid : 1000
gid : 1000
uptime : 4788min
===============================================================================
--- CLI ----------------------------------------------------
local pm2 : 4.1.2
node version : 12.13.0
node path : /usr/bin/pm2
argv : /usr/bin/node,/usr/bin/pm2,report
argv0 : node
user : stimme
uid : 1000
gid : 1000
===============================================================================
--- System info --------------------------------------------
arch : x64
platform : linux
type : Linux
cpus : Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
cpus nb : 2
freemem : 724017152
totalmem : 4136554496
home : /home/stimme
===============================================================================
--- PM2 list -----------------------------------------------
┌────┬─────────────────────────┬─────────┬─────────┬──────────┬────────┬──────┬──────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├────┼─────────────────────────┼─────────┼─────────┼──────────┼────────┼──────┼──────────┼──────────┼──────────┼──────────┼──────────┤
│ 3 │ Imaging API │ 1.0.0 │ cluster │ 19522 │ 2h │ 2 │ online │ 0.5% │ 73.5mb │ stimme │ disabled │
│ 4 │ Imaging API │ 1.0.0 │ cluster │ 19538 │ 2h │ 2 │ online │ 0.5% │ 74.4mb │ stimme │ disabled │
│ 5 │ Safir Core API │ 1.0.0 │ cluster │ 29155 │ 21s │ 2 │ online │ 0.5% │ 139.9mb │ stimme │ disabled │
│ 6 │ Safir Core API │ 1.0.0 │ cluster │ 29172 │ 18s │ 2 │ online │ 0.5% │ 132.8mb │ stimme │ disabled │
└────┴─────────────────────────┴─────────┴─────────┴──────────┴────────┴──────┴──────────┴──────────┴──────────┴──────────┴──────────┘
Module
┌────┬───────────────────────────────────────┬────────────────────┬───────┬──────────┬──────┬──────────┬──────────┬──────────┐
│ id │ module │ version │ pid │ status │ ↺ │ cpu │ mem │ user │
├────┼───────────────────────────────────────┼────────────────────┼───────┼──────────┼──────┼──────────┼──────────┼──────────┤
│ 0 │ pm2-logrotate │ 2.7.0 │ 1509 │ online │ 0 │ 0% │ 78.9mb │ stimme │
└────┴───────────────────────────────────────┴────────────────────┴───────┴──────────┴──────┴──────────┴──────────┴──────────┘
===============================================================================
--- Daemon logs --------------------------------------------
/home/stimme/.pm2/pm2.log last 20 lines:
PM2 | 2019-11-13T12:45:22: PM2 log: App [Safir Core API:_old_5] exited with code [0] via signal [SIGINT]
PM2 | 2019-11-13T12:45:22: PM2 log: pid=28104 msg=process killed
PM2 | 2019-11-13T12:45:22: PM2 log: App [Safir Core API:6] starting in -cluster mode-
PM2 | 2019-11-13T12:45:22: PM2 log: App [Safir Core API:6] online
PM2 | 2019-11-13T12:45:25: PM2 log: Stopping app:Safir Core API id:_old_6
PM2 | 2019-11-13T12:45:25: PM2 log: App name:Safir Core API id:_old_6 disconnected
PM2 | 2019-11-13T12:45:25: PM2 log: App [Safir Core API:_old_6] exited with code [0] via signal [SIGINT]
PM2 | 2019-11-13T12:45:25: PM2 log: pid=28111 msg=process killed
PM2 | 2019-11-13T12:55:19: PM2 log: App [Safir Core API:5] starting in -cluster mode-
PM2 | 2019-11-13T12:55:19: PM2 log: App [Safir Core API:5] online
PM2 | 2019-11-13T12:55:22: PM2 log: -reload- New worker listening
PM2 | 2019-11-13T12:55:22: PM2 log: Stopping app:Safir Core API id:_old_5
PM2 | 2019-11-13T12:55:22: PM2 log: App name:Safir Core API id:_old_5 disconnected
PM2 | 2019-11-13T12:55:22: PM2 log: App [Safir Core API:_old_5] exited with code [0] via signal [SIGINT]
PM2 | 2019-11-13T12:55:22: PM2 log: pid=28362 msg=process killed
PM2 | 2019-11-13T12:55:22: PM2 log: App [Safir Core API:6] starting in -cluster mode-
PM2 | 2019-11-13T12:55:22: PM2 log: App [Safir Core API:6] online
PM2 | 2019-11-13T12:55:25: PM2 log: Stopping app:Safir Core API id:_old_6
PM2 | 2019-11-13T12:55:25: PM2 log: App [Safir Core API:_old_6] exited with code [0] via signal [SIGINT]
PM2 | 2019-11-13T12:55:25: PM2 log: pid=28379 msg=process killed
I seem to have the same issue, but when running in fork mode. My PM2 version is 4.1.2.
I'm using a file called pm2start.json to start the app; its contents are:
```json
{
  "name": "app-name",
  "script": "./index.js",
  "instances": 4,
  "increment_var": "API_SERVER_PORT",
  "exec_mode": "fork",
  "env": {
    "API_SERVER_PORT": 6060,
    "NODE_ENV": "myEnvironment"
  }
}
```
index.js contains the line console.log('STARTED WITH PORT: ', process.env.API_SERVER_PORT);
After deleting ~/.pm2, the log states:
0|app-name | STARTED WITH PORT: 6060
1|app-name | STARTED WITH PORT: 6061
2|app-name | STARTED WITH PORT: 6062
3|app-name | STARTED WITH PORT: 6063
After doing pm2 stop pm2start.json and then pm2 start pm2start.json again, the log says:
2|app-name | STARTED WITH PORT: 6062
2|app-name | STARTED WITH PORT: 6060
0|app-name | STARTED WITH PORT: 6060
0|app-name | STARTED WITH PORT: 6060
0|app-name | STARTED WITH PORT: 6060
0|app-name | STARTED WITH PORT: 6060
1|app-name | STARTED WITH PORT: 6061
1|app-name | STARTED WITH PORT: 6060
1|app-name | STARTED WITH PORT: 6060
1|app-name | STARTED WITH PORT: 6060
3|app-name | STARTED WITH PORT: 6063
3|app-name | STARTED WITH PORT: 6060
3|app-name | STARTED WITH PORT: 6060
3|app-name | STARTED WITH PORT: 6060
This shows that the port numbers are correct on the first start, but on both manual restarts and the subsequent automatic restarts, all instances get the same port, 6060. (The app crashes because, after logging, it tries to listen on that port, which fails when all four instances try to listen on the same port.)
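For context, an index.js along these lines would show exactly the behavior described; this is only a sketch of the setup, not the commenter's actual file:

```js
// index.js: sketch of an app that logs its port and then listens on it
const http = require('http');

console.log('STARTED WITH PORT: ', process.env.API_SERVER_PORT);

http.createServer((req, res) => res.end('ok'))
  // crashes with EADDRINUSE when several instances receive the same port
  .listen(parseInt(process.env.API_SERVER_PORT, 10));
```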
Can someone help with this? It's still happening in PM2 version 4.2.0
Still happening in version 4.2.1
Same issue; I'm using 4.2.1 in fork mode.
Same issue with version 4.2.3. My nasty workaround is deleting and starting processes every time - but, please, fix this! ...or, at least, it may be time to write about this limitation in the docs.
I'm also having the same issue and still using the same workaround. Very nasty!
Same for me :(
Same here!
Me too 😔
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Ping to prevent this issue from being closed for inactivity!
😑
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
😒
still the same ???
Yes, now I must pm2 kill and then pm2 start every time.
Unfortunately, yes.
same problem...
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Ping to prevent this issue from being closed for inactivity
Pinging this issue. It happens in fork mode:
- On start, each forked instance gets an incremented increment_var value (PORT in the example).
- On restart, all forked instances get the same increment_var value (PORT in the example; all get 33010, which leads to all but one instance erroring with EADDRINUSE).
```js
const path = require('path');
const os = require('os');

const packageJson = require(path.resolve(__dirname, 'package.json'));

const PORT = 33010;
const WARPAPP_PM_FORK_PORT_START = PORT;
const WARPAPP_PM_FORK_PORT_COUNT = Math.max(1, os.cpus().length - 1);

module.exports = {
  apps: [
    {
      name: packageJson.name,
      script: path.resolve(__dirname, 'app/app.js'),
      autorestart: true,
      watch: false,
      max_memory_restart: '128M',
      instance_var: 'WARPAPP_PM_FORK_ID',
      wait_ready: true,
      listen_timeout: 3000,
      kill_timeout: 3000,
      instances: WARPAPP_PM_FORK_PORT_COUNT,
      exec_mode: 'fork',
      increment_var: 'PORT',
      env: {
        DEBUG: 'warp*',
        NODE_ENV: 'development',
        WARP_FQDN_APP: 'xxx.devserver.tld',
        WARP_FQDN_CDN: 'xxx.devcdn.tld',
        WARP_VERSION_CDN_PRECOMPILED: 'vp001',
        PORT,
        WARPAPP_PM_FORK_PORT_START,
        WARPAPP_PM_FORK_PORT_COUNT
      },
      env_development: {
        DEBUG: 'warp*',
        NODE_ENV: 'development',
        WARP_FQDN_APP: 'xxx.devserver.tld',
        WARP_FQDN_CDN: 'xxx.devcdn.tld',
        WARP_VERSION_CDN_PRECOMPILED: 'vp001',
        PORT,
        WARPAPP_PM_FORK_PORT_START,
        WARPAPP_PM_FORK_PORT_COUNT
      },
      env_testing: {
        NODE_ENV: 'testing',
        WARP_FQDN_APP: 'xxx.devserver.tld',
        WARP_FQDN_CDN: 'xxx.devcdn.tld',
        WARP_VERSION_CDN_PRECOMPILED: 'vp001',
        PORT,
        WARPAPP_PM_FORK_PORT_START,
        WARPAPP_PM_FORK_PORT_COUNT
      },
      env_staging: {
        NODE_ENV: 'production',
        WARP_FQDN_APP: 'stage.finalapp.tld',
        WARP_FQDN_CDN: 'xxxxxx.cloudfront.net',
        WARP_VERSION_CDN_PRECOMPILED: 'vp001',
        PORT,
        WARPAPP_PM_FORK_PORT_START,
        WARPAPP_PM_FORK_PORT_COUNT
      },
      env_production: {
        NODE_ENV: 'production',
        WARP_FQDN_APP: 'www.finalapp.tld',
        WARP_FQDN_CDN: 'xxxxxx.cloudfront.net',
        WARP_VERSION_CDN_PRECOMPILED: 'vp001',
        PORT,
        WARPAPP_PM_FORK_PORT_START,
        WARPAPP_PM_FORK_PORT_COUNT
      },
      error_file: path.join(__dirname, '../../logs/pm2/pm2-err.log'),
      out_file: path.join(__dirname, '../../logs/pm2/pm2-out.log'),
      log_file: path.join(__dirname, '../../logs/pm2/pm2-combined.log'),
      time: true
    }
  ]
};
```
Note that:
- On start, each forked instance gets an incremented instance_var value (WARPAPP_PM_FORK_ID in the example).
- On restart, each forked instance also gets an incremented instance_var value (WARPAPP_PM_FORK_ID in the example), which allows a workaround in the app logic where the port is set by arithmetic on the instance_var value rather than consumed from the increment_var directly, i.e.:
```js
// do not consume port number directly from env
// const PORT = normalizeEnvNumber(process.env.PORT || 33000);
// instead ->
const WARPAPP_PM_FORK_ID = normalizeEnvNumber(process.env.WARPAPP_PM_FORK_ID || 0);
const WARPAPP_PM_FORK_PORT_START = normalizeEnvNumber(process.env.WARPAPP_PM_FORK_PORT_START || 0);
const WARPAPP_PM_FORK_PORT_COUNT = normalizeEnvNumber(process.env.WARPAPP_PM_FORK_PORT_COUNT || 0);

// set PORT from PORT_START plus FORK_ID
const PORT = WARPAPP_PM_FORK_PORT_START + WARPAPP_PM_FORK_ID;
```
I confirm the existence of this problem; it has not been fixed for more than a year. Fortunately, there is a simple, logical workaround that does not require deleting and re-creating the processes:
- Remove the increment_var parameter so that the first run doesn't break;
- Add the instance_var: "INSTANCE_ID" setting (the name is just an example);
- In your code, at the point where you start listening, derive the port from the instance, for example http.listen(parseInt(process.env.PORT) + parseInt(process.env.INSTANCE_ID));
This workaround is probably the most logical, because knowing the initial port number and the current instance index, we can always work out the current increment offset.
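Putting that together, a sketch of the suggested workaround might look like this (INSTANCE_ID, the app name, and the base port 6060 are example values taken from the comments above, not a definitive recipe):

```js
// ecosystem.config.js: workaround sketch -- no increment_var, derive the port per instance instead
module.exports = {
  apps: [
    {
      name: 'app-name',
      script: './index.js',
      instances: 4,
      exec_mode: 'fork',
      instance_var: 'INSTANCE_ID', // 0, 1, 2, ... assigned per instance, also after restart/reload
      env: {
        PORT: 6060                 // base port; each instance adds its INSTANCE_ID
      }
    }
  ]
};
```

```js
// index.js: compute the actual port from the base PORT plus the instance index
const http = require('http');

const port = parseInt(process.env.PORT, 10) + parseInt(process.env.INSTANCE_ID, 10);

http.createServer((req, res) => res.end('ok')).listen(port);
console.log('listening on', port);
```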
I can't reproduce this in fork mode on PM2 4.5.6.
I just updated to 5.1.1 and the issue is fixed!
It works after a pm2 kill.
@NinoSkopac I'm using 5.2.0 and I still have this issue.
Actually, something went wrong when using NODE_APP_INSTANCE: after a restart, it started counting from 1 or 2 instead of 0. So I moved to using increment_var, and now I've got this issue of all instances having the same '0' initial value after restart. Deleting and re-creating the app usually fixes it, but a couple of times that didn't work and I had to delete and re-create the app a second time :-/
This is a serious bug that needs to be fixed; it has already affected our long-running service multiple times. I just found this issue, which has been open for almost 3 years. I really regret using pm2 in the first place.
Seems like @Unitech is on lifelong vacation.
This bug is killing us. @WillianAgostini
Almost 4 years later... still the same thing. How is there an actual product built on this (http://pm2.keymetrics.io), considering the state of affairs? 🤔
It is really a tier-0 bug.