Postgres.app 2.4.1 doesn't start the PostgreSQL server correctly on Big Sur (11.1)
See recording: https://recordit.co/OmKakhagGo
It could be that PostgreSQL is recovering from a crash or non-graceful shutdown. If a big transaction was in progress, recovery may take a while.
Postgres.app should probably show a better message in that case, but PostgreSQL should eventually start.
Checking the server log would be a good idea.
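To check the server log, something like the sketch below can be used. The path is an assumption: Postgres.app usually keeps its data directory under `~/Library/Application Support/Postgres/var-XX` (where XX is the PostgreSQL major version, 13 assumed here) with a `postgresql.log` inside it; adjust to what the app's settings show.

```shell
# Tail the Postgres.app server log.
# Assumed path: default Postgres.app data directory for PostgreSQL 13.
LOGFILE="$HOME/Library/Application Support/Postgres/var-13/postgresql.log"

if [ -f "$LOGFILE" ]; then
    tail -n 50 "$LOGFILE"
else
    echo "log not found at $LOGFILE -- check the data directory in Postgres.app's settings"
fi
```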
2020-12-21 12:42:16.966 CET [3201] LOG: listening on IPv6 address "::1", port 5432
2020-12-21 12:42:16.967 CET [3201] LOG: listening on IPv4 address "127.0.0.1", port 5432
2020-12-21 12:42:16.978 CET [3201] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-12-21 12:42:17.398 CET [3215] LOG: database system was interrupted; last known up at 2020-12-19 12:20:16 CET
2020-12-21 12:42:36.353 CET [3406] FATAL: the database system is starting up
2020-12-21 12:42:38.608 CET [3429] FATAL: the database system is starting up
2020-12-21 12:42:39.164 CET [3451] FATAL: the database system is starting up
2020-12-21 12:42:39.250 CET [3452] FATAL: the database system is starting up
2020-12-21 12:42:41.115 CET [3201] LOG: received fast shutdown request
2020-12-21 12:43:01.812 CET [3686] FATAL: the database system is shutting down
2020-12-21 12:43:09.297 CET [3774] FATAL: the database system is shutting down
2020-12-21 12:43:09.978 CET [3775] FATAL: the database system is shutting down
2020-12-21 12:43:10.088 CET [3776] FATAL: the database system is shutting down
2020-12-21 12:43:19.772 CET [3883] FATAL: the database system is shutting down
2020-12-21 12:43:20.393 CET [3201] LOG: startup process (PID 3215) was terminated by signal 9: Killed: 9
2020-12-21 12:43:20.393 CET [3201] LOG: terminating any other active server processes
2020-12-21 12:43:20.393 CET [3201] LOG: abnormal database system shutdown
2020-12-21 12:43:20.404 CET [3201] LOG: database system is shut down
2020-12-21 12:43:57.024 CET [4286] LOG: listening on IPv6 address "::1", port 5432
2020-12-21 12:43:57.025 CET [4286] LOG: listening on IPv4 address "127.0.0.1", port 5432
2020-12-21 12:43:57.031 CET [4286] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-12-21 12:43:57.377 CET [4296] LOG: database system was interrupted; last known up at 2020-12-19 12:20:16 CET
2020-12-21 12:44:09.413 CET [4425] FATAL: the database system is starting up
2020-12-21 12:44:12.793 CET [4455] FATAL: the database system is starting up
2020-12-21 12:44:15.145 CET [4490] FATAL: the database system is starting up
2020-12-21 12:44:16.779 CET [4286] LOG: received fast shutdown request
2020-12-21 12:49:40.511 CET [7791] LOG: listening on IPv6 address "::1", port 5432
2020-12-21 12:49:40.511 CET [7791] LOG: listening on IPv4 address "127.0.0.1", port 5432
2020-12-21 12:49:40.521 CET [7791] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-12-21 12:49:40.965 CET [7792] LOG: database system was interrupted; last known up at 2020-12-19 12:20:16 CET
2020-12-21 12:50:00.198 CET [7835] FATAL: the database system is starting up
2020-12-21 12:50:21.321 CET [7922] FATAL: the database system is starting up
2020-12-21 12:50:36.893 CET [7967] FATAL: the database system is starting up
2020-12-21 12:50:40.635 CET [7791] LOG: received fast shutdown request
2020-12-21 12:50:40.662 CET [7792] LOG: could not open file "./base/16385/PG_VERSION": Interrupted system call
2020-12-21 12:50:42.728 CET [8030] FATAL: the database system is shutting down
2020-12-21 12:50:47.953 CET [7792] LOG: database system was not properly shut down; automatic recovery in progress
2020-12-21 12:50:48.006 CET [7792] LOG: redo starts at 0/7D6FB088
2020-12-21 12:50:48.013 CET [8032] LOG: shutting down
2020-12-21 12:50:48.106 CET [7791] LOG: database system is shut down
2020-12-21 12:51:56.554 CET [8461] LOG: listening on IPv6 address "::1", port 5432
2020-12-21 12:51:56.554 CET [8461] LOG: listening on IPv4 address "127.0.0.1", port 5432
2020-12-21 12:51:56.560 CET [8461] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-12-21 12:51:56.819 CET [8462] LOG: database system was shut down in recovery at 2020-12-21 12:50:48 CET
2020-12-21 12:51:56.824 CET [8462] LOG: database system was not properly shut down; automatic recovery in progress
2020-12-21 12:51:56.852 CET [8462] LOG: redo starts at 0/7D6FB088
2020-12-21 12:51:56.852 CET [8462] LOG: invalid record length at 0/7D6FB168: wanted 24, got 0
2020-12-21 12:51:56.852 CET [8462] LOG: redo done at 0/7D6FB130
2020-12-21 12:51:57.019 CET [8461] LOG: database system is ready to accept connections
That's the log. I killed the process a couple of times because it was hanging. I don't understand what's happening, as I didn't have this issue on macOS Catalina. It started after I upgraded to Big Sur.
Any hint on how to fix?
Thx
It looks like it recovered at the end -- when it says "database system is ready to accept connections", startup is complete.
I'm not sure why PostgreSQL took a long time to start up. It could be related to #598, where PostgreSQL doesn't shut down cleanly when the computer shuts down. Then it would have to attempt recovery every time you start the server.
But it could also be that the disk is slow, or some kind of data corruption. Is the data directory in the default location? If you have a data directory on a network drive, and multiple PostgreSQL servers trying to access it, that could cause data corruption.
Finally, it could also be that macOS is just stuck verifying some code signing certificate, like it so often does since Catalina. Turning off Wifi would fix this temporarily.
It looks like it recovered at the end -- when it says "ready to accept connections", startup is complete.
Yep, after 4-5 attempts it completes startup.
I'm not sure why PostgreSQL took a long time to start up. It could be related to #598, where PostgreSQL doesn't shut down cleanly when the computer shuts down. Then it would have to attempt recovery every time you start the server.
It seems the pid file (postmaster.pid) is correctly removed on shutdown.
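One way to verify this is a quick check for a leftover postmaster.pid in the data directory. This is a sketch under the assumption that the data directory is the Postgres.app default for PostgreSQL 13; the path should be adjusted to whatever the app's settings show.

```shell
# Sketch: check whether a stale postmaster.pid was left behind after an
# unclean shutdown. PostgreSQL writes the server PID on the first line.
check_stale_pid() {
    datadir="$1"
    pidfile="$datadir/postmaster.pid"
    if [ ! -f "$pidfile" ]; then
        echo "no pid file -- server was shut down cleanly or is not running"
        return 0
    fi
    pid=$(head -n 1 "$pidfile")
    if kill -0 "$pid" 2>/dev/null; then
        echo "pid file present and process $pid is alive -- server is running"
    else
        echo "stale pid file: process $pid is gone"
    fi
}

# Assumed default Postgres.app data directory:
check_stale_pid "$HOME/Library/Application Support/Postgres/var-13"
```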
But it could also be that the disk is slow, or some kind of data corruption. Is the data directory in the default location? If you have a data directory on a network drive, and multiple PostgreSQL servers trying to access it, that could cause data corruption.
Yes, it's using the default data directory.
Finally, it could also be that macOS is just stuck verifying some code signing certificate, like it so often does since Catalina. Turning off Wifi would fix this temporarily.
I'll try to disable wifi and report back.
Finally, it could also be that macOS is just stuck verifying some code signing certificate, like it so often does since Catalina. Turning off Wifi would fix this temporarily. I'll try to disable wifi and report back
I just realised this is actually pretty unlikely, since it should only cause a slowdown the first time, not always.
I just realised this is actually pretty unlikely, since it should only cause a slowdown the first time, not always.
OK, honestly I've no idea why it takes 4-5 attempts before it's up and running. I could try uninstalling and reinstalling the app.
Yep, after 4-5 attempts it completes startup.
I wonder why the earlier attempts were canceled. The "FATAL: the database system is starting up / shutting down" messages seem to reflect attempts to connect from a client (e.g. Postgres.app refreshing the database list, or Navicat in your video) and can thus be ignored. Could the shutdown requests and terminations have been caused by quitting Postgres.app while the server was performing its recovery?
But yes, we could extend the ServerStatus enum with a RunningAndAcceptingConnections state, which may be reached after a while (recovery) or not at all in some cases (e.g. standby server configurations). Detecting that the server is recovering would be hard to get right - a hint to review the log should be enough if that state is not reached, imho.
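For what it's worth, PostgreSQL's own pg_isready tool already distinguishes these states via its exit code (0 = accepting connections, 1 = rejecting, e.g. during startup or recovery, 2 = no response, 3 = no attempt made). A minimal sketch of mapping those codes to the states discussed above, assuming the server listens on localhost:5432 as in the log:

```shell
# Map pg_isready exit codes to human-readable server states.
describe_status() {
    case "$1" in
        0) echo "running and accepting connections" ;;
        1) echo "running but rejecting connections (startup/recovery?)" ;;
        2) echo "no response -- server not reachable" ;;
        *) echo "check not attempted (bad parameters?)" ;;
    esac
}

# Only probe if pg_isready is installed (it ships with PostgreSQL).
if command -v pg_isready >/dev/null 2>&1; then
    pg_isready -q -h localhost -p 5432
    describe_status $?
fi
```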
Can you reproduce the long / multiple recovery issue?
I have it running fine on Big Sur (lots of other stuff isn't working; I don't recommend anyone else succumb to the endless, incessant nagging to upgrade... but PostgreSQL at least works). However, after changing my default user passwords, the app no longer shows any databases in its window. I can still get to them in psql, however.