Cronicle
Cronicle backup keeps looping the UI with "Waiting for master" message.
Hello. I just want to check with everyone whether the issue I'm currently having is normal for a multi-cluster setup, or whether there's a misconfiguration on my end.
Summary
The Cronicle backup server's UI loops with a "Waiting for master" message when the server is not in primary mode.
Steps to reproduce the problem
- Set up a multi-cluster environment with node 1 and node 2.
- Assign either node 1 or node 2 as the master.
- Access the "backup" server's UI.
Your Setup
Two VMs configured as a multi-clustered setup with NFS.
Operating system and version?
Ubuntu 18.04
Node.js version?
16.15.0-1nodesource1
Cronicle software version?
0.9.7
Are you using a multi-server setup, or just a single server?
Multi-server setup.
Are you using the filesystem as back-end storage, or S3/Couchbase?
NFS
Can you reproduce the crash consistently?
Yes
Log Excerpts
There isn't much in the Cronicle.log file besides the UI access log from the client.
[1653959101.075][2022-05-31 01:05:01][dgsdtstsch02][81312][Cronicle][debug][5][New socket.io client connected: 2-MiMtdIkZoLvsbEAAAH (IP: 10.81.42.185)][]
[1653959101.411][2022-05-31 01:05:01][dgsdtstsch02][81312][Cronicle][debug][4][Socket client 2-MiMtdIkZoLvsbEAAAH has authenticated via user session (IP: 10.81.42.185)][]
[1653959101.798][2022-05-31 01:05:01][dgsdtstsch02][81312][Cronicle][debug][4][Socket client 2-MiMtdIkZoLvsbEAAAH has authenticated via user session (IP: 10.81.42.185)][]
[1653959103.026][2022-05-31 01:05:03][dgsdtstsch02][81312][Cronicle][debug][5][Socket.io client disconnected: 2-MiMtdIkZoLvsbEAAAH (IP: 10.81.42.185)][]
[1653959104.077][2022-05-31 01:05:04][dgsdtstsch02][81312][Cronicle][debug][5][New socket.io client connected: KTwxMMEPmeybeJzLAAAJ (IP: 10.81.42.185)][]
[1653959104.425][2022-05-31 01:05:04][dgsdtstsch02][81312][Cronicle][debug][4][Socket client KTwxMMEPmeybeJzLAAAJ has authenticated via user session (IP: 10.81.42.185)][]
[1653959104.821][2022-05-31 01:05:04][dgsdtstsch02][81312][Cronicle][debug][4][Socket client KTwxMMEPmeybeJzLAAAJ has authenticated via user session (IP: 10.81.42.185)][]
[1653959106.028][2022-05-31 01:05:06][dgsdtstsch02][81312][Cronicle][debug][5][Socket.io client disconnected: KTwxMMEPmeybeJzLAAAJ (IP: 10.81.42.185)][]
Did you add that backup server to the server list (in the primary server's UI)? If it's looping on "Waiting for master", that means either the server name doesn't match the master group, or the server doesn't have access to the data folder. This often happens when migrating from one server to another without updating the configs.
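For context on the "server name is not matching master group" case: Cronicle decides which servers are eligible to become master by matching each server's hostname against the master group's hostname-match regular expression. A quick way to sanity-check your hostnames is a sketch like the one below (the group regexp shown is a hypothetical example, not taken from any real setup):

```python
import re

def matches_group(hostname, group_regexp):
    """Return True if a hostname matches a server group's
    hostname-match regular expression (Cronicle-style check)."""
    return bool(re.search(group_regexp, hostname))

# Hypothetical master-group regexp meant to cover both nodes:
group_regexp = r"^(cronicle-primary|cronicle-secondary)$"

print(matches_group("cronicle-secondary", group_regexp))  # True
print(matches_group("dgsdtstsch02", group_regexp))        # False
```

If the secondary's actual hostname fails this kind of check against the master group's regexp, it will never promote itself and the UI keeps waiting.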
@mikeTWC1984
Hello Mike, thanks for replying.
Yes, I've added the backup to the server list in the primary server UI.
Here are the steps I used to configure the Cronicle multi-server cluster:
- /opt/cronicle/data is hosted on NFS and mounted on both servers. Both servers have read-write access to it.
- control.sh setup was run only once, on the primary. The secondary server's service was restarted as-is after installation (after copying the secret key from the primary's config).
- I've added the secondary server to the primary group (refer to screenshot)
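Since copying the secret key between configs is a manual step, it's worth verifying that both nodes really ended up with the same secret_key (a mismatch breaks cluster communication). A minimal sketch, assuming the default /opt/cronicle/conf/config.json location and that you've copied the other node's config somewhere local first (the paths in the comment are assumptions):

```python
import json

def read_secret_key(config_path):
    """Load a Cronicle config.json and return its secret_key value."""
    with open(config_path) as f:
        return json.load(f)["secret_key"]

def secret_keys_match(path_a, path_b):
    """True when two Cronicle configs share the same secret_key."""
    return read_secret_key(path_a) == read_secret_key(path_b)

# Hypothetical usage (this script only reads local files, so fetch
# the secondary's config via scp or the NFS share first):
# print(secret_keys_match("/opt/cronicle/conf/config.json",
#                         "/tmp/secondary-config.json"))
```

If the keys differ, the secondary can read the shared data folder but still won't be accepted by the cluster.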
When I tested the failover mechanism, I shut down the primary server, and after a while it failed over to the secondary. Once the secondary became the master, I turned the primary server back on so it would rejoin the cluster.
Accessing the UI on the primary works fine, but when I try to access the secondary server's UI, it loops constantly.
Note also that in the secondary node's UI the server clusters went missing, while the primary node shows the information just fine.
Here's the Filesystem.log output from the secondary node while the UI keeps looping:
[1654170006.804][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: sessions/898ea20fd384856c8f463ac633305e226faf062f6ea0bc6dbcc2892fa4f1dc7d][]
[1654170006.805][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: sessions/898ea20fd384856c8f463ac633305e226faf062f6ea0bc6dbcc2892fa4f1dc7d][]
[1654170006.805][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Fetching Object: users/admin][data/users/34/68/bc/3468bc0c4e5f6aa06c7aee62212ac18f.json]
[1654170006.806][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: users/admin][]
[1654170006.806][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Fetching Object: global/schedule][data/global/c7/45/9c/c7459c956e50650e77e33cd03ea3b31b.json]
[1654170006.806][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Fetching Object: global/categories][data/global/d5/34/10/d53410b6a012ac9db284846c1d465e2c.json]
[1654170006.807][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Fetching Object: global/plugins][data/global/be/21/ad/be21ad44831ac52d7c3b3c06ec30ab02.json]
[1654170006.807][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Fetching Object: global/server_groups][data/global/bf/10/4a/bf104a19df63f5227cc0e457c353afa7.json]
[1654170006.807][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: global/schedule][]
[1654170006.808][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Fetching Object: global/schedule/0][data/global/b6/15/1b/b6151bb3131d88940fa98921f2c8f4f9.json]
[1654170006.808][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: global/categories][]
[1654170006.808][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Fetching Object: global/categories/0][data/global/81/f0/77/81f077cc3e3d7f58ba1f2da11fd20297.json]
[1654170006.808][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: global/plugins][]
[1654170006.808][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Fetching Object: global/plugins/0][data/global/83/ab/32/83ab32f287f97e49b1f77a810bfe242d.json]
[1654170006.809][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: global/schedule/0][]
[1654170006.809][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: global/categories/0][]
[1654170006.81][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: global/server_groups][]
[1654170006.81][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Fetching Object: global/server_groups/0][data/global/70/a4/5b/70a45b37adf20d85be16525c9a8e8e03.json]
[1654170006.81][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: global/plugins/0][]
[1654170006.811][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: global/server_groups/0][]
[1654170006.811][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Storing JSON Object: sessions/898ea20fd384856c8f463ac633305e226faf062f6ea0bc6dbcc2892fa4f1dc7d][data/sessions/3f/b7/fd/3fb7fdd0be7703906950849ca1ce106e.json]
[1654170006.814][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Store operation complete: sessions/898ea20fd384856c8f463ac633305e226faf062f6ea0bc6dbcc2892fa4f1dc7d][]
[1654170006.822][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][Fetching Object: sessions/898ea20fd384856c8f463ac633305e226faf062f6ea0bc6dbcc2892fa4f1dc7d][data/sessions/3f/b7/fd/3fb7fdd0be7703906950849ca1ce106e.json]
[1654170006.825][2022-06-02 11:40:06][cronicle-secondary][14224][Filesystem][debug][9][JSON fetch complete: sessions/898ea20fd384856c8f463ac633305e226faf062f6ea0bc6dbcc2892fa4f1dc7d][]