Sync occasionally hangs and takes a long time to recover
Abstract
Sync occasionally seems to hang on one or more devices, but then recovers after a long time
Expected behaviour
- If a device was recently synced, opening the app should sync the device and bring it up to date within a few seconds
Actually happened
- Most of the time this works as expected, but occasionally it seems to hang. After hanging, if I leave Obsidian open long enough, it eventually recovers and continues syncing.
- It seems to take about 10-15 minutes to recover as long as Obsidian stays open.
- On iOS, when the screen goes to sleep, iOS seems to background the app and the sync starts again when I reopen it. I need to keep the screen awake with Obsidian open for it to recover.
Reproducing procedure
- I have couchdb running on a QNAP NAS with Docker.
- I have multiple clients syncing to the same server (Windows PC, MacBook, iPhone, iPad).
- All clients occasionally exhibit this behaviour, and they have always recovered if I left Obsidian open and running long enough.
- This seems to happen once every few weeks.
Report materials
Report from the LiveSync
Report from hatch
----remote config----
cors:
credentials: "true"
headers: accept, authorization, content-type, origin, referer
max_age: "3600"
methods: GET, PUT, POST, HEAD, DELETE
origins: app://obsidian.md,capacitor://localhost,http://localhost
chttpd:
bind_address: any
max_http_request_size: "4294967296"
port: "5984"
require_valid_user: "true"
admins: REDACTED
vendor:
name: The Apache Software Foundation
feature_flags:
partitioned||*: "true"
chttpd_auth:
authentication_redirect: /obsidian-zain-baft/_utils/session.html
hash_algorithms: sha256, sha
require_valid_user: "true"
indexers:
couch_mrview: "true"
prometheus:
additional_port: "false"
bind_address: 127.0.0.1
port: "17986"
httpd:
WWW-Authenticate: Basic realm="couchdb"
bind_address: 127.0.0.1
enable_cors: "true"
port: "5986"
smoosh:
state_dir: ./data
couch_httpd_auth:
authentication_db: REDACTED
secret: REDACTED
authentication_redirect: REDACTED
couchdb_engines:
couch: couch_bt_engine
couchdb:
database_dir: ./data
max_document_size: "50000000"
single_node: "true"
uuid: REDACTED
view_index_dir: ./data
---- Plug-in config ----
version: 0.20.7
couchDB_URI: self-hosted
couchDB_USER: REDACTED
couchDB_PASSWORD: REDACTED
couchDB_DBNAME: REDACTED
liveSync: true
syncOnSave: false
syncOnStart: false
savingDelay: 200
lessInformationInLog: false
gcDelay: 0
versionUpFlash: ""
minimumChunkSize: 20
longLineThreshold: 250
showVerboseLog: false
suspendFileWatching: false
trashInsteadDelete: true
periodicReplication: false
periodicReplicationInterval: 60
syncOnFileOpen: false
encrypt: false
passphrase: REDACTED
usePathObfuscation: false
doNotDeleteFolder: false
resolveConflictsByNewerFile: false
batchSave: false
deviceAndVaultName: ""
usePluginSettings: false
showOwnPlugins: false
showStatusOnEditor: true
usePluginSync: false
autoSweepPlugins: false
autoSweepPluginsPeriodic: false
notifyPluginOrSettingUpdated: false
checkIntegrityOnSave: false
batch_size: 50
batches_limit: 40
useHistory: true
disableRequestURI: true
skipOlderFilesOnSync: true
checkConflictOnlyOnOpen: false
syncInternalFiles: true
syncInternalFilesBeforeReplication: false
syncInternalFilesIgnorePatterns: \/node_modules\/, \/\.git\/, \/obsidian-livesync\/
syncInternalFilesInterval: 60
additionalSuffixOfDatabaseName: ""
ignoreVersionCheck: false
lastReadUpdates: 20
deleteMetadataOfDeletedFiles: false
syncIgnoreRegEx: ""
syncOnlyRegEx: ""
customChunkSize: 100
readChunksOnline: true
watchInternalFileChanges: true
automaticallyDeleteMetadataOfDeletedFiles: 0
disableMarkdownAutoMerge: false
writeDocumentsIfConflicted: false
useDynamicIterationCount: false
syncAfterMerge: false
configPassphraseStore: ""
encryptedPassphrase: REDACTED
encryptedCouchDBConnection: REDACTED
permitEmptyPassphrase: false
useIndexedDBAdapter: true
useTimeouts: false
writeLogToTheFile: false
doNotPaceReplication: false
hashCacheMaxCount: 300
hashCacheMaxAmount: 50
concurrencyOfReadChunksOnline: 100
minimumIntervalOfReadChunksOnline: 333
hashAlg: xxhash64
suspendParseReplicationResult: false
doNotSuspendOnFetching: false
useIgnoreFiles: false
ignoreFiles: .gitignore
syncOnEditorSave: false
pluginSyncExtendedSetting: {}
useV1: false
Plug-in log
Here is an example from my MacBook.
15/02/2024, 17:06:50->Before LiveSync, start OneShot once...
15/02/2024, 17:06:50->OneShot Sync begin... (pullOnly)
15/02/2024, 17:06:54->Content saved:.obsidian/workspace.json ,chunks: 1 (new:1, skip:0, cache:0)
15/02/2024, 17:06:54->STORAGE --> DB:.obsidian/workspace.json: (hidden) Done
15/02/2024, 17:07:46->Content saved:.obsidian/workspace.json ,chunks: 1 (new:1, skip:0, cache:0)
15/02/2024, 17:07:46->STORAGE --> DB:.obsidian/workspace.json: (hidden) Done
15/02/2024, 17:07:50->Scanning hidden files.
15/02/2024, 17:07:50->Hidden files scanned: 1 files had been modified
15/02/2024, 17:07:53->Content saved:.obsidian/workspace.json ,chunks: 1 (new:1, skip:0, cache:0)
15/02/2024, 17:07:53->STORAGE --> DB:.obsidian/workspace.json: (hidden) Done
15/02/2024, 17:07:57->Content saved:.obsidian/workspace.json ,chunks: 1 (new:1, skip:0, cache:0)
15/02/2024, 17:07:57->STORAGE --> DB:.obsidian/workspace.json: (hidden) Done
15/02/2024, 17:08:00->Content saved:Test 2.md ,chunks: 11 (new:0, skip:21, cache:10)
15/02/2024, 17:08:00->DB <- STORAGE (plain) Test 2.md
15/02/2024, 17:08:50->Scanning hidden files.
15/02/2024, 17:08:50->Hidden files scanned: 1 files had been modified
15/02/2024, 17:09:50->Scanning hidden files.
15/02/2024, 17:09:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:10:50->Scanning hidden files.
15/02/2024, 17:10:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:11:50->Scanning hidden files.
15/02/2024, 17:11:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:11:58->Replication activated
15/02/2024, 17:12:50->Scanning hidden files.
15/02/2024, 17:12:50->Hidden files scanned: 1 files had been modified
15/02/2024, 17:13:50->Scanning hidden files.
15/02/2024, 17:13:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:14:50->Scanning hidden files.
15/02/2024, 17:14:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:15:50->Scanning hidden files.
15/02/2024, 17:15:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:16:50->Scanning hidden files.
15/02/2024, 17:16:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:17:50->Scanning hidden files.
15/02/2024, 17:17:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:18:50->Scanning hidden files.
15/02/2024, 17:18:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:19:50->Scanning hidden files.
15/02/2024, 17:19:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:21:50->Scanning hidden files.
15/02/2024, 17:21:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:22:50->Scanning hidden files.
15/02/2024, 17:22:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:23:50->Scanning hidden files.
15/02/2024, 17:23:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:24:50->Scanning hidden files.
15/02/2024, 17:24:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:25:50->Scanning hidden files.
15/02/2024, 17:25:50->Hidden files scanned: 0 files had been modified
15/02/2024, 17:26:12->Applied 1. Projects/xxxxxx/Untitled.md (1. Projects/xxxxxx/Untitled.md:14-f6fc2951aa0a415287c532337147f1af) change...
15/02/2024, 17:26:12->Replication completed
15/02/2024, 17:26:12->LiveSync begin...
15/02/2024, 17:26:13->DB -> STORAGE (create,plain) 1. Projects/xxxxxx/xxxxxx.md
15/02/2024, 17:26:13->Applied 1. Projects/xxxxxx/xxxxxx.md (1. Projects/xxxxxx/xxxxxx.md:7-88cf3bdd075c4b0ab8f10e8d361b85a6) change...
15/02/2024, 17:26:13->Replication activated
15/02/2024, 17:26:13->Applying hidden 3 files change...
15/02/2024, 17:26:13->Scanning hidden files.
15/02/2024, 17:26:13->Delete: 1. Projects/xxxxxx/yyyyyy.md: Conflict revision has been deleted and resolved
15/02/2024, 17:26:13->DB -> STORAGE (modify,plain) 1. Projects/xxxxxx/yyyyyy.md
15/02/2024, 17:26:13->DB -> STORAGE (modify,force,plain) 1. Projects/xxxxxx/yyyyyy.md
15/02/2024, 17:26:13->Hidden files scanned: 3 files had been modified
15/02/2024, 17:26:13->Applying hidden 3 files changed
15/02/2024, 17:26:14->Content saved:.obsidian/workspace.json ,chunks: 1 (new:1, skip:0, cache:0)
15/02/2024, 17:26:14->STORAGE --> DB:.obsidian/workspace.json: (hidden) Done
Other information, insights and intuition.
This seems to affect the devices at around the same time. If one device is affected, the next time I open another device, it is likely to be in this stuck state. I have not determined if this happens every time, though. I also don't know what triggers it.
I also have a large vault (1630 files, 131 folders), and the most recent time this was happening (in the logs above), I noticed my QNAP CPU usage was hovering around 95% the entire time Obsidian was stuck. Once it recovered, the QNAP CPU usage dropped to around 50-70%.
I understand the QNAP has a weak CPU, and I have quite a lot of other processes running on it. I suspect the server is the bottleneck, but it still seems to take an unreasonably long time to recover.
Once it recovers, the sync happens quickly (within a few seconds). Interestingly, after I wait for one client to recover, that will work fine while another is still hung. For example, after I wait for my PC and Mac to recover, those two can sync data between them (within seconds). If I then open Obsidian on my iPad, the QNAP CPU spikes and the iPad just sits there waiting for a sync, but the PC and Mac are still fine and sync within seconds.
Thank you for reporting this issue!
This may be caused by Fetch chunks on demand. This feature requires server-side work.
When this feature is enabled, we synchronise only the notes, and then fetch only the chunks that each note actually needs.
Synchronising only the notes takes some effort on the server side.
Disabling Fetch chunks on demand may prevent the synchronisation hang-ups. However, a complete solution would be desirable.
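For illustration only, the two phases can be pictured as plain CouchDB HTTP calls. This is a hand-written sketch, not the plugin's actual code: the host, database name, credentials, and chunk IDs are placeholders (only the replicate/pull filter name is taken from the CouchDB access log posted later in this thread).

```shell
# Phase 1: pull only note documents, using a server-side filter
# (a design document; "replicate/pull" is the filter name visible
# in the CouchDB access log elsewhere in this thread).
curl -s -u user:password \
  "http://localhost:5984/obsidian/_changes?filter=replicate%2Fpull&style=all_docs&limit=50"

# Phase 2: fetch only the chunk documents a note actually references.
# The chunk IDs here are placeholders.
curl -s -u user:password -H "Content-Type: application/json" \
  -X POST "http://localhost:5984/obsidian/_bulk_get" \
  -d '{"docs":[{"id":"chunk-id-1"},{"id":"chunk-id-2"}]}'
```

If this picture is right, phase 1 makes the server evaluate a filter for every change in the database, which could explain why a weak NAS CPU becomes the bottleneck.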
I played around with it but it appears that disabling Fetch chunks on demand did not help that much. I disabled this setting, then restarted both the livesync server and the Obsidian client.
It also looks like restarting the server triggered the synchronisation hang-up.
After restarting, I made a change to a small test file (Test.md, 10 bytes long), and I noticed it was stuck again. Interestingly, the client was using a lot of CPU while it was stuck like this (this is an 11th-gen i7):
It was not like this the entire time; the CPU usage went up and down a bit on both the client and the server.
This time it took about 7 minutes for the first sync to happen on my PC, after which it was fine. My other devices took a similar time (5-7 minutes) for their first sync.
So it seems to take about half the time compared to before, but the first sync is still pretty slow. Anything else I can try?
Thank you for the detail!
This suggests that Self-hosted LiveSync missed its checkpoint and had to trace the changes from the beginning of the remote database (this phase only reads and checks; no items are transferred, so nothing shows up in the UI).
As of v0.22.6, both the local and the remote checkpoints are used. This prevents missing checkpoints.
With this fix, the first synchronisation after the update will take a little longer (about the time reported in this issue, if my assumptions are correct). However, it will stabilise afterwards.
Would you mind checking whether the behaviour and the issue have been fixed, please?
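For reference, CouchDB replication checkpoints are stored as _local documents; the GET /obsidian/_local/... lines in the CouchDB access log posted later in this thread are reads of such documents. A hedged sketch of inspecting one (the credentials and document ID are placeholders; the real ID is derived from the replication parameters):

```shell
# A replication checkpoint is a _local document kept on both databases.
# "checkpoint-id" is a placeholder; real IDs look like the
# /obsidian/_local/B1x2mC53febVgPzEmEaWOw%3D%3D requests in the logs.
curl -s -u user:password \
  "http://localhost:5984/obsidian/_local/checkpoint-id"
```

If that document is missing or stale on either side, the replicator falls back to an earlier update sequence and silently re-reads the changes feed, which would look exactly like the long, quiet hang described in this issue.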
Thanks for the update.
I restarted the couchdb server a couple of times today and Livesync did not hang, so I guess that is not a reliable way of reproducing the issue.
I did notice that I was running 0.20.7 of the plugin (I just realised Obsidian does not auto-update plugins), so I updated it manually. Both my Windows PC and my Mac hung on sync after the update, but recovered as you said they would.
Once all my devices recover, I will monitor this to see if it happens again.
Hi @vrtmrz ,
I have a similar issue. Syncing is only happening once in a while and devices need a huge amount of time to catch up. I have installed the latest version of your plugin today. Can you tell me which logs and information are useful for you? I would like to help you as much as possible to ensure you can track down the issue. I really like your extension and rely on it quite a lot. Thank you.
PS: on desktop I often see stuff like this:
Does it mean sync is somehow paused? While I see this and clearly have a diff between two devices, the log says:
22/02/2024, 00:05:25->Replication completed
22/02/2024, 00:05:32->OneShot Sync begin... (sync)
22/02/2024, 00:05:32->Replication completed
22/02/2024, 00:05:59->OneShot Sync begin... (sync)
22/02/2024, 00:05:59->Replication completed
Even after the sync catches up once in a while, subsequent changes still take a long time to sync: a minute, or several minutes. My CouchDB is 1.2 GB and has 31,548 documents. The LXC running the CouchDB server is mostly idle; the CPU only occasionally spikes briefly to 50%.
@JamborJan Thank you for updating the plug-in! Looking at the information you provided, the synchronisation appears to be balanced, as you have pointed out.
The next possibility might be a mismatch between the local database and storage.
We can verify files via Verify and repair all files -> Verify all on the Hatch pane in the settings dialogue.
When pressing that button, all files are compared with the local database, and if they do not match, we can choose which one to use.
If all files match on all devices, there might be a mismatch between one of the local databases and the remote database. In this case, we should rebuild the database from the device which has the most reliable files.
I did the Verify and repair step, and the result is still that changes are not synced in a useful amount of time. Specifically: a change is still not replicated after many minutes. Am I understanding you correctly that I should click the button to trigger the Overwrite remote database action?
Thank you for trying! If no mismatch has been found on any device (it is possible that, even if a device has the latest file, it is not reflected in the local database), we have to rebuild the database.
Rebuilding can be performed with the Rebuild button.
Thank you for your patience and cooperation!
It's still not better. My expectation would be that a change in a file is replicated to a secondary device within a couple of seconds. I mostly work on my desktop, but need to add notes or review something on my mobile phone. I open the app on the phone and nothing happens. Both devices think they are in sync, but they are clearly not. If I keep the mobile phone app open, something eventually happens after a long time (hours, maybe).
Can I provide any logs or more structured details that would help analyze the problem?
FYI. I had to do the following:
- Decoupled all devices and ensured one is the primary device having all latest data
- Updated all devices to the latest LiveSync version 0.22.8 (I was already one hotfix version late)
- Deleted CouchDB
- Setup CouchDB new from master device
- Synced secondary device again
Now it is working again as expected.
Sorry for being late! I am also sorry for the misdirected guidance. In retrospect, I did not have a deep understanding of what caused the situation.
In v0.22.7, I fixed some logic for fetching chunks from the remote. However, the fix contained a design flaw. It finally became stable in v0.22.9.
Since v0.22.8, LiveSync and CouchDB handle chunks a bit more efficiently, and the inefficient chunks are deleted on rebuilding. Hence, rebuilding also contributes to improving the situation. However, that version still has some bugs. Please upgrade to v0.22.9.
I am so relieved to hear that your environment has gotten better again! I hope that this explanation makes sense and that you will allow me to close the issue.
I have been running with Fetch chunks on demand disabled, and have not had any issues since v0.22.6.
Should I leave this option disabled because of the large number of files in my vault? I just updated to v0.22.9 and then enabled Fetch chunks on demand, but the initial sync after this change is taking some time.
The client does not seem to be using much of the CPU, though the couchdb server is.
This hang happened again with the latest version (0.22.12). Some background on the context:
- It has been working fine since my last comment above.
- Last night (on the 14th) I noticed all my devices hung again when I opened Obsidian and it tried to sync. After about 15 minutes, they each finished syncing and worked normally.
- I have been running with Fetch chunks on demand enabled; I am not sure if this makes a difference.
- I checked my couchdb server, and noticed it had been automatically updated the night before (on the 13th).
I have a Docker environment set up where it updates containers automatically, and it looks like it updated couchdb (from 3.3.2 to 3.3.3), and this may have caused the issue.
Is this expected behaviour?
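One way to rule out automatic upgrades as a trigger is to pin the CouchDB image to an exact version rather than a floating tag, so an auto-updater cannot swap the server out underneath the clients. A sketch only; the container name, volume path, and port mapping are illustrative:

```shell
# Pinning an exact tag (not "latest") prevents surprise version jumps.
docker pull couchdb:3.3.3
docker stop couchdb && docker rm couchdb
docker run -d --name couchdb \
  -p 5984:5984 \
  -v /share/couchdb/data:/opt/couchdb/data \
  couchdb:3.3.3
```

After a deliberate upgrade, the first sync may still be slow once while the clients re-establish their checkpoints, but it at least happens on a schedule of your choosing.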
I found the couchdb logs and it looks like it crashed from a SIGTERM just before the update... not sure if this explains it?
couchdb logs
2024-03-13 21:24:11.152
[notice] 2024-03-13T10:24:11.151652Z nonode@nohost <0.1559.158> acaeaa35e9 obsidian.home.com 192.168.1.254 undefined OPTIONS /obsidian/ 204 ok 1
2024-03-13 21:24:11.390
[notice] 2024-03-13T10:24:11.390382Z nonode@nohost <0.1559.158> 29e2b32465 obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/ 200 ok 33
2024-03-13 21:24:11.408
[notice] 2024-03-13T10:24:11.405499Z nonode@nohost <0.1559.158> 485ba100c5 obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/ 200 ok 4
2024-03-13 21:24:11.422
[notice] 2024-03-13T10:24:11.422500Z nonode@nohost <0.1559.158> 8fcee976cc obsidian.home.com 192.168.1.254 undefined OPTIONS /obsidian/_design/replicate? 204 ok 11
2024-03-13 21:24:11.443
[notice] 2024-03-13T10:24:11.443011Z nonode@nohost <0.1559.158> 5bde864057 obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/_design/replicate? 304 ok 8
2024-03-13 21:24:11.451
[notice] 2024-03-13T10:24:11.451054Z nonode@nohost <0.1559.158> 89b5a30cd1 obsidian.home.com 192.168.1.254 undefined OPTIONS /obsidian/obsydian_livesync_version? 204 ok 1
2024-03-13 21:24:11.462
[notice] 2024-03-13T10:24:11.462117Z nonode@nohost <0.1559.158> 16b09fef55 obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/obsydian_livesync_version? 304 ok 3
2024-03-13 21:24:11.470
[notice] 2024-03-13T10:24:11.470045Z nonode@nohost <0.1559.158> ebf7c64179 obsidian.home.com 192.168.1.254 undefined OPTIONS /obsidian/_local/obsydian_livesync_milestone? 204 ok 1
2024-03-13 21:24:11.479
[notice] 2024-03-13T10:24:11.479341Z nonode@nohost <0.1559.158> 98d2f23bf2 obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/_local/obsydian_livesync_milestone? 304 ok 3
2024-03-13 21:24:11.488
[notice] 2024-03-13T10:24:11.488256Z nonode@nohost <0.1559.158> 3241f5f21c obsidian.home.com 192.168.1.254 undefined OPTIONS / 204 ok 1
2024-03-13 21:24:11.495
[notice] 2024-03-13T10:24:11.494821Z nonode@nohost <0.1559.158> 3b4751386c obsidian.home.com 192.168.1.254 obsidianuser GET / 200 ok 2
2024-03-13 21:24:11.503
[notice] 2024-03-13T10:24:11.503267Z nonode@nohost <0.1559.158> 3bcf5202ba obsidian.home.com 192.168.1.254 undefined OPTIONS /obsidian/_local/B1x2mC53febVgPzEmEaWOw%3D%3D? 204 ok 1
2024-03-13 21:24:11.512
[notice] 2024-03-13T10:24:11.511782Z nonode@nohost <0.1559.158> 6106d02dfb obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/_local/B1x2mC53febVgPzEmEaWOw%3D%3D? 304 ok 3
2024-03-13 21:24:11.521
[notice] 2024-03-13T10:24:11.520640Z nonode@nohost <0.1559.158> ec6a0bf8ff obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/ 200 ok 3
2024-03-13 21:24:11.530
[notice] 2024-03-13T10:24:11.529532Z nonode@nohost <0.1559.158> 201d56f919 obsidian.home.com 192.168.1.254 undefined OPTIONS /obsidian/_changes?style=all_docs&filter=replicate%2Fpull&since=108574-g1AAAACReJzLYWBgYMpgTmHgzcvPy09JdcjLz8gvLskBCScyJNX___8_K4M5iYHh0odcoBi7QXKihVmyJbp6HCbksQBJhgYg9R9u0FU9sEHmiUlJSZZp6NqyAL6BLdw&limit=50 204 ok 1
2024-03-13 21:24:11.541
[notice] 2024-03-13T10:24:11.540364Z nonode@nohost <0.1559.158> 8089dd872c obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/_changes?style=all_docs&filter=replicate%2Fpull&since=108574-g1AAAACReJzLYWBgYMpgTmHgzcvPy09JdcjLz8gvLskBCScyJNX___8_K4M5iYHh0odcoBi7QXKihVmyJbp6HCbksQBJhgYg9R9u0FU9sEHmiUlJSZZp6NqyAL6BLdw&limit=50 304 ok 5
2024-03-13 21:24:11.553
[notice] 2024-03-13T10:24:11.552730Z nonode@nohost <0.1559.158> 429bdcd92c obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/_local/B1x2mC53febVgPzEmEaWOw%3D%3D? 304 ok 3
2024-03-13 21:24:11.564
[notice] 2024-03-13T10:24:11.563890Z nonode@nohost <0.1559.158> 251e9c387f obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/ 200 ok 3
2024-03-13 21:24:11.572
[notice] 2024-03-13T10:24:11.572366Z nonode@nohost <0.1559.158> 047b9c6322 obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/ 200 ok 3
2024-03-13 21:24:11.584
[notice] 2024-03-13T10:24:11.584216Z nonode@nohost <0.1559.158> 8f6927bf6c obsidian.home.com 192.168.1.254 obsidianuser GET / 200 ok 2
2024-03-13 21:24:11.587
[notice] 2024-03-13T10:24:11.587341Z nonode@nohost <0.1595.158> 2b0a213c90 obsidian.home.com 192.168.1.254 obsidianuser GET / 200 ok 2
2024-03-13 21:24:11.592
[notice] 2024-03-13T10:24:11.592130Z nonode@nohost <0.1595.158> ea961396a1 obsidian.home.com 192.168.1.254 undefined OPTIONS /obsidian/_local/amEhkelueWmQgxmxfdNN0g%3D%3D? 204 ok 1
2024-03-13 21:24:11.597
[notice] 2024-03-13T10:24:11.596674Z nonode@nohost <0.1595.158> e468083c2a obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/_local/B1x2mC53febVgPzEmEaWOw%3D%3D? 304 ok 3
2024-03-13 21:24:11.603
[notice] 2024-03-13T10:24:11.603157Z nonode@nohost <0.1595.158> a37b1ba1a5 obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/_local/amEhkelueWmQgxmxfdNN0g%3D%3D? 304 ok 3
2024-03-13 21:24:11.606
[notice] 2024-03-13T10:24:11.606118Z nonode@nohost <0.1559.158> 9e06ea9435 obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/ 200 ok 4
2024-03-13 21:24:11.614
[notice] 2024-03-13T10:24:11.614006Z nonode@nohost <0.1559.158> 3caa0b50aa obsidian.home.com 192.168.1.254 undefined OPTIONS /obsidian/_changes?style=all_docs&heartbeat=30000&filter=replicate%2Fpull&since=108574-g1AAAACReJzLYWBgYMpgTmHgzcvPy09JdcjLz8gvLskBCScyJNX___8_K4M5iYHh0odcoBi7QXKihVmyJbp6HCbksQBJhgYg9R9u0FU9sEHmiUlJSZZp6NqyAL6BLdw&limit=50 204 ok 2
2024-03-13 21:24:11.617
[notice] 2024-03-13T10:24:11.616610Z nonode@nohost <0.1595.158> c92cecc5f9 obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/_local/amEhkelueWmQgxmxfdNN0g%3D%3D? 304 ok 3
2024-03-13 21:24:11.623
[notice] 2024-03-13T10:24:11.622319Z nonode@nohost <0.1595.158> a8475f00f5 obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/_changes?style=all_docs&heartbeat=30000&filter=replicate%2Fpull&since=108574-g1AAAACReJzLYWBgYMpgTmHgzcvPy09JdcjLz8gvLskBCScyJNX___8_K4M5iYHh0odcoBi7QXKihVmyJbp6HCbksQBJhgYg9R9u0FU9sEHmiUlJSZZp6NqyAL6BLdw&limit=50 304 ok 3
2024-03-13 21:24:11.630
[notice] 2024-03-13T10:24:11.630028Z nonode@nohost <0.1595.158> 5d71acff28 obsidian.home.com 192.168.1.254 obsidianuser GET /obsidian/_local/B1x2mC53febVgPzEmEaWOw%3D%3D? 304 ok 3
2024-03-13 21:24:11.637
[notice] 2024-03-13T10:24:11.636475Z nonode@nohost <0.1595.158> 128f5c33e5 obsidian.home.com 192.168.1.254 undefined OPTIONS /obsidian/_changes?style=all_docs&feed=longpoll&heartbeat=30000&filter=replicate%2Fpull&since=108574-g1AAAACReJzLYWBgYMpgTmHgzcvPy09JdcjLz8gvLskBCScyJNX___8_K4M5iYHh0odcoBi7QXKihVmyJbp6HCbksQBJhgYg9R9u0FU9sEHmiUlJSZZp6NqyAL6BLdw&limit=50 204 ok 1
2024-03-13 21:43:53.421
[info] 2024-03-13T10:43:53.415687Z nonode@nohost <0.33.0> -------- SIGTERM received - shutting down
2024-03-13 21:43:53.421
2024-03-13 21:43:53.421
[info] 2024-03-13T10:43:53.416022Z nonode@nohost <0.33.0> -------- SIGTERM received - shutting down
2024-03-13 21:43:53.421
2024-03-13 21:43:53.924
[error] 2024-03-13T10:43:53.921060Z nonode@nohost <0.523.0> -------- gen_server <0.523.0> terminated with reason: killed
2024-03-13 21:43:53.924
last msg: redacted
2024-03-13 21:43:53.924
state: {state,#Ref<0.2839434048.1790574594.26395>,couch_replicator_doc_processor,nil,<<"_replicator">>,#Ref<0.2839434048.1790443522.26396>,nil,[],true}
2024-03-13 21:43:53.924
extra: []
2024-03-13 21:43:53.924
[error] 2024-03-13T10:43:53.921214Z nonode@nohost <0.523.0> -------- gen_server <0.523.0> terminated with reason: killed
2024-03-13 21:43:53.924
last msg: redacted
2024-03-13 21:43:53.924
state: {state,#Ref<0.2839434048.1790574594.26395>,couch_replicator_doc_processor,nil,<<"_replicator">>,#Ref<0.2839434048.1790443522.26396>,nil,[],true}
2024-03-13 21:43:53.924
extra: []
2024-03-13 21:43:53.924
[error] 2024-03-13T10:43:53.921620Z nonode@nohost <0.523.0> -------- CRASH REPORT Process (<0.523.0>) with 0 neighbors exited with reason: killed at gen_server:decode_msg/9(line:481) <= proc_lib:init_p_do_apply/3(line:226); initial_call: {couch_multidb_changes,init,['Argument__1']}, ancestors: [<0.469.0>,couch_replicator_sup,<0.386.0>], message_queue_len: 0, links: [], dictionary: [{io_priority,{system,<<"shards/80000000-ffffffff/_replicator.17002002...">>}}], trap_exit: true, status: running, heap_size: 2586, stack_size: 29, reductions: 1132941
2024-03-13 21:43:53.924
[error] 2024-03-13T10:43:53.921764Z nonode@nohost <0.523.0> -------- CRASH REPORT Process (<0.523.0>) with 0 neighbors exited with reason: killed at gen_server:decode_msg/9(line:481) <= proc_lib:init_p_do_apply/3(line:226); initial_call: {couch_multidb_changes,init,['Argument__1']}, ancestors: [<0.469.0>,couch_replicator_sup,<0.386.0>], message_queue_len: 0, links: [], dictionary: [{io_priority,{system,<<"shards/80000000-ffffffff/_replicator.17002002...">>}}], trap_exit: true, status: running, heap_size: 2586, stack_size: 29, reductions: 1132941
Sorry for being late! It seems that CouchDB was force-terminated, possibly by the out-of-memory killer.
Fetch chunks on demand acts in two phases:
- Synchronise only the notes, using design documents (the filter definition in CouchDB; the replicate%2Fpull parameter of _changes in the log corresponds to this).
- Fetch only the chunks actually used by the synchronised notes.
I think the hang occurred at phase 1.
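To confirm or rule out an OOM kill on the server, a couple of standard checks can help. The container name below is an assumption:

```shell
# Did the kernel's OOM killer terminate anything recently?
dmesg | grep -iE "out of memory|oom-killer"

# Was this specific container OOM-killed?
docker inspect couchdb --format '{{.State.OOMKilled}}'

# Optionally give CouchDB a hard memory cap so it fails fast
# instead of starving the rest of the NAS:
docker update --memory 1g --memory-swap 2g couchdb
```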
I have an idea to make it a bit more performant (I am not sure about the actual effects yet). It will be added in the next release.
Thanks for the update.
For now I have two of my devices with Fetch chunks on demand enabled, and two with it disabled. If it happens again I will post here.