obsidian-livesync
Replication error, document too_large
The largest file in my vault is a picture, and it's 12048 KB.
❯ du -a . | sort -n -r | head -n 5
197392 .
193840 ./assets
12048 ./assets/image_1664561617642_0.png
6544 ./assets/image_1664737228970_0.png
6224 ./assets/CleanShot_2022-11-29_at_16.57.58@2x_1669712493583_0.png
When I use the default value of chunk size, which is 100, LiveSync always returns the error document too_large.
But when I set the chunk size to 50, it successfully replicates the vault.
@PetrusZ
Thank you for your report! I will try to reproduce it!
May I ask about the configuration of the remote database? I would like to know how max_document_size and max_http_request_size have been configured.
They can be dumped with the Make report button on the Hatch pane.
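(If it is easier, both values can also be read directly from CouchDB's configuration API; a quick sketch, assuming the server is reachable on localhost:5984 with admin credentials, which are placeholders here:)

# Read the two limits from the running node (placeholder host/credentials)
curl -u admin:password http://localhost:5984/_node/_local/_config/couchdb/max_document_size
curl -u admin:password http://localhost:5984/_node/_local/_config/chttpd/max_http_request_size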
----remote config----
cluster:
  seedlist: couchdb@couchdb-couchdb-0.couchdb-couchdb.database.svc.cluster.local
cors:
  credentials: "true"
  headers: accept, authorization, content-type, origin, referer
  max_age: "3600"
  methods: GET, PUT, POST, HEAD, DELETE
  origins: app://obsidian.md,capacitor://localhost,http://localhost
chttpd:
  bind_address: any
  max_http_request_size: "4.294967296e+09"
  port: "5984"
  require_valid_user: "true"
admins: <redacted>
vendor:
  name: The Apache Software Foundation
feature_flags:
  partitioned||*: "true"
chttpd_auth:
  authentication_redirect: /_utils/session.html
  require_valid_user: "true"
  secret: FBYlTdT8yPsf5Qp6jJ3C
indexers:
  couch_mrview: "true"
prometheus:
  additional_port: "false"
  bind_address: 127.0.0.1
  port: "17986"
httpd:
  WWW-Authenticate: Basic realm="couchdb"
  bind_address: 127.0.0.1
  enable_cors: "true"
  port: "5986"
couch_httpd_auth:
  authentication_db: <redacted>
  secret: <redacted>
  authentication_redirect: <redacted>
couchdb_engines:
  couch: couch_bt_engine
couchdb:
  database_dir: ./data
  max_document_size: "1.073741824e+10"
  uuid: <redacted>
  view_index_dir: ./data
---- Plug-in config ---
couchDB_URI: self-hosted
couchDB_USER: <redacted>
couchDB_PASSWORD: <redacted>
couchDB_DBNAME: <redacted>
liveSync: true
syncOnSave: false
syncOnStart: false
savingDelay: 200
lessInformationInLog: false
gcDelay: 0
versionUpFlash: ""
minimumChunkSize: 20
longLineThreshold: 250
showVerboseLog: false
suspendFileWatching: false
trashInsteadDelete: true
periodicReplication: false
periodicReplicationInterval: 60
syncOnFileOpen: false
encrypt: true
passphrase: <redacted>
doNotDeleteFolder: false
resolveConflictsByNewerFile: false
batchSave: false
deviceAndVaultName: ""
usePluginSettings: false
showOwnPlugins: false
showStatusOnEditor: false
usePluginSync: true
autoSweepPlugins: true
autoSweepPluginsPeriodic: true
notifyPluginOrSettingUpdated: false
checkIntegrityOnSave: false
batch_size: 100
batches_limit: 40
useHistory: true
disableRequestURI: true
skipOlderFilesOnSync: true
checkConflictOnlyOnOpen: false
syncInternalFiles: false
syncInternalFilesBeforeReplication: false
syncInternalFilesIgnorePatterns: \/node_modules\/, \/\.git\/, \/obsidian-livesync\/,workspace$
syncInternalFilesInterval: 60
additionalSuffixOfDatabaseName: avalon
ignoreVersionCheck: false
lastReadUpdates: 17
deleteMetadataOfDeletedFiles: false
syncIgnoreRegEx: ""
syncOnlyRegEx: ""
customChunkSize: 50
readChunksOnline: true
watchInternalFileChanges: true
automaticallyDeleteMetadataOfDeletedFiles: 0
disableMarkdownAutoMerge: false
writeDocumentsIfConflicted: false
useDynamicIterationCount: false
syncAfterMerge: false
configPassphraseStore: LOCALSTORAGE
encryptedPassphrase: <redacted>
encryptedCouchDBConnection: <redacted>
permitEmptyPassphrase: false
useIndexedDBAdapter: false
useTimeouts: false
writeLogToTheFile: false
hashCacheMaxCount: 300
hashCacheMaxAmount: 50
concurrencyOfReadChunksOnline: 100
minimumIntervalOfReadChunksOnline: 333
Hi, here is my config.
Thank you for the details! It looks like it has been configured using exponential notation.
I have tried setting it up that way as well, and it does not seem to be parsed as intended (not even 1.0e2).
Could you please check this by setting the values in standard notation?
OK, max_document_size: 10737418240 and max_http_request_size: 4294967296.
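For anyone finding this later, the same keys can also be pushed in plain-integer form through CouchDB's configuration API; this is just a sketch assuming the node is reachable on localhost:5984 with admin credentials (placeholders), and the values are sent as JSON strings:

# Set both limits without exponential notation (placeholder host/credentials)
curl -u admin:password -X PUT http://localhost:5984/_node/_local/_config/couchdb/max_document_size -d '"10737418240"'
curl -u admin:password -X PUT http://localhost:5984/_node/_local/_config/chttpd/max_http_request_size -d '"4294967296"'

Note that changes made this way are written to that node's local configuration, which in a containerized deployment like the Helm chart above may not survive the pod being recreated, so setting them in the deployment's own config as well is probably safer.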
I had to change my nginx (webserver) config: something in the synchronization was exceeding the allowed body size (I was getting an HTTP 413 error), so I added the client_max_body_size directive:
client_max_body_size 200M;
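In case it helps anyone else, here is a minimal sketch of where that directive can go in an nginx reverse-proxy block in front of CouchDB (the server name, certificate paths, and upstream address are placeholders, not my real setup):

server {
    listen 443 ssl;
    server_name couchdb.example.com;                   # placeholder
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;  # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    # Allow large replication request bodies; nginx returns 413 when this limit is exceeded.
    client_max_body_size 200M;

    location / {
        proxy_pass http://127.0.0.1:5984;              # CouchDB behind the proxy
    }
}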
@PetrusZ