
413 replication error after setting chunk size, and the chunk size parameter could not be reduced

Open justemu opened this issue 2 years ago • 7 comments

This plugin once worked very well. The error appeared after a major version update (I think it was 0.14.1). I set the chunk size to 100, as recommended for self-hosted CouchDB.

plugin:obsidian-livesync:9054 POST https://couchdb.xxxx.xx/ob/_bulk_docs 413

I suspected it was a reverse-proxy (nginx) setting problem, so I checked nginx.conf. The setting is 'client_max_body_size 0' in the http section, which means no limit in nginx.

Note that nginx runs on a NAS. Uploading and downloading big files through the same reverse proxy is smooth, so the problem should not be caused by nginx.
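For reference, a 'client_max_body_size 0' directive like the one described would look roughly like this in nginx.conf (a sketch; the surrounding server and location blocks are assumptions, not the poster's actual config):

```nginx
http {
    # 0 disables nginx's request-body size check entirely, so nginx itself
    # will not return 413 for large _bulk_docs POSTs.
    client_max_body_size 0;

    server {
        listen 443 ssl;
        location / {
            # Placeholder upstream; CouchDB's default port is 5984.
            proxy_pass http://127.0.0.1:5984;
        }
    }
}
```

One thing worth checking: client_max_body_size can also be set at the server or location level, and an inner value overrides the http-level 0, so a stricter limit hidden in an inner block could still produce a 413 from nginx.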


Details of my server:

Synology NAS running a Docker service with a CouchDB 3.2.2 container; an nginx reverse proxy on the same NAS provides the HTTPS layer.


I tried reducing the chunk size to something like 50 to see if the problem could be solved. However, when I come back and check the parameter, it is still 100. I have no way to reduce the chunk size setting.

justemu avatar Sep 17 '22 09:09 justemu

Thank you for asking me! Yes, the chunk size is recommended to be around 100, but CouchDB also has to be configured accordingly. Could you please try 'Check database configuration' in the remote settings pane of the settings dialogue? It should check the configuration, and we can fix issues on the spot.
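The same checks can also be done by hand against CouchDB's configuration API (a hedged sketch: the host and credentials below are placeholders from this thread, and the values are in bytes):

```shell
# Read the two limits that most often cause 413 document_too_large on _bulk_docs:
curl -su admin:password "https://couchdb.xxxx.xx/_node/_local/_config/chttpd/max_http_request_size"
curl -su admin:password "https://couchdb.xxxx.xx/_node/_local/_config/couchdb/max_document_size"

# Raise couchdb/max_document_size if it is smaller than your largest chunk
# (example value; pick what fits your vault):
curl -su admin:password -X PUT \
  "https://couchdb.xxxx.xx/_node/_local/_config/couchdb/max_document_size" \
  -d '"50000000"'
```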

And I missed noting this; the documentation will be improved.

I tried reducing the chunk size to something like 50 to see if the problem could be solved. However, when I come back and check the parameter, it is still 100. I have no way to reduce the chunk size setting.

This might be a different issue; I will investigate it.

vrtmrz avatar Sep 17 '22 09:09 vrtmrz

Thank you for the immediate response.

For the first question, I have tried 'Check database configuration'. There is no problem. (screenshot)

The log info in the console is like this:

(screenshot of console log)

It was a replication error when the client was trying to POST to /ob/_bulk_docs. The error code was 413, document_too_large.

I was able to sync (replicate) in previous versions of the plugin, so I think the chunk size parameter is set too large for my server.

I have no idea which part creates the limit in the first place. I think it should not be the nginx reverse proxy, so I also checked the log of the CouchDB Docker container. It looks like this: (screenshot)

HTTP 413 is only associated with POSTs to /ob/_bulk_docs.
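One hedged way to confirm that the 413 originates from CouchDB's document size limit rather than from nginx is to POST an oversized dummy document directly to _bulk_docs (host and credentials are placeholders; delete the probe document afterwards):

```shell
# Build a ~2 MB JSON document on disk, avoiding shell argument-length limits:
printf '{"docs":[{"_id":"size-probe","pad":"' > probe.json
head -c 2000000 /dev/zero | tr '\0' 'x' >> probe.json
printf '"}]}' >> probe.json

# If this also returns 413 document_too_large, the limit is CouchDB's
# couchdb.max_document_size, not nginx's client_max_body_size.
curl -su admin:password -X POST \
  -H 'Content-Type: application/json' \
  -d @probe.json "https://couchdb.xxxx.xx/ob/_bulk_docs"
```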

The second issue, that I could not reduce the chunk size setting, is very weird as well.

justemu avatar Sep 17 '22 15:09 justemu

Thank you for the detailed information!

Did you roll LiveSync back? It looks like 'Check database configuration' did not test chttpd.max_http_request_size and couchdb.max_document_size. These checks exist only in 0.14.3 or above.

Also, once chunks have been created at a larger size, they will remain even if we configure the chunk size again. Please do 'Rebuild everything' once.

But possibly, setting lower values (100/20 ~ 50/10) for Batch size and Batch limit in the Sync settings may help. This configuration also reduces the request size during replication.

And you can set the chunk size to zero by editing the JSON at .obsidian/plugins/data.json: set customChunkSize to 0.
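As a sketch of that data.json edit (assuming jq is available; the exact location of the plugin's data.json inside .obsidian/plugins/ depends on your vault, so adjust the path):

```shell
# Set customChunkSize to 0 in the plugin's settings file.
# "data.json" here stands in for the real path in your vault.
jq '.customChunkSize = 0' data.json > data.json.tmp && mv data.json.tmp data.json
```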

vrtmrz avatar Sep 17 '22 17:09 vrtmrz

It is very weird: my version of the plugin is 0.15.4, yet it did not test chttpd.max_http_request_size and couchdb.max_document_size. I am trying to rebuild the remote database to see what happens. I will come back and report once rebuilding is finished.

justemu avatar Sep 22 '22 14:09 justemu

Rebuild Everything:

  1. Rebuilding of local database succeeded.

  2. Locking of remote database succeeded.

  3. Remote database destroyed. (screenshot)

  4. One-shot sync from local to remote failed with error 'document_too_large', status 413.

  5. Retry with lower batch size 127/12: failed again.

  6. Retry with lower batch size 66/8: failed again.

  7. Retry with lower batch size 35/6: failed again.

  8. Retry with 20/5: failed again.

  9. Retry with 12/5: failed again.

  10. Retry with 8/5: failed.

  11. Retry with 6/5: failed.

  12. Can't replicate with any lower values. (screenshots)

Note from the captured log that the first 4 batches of 250 files (1,000 in total) replicated successfully. It was the 5th batch that failed. There might be a large file in that batch.

The largest files in my vault are .webm recording files, but I have added webm$ to the skip pattern. It seems that the webm$ skip pattern did not work.

Here is my skip pattern: \/node_modules\/, \/\.git\/, \/obsidian-livesync\/, \/workspace$, webm$
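The plugin evaluates these patterns as JavaScript regular expressions; as a quick hedged sanity check, grep -E approximates the same syntax closely enough to confirm what webm$ matches (the file names below are made up):

```shell
# 'webm$' should match any path ending in "webm" and nothing else.
echo "recordings/meeting.webm" | grep -E 'webm$' && echo "would be skipped"
echo "notes/daily.md"          | grep -E 'webm$' || echo "would be synced"
```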

Finally, I checked my database configuration again. There are no items like chttpd.max_http_request_size or couchdb.max_document_size.

(screenshot)

justemu avatar Sep 22 '22 15:09 justemu

Thank you for your careful verification! It is getting clearer! I appreciate that you did not give up.

Maybe the chunk h:+480891820 is too large for the configuration.

/node_modules/, /.git/, /obsidian-livesync/, /workspace$, webm$

This is the configuration for hidden-file synchronisation. If you want to skip the file in normal synchronisation, please configure it in 'Regular expression to ignore files'.

Finally, I checked my database configuration again. There are no items like chttpd.max_http_request_size or couchdb.max_document_size.

This is the most mysterious behaviour. These items should only be hidden if we use IBM Cloudant. I will investigate it.

vrtmrz avatar Sep 22 '22 15:09 vrtmrz

@justemu It should have been fixed in 0.15.9. Could you please try?

vrtmrz avatar Oct 01 '22 17:10 vrtmrz