"bad_request:invalid UTF-8 JSON" during rebuild everything - after having uploaded 7250 notes
Plugin version 0.16.0: I haven't touched my server in months, so I don't think this is a server misconfiguration (everything seems to have worked flawlessly for months), yet today I'm getting this error. Note that there are plenty of _bulk_docs requests which work, up to the point where they fail. This further seems to point at a client-side problem rather than a server-side one:

My first guess would be that this is caused by a particular note in my local vault - but I can't tell which one from the logs...?
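For reference, this is roughly the exchange in question: CouchDB's standard _bulk_docs endpoint normally answers per document, but when it can't parse the request body at all, the whole request fails with a 400. A minimal sketch (URL, database name and credentials are placeholders for my setup):

```ts
// Minimal sketch of a CouchDB _bulk_docs request (placeholder URL,
// database name and credentials). CouchDB normally replies with a
// per-document result array; if it cannot parse the body at all, the
// whole request fails with 400 {"error":"bad_request",
// "reason":"invalid UTF-8 JSON"} - which is what I'm seeing.
const res = await fetch("http://localhost:5984/obsidian-livesync/_bulk_docs", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Basic " + btoa("admin:password"),
  },
  body: JSON.stringify({ docs: [{ _id: "some-note", data: "..." }] }),
});
console.log(res.status, await res.json());
```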
Thank you for reporting this! It sounds like you are having trouble.
Does every synchronisation raise this error, or does it sometimes complete?
And could you please check the response that is causing the invalid JSON error by looking into the Response pane of the DevTools? (It can be shown by selecting the red request.)
Yes, this happens on every sync, i.e. it never fully completes due to this error.
And the response just says this, no additional clue there:

The request payload looks ok to me:

I suppose there's something "wrong" with one of the 250 docs being sent, but which one...? I'd be happy to change the code so that it uploads single docs instead of the 250-doc batch; maybe that way we can figure out which one is the culprit...
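Something like this bisection over the docs array copied from a failing payload should narrow it down without touching the plugin (just a sketch: the URL and credentials are placeholders, and it should be run against a throwaway database, since the halves that pass are actually written):

```ts
// Binary-search the "docs" array copied from the failing _bulk_docs
// payload for the entry that makes CouchDB reject the whole request.
// Placeholder URL/credentials; use a scratch database, because the
// halves that succeed really get written.
const TARGET = "http://localhost:5984/scratch/_bulk_docs";
const AUTH = "Basic " + btoa("admin:password");

async function batchOk(docs: unknown[]): Promise<boolean> {
  const res = await fetch(TARGET, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: AUTH },
    body: JSON.stringify({ docs }),
  });
  return res.ok; // a 400 "invalid UTF-8 JSON" fails the batch as a whole
}

async function findBadDoc(docs: unknown[]): Promise<unknown | null> {
  if (docs.length === 0 || (await batchOk(docs))) return null; // this half is clean
  if (docs.length === 1) return docs[0]; // found the culprit
  const mid = Math.floor(docs.length / 2);
  return (await findBadDoc(docs.slice(0, mid))) ?? (await findBadDoc(docs.slice(mid)));
}
```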
Also checked the server logs (normal docker logs of the couchdb container), nothing out of the ordinary there.
Thank you for the details! But still, I am wondering what causes this. I'm very glad to have your help.
We can reduce the batch size via Batch size and Batch limit under the Advanced settings in the Sync settings pane. And if you have changed Chunk size, that may have an effect.
(If the chunk size has been enlarged, we have to make sure that max_http_request_size and max_document_size are also enlarged. They can be configured via Check database configuration in Remote database configuration.)
And the JSON parsing error would be shown in CouchDB's log if we set its logging level to debug. But that produces a lot of log output.
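These are ordinary CouchDB configuration keys, so they can also be inspected and changed over CouchDB's standard configuration API (a sketch; the URL, credentials and example values are placeholders):

```ts
// Inspect/change CouchDB settings via the standard /_node/_local/_config
// API (placeholder URL and credentials). PUT bodies must be
// JSON-encoded strings.
const BASE = "http://localhost:5984/_node/_local/_config";
const AUTH = "Basic " + btoa("admin:password");

async function getConfig(section: string, key: string): Promise<string> {
  const res = await fetch(`${BASE}/${section}/${key}`, {
    headers: { Authorization: AUTH },
  });
  return res.json(); // CouchDB returns the value as a JSON string
}

async function setConfig(section: string, key: string, value: string): Promise<void> {
  await fetch(`${BASE}/${section}/${key}`, {
    method: "PUT",
    headers: { Authorization: AUTH, "Content-Type": "application/json" },
    body: JSON.stringify(value), // the body must be a JSON-encoded string
  });
}

// The limits mentioned above, and the debug log level:
// await setConfig("chttpd", "max_http_request_size", "4294967296");
// await setConfig("couchdb", "max_document_size", "50000000");
// await setConfig("log", "level", "debug"); // very verbose; set back to "info" afterwards
```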
Ok, up and running again... By reducing the batch size to 1, I managed to isolate the "offending" note. After deleting it, everything ran smoothly again. Since that offending note had sat in my vault for a long time, though, I wonder why it only caused an issue now?
Next, the Check database configuration log did report 2 issues, with max_http_request_size and max_document_size (if I remember correctly); I clicked Fix for good measure - I take it these are client-side settings.
At any rate, I will try to reproduce the problem in a test vault with that offending note once I have more time. If the note's size was indeed the problem, I think it would be helpful to catch such an error in a way that makes clear what the actual problem is (e.g. the size of the note or the size config params).
Update: that "offending" note is 384 KB (an old web clip) - is there a size limit somewhere? It somehow never came up as a problem, and it sat untouched in my vault for a long time...
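One guess, given that the error complains about UTF-8 rather than size: maybe the clip contains byte sequences that aren't valid UTF-8, which is easy to pick up from scraped HTML. A quick way to check the raw file (a sketch; the path is a placeholder):

```ts
import { readFile } from "node:fs/promises";

// Strict UTF-8 check: TextDecoder with fatal:true throws on any byte
// sequence that is not valid UTF-8, instead of silently substituting
// the U+FFFD replacement character.
async function isValidUtf8(path: string): Promise<boolean> {
  const bytes = await readFile(path); // raw bytes, no decoding
  try {
    new TextDecoder("utf-8", { fatal: true }).decode(bytes);
    return true;
  } catch {
    return false;
  }
}

console.log(await isValidUtf8("offending-web-clip.md")); // placeholder path
```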
Thank you! It may have been caused by the chunk sizes. If we change the chunk size, it affects only subsequent changesets.
Next, the Check database configuration log did report 2 issues, with max_http_request_size and max_document_size (if I remember correctly); I clicked Fix for good measure - I take it these are client-side settings.
The Fix buttons change the server-side settings directly. If we click these buttons, CouchDB will accept larger requests and documents.
A note will be split into chunks, and how it is split depends on its content. We can see how a note was split via Dump information of this doc.
If there are many chunks (100 or more), please inform me.
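The count can also be read straight from the note's entry in the database (a rough sketch: the `children` field name is an assumption about the document schema here, and the URL/credentials are placeholders):

```ts
// Rough sketch: fetch a note's entry from CouchDB and count its chunk
// IDs. The `children` field name is an assumption about the schema;
// the URL and credentials are placeholders.
const DB = "http://localhost:5984/obsidian-livesync";

async function chunkCount(docId: string): Promise<number> {
  const res = await fetch(`${DB}/${encodeURIComponent(docId)}`, {
    headers: { Authorization: "Basic " + btoa("admin:password") },
  });
  const doc = await res.json();
  return Array.isArray(doc.children) ? doc.children.length : 0;
}
```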
If we change the chunk size, it affects only subsequent changesets.
I still got the error yesterday after I had clicked the fix buttons, and the file was still in the vault. I don't have time to set up a test vault and a test CouchDB at the moment; here's that file: https://drive.google.com/file/d/1PsK0er-ArY2E_AzvOizi8rS7eF3oe2mk (since it's just an old web clip, there's no problem sharing it publicly)
Thank you for the file! I tried synchronising it, and the synchronisation completed without a problem. Maybe it can be sent normally with our latest configuration.
Thanks for testing this. As I said:
I still got the error yesterday after I had clicked the fix buttons, and the file was still in the vault.
so this seems to be something peculiar to my environment. I will try to set up a test vault and a test CouchDB later and see if I can reproduce it there.
#144: a similar issue happened here, even though all checks passed in "Check database configuration".