paperless-ng
This writer is closed
I can only seem to add one document to the database. Upon adding a second document, the dashboard gives:
This writer is closed
Logs have this output:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/asgiref/sync.py", line 288, in main_wrap
    raise exc_info[1]
  File "/usr/src/paperless/src/documents/consumer.py", line 296, in try_consume_file
    document_consumption_finished.send(
  File "/usr/local/lib/python3.9/site-packages/django/dispatch/dispatcher.py", line 180, in send
    return [
  File "/usr/local/lib/python3.9/site-packages/django/dispatch/dispatcher.py", line 181, in <listcomp>
    (receiver, receiver(signal=self, sender=sender, **named))
  File "/usr/src/paperless/src/documents/signals/handlers.py", line 388, in add_to_index
    index.add_or_update_document(document)
  File "/usr/src/paperless/src/documents/index.py", line 140, in add_or_update_document
    update_document(writer, document)
  File "/usr/local/lib/python3.9/contextlib.py", line 135, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/src/paperless/src/documents/index.py", line 94, in open_index_writer
    writer.commit(optimize=optimize)
  File "/usr/local/lib/python3.9/site-packages/whoosh/writing.py", line 1037, in commit
    self.writer.commit(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/whoosh/writing.py", line 920, in commit
    self._check_state()
  File "/usr/local/lib/python3.9/site-packages/whoosh/writing.py", line 555, in _check_state
    raise IndexingError("This writer is closed")
whoosh.writing.IndexingError: This writer is closed
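For context on what the bottom of the traceback means: Whoosh guards every write operation with a state check, and once a writer has been committed (or cancelled) it is permanently closed, so any further `commit()` raises `IndexingError`. The stand-in `Writer` class below is a hypothetical, stdlib-only sketch of that pattern, not Whoosh's actual implementation:

```python
class IndexingError(Exception):
    """Mirrors whoosh.writing.IndexingError for illustration."""


class Writer:
    """Hypothetical stand-in for a Whoosh-style index writer."""

    def __init__(self):
        self.is_closed = False

    def _check_state(self):
        # Whoosh performs a check like this before every write/commit.
        if self.is_closed:
            raise IndexingError("This writer is closed")

    def commit(self):
        self._check_state()
        self.is_closed = True  # after one commit, the writer is unusable


w = Writer()
w.commit()        # first commit succeeds and closes the writer
try:
    w.commit()    # second commit hits the state check
except IndexingError as e:
    print(e)      # prints: This writer is closed
```

In paperless-ng the second commit is triggered indirectly: `open_index_writer` commits on context-manager exit, so anything that already closed the writer inside the `with` block (an I/O error from the index storage, for example) produces exactly this error.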
Same issue here
I never resolved it. Had to give up. Happy to try again if you have any ideas?
Hi, I am using paperless within a Docker container and managed to fix this issue by deleting all the folders I had mounted as well as the paperless Docker image, then doing a fresh installation of the image. Since then I haven't hit the issue.
Of course I first made a backup of the originally uploaded files in the media/documents/original folder. After the clean installation I moved all the backed-up files into the consume folder, and all of them (30 documents) were processed without issues.
Hi, same issue here. Tried switching between PostgreSQL and SQLite; no difference.
I was wondering if some of you also use MergerFS? If I switch the binds to "regular" FS directories, I have no issues. If I copy these directories to a MergerFS mount and use them, the same issues come back. I know there are known issues between MergerFS and mmap().
Edit: removing cache.files=off from the mount options (in OpenMediaVault in my case) seems to fix it 👍
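For anyone wanting to check their own setup: with MergerFS, cache.files=off disables the kernel page cache, which in turn disables mmap() support, and Whoosh memory-maps its index files. A sketch of what a fixed fstab entry might look like (the branch glob and mount point below are placeholders, not paths from this thread):

```shell
# /etc/fstab — placeholder branches and mount point.
# cache.files=off breaks mmap(); either drop the option entirely
# or set it to a caching mode such as auto-full:
/mnt/disk*  /srv/pool  fuse.mergerfs  defaults,allow_other,cache.files=auto-full  0 0
```

In OpenMediaVault the equivalent change is made through the union filesystem's mount options in the web UI rather than by editing fstab directly.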
Yup, that could be it. I am using MergerFS. Will test shortly.