WeblateLockTimeoutError during manual git operations
Describe the issue
Hi, since switching to a new Weblate instance (running 5.3.1) I can no longer do manual git operations. When I trigger them via the UI, it just runs for a long time and then does nothing. When doing it via the REST API, I get an error after two minutes.
Upon checking the logs I saw a WeblateLockTimeoutError. I understand that Weblate prevents multiple parallel operations from happening. However, I also get this error after doing a full restart of the instance and then calling the API right away.
The background queues are empty, so I am not sure what is locking the files. On the old instance with the same settings this worked without issue.
Is there a way to find out what is locking the files?
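For reference, the failing call is roughly the standard repository endpoint of the REST API. A minimal sketch below; the instance URL, project/component slugs, and token are placeholders:

```python
# Sketch of the REST call that times out after two minutes.
# URL, slugs, and token are placeholders, not the real values.
import requests

WEBLATE_URL = "https://weblate.example.com"
TOKEN = "wlu_xxxxxxxxxxxxxxxx"

response = requests.post(
    f"{WEBLATE_URL}/api/components/myproject/mycomponent/repository/",
    headers={"Authorization": f"Token {TOKEN}"},
    json={"operation": "push"},  # "pull"/"update" fails in the same way
    timeout=180,
)
print(response.status_code, response.json())
```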
I already tried
- [X] I've read and searched the documentation.
- [X] I've searched for similar filed issues in this repository.
Steps to reproduce the behavior
- Go to the settings of a component
- Go to repository maintenance
- Click on push or update
Expected behavior
Weblate performs the selected git operation.
Screenshots
No response
Exception traceback
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
gunicorn stderr | response = get_response(request)
gunicorn stderr | ^^^^^^^^^^^^^^^^^^^^^
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/django/core/handlers/base.py", line 197, in _get_response
gunicorn stderr | response = wrapped_callback(request, *callback_args, **callback_kwargs)
gunicorn stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
gunicorn stderr | return view_func(*args, **kwargs)
gunicorn stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/rest_framework/viewsets.py", line 125, in view
gunicorn stderr | return self.dispatch(request, *args, **kwargs)
gunicorn stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 509, in dispatch
gunicorn stderr | response = self.handle_exception(exc)
gunicorn stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 469, in handle_exception
gunicorn stderr | self.raise_uncaught_exception(exc)
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
gunicorn stderr | raise exc
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 506, in dispatch
gunicorn stderr | response = handler(request, *args, **kwargs)
gunicorn stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/weblate/api/views.py", line 257, in repository
gunicorn stderr | "result": self.repository_operation(
gunicorn stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/weblate/api/views.py", line 237, in repository_operation
gunicorn stderr | return getattr(obj, method)(*args)
gunicorn stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/weblate/trans/models/component.py", line 192, in on_link_wrapper
gunicorn stderr | return func(self, *args, **kwargs)
gunicorn stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/weblate/trans/models/component.py", line 1761, in do_push
gunicorn stderr | self.commit_pending(
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/weblate/trans/models/component.py", line 192, in on_link_wrapper
gunicorn stderr | return func(self, *args, **kwargs)
gunicorn stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/weblate/trans/models/component.py", line 1932, in commit_pending
gunicorn stderr | with self.repository.lock:
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/weblate/utils/lock.py", line 86, in __enter__
gunicorn stderr | self._enter_implementation()
gunicorn stderr | File "/usr/local/lib/python3.11/site-packages/weblate/utils/lock.py", line 70, in _enter_redis
gunicorn stderr | raise WeblateLockTimeoutError(
gunicorn stderr | weblate.utils.lock.WeblateLockTimeoutError: Lock on lock:repo:16 could not be acquired in 120s
How do you run Weblate?
Docker container
Weblate versions
5.3.1
Weblate deploy checks
check stderr | System check identified some issues:
check stderr |
check stderr | INFOS:
check stderr | ?: (weblate.I021) Error collection is not set up, it is highly recommended for production use
check stderr | HINT: https://docs.weblate.org/en/weblate-5.3.1/admin/install.html#collecting-errors
check stderr | ?: (weblate.I028) Backups are not configured, it is highly recommended for production use
check stderr | HINT: https://docs.weblate.org/en/weblate-5.3.1/admin/backup.html
check stderr |
check stderr | System check identified 2 issues (1 silenced).
Additional context
No response
It is possible that there is a stale lock after some operation crashed. Restarting won't help, as locks are stored in Redis and persist for some time. Does the issue persist for you?
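To check whether such a stale lock is still present, the Redis keys can be inspected directly. A minimal sketch, assuming the default Docker setup (Redis reachable as host `cache`, database 1 — both assumptions); the exact key prefix used by the lock library may differ from the name shown in the traceback, so the match pattern is kept broad:

```python
# Sketch: list lock-related keys in Redis and show how long they still persist.
# Host, port, and db are assumptions based on the default Docker Compose setup.
import redis

client = redis.Redis(host="cache", port=6379, db=1)
for key in client.scan_iter("*lock*repo*"):
    ttl = client.ttl(key)  # -1 means the key has no expiry set
    print(key.decode(), "TTL:", ttl, "seconds")
```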
This issue has been marked as a question by a Weblate team member. Why? Because it belongs more to professional Weblate Care or the community Discussions than here. We strive to answer these reasonably fast here too, but purchasing a support subscription is the more responsible and faster option for your business. And it makes Weblate stronger as well. Thanks!
In case your question is already answered, making a donation is the right way to say thank you!
It did resolve itself the next day, but now we have it again. I did some more investigation and I think this is actually triggered by a connection issue to the VCS system. Could it be that when the connection fails, the files stay locked until it works again (since it retries after some time)?
We just had another connection error, and when we hit retry it reported the lock error again.
If the process gets killed for some reason, there will be a stale lock for one hour. In case of an exception, the lock should be gracefully released. However, on error, another process might be retrying the operation in the background, and thus the repo is locked. You should see this in the logs.
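To confirm whether another process is holding the lock in the background, the active Celery tasks can be listed from a Django shell inside the container. A minimal sketch, assuming the Celery application is importable as `weblate.utils.celery` (that module path is an assumption):

```python
# Sketch: print the Celery tasks currently running on each worker.
# Run inside the Weblate container, e.g. in a `weblate shell` session.
from weblate.utils.celery import app  # assumed location of the Celery app

inspector = app.control.inspect()
active = inspector.active() or {}
for worker, tasks in active.items():
    print(worker)
    for task in tasks:
        print("  ", task.get("name"), task.get("args"))
```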
This issue has been automatically marked as stale because there wasn’t any recent activity.
It will be closed soon if no further action occurs.
Thank you for your contributions!