Linux client ignores timeout settings
Hi, I posted this in #1170 already some time ago, but maybe that wasn't the right place, since that issue was originally about the Windows client.
Expected behaviour
When you set the connection timeout in ~/.config/Nextcloud/nextcloud.cfg or via the environment variable OWNCLOUD_TIMEOUT=xxx, the client should use this value as the timeout. I tried 1800 seconds, i.e. 30 minutes. According to https://docs.nextcloud.com/desktop/2.6/advancedusage.html the default value should be 300 seconds, i.e. 5 minutes.
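For clarity, this is how I set it (a sketch based on the linked advanced-usage docs; the `[General]` section and `timeout` key are as documented there, and 1800 is just my test value):

```ini
# ~/.config/Nextcloud/nextcloud.cfg
[General]
timeout=1800
```

Alternatively, as I understand the docs, the same value can be set via the environment before starting the client, e.g. `OWNCLOUD_TIMEOUT=1800`.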
Actual behaviour
When you start the client, it begins scanning all files in the sync directory, at least if, as in my case, the upload is not yet complete after a fresh install of the server. This can take a very long time and is very CPU intensive, depending on computer speed and the number of files. In my case maybe about 20 minutes or so (not sure), but usually this process breaks off earlier because of a "timeout", as written in the client window. This often happens after only about 2 minutes, i.e. even less than the default timeout value. After this timeout the whole process starts all over again. This means that sometimes, even after 10 hours or more, not a single file has been uploaded, and Nextcloud becomes completely useless :(
Steps to reproduce
- install a new server or at least create an empty account
- download and start the AppImage Linux client
- choose a sync directory with quite a lot of files, in my case about 300GB in total
- to reproduce the timeout you may need a quite unstable internet connection with long pings and packet loss to the server? At least that's what I have. :( Right now I can't try with a better connection, but I think I actually had this problem before as well; it was just not as severe with a better connection. Still, the internet connection is good enough to upload files via the web interface, so it should be possible using the client as well!
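If it helps with reproduction, a flaky link like mine could probably be emulated with the kernel's netem queueing discipline (a hypothetical sketch, not something I have tested; requires root, and eth0 must be replaced with the actual interface):

```shell
# Add ~500ms delay with 100ms jitter and 5% packet loss on outgoing traffic
sudo tc qdisc add dev eth0 root netem delay 500ms 100ms loss 5%

# ... start the client and watch for the timeout messages ...

# Remove the emulation again
sudo tc qdisc del dev eth0 root
```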
Client configuration
Operating system: Debian stretch
OS language: German
Qt version used by client package (Linux only, see also Settings dialog): Qt 5.12.5
Client package (From Nextcloud or distro) (Linux only): Nextcloud-2.6.4-x86_64.AppImage (stable, build 20200303 ) (but I also tried the old Nextcloud-2.5.1-x86_64.AppImage, same problem)
Installation path of client: no installation, I simply start the AppImage from ~/.nextcloud_appimage
Server configuration
Nextcloud version: 18.0.2 (Debian buster, Apache 2.4.38, PHP 7.3.11-1~deb10u1)
Storage backend (external storage): MariaDB 10.3.22-MariaDB-0+deb10u1
Logs
I tried to log, but it's really impractical, since every file scan is written to the log, which makes it extremely long. A full file scan means a log file of about 60MB, and since the scan starts over and over again, after some days I have several GB of logs! A reduced, errors-only log file would be great, but I guess this is not possible yet?
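For what it's worth, this is roughly how I capture logs (a sketch assuming the AppImage accepts the desktop client's standard logging switches; the file and directory paths are just examples):

```shell
# Write debug output to a single log file
./Nextcloud-2.6.4-x86_64.AppImage --logfile ~/nextcloud-client.log --logdebug

# Or keep rotating logs in a directory, expiring entries after 24 hours
./Nextcloud-2.6.4-x86_64.AppImage --logdir ~/nextcloud-logs --logexpire 24
```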
This bug report did not receive an update in the last 4 weeks. Please take a look again and update the issue with new details, otherwise the issue will be automatically closed in 2 weeks. Thank you!
I guess the issue still exists, but since I have a stable internet connection again and everything is synced now, it does not occur anymore.
> this process breaks off because of a "timeout", as written in the client window.
Do you happen to have the actual timeout message you saw?
> This often happens after only about 2 minutes, i.e. even less than the default timeout value.
The timeout / OWNCLOUD_TIMEOUT parameter is only the HTTP connection timeout. It may not be applicable to the situation you're describing.
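To illustrate the distinction (a generic sketch with the Python standard library, not Nextcloud code): a connection timeout bounds how long a single network operation may stall, not how long the overall sync takes.

```python
import socket

# A connection timeout applies per network operation, not to the whole job.
s = socket.socket()
s.settimeout(5.0)  # each connect/recv may stall for at most 5 seconds

# A sync consisting of thousands of such requests can still run for hours
# without any single request ever hitting the 5-second limit.
print(s.gettimeout())
```

So a sync that runs longer than the configured timeout is not, by itself, a violation of that setting; only an individual stalled request is.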
Hi @joshtrichards, thanks for asking! I still have a stable internet connection (gladly ;) so I am unable to reproduce this problem. I guess you would have to emulate a bad internet connection with packet loss to see these timeouts. I do expect to have a bad internet connection again in a few months, but I do not plan to sync everything again, which was the case when the timeout messages occurred, so I do not expect to run into this problem this time.
Hello,
thank you for reporting this issue.
This issue was reported a long time ago against old versions, and we have improved a lot since then. If the error shows up again, please open a new issue for the current versions.