rsinc
Google Drive pulls are being reset
Hello, there is an issue with Google Drive. After letting it run for a while I found these errors with rclone verbose. When one of the threads reaches retry 10/10 it resets the pulled file and starts over. I believe this can be partially or fully corrected by reducing the number of multiprocess threads.
I see there is no command-line argument for this option in rsinc, so could you please point me to the code that manages the number of multiprocess workers so I can change it and test it? Thanks.
2020/08/14 11:06:57 INFO  :
Transferred:      166.996M / 347.193 MBytes, 48%, 135.758 kBytes/s, ETA 22m39s
Transferred:             0 / 1, 0%
Elapsed time:     20m59.6s
Transferring:
 - TT T01-08.mkv: 48% /347.193M, 59.334k/s, 51m49s
2020/08/14 11:06:58 INFO  :
Transferred:      197.996M / 347.229 MBytes, 57%, 160.962 kBytes/s, ETA 15m49s
Transferred:             0 / 1, 0%
Elapsed time:     20m59.6s
Transferring:
 - TT T02-01.mkv: 57% /347.229M, 148.126k/s, 17m11s
2020/08/14 11:06:58 DEBUG : TT T01-13.mkv: Reopening on read failure after 175771803 bytes: retry 2/10: read tcp 10.120.234.50:54694->172.217.6.42:443: read: connection reset by peer
2020/08/14 11:07:07 DEBUG : TT T01-11.mkv: Reopening on read failure after 198178641 bytes: retry 4/10: read tcp 10.120.234.50:34718->216.58.195.74:443: read: connection reset by peer
2020/08/14 11:07:38 DEBUG : TT T01-11.mkv: Reopening on read failure after 201404911 bytes: retry 5/10: read tcp 10.120.234.50:41028->172.217.6.74:443: read: connection reset by peer
2020/08/14 11:07:38 DEBUG : TT T01-10.mkv: Reopening on read failure after 181194896 bytes: retry 2/10: read tcp 10.120.234.50:55814->172.217.164.106:443: read: connection reset by peer
2020/08/14 11:07:43 DEBUG : TT T01-09.mkv: Reopening on read failure after 180146343 bytes: retry 2/10: read tcp 10.120.234.50:55796->172.217.164.106:443: read: connection reset by peer
2020/08/14 11:07:45 DEBUG : TT T01-13.mkv: Reopening on read failure after 182321597 bytes: retry 3/10: read tcp 10.120.234.50:41026->172.217.6.74:443: read: connection reset by peer
2020/08/14 11:07:53 DEBUG : TT T01-12.mkv: Reopening on read failure after 190032848 bytes: retry 2/10: read tcp 10.120.234.50:40984->172.217.6.74:443: read: connection reset by peer
2020/08/14 11:07:54 DEBUG : TT T01-11.mkv: Reopening on read failure after 206549487 bytes: retry 6/10: read tcp 10.120.234.50:34732->216.58.195.74:443: read: connection reset by peer
Hi there, thanks for looking into this.
The multiprocess threads are controlled by the class SubPool in classes.py. The instance that controls syncing is in sync.py, and the number of threads is set by the hard-coded variable NUMBER_OF_WORKERS = 7 in the same file.
What exactly is happening at 10/10: does it reset and fail, or just reset and take longer than necessary?
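For anyone following along, here is a minimal sketch of what making that worker count configurable could look like, assuming sync.py hands per-file transfer jobs to a worker pool. The pool usage, job list, and transfer_one helper below are placeholders for illustration, not rsinc's actual SubPool code.

# Hedged sketch only: exposing the hard-coded NUMBER_OF_WORKERS = 7 in sync.py
# as a command-line option. The pool and job layout are placeholders, not
# rsinc's real SubPool.
import argparse
import multiprocessing

DEFAULT_WORKERS = 7  # mirrors the current hard-coded value


def transfer_one(path):
    # Stand-in for the per-file rclone call each worker would run.
    print("pulling", path)


def main():
    parser = argparse.ArgumentParser(prog="rsinc")
    parser.add_argument(
        "--workers",
        type=int,
        default=DEFAULT_WORKERS,
        help="number of concurrent transfer processes",
    )
    args = parser.parse_args()

    files = ["TT T01-08.mkv", "TT T01-09.mkv"]  # illustrative file list
    with multiprocessing.Pool(processes=args.workers) as pool:
        pool.map(transfer_one, files)


if __name__ == "__main__":
    main()

With something along these lines you could start a run with --workers 3 instead of editing sync.py by hand each time.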
I clicked the wrong button lol
After 10/10 it resets and takes longer than necessary, and in some cases it reaches 10/10 again and resets again.
Thanks for the response, I'll test it and get back to you.
Hello, I tested with 3 workers and also --multi-thread-streams 0 --transfers 1.
It improved; I still got these errors, but it didn't reach 10/10 this time:
2020/08/14 11:07:54 DEBUG : TT T01-11.mkv: Reopening on read failure after 206549487 bytes: retry 6/10: read tcp 10.120.234.50:34732->216.58.195.74:443: read: connection reset by peer
So I'll continue to use it with 2 or 3 workers.
Thanks for the help.
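As a side note, here is a rough illustration of how the rclone flags tested above could be passed through to the per-file pull command, assuming rsinc shells out to rclone. The pull helper name and the remote/local paths are made up for the example and are not rsinc's actual implementation.

# Rough illustration (not rsinc's actual implementation): passing the flags
# tested above (--multi-thread-streams 0 --transfers 1) through to rclone
# when pulling a single file. Paths and the helper name are hypothetical.
import subprocess


def pull(remote_path, local_path, extra_flags=None):
    cmd = ["rclone", "copyto", remote_path, local_path, "--verbose"]
    if extra_flags:
        cmd.extend(extra_flags)
    # check=True makes a non-zero rclone exit raise CalledProcessError,
    # so a transfer that keeps failing is surfaced instead of ignored.
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    pull(
        "gdrive:TT/TT T01-11.mkv",  # hypothetical remote path
        "/data/TT/TT T01-11.mkv",   # hypothetical local path
        extra_flags=["--multi-thread-streams", "0", "--transfers", "1"],
    )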