
Feature request: Listen filesystem events and sync

Open dbcm opened this issue 10 years ago • 50 comments

It would be nice to have rclone in daemon mode, watching some paths and syncing whenever something changes.

dbcm avatar Dec 11 '15 14:12 dbcm

A nice idea. Could use this library: https://github.com/fsnotify/fsnotify

Or maybe this one which supports recursive watches which will be essential: https://github.com/rjeczalik/notify

The user interface I'd choose is maybe something like this:

rclone fsnotify sync/copy/move /source/path remote:test

What this would do is run the sync/copy/move then wait for an fsnotify event and sync/copy/move the files that changed.

Maybe users would not want the initial sync/copy/move - not sure.

ncw avatar Dec 11 '15 14:12 ncw

\o/

dbcm avatar Dec 11 '15 14:12 dbcm

+1

dibu28 avatar Dec 11 '15 18:12 dibu28

I have doubts that CloudSyncHelper is what it claims to be: https://github.com/XElementSoftware/CloudSyncHelper/blob/master/deployment/CloudSyncHelper.xml

On Tue, Dec 15, 2015 at 8:13 AM, timofonic [email protected] wrote:

I have an issue using Google Drive over Ext4 under Windows. I dual boot and hate it, but I have no option because of homework.

Here's the relevant issue with Ext4 using ext2fsd:

https://sourceforge.net/p/ext2fsd/discussion/143329/thread/37c888a1/?limit=25

Screenshots: https://cloud.githubusercontent.com/assets/49494/11811379/7c17e3aa-a335-11e5-8925-7408418abe44.PNG https://cloud.githubusercontent.com/assets/49494/11811380/7c1d0a06-a335-11e5-8784-e3284d5fe3ef.PNG https://cloud.githubusercontent.com/assets/49494/11811381/7c25b8a4-a335-11e5-8923-cadd1785dd6a.PNG

I found CloudSyncHelper might be useful, but I'm unable to understand how to install it. I submitted an issue: XElementSoftware/CloudSyncHelper#6 https://github.com/XElementSoftware/CloudSyncHelper/issues/6


zeshanb avatar Dec 15 '15 16:12 zeshanb

You can use odrive for that: https://www.odrive.com/

danzig666 avatar Dec 16 '15 13:12 danzig666

+1

dibu28 avatar Dec 25 '15 10:12 dibu28

@dibu28 are you interested in working on this?

ncw avatar Jan 02 '16 15:01 ncw

I don't know much about Go, but I did find a Go wrapper for inotify here:

https://godoc.org/golang.org/x/exp/inotify

It seems to be Linux only, but it could probably be extended.

Inotify is very simple and tends to work well. Maybe inotify's features would be somewhat easy to integrate?

mlanner avatar Jan 02 '16 17:01 mlanner

I'd rather use this one; it's cross-platform: https://github.com/go-fsnotify/fsnotify We probably need to keep a local database of remote files too.

danzig666 avatar Jan 02 '16 18:01 danzig666

Yes, it seems better. Kind of where this thread started anyway. :)

mlanner avatar Jan 02 '16 18:01 mlanner

@timofonic: Please stop adding "bump" messages. This is a new feature request, and there are a lot of those.

This is not as simple as adding two lines of code, this is complex and will require a lot of work to function properly. I have done file monitoring, and there are a lot of cases you need to consider. This is also why a lot of the current applications offering this feature break all the time.

This is a free time project, and @ncw (and the rest of us) are free to choose what we work on. In my personal opinion there are a lot of more important features to work on. Look at the number of current "support" issues - adding another "fragile" feature will not help with that.

klauspost avatar Feb 18 '16 12:02 klauspost

Until this is available, I'm thinking of running an rclone cron job. I am using rclone to back up files to a remote server, so running once daily is sufficient for me.

My only concern is the rare scenario where the previous rclone job has not finished when a new one begins. I tested this scenario and the remote ended up with a few duplicate files uploaded by both jobs. I need to enforce mutual exclusion on the jobs, either by detecting a running rclone job (via pgrep) and retrying the new job after a delay, or by having the jobs use a lock file for mutual exclusion.

Anyone else handle this a different way?

protonmesh avatar Sep 26 '16 14:09 protonmesh

How about a Bash daemon which uses inotify and then spawns rclone? The Bash scripts can be found at https://github.com/resipsa2917/rcloned

resipsadude avatar Oct 15 '16 18:10 resipsadude

@protonmesh Use flock.
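
For example, a minimal flock(1) wrapper to call from cron might look like this (the lock file path and the rclone source/destination below are only illustrative):

#!/bin/bash
# -n: exit immediately instead of waiting if another run still holds the lock,
# so overlapping cron invocations never upload the same files twice.
exec flock -n /tmp/rclone-backup.lock rclone sync /source/path remote:backup

Cron then calls the wrapper instead of rclone directly; an overlapping run simply exits instead of uploading duplicates.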

bobbintb avatar Nov 03 '16 06:11 bobbintb

I wrote this small Bash service that may be of interest: https://github.com/rhummelmose/rclonesyncservice

rhummelmose avatar Jan 10 '17 21:01 rhummelmose

@rhummelmose very nice. It is about time I made a third party tools page which I could link that from... I made an issue to remind myself to do it #1019

ncw avatar Jan 11 '17 09:01 ncw

There is also this, if anyone is interested: https://forum.rclone.org/t/sync-daemon-for-rclone/252

bobbintb avatar Jan 12 '17 04:01 bobbintb

The original ticket is about having rclone work in daemon mode, where a directory is monitored for changes and rclone is executed.

Mr. Carriere above this chain already has a practical solution using Linux cron jobs. Having rclone work in daemon mode is probably out of scope for rclone, and maybe it's best to value separation of concerns.

On Fri, Jul 14, 2017, 7:53 AM Timofonic [email protected] wrote:

Any hope for this? Does FUSE/WinFSP support make this irrelevant?


zeshanb avatar Jul 14 '17 13:07 zeshanb

I think it is a nice idea. I haven't had time to work on it though, so If anyone else has, I'd be happy to guide them.

ncw avatar Jul 14 '17 14:07 ncw

I can share a temporary solution for this I've rolled using incron, and a small shell script, if anyone wants it.

Redrield avatar Sep 12 '17 19:09 Redrield

Hi @Redrield, could you share your shell script? I'd like to give it a try.

Cheers!

Vamoss avatar Oct 25 '17 17:10 Vamoss

@Vamoss I've actually converted it to a python script, but here you go https://gist.github.com/Redrield/81dd7c0cebd5760c65d1127495e487f8 You're gonna need to install pyinotify (and inotify for the actual OS)

Redrield avatar Oct 25 '17 17:10 Redrield

This is what I do on Linux. The script runs rclone whenever a file changes, but also every 300 seconds (the timeout) to make sure it picks up changes to remote files. The sleep helps avoid overly frequent updates.

#!/bin/bash

while [[ true ]] ; do
	rclone sync ....
	while inotifywait --recursive --timeout 300 -e modify,delete,create,move ~/XXX ; do
		rclone sync ....
		echo sleeping
		sleep 5
	done
done

vrossum avatar Mar 07 '18 16:03 vrossum

rclone serve syncthing would be something interesting, although, as already said, it's probably out of scope.

Of course you can always have a local copy of the folder with syncthing on it and rclone mount + another syncthing instance in a different path or remote, but at that point it's better to just use a more robust fs implementation, since rclone mount is still experimental.

untoreh avatar Jun 04 '18 15:06 untoreh

Hi, as of Nov 7 Dropbox will only support unencrypted ext4; syncing will be artificially made not to work on other filesystems, including btrfs, which is the one I am using.

If rclone supported a daemon-like syncing mode, I would be very happy to easily escape from Dropbox.

Saren-Arterius avatar Aug 11 '18 13:08 Saren-Arterius

+1 for rclone serve syncthing. It would be damn awesome if rclone integrated with syncthing!

Hoeze avatar Aug 19 '18 22:08 Hoeze

#!/bin/bash

while [[ true ]] ; do

	# performs synchronizations / copy
	rclone sync ....

	# waiting for something to change or it will pass 300 seconds
	inotifywait --recursive --timeout 300 -e modify,delete,create,move   SRC_DIR

	# going back to the beginning
done

ghost avatar Sep 28 '18 07:09 ghost

#!/bin/bash

while [[ true ]] ; do

	# performs synchronizations / copy
	rclone sync ....

	# waiting for something to change or it will pass 300 seconds
	inotifywait --recursive --timeout 300 -e modify,delete,create,move   SRC_DIR

	# going back to the beginning
done

can you tell me how to use it so it automatically runs from boot until shutdown in the background?

backamblock avatar Jan 13 '19 20:01 backamblock

can you tell me how to use it so it automatically runs from boot until shutdown in the background?

@backamblock To have the script run automatically on reboot as your own user, you can add it into your crontab (crontab -e) with the special @reboot tag:

@reboot /<complete>/<path>/<to>/your-script

Note that cron does not use the same PATH variable as your shell, so it's easiest to simply give the complete path to the script to execute it. Be careful to set the PATH variable to what you need in the script as well, or the helper utilities (rclone and inotifywait) may not be found.
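
For example, the top of the script could set PATH explicitly (the directories below are only illustrative; use wherever rclone and inotifywait live on your system):

#!/bin/bash
# cron starts with a minimal environment, so make the helper tools findable
export PATH=/usr/local/bin:/usr/bin:/bin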

raylee avatar Dec 26 '19 20:12 raylee

#!/bin/bash

while [[ true ]] ; do

	# performs synchronizations / copy
	rclone sync ....

	# waiting for something to change or it will pass 300 seconds
	inotifywait --recursive --timeout 300 -e modify,delete,create,move   SRC_DIR

	# going back to the beginning
done

This will cause a full re-scan each time a file is changed, won't it? => rclone will more or less rescan the whole file system all the time.
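
A rough, untested sketch of an alternative that copies only the path inotifywait reports (SRC_DIR, remote:dest and the event list are illustrative; deletions and renames would still need a periodic full sync):

#!/bin/bash
SRC_DIR=~/XXX
DEST=remote:dest

# -m keeps inotifywait running and printing one line per event;
# --format '%w%f' prints the full path of the file that changed.
inotifywait -m -r --format '%w%f' -e modify,create,moved_to "$SRC_DIR" |
while read -r path ; do
	# copy just the changed file to the same relative path on the remote
	rel="${path#"$SRC_DIR"/}"
	rclone copyto "$path" "$DEST/$rel"
done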

Hoeze avatar Dec 28 '19 19:12 Hoeze