megacmd
Bash sync script example (with sync deletes)
Hi,
I thought I'd share this with you guys.
Since I wanted to use megacmd as something close to rsync, I have written a small bash script that roughly implements sync deletes: before syncing, it removes remote files and directories that no longer exist locally. It is far from perfect, but it does the job.
#!/bin/bash
NASSHARE="/media/NAS/"
# Check if the local dir exists.
if [ -d "$NASSHARE" ]; then
    # List local directories and files in a tmpfile.
    find "$NASSHARE" > /tmp/syncnas_local_listing
    # Remote listing of mega.co.nz. Strip attributes like size, time etc.
    megacmd -recursive list mega:/somefolder/ > /tmp/syncnas_remote_listing 2>&1
    sed -i 's/\([ \t]\+[^ \t]*\)\{2\}$//' /tmp/syncnas_remote_listing
    # Normalize mega.co.nz paths so they match the local listing.
    sed -i "s#mega:/somefolder/#$NASSHARE#g" /tmp/syncnas_remote_listing
    # Remove trailing slash of path.
    sed -e 's#/$##' -i /tmp/syncnas_remote_listing
    # Read the mega.co.nz listing. If a file/dir is not in the local listing, mark it for deletion.
    while IFS= read -r line; do
        # The trailing $ anchors the match, because one path can be a prefix of another.
        if ! grep -q "${line}$" "/tmp/syncnas_local_listing"; then
            # Restore the remote path.
            DELETEME=$(echo "$line" | sed "s#$NASSHARE#mega:/somefolder/#g")
            # Escape special characters and spaces.
            CLEANDELETEME=$(printf %q "$DELETEME")
            # Put paths to be deleted in a separate file.
            echo "$CLEANDELETEME" >> /tmp/syncnas_remote_delete
        fi
    done < /tmp/syncnas_remote_listing
    # Delete remote files that are gone locally.
    if [ -f "/tmp/syncnas_remote_delete" ]; then
        while IFS= read -r line; do megacmd delete "$line"; done < /tmp/syncnas_remote_delete
    fi
    # Sync local -> remote.
    megacmd -recursive=true -verbose=1 sync "$NASSHARE" mega:/somefolder
fi
# Remove tmpfiles.
rm -f /tmp/syncnas_local_listing
rm -f /tmp/syncnas_remote_listing
rm -f /tmp/syncnas_remote_delete
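If you want to run this unattended, a cron entry along these lines should work. The script location, log file and schedule below are just assumptions for illustration:

# Run the sync every night at 02:30 and append output to a log file.
30 2 * * * /home/user/bin/syncnas.sh >> /home/user/syncnas.log 2>&1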
Hey, have you made any improvements to this script?
I haven't used Mega for a while now, since the founder (Kim Dotcom) stated that it isn't safe anymore (after the Chinese takeover). Also, the connection had some timeouts during large backups...
I'm now using cheap Amazon S3 storage.
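For what it's worth, with S3 the delete-on-sync behaviour is built into the AWS CLI, so the listing/diff/delete dance from the script above isn't needed there. A minimal sketch, assuming a made-up bucket name and the same local path:

# --delete removes objects from the bucket that no longer exist locally,
# which is what the megacmd script above emulates by hand.
aws s3 sync /media/NAS/ s3://my-backup-bucket/somefolder/ --delete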