juicesync
Juicesync is a tool to move your data in object storage between any clouds or regions; it also supports local disk, SFTP, HDFS and many more storage systems.
Juicesync is an alias of the juicefs sync command, but it may not contain the latest features and bug fixes of juicefs sync, so it's recommended to use JuiceFS instead.
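For reference, the two invocations below are meant to be equivalent (the bucket names and prefixes are placeholders):

$ juicesync s3://src-bucket/prefix/ oss://dst-bucket/prefix/
$ juicefs sync s3://src-bucket/prefix/ oss://dst-bucket/prefix/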
How it works
Juicesync scans all the keys from the two object stores and compares them in ascending order to find missing or outdated keys, then downloads them from the source and uploads them to the destination in parallel.
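For example, a run like the following (the bucket names and prefix are placeholders) compares the two prefixes and copies only the objects that are missing or outdated on the destination, using 20 concurrent threads:

$ juicesync --threads 20 s3://src-bucket/data/ oss://dst-bucket/data/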
Install
With Homebrew
brew install juicedata/tap/juicesync
Download binary release
From the release page
Build from source
Juicesync requires Go 1.16+ to build:
go get github.com/juicedata/juicesync
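Note that on Go 1.17 and later, installing binaries with go get is deprecated; assuming the module path above, the module-aware equivalent should be:

$ go install github.com/juicedata/juicesync@latest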
Upgrade
Choose the upgrade method that matches how juicesync was installed:
- Use Homebrew to upgrade
- Download a new version from the release page
Usage
Please check the juicefs sync command documentation for detailed usage: https://juicefs.com/docs/community/administration/sync
$ juicesync -h
NAME:
juicesync - rsync for cloud storage
USAGE:
juicesync [options] SRC DST
SRC and DST should be [NAME://][ACCESS_KEY:SECRET_KEY@]BUCKET[.ENDPOINT][/PREFIX]
VERSION:
v0.7.1-2-g819f9c0
COMMANDS:
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--start KEY, -s KEY the first KEY to sync
--end KEY, -e KEY the last KEY to sync
--threads value, -p value number of concurrent threads (default: 10)
--http-port PORT HTTP PORT to listen to (default: 6070)
--update, -u update existing file if the source is newer (default: false)
--force-update, -f always update existing file (default: false)
--perms preserve permissions (default: false)
--dirs Sync directories or holders (default: false)
--dry don't copy file (default: false)
--delete-src, --deleteSrc delete objects from source after synced (default: false)
--delete-dst, --deleteDst delete extraneous objects from destination (default: false)
--exclude PATTERN exclude keys containing PATTERN (POSIX regular expressions)
--include PATTERN only include keys containing PATTERN (POSIX regular expressions)
--manager value manager address
--worker value hosts (separated by comma) to launch worker
--bwlimit value limit bandwidth in Mbps (default: unlimited) (default: 0)
--no-https do not use HTTPS (default: false)
--verbose, -v turn on debug log (default: false)
--quiet, -q change log level to ERROR (default: false)
--help, -h show help (default: false)
--version, -V print only the version (default: false)
SRC and DST must be a URI of one of the following storage systems:
- file: local disk
- sftp: FTP via SSH
- s3: Amazon S3
- hdfs: Hadoop File System (HDFS)
- gcs: Google Cloud Storage
- wasb: Azure Blob Storage
- oss: Alibaba Cloud OSS
- cos: Tencent Cloud COS
- ks3: Kingsoft KS3
- ufile: UCloud US3
- qingstor: Qing Cloud QingStor
- bos: Baidu Cloud Object Storage
- qiniu: Qiniu Object Storage
- b2: Backblaze B2
- space: DigitalOcean Space
- obs: Huawei Cloud OBS
- oos: CTYun OOS
- scw: Scaleway Object Storage
- minio: MinIO
- scs: Sina Cloud Storage
- wasabi: Wasabi Object Storage
- ibmcos: IBM Cloud Object Storage
- webdav: WebDAV
- tikv: TiKV
- redis: Redis
- mem: In-memory object store
Please check the full supported list here.
SRC and DST should be in the following format:
[NAME://][ACCESS_KEY:SECRET_KEY@]BUCKET[.ENDPOINT][/PREFIX]
Some examples:
- local/path
- user@host:port:path
- file:///Users/me/code/
- hdfs://hdfs@namenode1:9000,namenode2:9000/user/
- s3://my-bucket/
- s3://access-key:secret-key-id@my-bucket/prefix
- wasb://account-name:account-key@my-container/prefix
- gcs://my-bucket.us-west1.googleapi.com/
- oss://test
- cos://test-1234
- obs://my-bucket
- bos://my-bucket
- minio://myip:9000/bucket
- scs://access-key:secret-key@my-bucket/prefix
- webdav://host:port/prefix
- tikv://host1:port,host2:port,host3:port/prefix
- redis://localhost/1
- mem://
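Putting the URI format and the options above together, an illustrative run (bucket names, prefix and pattern are placeholders) that mirrors an S3 prefix to OSS, updates only files that are newer on the source, skips temporary objects and caps bandwidth at 100 Mbps could look like:

$ juicesync --update --exclude '.*\.tmp$' --bwlimit 100 s3://my-bucket/data/ oss://backup-bucket/data/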
Note:
- It's recommended to run juicesync in the target region to have better performance.
- The endpoint is discovered automatically for buckets of S3, OSS, COS, OBS and BOS, so SRC and DST can use the format NAME://[ACCESS_KEY:SECRET_KEY@]BUCKET[/PREFIX]. ACCESS_KEY and SECRET_KEY can be provided by the corresponding environment variables (see below; a combined example follows this list).
- If there is a "/" in ACCESS_KEY or SECRET_KEY, you need to replace it with "%2F".
- S3:
  - The access key and secret key for S3 can be provided by AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, or by IAM role.
- Wasb (Windows Azure Storage Blob):
  - The account name and account key can be provided as a connection string via AZURE_STORAGE_CONNECTION_STRING.
- GCS: The machine should be authorized to access Google Cloud Storage.
- OSS:
  - The credential can be provided by the environment variables ALICLOUD_ACCESS_KEY_ID and ALICLOUD_ACCESS_KEY_SECRET, by RAM role, or by EMR MetaService.
- COS:
  - The AppID should be part of the bucket name.
  - The credential can be provided by the environment variables COS_SECRETID and COS_SECRETKEY.
- OBS:
  - The credential can be provided by the environment variables HWCLOUD_ACCESS_KEY and HWCLOUD_SECRET_KEY.
- BOS:
  - The credential can be provided by the environment variables BDCLOUD_ACCESS_KEY and BDCLOUD_SECRET_KEY.
- Qiniu:
  - The S3 endpoint should be used for Qiniu, for example, abc.cn-north-1-s3.qiniu.com.
  - If there are keys starting with "/", the domain should be provided as QINIU_DOMAIN.
- sftp: If your target machine uses SSH certificates instead of a password, pass the path to your private key file via the environment variable SSH_PRIVATE_KEY_PATH, like SSH_PRIVATE_KEY_PATH=/home/someuser/.ssh/id_rsa juicesync [src] [dst].
- Scaleway:
  - The credential can be provided by the environment variables SCW_ACCESS_KEY and SCW_SECRET_KEY.
- MinIO:
  - The credential can be provided by the environment variables MINIO_ACCESS_KEY and MINIO_SECRET_KEY.
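As a sketch of how the environment variables above fit together (the key values, bucket names and prefixes are placeholders), credentials for both ends of an S3 to OSS sync could be supplied like this:

$ export AWS_ACCESS_KEY_ID=my-s3-access-key
$ export AWS_SECRET_ACCESS_KEY=my-s3-secret-key
$ export ALICLOUD_ACCESS_KEY_ID=my-oss-access-key
$ export ALICLOUD_ACCESS_KEY_SECRET=my-oss-secret-key
$ juicesync s3://src-bucket/prefix/ oss://dst-bucket/prefix/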