mgob
s3 directory
Can you add directory support for S3? Thanks.
Yes please! We don't use a separate bucket for every MongoDB instance. We have one bucket called backup, with different projects inside; inside projects we have environments like staging/production, and after that folders like mysql, mongodb, files... so I cannot use mgob without this option. If you find time, please help :) It should be fairly simple, because Minio does support them.
You just have to add the path to the bucket property.
s3:
  url: "https://s3-eu-west-1.amazonaws.com/"
  bucket: "my-bucket/folder/"
  accessKey: ""
  secretKey: ""
  api: "S3v4"
Remember to add the final slash (otherwise it'll think it is the filename).
It works fine for me without "/" at the end:
bucket: "my-bucket/folder"
With "/" at the end, it creates a directory with an empty name. My link looks like:
https://s3.console.aws.amazon.com/s3/object/somebucket/some_dir//mongodb-1568796900.gz?region=eu-central-1&tab=overview
and you can see the double slash "//" after some_dir, which indicates a directory with an empty name.
s3:
  url: "https://s3.console.aws.amazon.com/"
  bucket: "mongo-bakcup"
  accessKey: "xxxx"
  secretKey: "xxx"
  api: "S3v4"
time="2020-06-30T12:14:02+08:00" level=info msg="new dump" archive=/tmp/mongo1-1593490440.gz err="/storage/mongo1/mongo1-1593490440.gz
-> mongo1/mongo-bakcup/mongo/mongo1-1593490440.gz
mc: <ERROR> Failed to copy /storage/mongo1/mongo1-1593490440.gz
. Bucket mongo-bakcup
does not exist. Total: 0 B, Transferred: 0 B, Speed: 0 B/s : exit status 1" plan=mongo1
I solved this issue; you can try this, because it worked for me. The problem is the url: it must be your bucket's regional endpoint. Suppose your bucket region is ap-south-1; then the URL will be https://s3-ap-south-1.amazonaws.com/ (for another region, replace the region accordingly).

s3:
  url: "https://s3-ap-south-1.amazonaws.com/"
  bucket: "mongo-bakcup"
  accessKey: "xxxx"
  secretKey: "xxx"
  api: "S3v4"
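The regional endpoint is just a string substitution on the region name, matching the URL format used in the configs in this thread. A sketch (endpointFor is a hypothetical helper, not an AWS SDK function):

```go
package main

import "fmt"

// endpointFor builds the path-style S3 endpoint for a region, in the same
// format as the URLs used in this thread. Illustration only; consult the
// AWS endpoint documentation for the authoritative list.
func endpointFor(region string) string {
	return fmt.Sprintf("https://s3-%s.amazonaws.com/", region)
}

func main() {
	fmt.Println(endpointFor("ap-south-1")) // https://s3-ap-south-1.amazonaws.com/
	fmt.Println(endpointFor("eu-west-1"))  // https://s3-eu-west-1.amazonaws.com/
}
```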
bucket: my-bucket/my-folder
works for me
docker image: stefanprodan/mgob:1.3
rclone is connected to DigitalOcean spaces
Looking at the latest stable version (1.3), there are hints of what needs to be done:
upload := fmt.Sprintf("aws --quiet s3 cp %v s3://%v/%v%v%v",
file, plan.S3.Bucket, fileName, encrypt, storage)
- `file` is the .gz produced by the tool.
- `plan.S3.Bucket` is the bucket with path. There is no need for a trailing slash, as seen in the URL pattern here; as pointed out above, it would lead to a double slash.
- `fileName` is the base name of the `file` parameter, and is managed automatically.
So, to upload to me_buck/me/path, a typical block currently looks like:
s3:
  url: "https://s3.me-region.amazonaws.com/"
  bucket: "me_buck/me/path"
  api: "S3v4"
All in all, a mix of the previous answers. This is tested under 1.3 only. A pity there is `--quiet` in the hard-coded call: it hides the errors discussed in this thread. It could be an easy PR, but I have not checked how the code bubbles up any error report (that happens a few lines after the one pointed at here). If anyone can write Go and is interested...
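As a rough idea of what surfacing those errors could look like: capture the command's combined stdout/stderr and report it on failure. This is only a sketch of the general technique, not mgob's actual error plumbing:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runUpload runs a shell command and returns its combined stdout/stderr
// alongside the error, so failures are not silently swallowed.
// Illustration only; mgob's real code paths may differ.
func runUpload(command string) (string, error) {
	out, err := exec.Command("sh", "-c", command).CombinedOutput()
	return string(out), err
}

func main() {
	// Without --quiet (and with output echoed), a failed upload would
	// produce both a non-nil error and a diagnostic message:
	out, err := runUpload("echo upload failed: bucket not found; exit 1")
	if err != nil {
		fmt.Printf("s3 upload failed: %v, output: %s", err, out)
	}
}
```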