
s3 directory

Open bsormagec opened this issue 5 years ago • 7 comments

Can you add directory support for S3? Thanks.

bsormagec avatar Apr 26 '19 19:04 bsormagec

Yes please!!! We don't use a bucket per MongoDB instance. We have one bucket called backup with different projects inside; inside each project we have environments like staging/production, and below those, folders like mysql, mongodb, files. So I cannot use mgob without this option. If you find time, please help :) It should be fairly simple, because Minio does support this.

easybi-at avatar Jun 13 '19 15:06 easybi-at

You just have to add the path to the bucket property.

s3:
  url: "https://s3-eu-west-1.amazonaws.com/"
  bucket: "my-bucket/folder/"
  accessKey: ""
  secretKey: ""
  api: "S3v4"

Remember to add the final slash (otherwise it'll think it is the filename).

ivanbeldad avatar Jul 05 '19 15:07 ivanbeldad

It works fine for me without the "/" at the end.

  bucket: "my-bucket/folder"

With a "/" at the end, it creates a directory with an empty name.

My link then looks like https://s3.console.aws.amazon.com/s3/object/somebucket/some_dir//mongodb-1568796900.gz?region=eu-central-1&tab=overview, and you can see the double slash "//" after some_dir, which indicates that directory with an empty name.

kuzm1ch avatar Sep 18 '19 09:09 kuzm1ch
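The double slash falls out of plain string concatenation when building the destination key. A minimal sketch (not mgob's actual code; `s3Target` is a hypothetical helper) reproduces the behavior described above:

```go
package main

import "fmt"

// s3Target is a hypothetical helper mimicking how a bucket-with-path
// value and a file name are joined into an S3 destination key.
func s3Target(bucket, fileName string) string {
	return fmt.Sprintf("s3://%s/%s", bucket, fileName)
}

func main() {
	// Without a trailing slash the key is clean.
	fmt.Println(s3Target("my-bucket/folder", "mongodb-1568796900.gz"))
	// s3://my-bucket/folder/mongodb-1568796900.gz

	// With a trailing slash the joined key contains "//",
	// i.e. a directory with an empty name.
	fmt.Println(s3Target("my-bucket/folder/", "mongodb-1568796900.gz"))
	// s3://my-bucket/folder//mongodb-1568796900.gz
}
```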

s3:
  url: "https://s3.console.aws.amazon.com/"
  bucket: "mongo-bakcup"
  accessKey: "xxxx"
  secretKey: "xxx"
  api: "S3v4"

time="2020-06-30T12:14:02+08:00" level=info msg="new dump" archive=/tmp/mongo1-1593490440.gz err="" mlog=/tmp/mongo1-1593490440.log planDir=/storage/mongo1
time="2020-06-30T12:14:03+08:00" level=error msg="Backup failed S3 uploading /storage/mongo1/mongo1-1593490440.gz to mongo1/mongo-bakcup/mongo/ failed /storage/mongo1/mongo1-1593490440.gz -> mongo1/mongo-bakcup/mongo/mongo1-1593490440.gz mc: <ERROR> Failed to copy /storage/mongo1/mongo1-1593490440.gz. Bucket mongo-bakcup does not exist. Total: 0 B, Transferred: 0 B, Speed: 0 B/s : exit status 1" plan=mongo1

5sdba avatar Jun 30 '20 04:06 5sdba

This issue is solvable; you can try this, because it worked for me.

The problem is in the url: it must match your bucket's region. Suppose your bucket's region is ap-south-1; then the url will be https://s3-ap-south-1.amazonaws.com/. For another region, replace the region accordingly.

s3:
  url: "https://s3-ap-south-1.amazonaws.com/"
  bucket: "mongo-bakcup"
  accessKey: "xxxx"
  secretKey: "xxx"
  api: "S3v4"

ak895912 avatar Jul 04 '20 12:07 ak895912

bucket: my-bucket/my-folder works for me

docker image: stefanprodan/mgob:1.3 rclone is connected to DigitalOcean spaces

zaverden avatar Mar 22 '21 10:03 zaverden

Looking at the latest stable version (1.3), there are hints of what needs to be done:

upload := fmt.Sprintf("aws --quiet s3 cp %v s3://%v/%v%v%v",
	file, plan.S3.Bucket, fileName, encrypt, storage)
  • file is the gz produced by the tool.
  • plan.S3.Bucket is the bucket with path. There is no need for a trailing slash, as seen in the URL pattern here. As pointed out above, it will lead to a double slash.
  • fileName is the base name of the file parameter, and is managed automatically.
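Substituting sample values into that `fmt.Sprintf` call shows the command that gets run; this sketch assumes empty `encrypt` and `storage` arguments (the simple case, no encryption and the default storage class) and omits the surrounding mgob plumbing:

```go
package main

import "fmt"

// uploadCmd mirrors the fmt.Sprintf call quoted above; only the
// string construction is reproduced here, nothing is executed.
func uploadCmd(file, bucket, fileName, encrypt, storage string) string {
	return fmt.Sprintf("aws --quiet s3 cp %v s3://%v/%v%v%v",
		file, bucket, fileName, encrypt, storage)
}

func main() {
	fmt.Println(uploadCmd(
		"/storage/plan/db-1593490440.gz", // file: the gz produced by the tool
		"me_buck/me/path",                // bucket with path, no trailing slash
		"db-1593490440.gz",               // base name, managed automatically
		"", ""))
	// aws --quiet s3 cp /storage/plan/db-1593490440.gz s3://me_buck/me/path/db-1593490440.gz
}
```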

So, to upload to me_buck/me/path, a typical block currently looks like:

s3:
  url: "https://s3.me-region.amazonaws.com/"
  bucket: "me_buck/me/path"
  api: "S3v4"

All in all, a mix of the previous answers. This has been tested under 1.3 only. It's a pity there is a hard-coded --quiet in the call: it hides the errors reported in this thread. Removing it could be an easy PR, but I have not checked how the code bubbles up any error report (that happens a few lines after the one quoted here). If anyone who can write Go is interested...

ic avatar Apr 18 '21 00:04 ic