Directory upload/download with boto3
The PHP SDK has functions for uploading and downloading a directory (http://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-s3.html#uploading-a-directory-to-a-bucket). Is there any similar function available in boto3?
If there is no such function, what method(s) would be most suitable for downloading/uploading a directory?
Note: my ultimate goal is to create a sync function like the AWS CLI's.
Right now I'm downloading/uploading files using https://boto3.readthedocs.org/en/latest/reference/customizations/s3.html?highlight=upload_file#module-boto3.s3.transfer
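For reference, the per-file transfers mentioned above look roughly like this; the bucket and key names are placeholders:

    import boto3

    # Per-file transfers via the client-level helpers, which use
    # boto3.s3.transfer under the hood. 'my-bucket' and the keys below
    # are placeholder names.
    s3 = boto3.client('s3')

    # Upload one local file to s3://my-bucket/remote/report.txt
    s3.upload_file('report.txt', 'my-bucket', 'remote/report.txt')

    # Download it back to a local path
    s3.download_file('my-bucket', 'remote/report.txt', 'report_copy.txt')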
Sorry, there is no directory upload/download facility in Boto 3 at the moment. We are considering backporting the CLI sync functions to Boto 3, but there is no specific plan yet.
+1 for a port of the CLI sync function
This would be really useful; IMHO sync is one of the more popular CLI functions.
+1 this would save me a bunch of time
+1 "aws s3 sync SRC s3://BUCKET_NAME/DIR[/DIR....] " Porting this cli to boto3 would be so helpful.
+1
+1
+1
+1
+1
+1
+1
+1
+1
+1
I've been thinking a bit about this; it seems there is a proof of concept working here: https://github.com/seedifferently/boto_rsync
However, the project doesn't seem to have had any love for a while. Instead of forking it, I was asking myself what it would take to rewrite it as a Boto3 feature.
Could I start with just a sync between the local filesystem and S3 via a boto3 client?
Does AWS provide a CRC-32 check or something similar that I could use to detect whether a file needs to be re-uploaded? Or should I base this on the file length instead?
Right now, the simple approach I'm using is:
import logging
import os

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)

# AWS_REGION and BUCKET_NAME are assumed to be module-level constants.
def sync_to_s3(target_dir, aws_region=AWS_REGION, bucket_name=BUCKET_NAME):
    if not os.path.isdir(target_dir):
        raise ValueError('target_dir %r not found.' % target_dir)
    s3 = boto3.resource('s3', region_name=aws_region)
    try:
        s3.create_bucket(Bucket=bucket_name,
                         CreateBucketConfiguration={'LocationConstraint': aws_region})
    except ClientError:
        pass  # bucket already exists
    # Note: assumes target_dir contains only regular files (no subdirectories).
    for filename in os.listdir(target_dir):
        logger.warning('Uploading %s to Amazon S3 bucket %s', filename, bucket_name)
        with open(os.path.join(target_dir, filename), 'rb') as body:
            s3.Object(bucket_name, filename).put(Body=body)
        logger.info('File uploaded to https://s3.%s.amazonaws.com/%s/%s',
                    aws_region, bucket_name, filename)
It just uploads a new version of every file; it doesn't remove previous ones or check whether a file has changed in between.
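On the change-detection question above: S3 doesn't expose a plain CRC-32, but an object's ETag is the hex MD5 of its body for objects uploaded in a single PUT (not for multipart uploads). A minimal sketch of skipping unchanged files by comparing the local MD5 against the stored ETag; the function name and the single-PUT assumption are mine:

    import hashlib

    from botocore.exceptions import ClientError

    def needs_upload(s3_client, bucket, key, local_path):
        """Return True if the local file differs from the S3 object.

        Assumes the object was uploaded in a single PUT, so its ETag is the
        MD5 of its contents; multipart ETags would always look "different".
        """
        try:
            head = s3_client.head_object(Bucket=bucket, Key=key)
        except ClientError:
            return True  # object missing (or inaccessible): upload it
        with open(local_path, 'rb') as f:
            local_md5 = hashlib.md5(f.read()).hexdigest()
        return head['ETag'].strip('"') != local_md5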
+1
+1
+1
+1
I guess you can add as many +1s as you want, but what would be more useful is to start a pull request on the project. Nobody is going to do it for you, folks.
Natim, you've got to be kidding. Implementing this in a reliable way is not trivial, and they already have it implemented, in Python, in the AWS CLI. It is just implemented in such a convoluted way that you need to be an AWS CLI expert to pull it out.
"Implementing this in a reliable way is not trivial"
I didn't say it was trivial, but it doesn't have to be perfect at first and we can iterate on it. I already wrote something working in 15 lines of code; we can start from there.
I don't think reading the AWS CLI code will help with implementing it in boto3.
What I really need is simpler than a directory sync: I just want to pass multiple files to boto3 and have it handle uploading them, taking care of multithreading etc.
I guess this could be done with a light wrapper around the existing API, but I'd have to spend some time investigating it. Does anyone have hints or a rough idea of how to set it up? I'd be willing to do a PR for this once I find the time.
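One possible shape for that wrapper, purely as a sketch: fan the files out over a thread pool and let each worker call the existing upload_file helper (which already handles multipart/threaded transfers for a single large file). The function and parameter names here are made up for illustration:

    from concurrent.futures import ThreadPoolExecutor

    import boto3

    def upload_files(file_key_pairs, bucket, max_workers=8):
        """Upload many (local_path, s3_key) pairs concurrently.

        boto3 clients are thread-safe, so one client is shared by all workers.
        """
        s3 = boto3.client('s3')
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            futures = [pool.submit(s3.upload_file, path, bucket, key)
                       for path, key in file_key_pairs]
            for future in futures:
                future.result()  # re-raise any upload error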
The AWS CLI's sync function is really fast, so my current code uses subprocess to call it. Having it backported to boto would be so much cleaner, though. Another +1 for that to happen.
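For anyone taking the same shortcut, a minimal sketch of shelling out to the CLI; it assumes the aws binary is installed, configured, and on PATH, and the local directory, bucket, and prefix are placeholders:

    import subprocess

    # Mirror a local directory to S3 using the AWS CLI's sync command.
    subprocess.check_call([
        'aws', 's3', 'sync',
        'local_dir/', 's3://my-bucket/some/prefix/',
        '--delete',  # optional: remove remote files that no longer exist locally
    ])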
+1
I was successfully using s4cmd for a while to do this on relatively large directories, but started running into sporadic failures where it wouldn't quite get everything copied. Might be worth taking a peek at what they did there to see if some of it can be salvaged/reused. https://github.com/bloomreach/s4cmd
+1
I used this method (altered from Natim's code):
import os

import boto3

def upload_directory(src_dir, bucket_name, dst_dir):
    if not os.path.isdir(src_dir):
        raise ValueError('src_dir %r not found.' % src_dir)

    all_files = []
    for root, dirs, files in os.walk(src_dir):
        all_files += [os.path.join(root, f) for f in files]

    s3_resource = boto3.resource('s3')
    for filename in all_files:
        # Build the destination key and normalise to forward slashes for S3.
        key = os.path.join(dst_dir, os.path.relpath(filename, src_dir)).replace(os.sep, '/')
        with open(filename, 'rb') as body:
            s3_resource.Object(bucket_name, key).put(Body=body)
The main differences (other than logging and different checks) are that this method copies all files in the directory recursively, and that it allows changing the root path in S3 (inside the bucket).
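For example (the local path, bucket, and prefix below are placeholders):

    # Recursively upload ./build to keys under s3://my-bucket/site/assets/
    upload_directory('./build', 'my-bucket', 'site/assets')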