s3fs
Allow empty bucket in constructor or URL. Treat first folder in path as bucket name
Hi,
this is more an enhancement proposal than an issue. I am working on a project where I have to access different buckets from the same code. In a case like this, having to build a separate S3FS instance for every bucket can be a little annoying:
```python
myfs1 = fs.open_fs('s3://bucket1')
f1 = myfs1.open('/folder1/file1')
myfs2 = fs.open_fs('s3://bucket2')
f2 = myfs2.open('/folder2/file2')
myfs3 = fs.open_fs('s3://bucket3')
...
```
It would be nice to be able to set up the filesystem with an empty default bucket, treating the first folder in the path as the bucket name:
```python
myfs = fs.open_fs('s3://')
f1 = myfs.open('/bucket1/folder1/file1')
f2 = myfs.open('/bucket2/folder2/file2')
```
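In the meantime, something close to this behaviour can be emulated outside the library by caching one S3FS instance per bucket and splitting the bucket name off the path. A minimal sketch, assuming the fs-s3fs package is installed; `BucketRouter` is a hypothetical helper, not part of s3fs:

```python
from fs_s3fs import S3FS

class BucketRouter:
    """Hypothetical helper: treat the first path component as the
    bucket name and lazily cache one S3FS instance per bucket."""

    def __init__(self):
        self._filesystems = {}

    def open(self, path, mode="r"):
        # Split '/bucket1/folder1/file1' into 'bucket1' and 'folder1/file1'.
        bucket, _, rest = path.lstrip("/").partition("/")
        if bucket not in self._filesystems:
            self._filesystems[bucket] = S3FS(bucket)
        return self._filesystems[bucket].open("/" + rest, mode)

myfs = BucketRouter()
f1 = myfs.open('/bucket1/folder1/file1')
f2 = myfs.open('/bucket2/folder2/file2')
```

The proposal would effectively move this routing inside S3FS itself, so `open_fs('s3://')` works out of the box.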
I can provide a PR for this.
I'm interested in this too, but with fs_gcsfs (based on s3fs). Was it a conscious design decision to require the bucket on initialization? Otherwise, I'd certainly just prefer to use gs://bucket/folder1/etc.
~~I mean, even more interesting would be for PyFilesystem to automatically choose the plugin to use as well, so that we could do fs.open('s3://bucket1/folder1/file1') or fs.open('gs://bucket2/folder2/file2'), but I digress...~~ <- clearly I'm new to open_fs
:)
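For what it's worth, the struck-through wish is already half there: `fs.open_fs` dispatches on the URL scheme through registered openers, so the plugin is chosen automatically; only the treat-first-folder-as-bucket part is missing. A minimal sketch, assuming the fs-s3fs and fs-gcsfs plugins are installed:

```python
import fs

# open_fs picks the filesystem implementation from the URL scheme,
# assuming the matching plugin (fs-s3fs for s3://, fs-gcsfs for gs://)
# is installed and has registered its opener.
s3 = fs.open_fs('s3://bucket1')
gcs = fs.open_fs('gs://bucket2')
f1 = s3.open('/folder1/file1')
f2 = gcs.open('/folder2/file2')
```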