rust-s3
bucket.list doesn't work
Describe the bug
The listing methods (both async and blocking) always return an empty list (empty content).
To Reproduce
use s3::bucket::Bucket;
use s3::creds::Credentials;
use s3::S3Error;

#[tokio::main]
async fn main() -> Result<(), S3Error> {
    let access_key = ""; // your key
    let secret_key = ""; // your secret
    let bucket_name = ""; // your bucket name
    let region = "".parse()?; // your region
    let credentials = Credentials::new(Some(access_key), Some(secret_key), None, None, None).unwrap();
    let bucket = Bucket::new(bucket_name, region, credentials)?;
    let results = bucket.list("/".to_string(), Some("/".to_string())).await?;
    println!("{:?}", results);
    Ok(())
}
Expected behavior
It should behave like the S3 CLI:
aws s3 ls s3://bucket_name/
The execution of this command gives me
PRE a/
PRE b/
PRE c/
PRE d/
That's the content of the root of my bucket, and that's what I expect from the .list method.
I also tried with bucket.list("".to_string(), Some("/".to_string())).await?; and bucket.list("/*".to_string(), Some("/".to_string())).await?; but the output is always the same.
[ListBucketResult { name: "bucket_name", next_marker: None, delimiter: Some("/"), max_keys: 1000, prefix: "/*", marker: None, encoding_type: None, is_truncated: false, next_continuation_token: None, contents: [], common_prefixes: None }]
as you can see contents is empty.
Environment
- Rust version: 1.53.0
- lib version: 0.26.4
Additional context
I'm using your library for my backup tool and the upload is working perfectly (https://github.com/galeone/bacup/blob/main/src/remotes/aws.rs); this is the first problem I've found in your library. It's really useful 👍
Hi,
I had the same problem, but changing the list command to bucket.list(String::default(), Some("/".to_string())).await?; worked for me to list objects from the root. I think that's because paths in S3 don't start with /, so searching for a path under / returns nothing, I guess 🙄.
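For reference, a minimal sketch of that working call, reusing the bucket setup from the report above (the field names match the Debug output shown earlier):

// S3 keys have no leading slash, so the empty prefix addresses the root.
let results = bucket.list(String::default(), Some("/".to_string())).await?;
for page in &results {
    // With a "/" delimiter, top-level "directories" come back as
    // common prefixes rather than contents.
    if let Some(prefixes) = &page.common_prefixes {
        for p in prefixes {
            println!("PRE {}", p.prefix);
        }
    }
}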
I ended up switching to rusoto_s3. You can see in bacup how I use it to enumerate the objects, and it works fine.
Newbie question... How do I print the results? Do I need to define a lifetime for results?
let results = bucket.list("/".to_string(), Some("/".to_string()));
println!("{:#?}", results);
I'm getting this error:
error[E0425]: cannot find value `results` in this scope
--> src/main.rs:18:23
|
18 | println!("{:#?}", results);
| ^^^^^^^ not found in this scope
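No lifetime is needed: list returns a future, so it has to be awaited inside an async fn, and the E0425 error suggests the results binding was declared in a different scope than the println! that uses it. A minimal sketch, assuming the bucket setup from the original report:

// Await the future and keep `results` in the same scope as the print.
let results = bucket.list("/".to_string(), Some("/".to_string())).await?;
println!("{:#?}", results);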
Here's my current understanding from a few smoke tests, which confirms the above:
- get, put, and delete seem to ignore an initial / in a key.
- However, for list, adding / at the beginning of a prefix leads to an empty list.
- So setting let key = "/file.txt" and let prefix = key will lead to an empty result for list requests.
- But removing the initial / allows the object of interest to show up in the list.
@dhbradshaw this might have been addressed with recent fixes to slash encoding
I just ran the smoke test using versions 0.28 and 0.30 and had the same list issue. Is the fix newer than 0.30?
Basically, if you save an object using put with a key of /file.txt it will show up as file.txt.
Then if you look for it with a list prefix /file.txt you won't see it.
But with a list prefix of file.txt it will show up.
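A short sketch of that asymmetry, assuming a configured bucket in an async context (the file name is a placeholder):

// Put with a leading slash: the object is stored under "file.txt".
bucket.put_object("/file.txt", b"hello").await?;

// Listing with the slash-prefixed key finds nothing...
let empty = bucket.list("/file.txt".to_string(), None).await?;
assert!(empty.iter().all(|page| page.contents.is_empty()));

// ...while the same prefix without the slash shows the object.
let found = bucket.list("file.txt".to_string(), None).await?;
assert!(found.iter().any(|page| !page.contents.is_empty()));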
@dhbradshaw thanks for the quick turnaround; the fix would have caught it, will look into it. A half-baked idea I have is that the delimiter is / by default, so a slash in the path gets interpreted as a delimiter as well. As I said, will look into it.
Bringing this back to life, as I'm running into the same issue. When I perform a multi-part put with a path of /tmp/my_dir/my_file, it ends up in S3 as tmp/my_dir/my_file.
I believe the problem is related to this change/commit: https://github.com/durch/rust-s3/commit/0bb50cf296f8d7eff24e8390de8e6a29c807b8e1
If the path begins with a /, it is removed, only to be added back to url_str, but then uri_encode is passed false, so slashes are not percent-encoded. I don't know enough about the low-level details of the S3 protocol, but this seems to be where things are going wrong. I think the bool passed to uri_encode needs to consider whether self.path() begins with a slash or not... but that's just a guess.
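To illustrate the two modes being discussed (a sketch only, not the library's actual implementation): with slash encoding on, a leading / survives as %2F and stays part of the key; with it off, slashes pass through as path separators, so a leading / is lost when the URL is normalized.

// Byte-wise sketch of a uri_encode(path, encode_slash)-style helper.
fn uri_encode(path: &str, encode_slash: bool) -> String {
    path.bytes()
        .map(|b| match b {
            // Unreserved characters pass through unchanged.
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'.' | b'_' | b'~' => {
                (b as char).to_string()
            }
            // With encode_slash == false, "/" is kept as a separator.
            b'/' if !encode_slash => "/".to_string(),
            // Everything else (including "/" when encode_slash is true)
            // is percent-encoded.
            _ => format!("%{:02X}", b),
        })
        .collect()
}

fn main() {
    assert_eq!(uri_encode("/tmp/my_file", true), "%2Ftmp%2Fmy_file");
    assert_eq!(uri_encode("/tmp/my_file", false), "/tmp/my_file");
}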
This can be seen in the ResponseData from the call (abbreviated):
ResponseData {
bytes: b"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<CompleteMultipartUploadResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"><Location>https://xxx.s3.amazonaws.com/tmp%2Fls_sj5Whz%2F1682208000000%2Fdata</Location><Bucket>xxx</Bucket><Key>tmp/ls_sj5Whz/1682208000000/data</Key></CompleteMultipartUploadResult>",
status_code: 200,
headers: {"date": "Thu, 04 May 2023 01:50:58 GMT", "x-amz-server-side-encryption": "AES256", "content-type": "application/xml", ... "server": "AmazonS3"}
}
The key lacks the initial slash, and so does the <Location> element in the XML.
I just ran into the same problem, trying to adapt the example code from the list method. Does it make sense to remove the leading slash from the prefix in the documentation, or should the library be stripping it automatically?
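Until that's decided, a caller-side workaround is to strip the slash yourself (a sketch; trim_start_matches is plain std, and user_prefix is a hypothetical name for whatever prefix your caller supplies):

// S3 keys are stored without a leading slash, so drop it from the prefix.
let prefix = user_prefix.trim_start_matches('/').to_string();
let results = bucket.list(prefix, Some("/".to_string())).await?;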