
Reuse S3 session

Open wouterzwerink opened this issue 11 months ago • 9 comments

🚀 Feature Request

Currently, when I use S3 with an IAM role, I see StreamingDataset fetch new credentials for every shard: [screenshot of repeated credential-fetch log lines]. There is a never-ending stream of credential logs after this.

That's quite inefficient; fetching credentials from an IAM role is not that fast. It would be nicer to reuse the credentials until they expire.

Motivation

Faster is better!

[Optional] Implementation

I think it would work to simply reuse the S3 Session object per thread, as in the sketch below.
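
A minimal sketch of that idea, assuming a thread-local cache outside the actual streaming codebase (the helper name and caching strategy here are illustrative, not the library's API):

```python
import threading

import boto3

# Hypothetical helper: keep one boto3 Session/S3 client per thread so IAM role
# credentials are resolved once and reused until boto3 refreshes them itself.
_local = threading.local()


def get_s3_client():
    """Return this thread's cached S3 client, creating it on first use."""
    client = getattr(_local, "s3_client", None)
    if client is None:
        session = boto3.session.Session()  # credential lookup happens here
        client = session.client("s3")
        _local.s3_client = client
    return client
```

Downloads would then call `get_s3_client().download_file(...)` instead of constructing a new session for every shard.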

Additional context

wouterzwerink avatar Mar 06 '24 16:03 wouterzwerink

Hey! If it's not too much of a hassle, mind submitting a PR with your proposed change? I'd be happy to review

snarayan21 avatar Mar 21 '24 15:03 snarayan21

> Hey! If it's not too much of a hassle, mind submitting a PR with your proposed change? I'd be happy to review

Sure! I made a fix for this that worked earlier, but I will need to clean it up a bit before submitting. I will take a look sometime next week.

wouterzwerink avatar Mar 22 '24 14:03 wouterzwerink

Perfect, thank you @wouterzwerink! Feel free to tag me when the PR is up.

snarayan21 avatar Apr 01 '24 15:04 snarayan21

@wouterzwerink Hey, just wanted to follow up on this, mind submitting a quick PR if/when you have some time? Thanks!!

snarayan21 avatar Apr 08 '24 20:04 snarayan21

I am interested in this issue (actually, we need it for a potential performance improvement). I think the question is at which level we want to keep a boto3 session. Maybe keep one session per stream? If so, I propose creating an S3 client in the stream and reusing it whenever download_file() is triggered in Stream._download_file(), as in the sketch below. Any comments?
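
A minimal sketch of that idea, using a simplified stand-in class (the attribute and method names are assumptions and may not match the real streaming code):

```python
import boto3


class Stream:
    """Toy stand-in for the real Stream class, just to show client reuse."""

    def __init__(self, bucket: str):
        self.bucket = bucket
        self._s3_client = None  # created lazily, then shared by all downloads

    @property
    def s3_client(self):
        # Build the client (and its session/credentials) only once per Stream.
        if self._s3_client is None:
            self._s3_client = boto3.session.Session().client("s3")
        return self._s3_client

    def _download_file(self, remote_key: str, local_path: str) -> None:
        # Reuse the cached client instead of creating a new session per shard.
        self.s3_client.download_file(self.bucket, remote_key, local_path)
```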

huxuan avatar Jul 18 '24 09:07 huxuan

@huxuan Are you seeing any performance degradation with the current approach? If yes, by how much?

karan6181 avatar Jul 23 '24 03:07 karan6181

> @huxuan Are you seeing any performance degradation with the current approach? If yes, by how much?

I have not measured that yet; maybe I can implement a draft version for comparison.

huxuan avatar Jul 23 '24 03:07 huxuan

@huxuan I ended up abandoning this after increasing the shard size, which made the S3 overhead negligible. Perhaps that will work for you as well?

wouterzwerink avatar Jul 24 '24 12:07 wouterzwerink

> @huxuan I ended up abandoning this after increasing the shard size, which made the S3 overhead negligible. Perhaps that will work for you as well?

Thanks for the response. We save feature vectors in the data, so the sample size is relatively large (about 12 MB per sample). We are already using 200 MB as the size_limit, resulting in approximately 16 samples per shard and a compressed shard size of about 100 MB with zstd (default level 3). IIUC, with a larger shard size we would also need to increase the sampling_granularity to avoid adding more stress to the network.
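
For reference, the writer settings described above would look roughly like this (the output path, column names, and stand-in data are placeholders, not our actual setup):

```python
from streaming import MDSWriter

# Toy stand-in data; in our case each sample carries a ~12 MB feature vector.
samples = [{"feature_vector": b"\x00" * 1024} for _ in range(4)]
columns = {"feature_vector": "bytes"}

with MDSWriter(
    out="s3://my-bucket/dataset",  # placeholder output location
    columns=columns,
    compression="zstd",            # default level 3
    size_limit="200mb",            # ~16 of our samples per shard before compression
) as writer:
    for sample in samples:
        writer.write(sample)
```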

Any comments are welcome.

huxuan avatar Jul 25 '24 02:07 huxuan