thomas chaton
Hey @passaro Let me update and give you more feedback.
@passaro But if you want to see some failures, you can do something like this. Create 1 bucket with 1M files with random sizes ranging from 100 KB to 10 GB. And...
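A minimal sketch of that benchmark setup, assuming the goal is a size distribution between 100 KB and 10 GB (the `plan_object_sizes` helper and the commented-out upload snippet are hypothetical, not part of any existing tool; the actual uploads are left out so the size plan can be checked offline):

```python
import random

# Object size bounds for the benchmark bucket (in bytes):
# 100 KB to 10 GB, as described in the comment above.
MIN_SIZE = 100 * 1024       # 100 KB
MAX_SIZE = 10 * 1024 ** 3   # 10 GB

def plan_object_sizes(n_objects, seed=0):
    """Return a reproducible list of random object sizes in bytes."""
    rng = random.Random(seed)
    return [rng.randint(MIN_SIZE, MAX_SIZE) for _ in range(n_objects)]

# For the real benchmark, n_objects would be 1_000_000 and each object
# would then be uploaded, e.g. with boto3:
#   s3.upload_fileobj(io.BytesIO(os.urandom(size)), bucket, f"obj-{i}")
sizes = plan_object_sizes(1000)
```

Sketch only: actually writing a million objects of this size is a multi-terabyte job, so in practice the uploads would be parallelized and the data streamed rather than held in memory.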
Hey @passaro I will try again. For the `syslog`, what do you mean exactly? How can I check them?
Keep me updated on this @dannycjones This is one of the major practical uses of mountpoint-s3 for any serious deep learning training. Here is another related issue: https://github.com/awslabs/mountpoint-s3/issues/554
@dannycjones Any updates?
Hey @monthonk Sounds good. I strongly recommend coming up with heavy machine learning benchmarks before going out of alpha, with Rclone, GeeseFS, etc. There are some caveats at scale...
Dear @jamesbornholt, Thanks for the reply! > Mountpoint itself already handles retries and multi-part internally. Do you think anything else is necessary here from the application side? I'm not...
Another suggestion is to provide a way to disable the HeadBucket request. This doesn't work well for us. We had to do some patching.
> Hey @tchaton, > > Regarding the HeadBucket request, we replaced it with a ListObjectsV2 request in [df4087b](https://github.com/awslabs/mountpoint-s3/commit/df4087bd63de7ff31984d9cc0e4a0db951359c11) about two weeks ago to help support customers who didn't want to...
Dear @jamesbornholt, Thanks for using PyTorch Lightning in your benchmark example. I am one of the core developers of the PyTorch Lightning framework, so there is probably room for collaboration...