OOM error for s3fs=0.3.3

Open • albertshx opened this issue 5 years ago • 5 comments

Hey everyone,

Thank you so much for providing such a convenient package for S3 functionality.

I ran into an OOM error with the get operation in s3fs (v0.3.3). The code snippet is simply s3.get(remote_path, local_path), and the file size is around 20 GB.

The process just consumes more and more memory over time and is silently killed by the OS, without producing any error message in the log. The output of dmesg -T looks like:

[Thu Aug 29 10:16:01 2019] [17498] 0 17498 54235 240 59 3 0 0 sudo
[Thu Aug 29 10:16:01 2019] [17499] 0 17499 31233 244 15 3 0 0 bash
[Thu Aug 29 10:16:01 2019] [17514] 0 17514 48152 120 49 3 0 0 su
[Thu Aug 29 10:16:01 2019] [17515] 1002 17515 31231 275 15 4 0 0 bash
[Thu Aug 29 10:16:01 2019] Out of memory: Kill process 17345 (python) score 972 or sacrifice child
[Thu Aug 29 10:16:01 2019] Killed process 17345 (python) total-vm:7821796kB, anon-rss:7630780kB, file-rss:0kB, shmem-rss:0kB

But I didn't encounter the same issue with version 0.2.2. Even though its get download speed is considerably lower than the awscli command line :), it works.

So what might be the underlying issue?

albertshx avatar Aug 30 '19 02:08 albertshx
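
For context, a minimal sketch of the failing download described above; the bucket name and paths are hypothetical placeholders, since the real ones were not shared:

```python
import s3fs

# Hypothetical object (~20 GB) and local destination
remote_path = "my-bucket/path/to/large-file.bin"
local_path = "/tmp/large-file.bin"

s3 = s3fs.S3FileSystem()  # default credential resolution

# On 0.3.3 this call reportedly keeps growing in memory
# until the OOM killer terminates the process:
s3.get(remote_path, local_path)
```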

Can you try with 0.3.4?

TomAugspurger avatar Aug 30 '19 11:08 TomAugspurger

Same issue in 0.4.2.

rch9 avatar May 27 '20 09:05 rch9

Would you mind trying with S3FileSystem(..., default_cache_type='none')?

martindurant avatar May 27 '20 12:05 martindurant
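
For reference, a minimal sketch of what that suggestion looks like in full; the paths are the same hypothetical placeholders as above, and the comment reflects the intent of the suggestion rather than a confirmed fix:

```python
import s3fs

# default_cache_type='none' disables the per-file read cache, so blocks
# fetched during the download should not accumulate in memory.
s3 = s3fs.S3FileSystem(default_cache_type="none")

s3.get("my-bucket/path/to/large-file.bin", "/tmp/large-file.bin")
```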

> Would you mind trying with S3FileSystem(..., default_cache_type='none')?

Thanks, I will try and provide feedback.

rch9 avatar May 27 '20 13:05 rch9

> Would you mind trying with S3FileSystem(..., default_cache_type='none')?
>
> Thanks, I will try and provide feedback.

@rch9, did Martin's suggestion end up working for you?

andersy005 avatar Dec 28 '20 13:12 andersy005