s3fs
OOM error for s3fs=0.3.3
Hey everyone,
Thank you so much for providing such a convenient packaging of S3 functionality.
I ran into an OOM error with the s3fs (v0.3.3) get operation. The code snippet is simply s3.get(remote_path, local_path), and the file size is around 20 GB.
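For context, a minimal sketch of what the script does; the bucket/key and local path below are placeholders, and no special constructor arguments are assumed:

```python
import s3fs

# s3fs 0.3.3; credentials are picked up from the environment as usual
s3 = s3fs.S3FileSystem()

# Download a ~20 GB object to local disk; this is the call that gets OOM-killed
s3.get('my-bucket/path/to/large-file.bin', '/local/path/large-file.bin')
```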
The process just consumes more memory over time and gets killed by the OS silently, without producing any error message in the log. The output of dmesg -T looks like:
[Thu Aug 29 10:16:01 2019] [17498] 0 17498 54235 240 59 3 0 0 sudo
[Thu Aug 29 10:16:01 2019] [17499] 0 17499 31233 244 15 3 0 0 bash
[Thu Aug 29 10:16:01 2019] [17514] 0 17514 48152 120 49 3 0 0 su
[Thu Aug 29 10:16:01 2019] [17515] 1002 17515 31231 275 15 4 0 0 bash
[Thu Aug 29 10:16:01 2019] Out of memory: Kill process 17345 (python) score 972 or sacrifice child
[Thu Aug 29 10:16:01 2019] Killed process 17345 (python) total-vm:7821796kB, anon-rss:7630780kB, file-rss:0kB, shmem-rss:0kB
But I didn't encounter the same issue with version 0.2.2. Even though its get download speed is considerably lower than the awscli command line :), it works.
So what might be the underlying issue?
Can you try with 0.3.4?
Same issue in 0.4.2.
Would you mind trying with S3FileSystem(..., default_cache_type='none')?
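A minimal sketch of that workaround, reusing the same placeholder paths as above (pass whatever other constructor arguments you already use):

```python
import s3fs

# Workaround suggested above: select the 'none' cache type for open files,
# so reads are passed straight through instead of being buffered in memory.
s3 = s3fs.S3FileSystem(default_cache_type='none')

s3.get('my-bucket/path/to/large-file.bin', '/local/path/large-file.bin')
```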
Thanks, I will try and provide feedback.
@rch9, did Martin's suggestion end up working for you?