I agree my implementation is not very memory efficient; if `part_size` were increased to the maximum allowable 5GB for multipart upload, a 32-bit Python interpreter would most...
Just played around with the Python `tempfile` library, and the data is now chunked into a temporary file. This changes the memory footprint drastically, from a couple hundred MBs to about...
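For reference, here is a minimal sketch of that tempfile-based chunking, assuming a readable binary stream; the names (`buffer_one_part`, `PART_SIZE`, `READ_SIZE`) are illustrative, not the actual B2 CLI internals:

```python
import tempfile

PART_SIZE = 100 * 1024 * 1024  # 100MB, the minimum part size discussed here
READ_SIZE = 64 * 1024          # copy the stream in small 64KB reads

def buffer_one_part(stream, part_size=PART_SIZE):
    """Copy up to part_size bytes from the stream into a temp file on disk.

    Returns (tmp, written); written < part_size means the stream hit EOF.
    """
    tmp = tempfile.TemporaryFile()
    written = 0
    while written < part_size:
        chunk = stream.read(min(READ_SIZE, part_size - written))
        if not chunk:
            break
        tmp.write(chunk)
        written += len(chunk)
    tmp.seek(0)  # rewind so the uploader can read the part from the start
    return tmp, written
```

Only one part ever sits on disk at a time, which is why the RAM footprint drops to roughly the size of `READ_SIZE` plus interpreter overhead.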
Just to note, the temporary file is only `part_size` in size (which can range from 100MB to 5GB), not the size of the entire uploaded file. That seems like a reasonable compromise between HDD...
Considering the minimum `part_size` for a multipart upload is 100MB, I would think 100MB is the minimum cache space (RAM or HDD) required to store the streamed data before...
Would it be best to keep the upload stream serialized, so that if there were a temporary connectivity issue, that part could be retried without having to cache multiple parts...
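To make the question concrete, this is a sketch of the serialized approach: each buffered part is retried from its temp file before the next part is read, so at most one part is ever cached (`upload_part` here is a hypothetical uploader callback, not the real B2 API wrapper):

```python
import time

MAX_RETRIES = 5

def upload_part_with_retry(upload_part, tmp, part_number):
    """Retry a single buffered part; the temp file is rewound on each attempt."""
    for attempt in range(MAX_RETRIES):
        try:
            tmp.seek(0)               # re-read the same part from disk
            return upload_part(tmp, part_number)
        except ConnectionError:
            if attempt == MAX_RETRIES - 1:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff
```

The trade-off is that a serialized retry stalls the whole stream while it backs off, but it avoids holding several `part_size` buffers at once.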
I have continued to refine my implementation for uploading a stream with the B2 CLI and have been using it daily to back up my ZFS (~5GB of data) without failure...
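For context, the daily workflow looks roughly like this; the bucket, dataset, and the `-` stdin convention are illustrative of the patched CLI, not claims about the stock `b2` tool:

```sh
# Hypothetical invocation: '-' tells the patched CLI to read the file data from stdin.
zfs send tank/data@daily | b2 upload-file my-bucket - backups/tank-daily.zfs
```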