Carlos O'Ryan
If we find a download is not making enough progress, we can start a second one in parallel. This requires some rate limiting (similar to retry throttlers), and requires some...
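A minimal sketch of the progress check that would trigger the parallel download. The function name, the throughput floor, and the warm-up window are all assumptions for illustration, not the client's actual API:

```rust
use std::time::Duration;

/// Hypothetical helper: decide whether to hedge a slow download with a
/// second, parallel attempt. Thresholds are assumed, not real defaults.
fn should_start_parallel_download(
    elapsed: Duration,
    bytes_received: u64,
    min_bytes_per_sec: u64,
    warmup: Duration,
) -> bool {
    // Never hedge during the warm-up window: early throughput is noisy.
    if elapsed < warmup {
        return false;
    }
    // Hedge when observed throughput falls below the configured floor.
    let observed = bytes_received as f64 / elapsed.as_secs_f64();
    observed < min_bytes_per_sec as f64
}

fn main() {
    // 10 MiB in 20s (~512 KiB/s) is below a 1 MiB/s floor: hedge.
    let slow = should_start_parallel_download(
        Duration::from_secs(20), 10 << 20, 1 << 20, Duration::from_secs(5));
    // 100 MiB in 20s (5 MiB/s) is above the floor: do not hedge.
    let fast = should_start_parallel_download(
        Duration::from_secs(20), 100 << 20, 1 << 20, Duration::from_secs(5));
    println!("slow={slow} fast={fast}");
}
```

The rate limiting mentioned above would sit on top of this check, capping how many hedged attempts may start per unit of time.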
That is the point of using resumable uploads. The client should resume the upload if it is interrupted. The client must query the status of the upload, and start from...
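In the JSON resumable upload protocol, the status query returns a `Range: bytes=0-N` header when bytes 0..=N have been persisted, so the upload resumes at byte N+1. A sketch of that offset computation (the parsing is illustrative, not the client's real implementation):

```rust
/// Compute the resume offset from the `Range` header returned by a
/// resumable-upload status query. `Range: bytes=0-N` means bytes 0..=N
/// were persisted; no header means nothing was persisted.
fn resume_offset(range_header: Option<&str>) -> u64 {
    match range_header {
        // No Range header: the service persisted nothing, start at 0.
        None => 0,
        Some(value) => {
            // Expect the form "bytes=0-N"; take the upper bound.
            let last = value
                .strip_prefix("bytes=0-")
                .and_then(|s| s.parse::<u64>().ok())
                .expect("malformed Range header");
            last + 1
        }
    }
}

fn main() {
    println!("resume from byte {}", resume_offset(Some("bytes=0-999")));
    println!("resume from byte {}", resume_offset(None));
}
```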
Subject to the conditional idempotency rules, we should retry any single-shot upload that fails, with all the right policies, of course.
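One way to read the conditional idempotency rule: a single-shot upload is only safe to retry when it carries a precondition such as `if-generation-match`, because a duplicate send then cannot clobber a concurrent write. A sketch under that assumption, with a hypothetical stand-in for the real request options:

```rust
/// Hypothetical stand-in for the real single-shot upload options.
struct UploadOptions {
    if_generation_match: Option<i64>,
}

/// A single-shot upload is idempotent (and therefore retryable) only when
/// it has a generation precondition: the retry either applies exactly once
/// or fails the precondition, but never silently overwrites newer data.
fn is_retryable(opts: &UploadOptions) -> bool {
    opts.if_generation_match.is_some()
}

fn main() {
    let guarded = UploadOptions { if_generation_match: Some(0) };
    let unguarded = UploadOptions { if_generation_match: None };
    println!("guarded={} unguarded={}", is_retryable(&guarded), is_retryable(&unguarded));
}
```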
Find out which "snippets" are required, make a list, and design how they would work in Rust (file per snippet? function per snippet?) and how we would test them. My quick...
Details here: https://github.com/googleapis/storage-shared-benchmarking Notably, this uses Google Cloud Monitoring to capture the results; we need to use the client and maybe do some hacky resource discovery to report the VM...
As it says, we need a benchmark. The benchmark should test multiple object sizes, and run "head to head" in production. Consider using https://github.com/googleapis/storage-shared-benchmarking
Over HTTP+JSON Google Cloud Storage can automatically decompress objects. We need a way to disable this decompression, to detect when it is happening, and then all kinds of things do not...
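One way to detect decompressive transcoding: the service reports the stored encoding in the `x-goog-stored-content-encoding` response header, so if that says `gzip` but the response body itself is not gzip-encoded, the data was decompressed in flight (and the reported size and CRC32C no longer describe the bytes we receive). A sketch over a plain header map:

```rust
use std::collections::HashMap;

/// Detect decompressive transcoding from response headers: the object is
/// stored gzip-compressed but the response body is not gzip-encoded.
fn is_transcoded(headers: &HashMap<String, String>) -> bool {
    let stored = headers
        .get("x-goog-stored-content-encoding")
        .map(String::as_str);
    let sent = headers.get("content-encoding").map(String::as_str);
    stored == Some("gzip") && sent != Some("gzip")
}

fn main() {
    let mut transcoded = HashMap::new();
    transcoded.insert("x-goog-stored-content-encoding".to_string(), "gzip".to_string());
    // No content-encoding header: the service decompressed the object.
    println!("transcoded={}", is_transcoded(&transcoded));

    let mut verbatim = transcoded.clone();
    verbatim.insert("content-encoding".to_string(), "gzip".to_string());
    println!("verbatim={}", is_transcoded(&verbatim));
}
```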
When possible, we should send the checksum when the upload starts, so the service can verify it at the end. There is no way to compute and send the checksum...
We need to verify the CRC32C checksum during downloads, and signal an error if the checksum does not match the reported object checksum. We can compute the CRC32C checksum on...
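A sketch of incrementally computing CRC32C (Castagnoli) as download chunks arrive, so the final value can be compared against the object's reported checksum. This bitwise version is for illustration only; a real client would use a table-driven or hardware-accelerated implementation:

```rust
/// Update a running CRC32C with one chunk of data. Start with `crc = 0`
/// and feed chunks in order; the returned value after the last chunk is
/// the finalized checksum.
fn crc32c_update(crc: u32, data: &[u8]) -> u32 {
    let mut crc = !crc; // recover the internal state between chunks
    for &byte in data {
        crc ^= byte as u32;
        for _ in 0..8 {
            // Reflected CRC32C polynomial (Castagnoli).
            let mask = (crc & 1).wrapping_neg();
            crc = (crc >> 1) ^ (0x82F6_3B78 & mask);
        }
    }
    !crc
}

fn main() {
    // Feed the data in two chunks to show incremental use.
    let crc = crc32c_update(0, b"12345");
    let crc = crc32c_update(crc, b"6789");
    // 0xE3069283 is the well-known CRC32C check value for "123456789".
    println!("crc32c = {crc:#010x}");
}
```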
Downloads can fail to start; we need a retry policy to restart the download if the connection is not established. Downloads can fail after they start: we need a different...
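For the second case, a mid-stream failure should not restart from byte zero; the client can issue a ranged read for the remainder. A sketch of that range computation (the names are illustrative, not the client's actual API):

```rust
/// Build the HTTP `Range` value to resume a download that died after
/// `bytes_received` bytes of an object of `total_size` bytes. Returns
/// `None` when nothing remains to fetch. Bounds are inclusive, per HTTP.
fn resume_range(bytes_received: u64, total_size: u64) -> Option<String> {
    if bytes_received >= total_size {
        return None; // download already complete
    }
    Some(format!("bytes={}-{}", bytes_received, total_size - 1))
}

fn main() {
    // Died after 1000 of 4096 bytes: fetch the remaining 3096.
    println!("{:?}", resume_range(1000, 4096));
    // All bytes received: no resume needed.
    println!("{:?}", resume_range(4096, 4096));
}
```

The retry policy for failures-to-start counts attempts as usual; the resume policy above differs because every successfully received byte is progress, so the attempt budget can reset.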