Large packets (2GB) don't get sent
Bug Report
It takes Tonic roughly 25 seconds to prepare a 2GB packet, but it is never received by the localhost server and the client just sits idle. I gave up after 5 minutes of waiting.

Minimal repro attached
@seanmonstar you may be interested
This will just be a collection of various worst-case scenarios again:
- Reading big data chunks with some `read_to_end()` API requires the buffer to be resized/reallocated repeatedly until the data has been read completely. On each resize, all already-received data must be copied into the new segment. Assuming you ask for 1MB extra on each resize, you need 2000 allocations, with 2000 * 1 GB byte copies on average.
- Besides that, some problems with tokio's `AsyncRead` trait might lead to excessive extra zero-initialization of target buffers on every read, which could mean writing up to 2GB of 0s for every `read()` call.
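A minimal sketch of the reallocation point above, using plain std Rust (not Tonic's actual internals): when the total size is known up front, as it is for a length-prefixed gRPC frame, reserving the capacity once avoids the repeated grow-and-copy cycle.

```rust
use std::io::{Cursor, Read};

/// Read all bytes from `src`, pre-reserving `expected_len` so the Vec
/// does not repeatedly grow and copy already-received data.
fn read_all_sized(mut src: impl Read, expected_len: usize) -> std::io::Result<Vec<u8>> {
    let mut buf = Vec::with_capacity(expected_len);
    src.read_to_end(&mut buf)?;
    Ok(buf)
}

fn main() -> std::io::Result<()> {
    // Stand-in for a large received message (8 MB here, not 2 GB).
    let payload = vec![0xABu8; 8 * 1024 * 1024];

    // Naive: read_to_end() on an empty Vec grows it geometrically,
    // copying everything read so far on each reallocation.
    let mut naive = Vec::new();
    Cursor::new(&payload).read_to_end(&mut naive)?;

    // Pre-sized: at most one allocation for the whole payload.
    let sized = read_all_sized(Cursor::new(&payload), payload.len())?;

    assert_eq!(naive, sized);
    println!("read {} bytes", sized.len());
    Ok(())
}
```

This is why the growth strategy matters much more at 2 GB than at a few megabytes: the copies are proportional to the data already received.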
Fixes:
- Generally: Don't use gRPC for transferring big data chunks. Use streaming.
- Implementation-wise: things which read bigger chunks of data should resize buffers upfront to the target size instead of resizing dynamically.
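To illustrate the streaming recommendation, here is a sketch in plain std Rust (not Tonic's API): rather than one 2GB unary message, split the payload into bounded chunks and send each as its own stream item. `CHUNK_SIZE` is a hypothetical per-message limit chosen for the example, not a value taken from Tonic.

```rust
/// Hypothetical per-message limit for the sketch (4 MB).
const CHUNK_SIZE: usize = 4 * 1024 * 1024;

/// Split a large payload into bounded chunks suitable for sending as
/// individual stream messages instead of one oversized unary request.
fn chunk_payload(payload: &[u8]) -> Vec<Vec<u8>> {
    payload.chunks(CHUNK_SIZE).map(|c| c.to_vec()).collect()
}

fn main() {
    let payload = vec![0u8; 10 * 1024 * 1024 + 123];
    let chunks = chunk_payload(&payload);

    // Every message stays under the limit, and nothing is lost.
    assert!(chunks.iter().all(|c| c.len() <= CHUNK_SIZE));
    assert_eq!(chunks.iter().map(Vec::len).sum::<usize>(), payload.len());
    println!("{} chunks", chunks.len());
}
```

With a client-streaming or bidirectional RPC, the receiver can then reassemble (or better, process incrementally) without any single buffer ever approaching 2GB.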
Surely, though, they should still arrive properly?
Yep, I just checked with logs turned on: the chunk is rejected for the wrong reason (whoops). I'll file an issue on the h2 repo.
@seanmonstar this issue https://github.com/hyperium/h2/issues/471?
Sounds about right. Tonic doesn't do any sort of checking for that and delegates to h2.