react
[Fizz] Prevent uncloned large precomputed chunks without relying on render-time assertions
A while back we implemented a heuristic: if a chunk was large, it was assumed to be produced by the render and thus safe to stream by transferring the underlying object memory. Later we ran into an issue where a precomputed chunk grew large enough to trigger this heuristic, and it started causing renders to fail, because once a second render had occurred the precomputed chunk no longer had an underlying buffer of bytes to send, and those bytes were omitted from the stream. We implemented a technique to detect large precomputed chunks and enforced that they always be cloned before writing. Unfortunately our test coverage was not perfect, and for a very long time now there has been a usage pattern where, if you complete a boundary in one flush and then complete a boundary that has stylesheet dependencies in another flush, a large precomputed chunk that was not being cloned could be sent twice, causing streaming errors.
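To make the failure mode concrete, here is a minimal sketch of that heuristic, assuming illustrative names (`VIEW_SIZE`, `currentView`, `writtenBytes`); this is not the exact React internals, just the shape of the write path:

```ts
// Simplified sketch of the old write heuristic (illustrative, not the
// actual React Fizz stream config).
const VIEW_SIZE = 512; // the old Edge view size

let currentView = new Uint8Array(VIEW_SIZE);
let writtenBytes = 0;

function writeChunk(
  destination: ReadableStreamDefaultController<Uint8Array>,
  chunk: Uint8Array,
): void {
  if (chunk.byteLength > VIEW_SIZE) {
    // Large chunks were assumed to be render output and were enqueued
    // directly. On runtimes that transfer the underlying buffer, a
    // module-scope precomputed chunk enqueued this way is left with a
    // detached buffer, so a second render that reuses the same chunk
    // streams zero bytes.
    if (writtenBytes > 0) {
      destination.enqueue(new Uint8Array(currentView.buffer, 0, writtenBytes));
      currentView = new Uint8Array(VIEW_SIZE);
      writtenBytes = 0;
    }
    destination.enqueue(chunk);
    return;
  }
  // Small chunks are copied into the pooled view, leaving the source
  // chunk's buffer untouched.
  if (writtenBytes + chunk.byteLength > VIEW_SIZE) {
    destination.enqueue(new Uint8Array(currentView.buffer, 0, writtenBytes));
    currentView = new Uint8Array(VIEW_SIZE);
    writtenBytes = 0;
  }
  currentView.set(chunk, writtenBytes);
  writtenBytes += chunk.byteLength;
}
```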
I've thought about why we even went with this solution in the first place, and I think it was a mistake. It relies on a dev-only check to catch violations, paired with a potentially version-specific order of operations on the streaming side. This is too unreliable. Additionally, the low view size limit for Edge is not used in Node.js, and there is no real justification for the difference.
In this change I updated the view size for Edge streaming to match Node at 2048 bytes, which is still relatively small; we have no data one way or the other to prefer 512 over this. Then I updated the assertion logic to error any time a precomputed chunk exceeds the view size. This eliminates the need to clone these chunks by ensuring our view size is always larger than the largest precomputed chunk we can possibly write. I'm generally in favor of this for a few reasons.
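A sketch of the new invariant, assuming a hypothetical `stringToPrecomputedChunk` helper (React has a similarly named one, but the details here are illustrative):

```ts
// Sketch of the new assertion, checked where precomputed chunks are built.
const VIEW_SIZE = 2048; // now shared by the Node and Edge stream configs

const textEncoder = new TextEncoder();

function stringToPrecomputedChunk(content: string): Uint8Array {
  const precomputedChunk = textEncoder.encode(content);
  if (precomputedChunk.byteLength > VIEW_SIZE) {
    // Because precomputed chunks are created in module scope, this throws
    // as soon as the module is evaluated in tests, rather than deep inside
    // a particular order of streaming operations at render time.
    throw new Error(
      'Precomputed chunks must be smaller than the view size: ' +
        `${precomputedChunk.byteLength} > ${VIEW_SIZE}.`,
    );
  }
  return precomputedChunk;
}
```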
First, we'll always know during testing whether we've violated the limit, as long as we exercise each stream config, because the precomputed chunks are created in module scope. Second, we can always split up large chunks, so keeping each precomputed chunk smaller than whatever view size we actually want is relatively trivial.
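For instance, splitting could look like the following; the chunk names and contents are hypothetical, reusing the sketched helpers from above:

```ts
// Illustrative only: a template that outgrows VIEW_SIZE can be split into
// two module-scope chunks and written back to back.
const completeBoundaryScript1 = stringToPrecomputedChunk('/* first half of an inline script */');
const completeBoundaryScript2 = stringToPrecomputedChunk('/* second half of an inline script */');

function writeCompleteBoundaryScript(
  destination: ReadableStreamDefaultController<Uint8Array>,
): void {
  // Each piece fits in the view, so both are copied into the pooled
  // buffer rather than transferred.
  writeChunk(destination, completeBoundaryScript1);
  writeChunk(destination, completeBoundaryScript2);
}
```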
Yeah, I suppose Edge has implied tighter memory constraints. Do you think the transferred buffers are likely not to be freed quickly?
But yes, I came to the same conclusion: it's trivial for us to split up our large chunks to make them smaller, and that is less bad than accidentally failing to assert the copy, like what happened in the case I'm fixing.