quic: allow ReadableStream as body for streams
After this change, the body parameter for all created QuicStreams can take a ReadableStream of Uint8Array, ArrayBuffer, etc.
We introduce a DataQueueFeeder class that may also be used for other related mechanisms.
A locally created unidirectional stream cannot have a reader, so the interface is changed to handle this case (uncovered while writing the tests).
Furthermore, ResumeStream must be called after AddStream in QuicSession: the earlier ResumeStream triggered via set_outbound is a no-op because the Stream has not been added to the Session yet.
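For illustration, here is a minimal sketch of the intended usage described above, assuming an already established `session` object and an `openUnidirectionalStream({ body })` call shape; the exact method names of the experimental QUIC API may differ:

```js
// Sketch only: `session` and the openUnidirectionalStream({ body }) shape are
// assumptions for illustration, not the confirmed API surface.
const encoder = new TextEncoder();

// A ReadableStream yielding Uint8Array / ArrayBuffer chunks is now accepted
// as the body of an outgoing stream.
const body = new ReadableStream({
  start(controller) {
    controller.enqueue(encoder.encode('hello '));
    controller.enqueue(encoder.encode('quic'));
    controller.close();
  },
});

// A locally created unidirectional stream is send-only, so the returned
// stream exposes no reader.
const stream = await session.openUnidirectionalStream({ body });
```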
Fixes: https://github.com/nodejs/node/issues/60234
@jasnell This tries to add support for streams as outgoing data. I hope the PR is helpful and does not waste too much of your time. I contributed 1-2 patches a long time ago, but I am not that fluent with the Node.js codebase.
Codecov Report
:x: Patch coverage is 73.04786% with 107 lines in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 88.79%. Comparing base (4f24aff) to head (15d2cc1).
:warning: Report is 24 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #60237 +/- ##
==========================================
+ Coverage 88.53% 88.79% +0.26%
==========================================
Files 703 704 +1
Lines 208546 208884 +338
Branches 40217 40339 +122
==========================================
+ Hits 184634 185479 +845
+ Misses 15926 15296 -630
- Partials 7986 8109 +123
| Files with missing lines | Coverage Δ |
|---|---|
| lib/internal/blob.js | 99.81% <100.00%> (+<0.01%) :arrow_up: |
| lib/internal/quic/quic.js | 100.00% <100.00%> (ø) |
| src/node_binding.cc | 82.74% <ø> (ø) |
| src/quic/quic.cc | 100.00% <ø> (ø) |
| src/quic/streams.h | 0.00% <ø> (ø) |
| src/quic/application.cc | 45.70% <80.00%> (+25.51%) :arrow_up: |
| src/dataqueue/queue.h | 25.00% <25.00%> (ø) |
| src/node_blob.cc | 76.70% <90.90%> (+0.95%) :arrow_up: |
| src/quic/session.cc | 42.95% <66.66%> (+13.88%) :arrow_up: |
| src/quic/streams.cc | 42.58% <51.51%> (+39.38%) :arrow_up: |
| ... and 1 more | |
There is a crash on some platforms. TODO for next weekend.
Locally, I do not see any crashes. Please note that some of the commits fix errors present in the current implementation. I did not open a separate PR, as these were uncovered by the test included in the commit. Most of them are annoying race conditions or problems with the lifetime of objects (use after free!), though I do not see security implications, as I do not assume production usage of QUIC at the current stage.
It seems that Linux ARM is crashing (or is flaky). So I will probably have to use a Linux container next weekend, or something else, to figure out what is happening there. (It may be a problem not introduced by this PR; I hope it is general flakiness on Linux and not ARM-specific, or specific to the timing of the runner.)
Btw I should have time to look at this PR this upcoming weekend
This would be great. I am currently trying to reproduce the crash in a non-ARM container, with no luck so far. Next, I will try to use a GitHub Codespace for ARM. (Edit: there are no ARM Codespaces, but maybe x86 gives a different timing.)
This time there are only unrelated macOS failures. And I could not reproduce the previous ARM failures locally; maybe the rebase has fixed it.
Now code coverage has increased, and there are no failing tests.
The newly added test is actually flaky: sending out data stalls depending on timing. I have to investigate why SendPendingData is not called anymore.
The new test is failing as it uncovered a yet-to-be-determined race condition. What I see is that after submitting 60000, 12, and 50000 bytes, the last submission stalls. On the receive side, 58881 bytes are received. On the send side, it is unclear whether it is flow control or if the sending is just not rescheduled in the Application object. Ideas for the cause are welcome; I will continue next weekend. (Review can still start; the bug fixing is unrelated.)
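For reference, the stalled scenario corresponds roughly to queueing a body like the following (a sketch of the shape, not the actual test code):

```js
// Three chunks of 60000, 12 and 50000 bytes are enqueued on one stream;
// the receiving side only ever observes 58881 bytes before progress stops.
const chunks = [60000, 12, 50000].map((size) => new Uint8Array(size).fill(0x61));
const body = new ReadableStream({
  start(controller) {
    for (const chunk of chunks) controller.enqueue(chunk);
    controller.close();
  },
});
```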
Note that I also found another issue (use after free) during debugging, related to calling next bound to Session and Stream objects that are no longer valid; this should be fixed. It could be a general pitfall when using Bob.
I think I finally found the race condition causing the test to fail. Hopefully the automated tests will pass. Now I have to decide whether to add tests for bidirectional streams in this PR or in another one.
Force-pushed local changes (freshly rebased); I hope I have addressed all points of the first review.
@jasnell After moving DataQueueFeeder to queue.h, I am having trouble with a missing Isolate in this file when the test for dataqueue.h is built. But I cannot move the v8 include up in the test, as format_cpp complains. It is probably something trivial... do you have an idea?
It seems I need a replacement for quic::BindingData in DataQueueFeeder...
Moved DataQueueFeeder out of quic to dataqueue and fixed some test utility functions. Now the bidirectional test fails, as it seems that data is lost in pipeThrough. So I am debugging again.
After fixing some issues locally, I decided to rewrite DataQueueFeeder and not use a custom entry, as the entry does not make the calling sequence easier. I am currently trying to just push some memory to the DataQueue; however, it does not work yet. I am investigating the cause.
But it may be a busy loop coming from Blob::Reader (which seems to use polling).
If this is true, I will add a mode that uses an async mechanism.
I am almost done. However, there are some issues in Blob (a failing test). I need to investigate these and fix the idempotent DataQueue, or whatever caused it.
OK, this was easy: the test had certain expectations about the inner workings of a byte stream, and I adapted it to the new behaviour. One may think about setting a limit for merging.
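In other words, the test asserted the exact chunk boundaries delivered by the byte stream; with merging, only the concatenated contents are guaranteed. A sketch of the more robust pattern (the `blob` value and the helper name are assumed for illustration):

```js
// Chunk boundaries observed by a reader are an implementation detail and can
// change when buffered chunks are merged; assert on the total bytes instead.
async function readAll(blob) {
  const reader = blob.stream().getReader();
  const parts = [];
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    parts.push(value);
  }
  return Buffer.concat(parts);
}
```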
If the test is successful, it is ready for another round of review.
Only a crash on macOS. I assume it is caused by the changes in the Blob reader, though I thought this had been addressed, which seems to be true on other platforms. Any ideas?
Found the reason; now I am trying to fix it.
Problem solved, ready for review now. (I may only change cosmetic things).