Queue write only after processing all buffers
Partially addresses https://github.com/MagicStack/uvloop/issues/404
Delays calling _queue_write until writelines has added all buffers with _write. This should allow libuv to leverage vectorized I/O (scatter/gather) by calling sendmsg under the hood.
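Roughly, the intended shape is sketched below (a pure-Python illustration only, not the actual Cython from uvloop's stream handle; only the names _write, _queue_write, and writelines come from the real code):

```python
# Rough pure-Python sketch of the control flow. In uvloop these are Cython
# internals; here they just record what happens so the idea is visible.

class StreamSketch:
    def __init__(self):
        self._buffers = []        # pending outgoing buffers
        self._queue_count = 0     # how many times a flush was requested

    def _write(self, data):
        # Add one buffer to the pending list.
        self._buffers.append(data)

    def _queue_write(self):
        # In uvloop this schedules the actual send; here we just count it.
        self._queue_count += 1

    def writelines(self, bufs):
        # The change: add every buffer first, then queue the write once,
        # so all buffers can be submitted together as one vectorized send.
        for buf in bufs:
            self._write(buf)
        self._queue_write()


s = StreamSketch()
s.writelines([b"a", b"b", b"c"])
assert s._buffers == [b"a", b"b", b"c"] and s._queue_count == 1
```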
Also, given GitHub's new policy regarding running CI, I think someone needs to approve the CI run here for it to happen.
@1st1 any thoughts here?
Also, I need some help running the tests. By default, GitHub doesn't run CI on any commit from a new contributor without approval.
@fantix, do you have any thoughts on this one? 🙂
Friendly nudge 😉
Thanks for the PR and sorry for the long waiting!
> This should allow libuv to leverage vectorized I/O (scatter/gather) by calling sendmsg under the hood.
Hmm, do you have any test results showing whether gathered send is or isn't working before/after this change? I'm asking because I think _queue_write() doesn't call _exec_write() until the next loop iteration, so calling _queue_write() multiple times in a tight loop in writelines() is equivalent to calling it just once. I'll run a test later when I come back to this one.
Yeah, this has to be benchmarked, ideally, but the change still looks good & logical. I think we should go with it.
Perhaps we should reshape it a little -- add a default arg "queue_write=True" to _write and call it with "False" from writelines() (to make this internal API a bit easier to use)
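Something like the following rough sketch (again pure Python with hypothetical signatures; the real methods are Cython internals), building on the StreamSketch illustration above:

```python
class StreamSketchReshaped(StreamSketch):
    def _write(self, data, queue_write=True):
        # Single writes keep the old behavior (add + queue) by default.
        self._buffers.append(data)
        if queue_write:
            self._queue_write()

    def writelines(self, bufs):
        # writelines opts out per buffer and queues exactly once at the end.
        for buf in bufs:
            self._write(buf, queue_write=False)
        self._queue_write()
```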
Thanks all! 🙏
Yeah, agreed that benchmarking makes sense. Is there a preferred set of benchmarks we should try initially? Also, is there somewhere to add new benchmarks?
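If it helps frame things, here's a rough standalone micro-benchmark sketch (not from uvloop's benchmark suite; it just times writelines() with many small buffers against an in-process echo sink, and could be run with and without uvloop installed to compare):

```python
import asyncio
import time

try:
    import uvloop          # optional: comment out to time the default loop
    uvloop.install()
except ImportError:
    pass

CHUNKS = [b"x" * 64] * 10_000                 # many small buffers
TOTAL = sum(len(c) for c in CHUNKS)


async def sink(reader, writer):
    # Drain everything the client sends, then close.
    while await reader.read(65536):
        pass
    writer.close()


async def main():
    server = await asyncio.start_server(sink, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    start = time.perf_counter()
    writer.writelines(CHUNKS)                 # the call under test
    # drain() only waits until the transport buffer falls below the
    # high-water mark, so this is a coarse measurement.
    await writer.drain()
    elapsed = time.perf_counter() - start
    print(f"wrote {TOTAL} bytes in {elapsed:.4f}s")

    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()


asyncio.run(main())
```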
As for context on this change, I did a fair bit of sleuthing in issue https://github.com/MagicStack/uvloop/issues/404, which included some digging into libuv itself. I could always be missing something, so would certainly appreciate feedback on that.
Sure, happy to tweak the API to make things a bit more flexible.
@jakirkham thank you for starting this anyway! Please feel free to comment on the PR or the original issue.
Thanks all for improving this and including it 🙏
Sorry for being unresponsive (I was out at the time), though I'm happy to see this conclusion while catching up 🙂