tokio-uring
Roadmap 2023
This issue is a generalisation of the many "improve X" issues we have. I feel we all have our own directions, rough plans for where we'd like to take this, and our own timelines. It'd be good to expose these, to prevent duplicated work or work which becomes obsolete.
This might be better as a discussion?
I think an issue is actually a good way to do this for now
Personally I'd like a few things:
- Revise our APIs to give users control of when their ops get submitted and allow them to link unsubmitted operations
- Something akin to `AsyncRead`/`AsyncWrite` but completion based (see the sketch after this list)
- `multi_thread` support
- Better integration with tokio, ideally by allowing us to have a way to "own" the polling component of the IO driver when running (may be a prereq for good `multi_thread` support)
- Larger suite of APIs, especially provided buffers and more multishot ops
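To make the "completion based `AsyncRead`/`AsyncWrite`" point concrete, here is a minimal sketch of the shape such a trait could take, assuming the owned-buffer convention tokio-uring already uses (operations take the buffer by value and hand it back with the result). The trait and method names are hypothetical, not a proposed design.

```rust
use std::io;

/// Hypothetical completion-based analogue of `AsyncRead`. The buffer is
/// passed by value because the kernel owns it until the completion arrives,
/// so it is returned alongside the result on both success and failure,
/// mirroring tokio-uring's `(io::Result<T>, buf)` return convention.
pub trait AsyncReadOwned {
    // Requires Rust 1.75+ for `async fn` in traits, or the async-trait crate.
    async fn read_owned(&mut self, buf: Vec<u8>) -> (io::Result<usize>, Vec<u8>);
}
```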
- The kernel supports multishot variants of certain operations (#104). I would like those supported (see the sketch after this list).
- To that end, add support for buf rings (#112).
- But there is an outstanding design bug I think should be addressed, #129, because it will become more likely to occur with multishot accept.
- I would like to get most open issues from the past two years closed, one way or the other, or refreshed with current thinking.
- Same for the open PRs.
- I hope the CI for the tokio-uring and io-uring crates can be brought up to using a 6.1 kernel.
- I hope to see a path for the completion-based API being designed for `std` and this crate to come together, and then a path for the result to be used by `hyper` or projects like that.
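For reference, this is what the accept path looks like with today's one-shot API; a multishot accept (#104) would let this loop drain a stream of completions produced by a single long-lived submission instead of submitting a fresh accept op per connection. A sketch using only the existing tokio-uring API, with an arbitrary address:

```rust
fn main() -> std::io::Result<()> {
    tokio_uring::start(async {
        let listener =
            tokio_uring::net::TcpListener::bind("127.0.0.1:8080".parse().unwrap())?;

        loop {
            // One-shot accept: every connection costs a fresh SQE/CQE round trip.
            // A multishot accept would keep producing CQEs from one submission.
            let (stream, _peer) = listener.accept().await?;
            tokio_uring::spawn(async move {
                // Handle the connection; dropped immediately here for brevity.
                drop(stream);
            });
        }
    })
}
```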
I've worked on the first three of these items, but with versions of the code that are now old.
- Builder API: I think we should unify some of the zoo of ops.
- Fine grained control over when an op is submitted to the ring (see the sketch after this list)
- Timer / timeout support (I might do this quite soon)
- Multishot recv
- Also, having been reminded of it, https://github.com/tokio-rs/tokio-uring/issues/112, though I'm not sure it's the issue I was solving for.
- Better integration with tokio, ideally by allowing us to have a way to "own" the polling component of the IO driver when running (may be a prereq for good `multi_thread` support)
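To pin down what "fine grained control over when an op is submitted" plus op linking might feel like from the user side, here is a purely hypothetical builder-style sketch; `unsubmitted_read`, `link_timeout`, and `submit` do not exist in tokio-uring and only stand in for the general shape of the idea.

```rust
use std::io;
use std::time::Duration;

// Hypothetical sketch: build SQEs without submitting them, link a timeout so
// the kernel cancels the read if it fires first (IORING_OP_LINK_TIMEOUT-style),
// then push both entries to the ring at an explicit submission point.
async fn read_with_timeout(
    socket: &tokio_uring::net::TcpStream,
    buf: Vec<u8>,
) -> (io::Result<usize>, Vec<u8>) {
    socket
        .unsubmitted_read(buf)                   // hypothetical: builds the SQE only
        .link_timeout(Duration::from_millis(50)) // hypothetical: links a timeout SQE
        .submit()                                // hypothetical: explicit submission
        .await
}
```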
I'm interested in getting single-request latency closer to that of a blocking call. From my last observation, the epoll overhead on the completion path interfered with that. I remember @Noah-Kennedy was working to dig into the issue more with your own implementation. Any progress on it? I'd like to contribute if possible, as it's becoming one of the major blockers for our application.
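For anyone who wants to reproduce that kind of comparison, a quick single-read latency probe for both paths might look like the following (arbitrary file path and size, and obviously not a rigorous benchmark):

```rust
use std::io::Read;
use std::time::Instant;

fn main() {
    // Blocking baseline: a single 4 KiB read.
    let t = Instant::now();
    let mut f = std::fs::File::open("/tmp/data.bin").unwrap();
    let mut buf = vec![0u8; 4096];
    f.read_exact(&mut buf).unwrap();
    println!("blocking read: {:?}", t.elapsed());

    // The same read issued through the tokio-uring current-thread runtime.
    tokio_uring::start(async {
        let t = Instant::now();
        let file = tokio_uring::fs::File::open("/tmp/data.bin").await.unwrap();
        let (res, _buf) = file.read_at(vec![0u8; 4096], 0).await;
        res.unwrap();
        println!("tokio-uring read: {:?}", t.elapsed());
    });
}
```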
I'll add an organisational desire to the technical ones above. I'd like a clear, documented understanding of the repo's maintainers, and their relationships to and objectives for the repo.
I have a very incomplete understanding of who to ask for review, and often have PRs or issues commented on with "review from X required here", and I don't understand why. My usual approach is to ask for review from whoever I see being active.
I'd probably commit more time to this repo if I could be sure I wasn't working against the stated goals of some core team members, but I don't even know whether it has a core team.
I've also had several PRs stall and die because a reviewer has asked to try an alternative implementation, but there has been no visible progress for months. Now, if this repo is owned by those reviewers, fair enough. If it's more community driven, what's the resolution here? Is there any timeline on those types of feedback?
I've just re-read this and realised it could be construed as critical. That is not the intent. It's a request for clarity on project ownership and review organisation, and an explanation of some of the confusion the current situation causes me. I also realise that this may not be a defined thing, hence filing it under the roadmap: it's a request to work towards that definition of project structure.
@oliverbunting I totally agree, actually. I think that I in particular can do a much better job communicating here, and frankly I haven't dedicated enough time to this project either. I'd like to change this and plan to focus on it in 2023. Don't be afraid of being critical if it's well-intended.
@Noah-Kennedy I don't think there is such a thing as not enough time on an open-source project. We all have jobs, and lives outside those jobs. You owe the project nothing.
My interest in this library is almost completely professional. It makes my life easier, and if contributions make my life easier still (I dislike unnecessarily maintaining forks or large codebases), then it's mutually beneficial. The question for me really is: should I go through the effort of upstreaming, or just hack on and maintain the bits I need?
We may, however, be able to align our selfish interests (well, mine is selfish; others' may be purely altruistic) such that we all gain.
I would like to add that my personal preference is for `tokio-uring` to gain more features that make the current-thread runtime more powerful. However, I'm not sure if this all falls into the `tokio-uring` camp or the `tokio` camp.
That is, taking some inspiration from `glommio`, the idea of multiple rings with different roles (in the case of `glommio` that is a main ring, a latency-sensitive ring, and a polling ring). This would also introduce the concept of `yield_if_needed` for the latency ring, so that we can be preempted but not needlessly yield back to the scheduler. This feels like it lands more on the `tokio-uring` side of the workload.
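The `yield_if_needed` idea can already be approximated on top of tokio's `yield_now`; a minimal sketch, assuming a simple time-slice budget rather than whatever heuristic `glommio` actually uses:

```rust
use std::time::{Duration, Instant};

/// Budget-based cooperative yield: give other tasks a turn only once this
/// task has used up its time slice, instead of yielding on every iteration.
/// The type name and heuristic are illustrative only.
pub struct YieldBudget {
    slice_started: Instant,
    slice: Duration,
}

impl YieldBudget {
    pub fn new(slice: Duration) -> Self {
        Self { slice_started: Instant::now(), slice }
    }

    /// Yields back to the scheduler only if the slice is spent.
    pub async fn yield_if_needed(&mut self) {
        if self.slice_started.elapsed() >= self.slice {
            tokio::task::yield_now().await;
            self.slice_started = Instant::now();
        }
    }
}
```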
The other end of it is to permit multiple task queues in the scheduler, and to be able to dynamically adjust their cooperative pre-emption within the current-thread runtime. This would allow better control of background tasks, like garbage collection, that a process might need to run but could be yielded away from to handle higher-priority request/response tasks. This is probably work for the `tokio` side of things, but would entail exposing primitives via the `tokio-uring` APIs, I imagine.