liburing
Expand user_data
My view is that 64-bit user_data is not enough.
To use send_zc more flexibly, a circular linked list is introduced to maintain buffers. Each send CQE must then carry a buffer address so that the corresponding memory can be released later. However, an ID is still needed to distinguish operations as usual, so I wrap the ID and the buffer pointer into a new struct and assign the struct pointer to user_data.
```c
/* CIRCLEQ_ENTRY comes from <sys/queue.h> */
struct io_req {
    enum io_operation id;
    void *send_buffer;
    CIRCLEQ_ENTRY(io_req) entries;
};
```
Apparently all looks good, but if the application has to cancel some operations rather than an fd, it would be very daunting to walk through the long list of such structs and submit plenty of cancel operations.
Overall, a longer user_data is needed, and cancel_userdata could then be given a mask.
We can't change the CQE, it's fixed. Use cases that need it can use the CQE32 feature and get twice as much space. But:
> Apparently all looks good, but if the application has to cancel some operations rather than fd, it would be very daunting to walk through the long list of such structs and submit plenty of cancel operations.
Not sure what this means, "rather than fd"? Cancel can work on a number of criteria, e.g. combined fd + op or something like that. In general, however, optimizing for cancellations seems odd, as it should be a rarer occurrence.
> We can't change the CQE, it's fixed. Use cases that need it can use the CQE32 feature and get twice as much space. But:
My idea is: could it be possible to have a configurable flag/parameter to set up a ring instance with a bigger sqe/cqe user_data, just like SQE128/CQE32?
> Apparently all looks good, but if the application has to cancel some operations rather than fd, it would be very daunting to walk through the long list of such structs and submit plenty of cancel operations.
> Not sure what this means, "rather than fd"? Cancel can work on a number of criteria, e.g. combined fd + op or something like that. In general, however, optimizing for cancellations seems odd, as it should be a rarer occurrence.
The pointer to the struct was assigned to sqe->user_data, so when the CQE returns, userspace can retrieve the pointer to handle the operation ID and the send buffer address. Assuming thousands of send buffers were allocated, there would be thousands of different user_data values, even though they probably all share the same operation ID.
Now, if we could have a 128-bit user_data, 64 bits of which store an operation ID while the other 64 bits store a pointer, then cancel_userdata could match only the operation ID with a mask.