
feat: support accept raw socket from listen

Open sidyhe opened this issue 7 months ago • 5 comments

Hello!

My project is a multithreaded TCP server:

  • each thread has its own uv loop
  • the main thread listens and accepts
  • in the listen callback, the raw socket is taken and dispatched to a worker thread
  • the worker thread uses uv_tcp_open to bind the socket to its loop

So I would like to add an API to libuv:

For Windows:

int uv_tcp_accept_socket(uv_tcp_t* server, uv_os_sock_t* client) {
  int err = 0;

  uv_tcp_accept_t* req = server->tcp.serv.pending_accepts;

  if (!req) {
    /* No valid connections found, so we error out. */
    return WSAEWOULDBLOCK;
  }

  if (req->accept_socket == INVALID_SOCKET) {
    return WSAENOTCONN;
  }

  *client = req->accept_socket;

  /* Prepare the req to pick up a new connection */
  server->tcp.serv.pending_accepts = req->next_pending;
  req->next_pending = NULL;
  req->accept_socket = INVALID_SOCKET;

  if (!(server->flags & UV_HANDLE_CLOSING)) {
    /* Check if we're in a middle of changing the number of pending accepts. */
    if (!(server->flags & UV_HANDLE_TCP_ACCEPT_STATE_CHANGING)) {
      uv__tcp_queue_accept(server, req);
    } else {
      /* We better be switching to a single pending accept. */
      assert(server->flags & UV_HANDLE_TCP_SINGLE_ACCEPT);

      server->tcp.serv.processed_accepts++;

      if (server->tcp.serv.processed_accepts >= uv_simultaneous_server_accepts) {
        server->tcp.serv.processed_accepts = 0;
        /*
         * All previously queued accept requests are now processed.
         * We now switch to queueing just a single accept.
         */
        uv__tcp_queue_accept(server, &server->tcp.serv.accept_reqs[0]);
        server->flags &= ~UV_HANDLE_TCP_ACCEPT_STATE_CHANGING;
        server->flags |= UV_HANDLE_TCP_SINGLE_ACCEPT;
      }
    }
  }

  return err;
}

For Unix:

int uv_tcp_accept_socket(uv_tcp_t* server, uv_os_sock_t* client) {
  int err = 0;

  if (server->accepted_fd == -1)
    return UV_EAGAIN;

  *client = server->accepted_fd;

  /* Process queued fds */
  if (server->queued_fds != NULL) {
    uv__stream_queued_fds_t* queued_fds;

    queued_fds = server->queued_fds;

    /* Read first */
    server->accepted_fd = queued_fds->fds[0];

    /* All read, free */
    assert(queued_fds->offset > 0);
    if (--queued_fds->offset == 0) {
      uv__free(queued_fds);
      server->queued_fds = NULL;
    } else {
      /* Shift rest */
      memmove(queued_fds->fds,
              queued_fds->fds + 1,
              queued_fds->offset * sizeof(*queued_fds->fds));
    }
  } else {
    server->accepted_fd = -1;
    if (err == 0)
      uv__io_start(server->loop, &server->io_watcher, POLLIN);
  }
  return err;
}
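
For context, the listener side would use it roughly like this (a sketch only; dispatch_to_worker() stands for a thread-safe queue in my wrapper and is not part of the proposal):

static void on_connection(uv_stream_t* server, int status) {
  uv_os_sock_t sock;

  if (status < 0)
    return;

  /* Pull the raw accepted socket out synchronously. */
  if (uv_tcp_accept_socket((uv_tcp_t*) server, &sock) != 0)
    return;

  /* Hand the raw socket to a worker thread; the worker binds it to its own
   * loop with uv_tcp_init() + uv_tcp_open(). */
  dispatch_to_worker(sock);
}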

May I make a PR to the repo?

sidyhe · Jun 11 '25 03:06

You could use a pipe handle for IPC and send the accepted socket to the right worker; no need for a new API.

Any reason why you didn't go that way?
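
Roughly, the pattern looks like this (a sketch only: it assumes a connected pair created with uv_pipe() where each loop wraps its end in an ipc-enabled uv_pipe_t; allocation, error handling and the write/alloc/read callbacks are elided):

/* Listener loop: forward the accepted uv_tcp_t over the IPC pipe. */
static void send_to_worker(uv_pipe_t* to_worker, uv_tcp_t* client) {
  uv_write_t* req = malloc(sizeof(*req));       /* freed in on_write_done */
  uv_buf_t buf = uv_buf_init(".", 1);           /* at least one byte goes with the handle */
  uv_write2(req, (uv_stream_t*) to_worker, &buf, 1,
            (uv_stream_t*) client, on_write_done);
}

/* Worker loop: receive the handle in the read callback of its pipe end. */
static void on_ipc_read(uv_stream_t* pipe, ssize_t nread, const uv_buf_t* buf) {
  while (uv_pipe_pending_count((uv_pipe_t*) pipe) > 0) {
    if (uv_pipe_pending_type((uv_pipe_t*) pipe) == UV_TCP) {
      uv_tcp_t* client = malloc(sizeof(*client));
      uv_tcp_init(pipe->loop, client);
      uv_accept(pipe, (uv_stream_t*) client);   /* client now belongs to the worker loop */
      uv_read_start((uv_stream_t*) client, alloc_cb, on_client_read);
    }
  }
}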

saghul · Jun 11 '25 10:06

I saw that uv_write2 can send a handle, but it is an async operation.

In my project I have a wrapper interface around libuv, with an option that decides whether the I/O is single-threaded or multi-threaded. In single-thread mode, accept maps directly to uv_accept, so the operation is synchronous and the result is available immediately. In multi-thread mode, I would use uv_tcp_accept_socket to take the raw handle, dispatch it to a worker, and wait for the result (via a mutex), so the operation stays synchronous as well.

If I used uv_write2, the behaviour would differ between the two modes.

like this:

struct IUv {
  virtual bool listen(...) = 0;
  virtual uv_tcp_t* accept(...) = 0;
};

// count == 0: no I/O threads
// count > 0: there are `count` I/O threads
IUv* NewUv(int backend_thread_count = 1);

Usage looks like this:

IUv* uv = NewUv(3); // parameter is read from a config file

uv->listen(...);
uv_tcp_t* sock = uv->accept(...); // always a sync op; it does not matter whether the backend is multi-threaded

sidyhe · Jun 11 '25 15:06

Why do you need it to be sync? You could likely synchronize it with a condition variable you wait on when the handle is sent to the other loop.

saghul · Jun 11 '25 18:06

Thanks for the follow-up.

Why do you need it to be sync?
You could likely synchronize it with a condition variable you wait on when the handle is sent to the other loop.

In my wrapper (IUv), accept() has always been a blocking call: it must return a fully-initialised uv_tcp_t* (not literally a uv_tcp_t; it's also a wrapper) immediately, whether the backend runs in single-thread or multi-thread mode. This keeps the upper layers unchanged and avoids sprinkling callbacks or futures through legacy code.

Using uv_write2() creates a deadlock in that model (sketched below):

  1. uv_write2() queues the write on the current loop’s write queue.
  2. If I then block the same thread with uv_sem_wait() or a condition variable, the loop stops running.
  3. Because the loop is no longer spinning, the queued write is never flushed, the worker thread never receives the handle, and the waiting thread never wakes up.
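
In code, the pattern I would be forced into looks roughly like this (names illustrative):

/* Runs on the listener thread, i.e. the thread that drives server->loop. */
uv_write2(&req, (uv_stream_t*) ipc_pipe, &buf, 1,
          (uv_stream_t*) client, on_sent);   /* only queued, not flushed yet */

uv_sem_wait(&accepted_sem);   /* blocks the loop thread: uv_run() never regains
                               * control, so the queued write is never flushed,
                               * on_sent() never fires, and nothing ever posts
                               * the semaphore -> deadlock */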

Moving the listening socket to a dedicated I/O thread would solve that, but the existing code assumes “the thread that calls libuv APIs is the loop thread”. Splitting them would require adding locks everywhere—something I’m trying to avoid.

I did evaluate uv_try_write2() (see the snippet below):

  • On Unix it works perfectly, because it really does send the handle synchronously.
  • On Windows it always returns UV_EAGAIN when send_handle is non-NULL, so it cannot be used for a portable solution.
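
What I tried, roughly (handle names illustrative):

uv_buf_t buf = uv_buf_init(".", 1);
int r = uv_try_write2((uv_stream_t*) ipc_pipe, &buf, 1, (uv_stream_t*) client);
/* Unix: r == 1 and the handle is sent immediately.
 * Windows: r == UV_EAGAIN whenever send_handle is non-NULL. */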

That leaves the small helper (uv_tcp_accept_socket() in the patch): pull the raw socket out synchronously, push it to a thread-safe queue, let the worker thread call uv_tcp_open(), then signal the listener thread. It preserves the synchronous API without blocking the event loop, but unfortunately touches libuv internals.
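
The worker side of that helper would look roughly like this (a sketch; worker_ctx_t, queue_pop(), alloc_cb and on_client_read belong to my wrapper, not to libuv):

/* Runs on the worker loop, triggered via uv_async_send() from the listener. */
static void on_dispatch(uv_async_t* async) {
  worker_ctx_t* ctx = async->data;
  uv_os_sock_t sock;

  while (queue_pop(&ctx->pending, &sock)) {     /* thread-safe queue */
    uv_tcp_t* client = malloc(sizeof(*client));
    uv_tcp_init(async->loop, client);
    uv_tcp_open(client, sock);                  /* bind the raw socket to this loop */
    uv_read_start((uv_stream_t*) client, alloc_cb, on_client_read);

    /* Wake the listener thread that is blocked waiting for the result. */
    uv_mutex_lock(&ctx->lock);
    ctx->done = 1;
    uv_cond_signal(&ctx->cond);
    uv_mutex_unlock(&ctx->lock);
  }
}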

If there is another cross-thread, cross-platform way to send a socket synchronously without stopping the loop thread, I’d be happy to switch to it.

I was worried that my explanation might not be clear enough, so I used AI assistance to translate this message.

Thanks again for your time.

sidyhe · Jun 12 '25 03:06

What you want to do should get a lot easier (and synchronous) once one of the uv_import/uv_export pull requests like #4739 makes it all the way to the finish line.

bnoordhuis · Jun 14 '25 20:06