libuv
unix,win: introduce `uv_reject` call
This call will immediately close the accepted fd and continue with the queued fds.
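For context, here is a minimal sketch of how the proposed call could be used from a connection callback. This assumes the `uv_reject(uv_stream_t*)` signature proposed in this PR; the `on_connection` callback and the `over_capacity` flag are illustrative, not part of libuv:

```c
#include <uv.h>

static int over_capacity;  /* illustrative application-level flag */

static void on_connection(uv_stream_t* server, int status) {
  if (status < 0)
    return;

  if (over_capacity) {
    /* Immediately close the pending connection and continue with the
     * queued ones, without ever calling uv_accept(). */
    uv_reject(server);
    return;
  }

  /* ... normal uv_accept() path ... */
}
```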
Fixes: https://github.com/libuv/libuv/issues/1754
Note: before adding tests, I would like to receive some feedback and see if this works or what libuv expects/wants to have.
I'm testing using a quite simple tcp server https://github.com/juanarbol/uvxamples/blob/main/src/tcp/tcp-server.c and calling uv_accept_reject, and it seems to work fine.
CI failure is a known flake https://github.com/libuv/libuv/issues/4106
Thanks for getting the ball rolling! Maybe this can just be `uv_reject`?
Apparently this PR woke up a couple things about how libuv polls connections. I will continue with this one after receiving a bit more input from @libuv/collaborators
After commenting out the uv__io_stop call and arming it again, I'm getting some failures in delayed_accept; I guess it just makes sure that this EMFILE magic trick always keeps working. See:
/* Implements a best effort approach to mitigating accept() EMFILE errors.
* We have a spare file descriptor stashed away that we close to get below
* the EMFILE limit. Next, we accept all pending connections and close them
* immediately to signal the clients that we're overloaded - and we are, but
* we still keep on trucking.
*
* There is one caveat: it's not reliable in a multi-threaded environment.
* The file descriptor limit is per process. Our party trick fails if another
* thread opens a file or creates a socket in the time window between us
* calling close() and accept().
*/
https://github.com/libuv/libuv/blob/d2d92b74a8327daf9652a9732454937f3529bd30/src/unix/stream.c#L480
not ok 1 - delayed_accept
# exit code 6
# Output from process `delayed_accept`:
# Assertion failed in /Users/juanjose/GitHub/libuv/test/test-delayed-accept.c on line 61: `r == 0` (-35 == 0)
IMO the IO handle stop/start-again mechanism works like a lock that prevents other threads from reading a file or creating a socket. I think it just ensures that the uv_accept can be deferred.
I don't think it's possible to keep collecting I/O if the connection is not dequeued; that polling deals with the backlog size and EMFILE. I tried to avoid stopping the I/O polling, with no success; it is part of the expected behaviour.
I don't have a Windows machine available right now; I'll work on the Windows implementation in a couple of days if the lord wants it like that.
Hi @juanarbol, If it works for you, I can work on the Windows implementation of this PR. Once I’m done, you can review it. Is it OK for you?
Sure, no problem at all from my side! That is very kind, thank you.
Hey @juanarbol, I've opened a PR to merge the Windows version to your branch. You can review it there. Feel free to apply or change it.
After a long debug session, this seems to be the least intrusive way to implement stream rejections without breaking any other desired behaviors, like delayed accept.
Special thanks to @huseyinacacak-janea for the Windows implementation, added as co-author :)
server.accepted_fd doesn't exist in Windows so the CI fails. Feel free to apply the following patch to fix the error.
diff --git a/test/test-tcp-reject.c b/test/test-tcp-reject.c
index f2cee997..59cefb4b 100644
--- a/test/test-tcp-reject.c
+++ b/test/test-tcp-reject.c
@@ -40,7 +40,11 @@ static void connection_cb(uv_stream_t* tcp, int status) {
ASSERT_OK(uv_reject(tcp));
/* The server should not have accepted the connection */
+#ifdef _WIN32
+ ASSERT(server.tcp.serv.pending_accepts->accept_socket == INVALID_SOCKET);
+#else
ASSERT(server.accepted_fd == -1);
+#endif
/* Close the server */
uv_close((uv_handle_t*) &server, NULL);
@huseyinacacak-janea I think Saúl is talking to you.
@saghul Thank you for the review. I've applied your suggestions and created the patch below. @juanarbol This patch should fix the errors in the CI as well. Could you please apply it?
Patch
diff --git a/src/win/stream.c b/src/win/stream.c
index cb82a942..2a98dcb2 100644
--- a/src/win/stream.c
+++ b/src/win/stream.c
@@ -50,22 +50,27 @@ int uv_listen(uv_stream_t* stream, int backlog, uv_connection_cb cb) {
int uv_reject(uv_stream_t* client) {
int err = 0;
+ uv_pipe_t* pipe_client;
+ uv_tcp_accept_t* req;
+
if (client == NULL) {
- err = ERROR_INVALID_PARAMETER;
+ return uv_translate_sys_error(ERROR_INVALID_PARAMETER);
}
+
switch (client->type) {
case UV_NAMED_PIPE:
- if (((uv_pipe_t*)client)->u.fd == -1) {
- CloseHandle(((uv_pipe_t*)client)->handle);
+ pipe_client = (uv_pipe_t*)client;
+ if (pipe_client->u.fd == -1) {
+ CloseHandle(pipe_client->handle);
} else {
- _close(((uv_pipe_t*)client)->u.fd);
+ _close(pipe_client->u.fd);
}
- ((uv_pipe_t*)client)->u.fd = -1;
- ((uv_pipe_t*)client)->handle = INVALID_HANDLE_VALUE;
+ pipe_client->u.fd = -1;
+ pipe_client->handle = INVALID_HANDLE_VALUE;
break;
case UV_TCP:
- uv_tcp_accept_t* req = ((uv_tcp_t*)client)->tcp.serv.pending_accepts;
+ req = ((uv_tcp_t*)client)->tcp.serv.pending_accepts;
if (!req) {
err = WSAEWOULDBLOCK;
@@ -75,10 +80,6 @@ int uv_reject(uv_stream_t* client) {
err = WSAENOTCONN;
}
- if (shutdown(req->accept_socket, SD_BOTH) == SOCKET_ERROR) {
- err = WSAGetLastError();
- }
-
closesocket(req->accept_socket);
req->accept_socket = INVALID_SOCKET;
break;
diff --git a/test/test-pipe-reject.c b/test/test-pipe-reject.c
index 71ad2c39..243bfbeb 100644
--- a/test/test-pipe-reject.c
+++ b/test/test-pipe-reject.c
@@ -37,6 +37,7 @@ static void connect_cb(uv_connect_t* _, int status) {
static void connection_pipe_cb(uv_stream_t* server, int status) {
ASSERT_OK(uv_reject(server));
+ ASSERT_OK(status);
uv_close((uv_handle_t*) &server_handle, NULL);
}
Could you please apply it?
Done! And thanks again!
Fixing the leak in linux
humble ping @saghul
Done, sorry for the delay!
On the Windows side the big problem is that you are closing the server handle, rather than the accepted connection. You need to peek into handle->pipe.serv.pending_accepts like you do for TCP.
Thanks for the review. I've added a patch to fix the problems on the Win side. Feel free to ping me if there is anything I can do to help this PR move forward.
diff --git a/src/win/stream.c b/src/win/stream.c
index 2a98dcb2..8388e397 100644
--- a/src/win/stream.c
+++ b/src/win/stream.c
@@ -48,45 +48,39 @@ int uv_listen(uv_stream_t* stream, int backlog, uv_connection_cb cb) {
}
-int uv_reject(uv_stream_t* client) {
- int err = 0;
- uv_pipe_t* pipe_client;
- uv_tcp_accept_t* req;
+int uv_reject(uv_stream_t* server) {
+ uv_tcp_accept_t* tcp_req;
+ uv_pipe_accept_t* pipe_req;
- if (client == NULL) {
- return uv_translate_sys_error(ERROR_INVALID_PARAMETER);
+ if (server == NULL) {
+ return UV_EINVAL;
}
- switch (client->type) {
+ switch (server->type) {
case UV_NAMED_PIPE:
- pipe_client = (uv_pipe_t*)client;
- if (pipe_client->u.fd == -1) {
- CloseHandle(pipe_client->handle);
- } else {
- _close(pipe_client->u.fd);
+ pipe_req = ((uv_pipe_t*)server)->pipe.serv.pending_accepts;
+
+ if (!pipe_req || pipe_req->pipeHandle == INVALID_HANDLE_VALUE) {
+ return UV_EAGAIN;
}
- pipe_client->u.fd = -1;
- pipe_client->handle = INVALID_HANDLE_VALUE;
+ CloseHandle(pipe_req->pipeHandle);
+ pipe_req->pipeHandle = INVALID_HANDLE_VALUE;
break;
case UV_TCP:
- req = ((uv_tcp_t*)client)->tcp.serv.pending_accepts;
+ tcp_req = ((uv_tcp_t*)server)->tcp.serv.pending_accepts;
- if (!req) {
- err = WSAEWOULDBLOCK;
+ if (!tcp_req || tcp_req->accept_socket == INVALID_SOCKET) {
+ return UV_EAGAIN;
}
- if (req->accept_socket == INVALID_SOCKET) {
- err = WSAENOTCONN;
- }
-
- closesocket(req->accept_socket);
- req->accept_socket = INVALID_SOCKET;
+ closesocket(tcp_req->accept_socket);
+ tcp_req->accept_socket = INVALID_SOCKET;
break;
default:
- err = ERROR_INVALID_PARAMETER;
+ return UV_EINVAL;
}
- return uv_translate_sys_error(err);
+ return 0;
}
+ pipe_req = ((uv_pipe_t*)server)->pipe.serv.pending_accepts;
+ CloseHandle(pipe_req->pipeHandle);
+ pipe_req->pipeHandle = INVALID_HANDLE_VALUE;
Don't you need to check if there are no pending pipe requests before attempting the close, and return UV_EAGAIN, like for TCP?
Thanks. I've updated the patch above.
Folks, rather than doing patches, can you provide a proper PR with all the changes? Maybe a new one is necessary if you are taking this over @huseyinacacak-janea . Or @juanarbol could give you write access to his branch...
Will do!
@huseyinacacak-janea you're now a collaborator on my fork. I'm sorry, I'm just a bit busy these days, and Windows is not my strongest skill.
I'll be stepping down from the project in 2 weeks. If there is anything I can do to help this PR move forward, feel free to reach out to me until then.
Ping @saghul
@huseyinacacak-janea Great work! Can you please rebase the branch?
Please take a look at the unresolved comments, we are close!