[uv_tcp_connect] Is there a way to override systemwide tcp connection timeouts?
- Version: 1.11.0
- Platform: Windows 10 x64
- Problem:
Hi libuv team:
I have a 30-second timer that performs an app-level timeout on uv_tcp_connect(), but when connecting to an unreachable server every connection times out after 21 seconds, which is the Windows default. How can I set my own timeout programmatically through some libuv API?
Thanks, Bill
- Expected: Be able to set the TCP connect timeout to any value I wish, even overriding the system-wide defaults.
- Question: As I'm still using v1.11.0 right now, have any later releases already solved this problem?
Libuv doesn't have an API for that because on most platforms, Windows included:
- the connection timeout is not configurable, and
- non-blocking sockets don't time out
Can you post your code or a code snippet that reproduces the problem? It sounds like something else must be going on. Do you have any LSPs (Layered Service Providers) installed?
Thanks for the timely reply; no LSPs installed. By saying "it sounds like something else must be going on," do you mean that even though Windows defaults to 21 seconds, my program should still time out (or rather, be canceled by the timer) right at 30 seconds? I can provide code, but I need some time to reduce it from business code to a demo.
do you mean that even though Windows defaults to 21 seconds, my program should still time out (or rather, be canceled by the timer) right at 30 seconds?
Correct.
I can provide code, but I need some time to reduce it from business code to a demo.
Okay, understood. Some things you can test quickly:
- Does it reproduce on other computers?
- The timeout is configurable through the registry. Does libuv's timeout change if you change it?
https://serverfault.com/questions/193160/which-is-the-default-tcp-connect-timeout-in-windows
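For reference, the 21 seconds falls out of the defaults arithmetically: an initial RTO of 3 seconds, retransmitted twice with exponential backoff, waits 3 + 6 + 12 = 21 seconds before giving up. On Windows 8 / Server 2012 and later these knobs can also be inspected and changed via netsh (the parameter names below are assumptions based on those versions; older Windows only exposes the registry value from the link above):

```shell
rem Query the system-wide SYN retransmission settings.
netsh interface tcp show global

rem Change the initial retransmission timeout (milliseconds) and retry count.
netsh interface tcp set global initialRto=3000
netsh interface tcp set global maxsynretransmissions=4
```

Note these are still system-wide settings, not per-socket.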
I haven't tried changing the registry yet, but I can query the values, and they show where the magic 21 comes from:
C:\Windows\system32>netsh interface tcp show global
Initial RTO             : 3000
Max SYN Retransmissions : 2
And yes, all computers in my office have had this problem since they upgraded to the latest Windows 10. Here's a minimal code snippet to reproduce it.
#include <iostream>
#include <memory>
#include <thread>
#include "third_party/libuv/libuv-1.11.0/include/uv.h"
uv_loop_t* main_loop;
uv_tcp_t* tcp_stream;
//
// Init and run uv loop in a separate thread.
//
void Init() {
main_loop = new uv_loop_t;
uv_loop_init(main_loop);
std::thread([] {
while (true) {
uv_run(main_loop, UV_RUN_NOWAIT);
std::this_thread::sleep_for(std::chrono::milliseconds(1));
}
}).detach();
}
void Connect() {
tcp_stream = new uv_tcp_t;
uv_tcp_init(main_loop, tcp_stream);
sockaddr* unified_addr = nullptr;
sockaddr_in ipv4_addr;
uv_ip4_addr("1.2.3.4", 9999, &ipv4_addr);
unified_addr = reinterpret_cast<sockaddr*>(&ipv4_addr);
uv_connect_t* connector = new uv_connect_t;
uv_tcp_connect(connector, tcp_stream, unified_addr, [](uv_connect_t* connector, int status) {
// status will be -4039 (timed out); it takes 21 seconds to get here.
delete connector;
});
std::thread([] {
// Sleep to simulate an app-level timeout timer.
std::this_thread::sleep_for(std::chrono::seconds(30)); // 30 > 21, boom
// std::this_thread::sleep_for(std::chrono::seconds(10)); // 10 < 21, OK
//
// With the timer set to 30, the TCP connection has already timed out by the time we get here.
//
// Timer fired: shut down the TCP connection at the app level.
if (!uv_is_closing(reinterpret_cast<uv_handle_t*>(tcp_stream))) {
uv_close(reinterpret_cast<uv_handle_t*>(tcp_stream), [](uv_handle_t* closed_handle) {
delete reinterpret_cast<uv_tcp_t*>(closed_handle);
});
}
}).detach();
}
int main(int argc, char **argv) {
Init();
Connect();
// Keep main alive.
while (true) {
std::this_thread::sleep_for(std::chrono::milliseconds(1));
}
}
@bzoz @piscisaureus Ideas?
There is no API to change the system-wide timeout, but you can set it per socket with the TCP_MAXRT socket option. Libuv has no API for that either, but you can work around it by creating the socket yourself, setting the option, and then using uv_tcp_open to wrap the socket in a libuv handle. https://msdn.microsoft.com/en-us/library/windows/desktop/ms738596(v=vs.85).aspx
Thanks, that sounds like a great solution! I'll try it and report back here soon. :)
@piscisaureus @bnoordhuis Thanks for your kind help; problem resolved, you can close the issue whenever you wish. :) I'm sharing the working code here for other people who run into this problem. Tested on Windows 10 and macOS 10.13.4.
#include <iostream>
#include <memory>
#include <thread>
#include "third_party/libuv/libuv-1.11.0/include/uv.h"
#if ( !defined( _WIN32 ) && !defined( _WIN32_WCE ) ) || defined( __SYMBIAN32__ )
# include <errno.h>
# include <netinet/in.h>
# include <netinet/tcp.h>
# include <sys/socket.h>
#elif defined( _WIN32 ) && !defined( __SYMBIAN32__ )
# include <winsock2.h>
# include <Mstcpip.h>
#elif defined( _WIN32_WCE )
# include <winsock2.h>
#endif
#ifdef __APPLE__
#include <TargetConditionals.h>
#if TARGET_OS_MAC
# define MY_MACOSX
#else
# define MY_IOS
#endif
#endif
#ifndef INVALID_SOCKET
# define INVALID_SOCKET -1
#endif
uv_loop_t* main_loop;
uv_tcp_t* tcp_stream;
int GetSocket(int af, int socktype, int proto, uint32_t conn_timeout_sec) {
#if defined( _WIN32 ) && !defined( __SYMBIAN32__ )
SOCKET fd;
#else
int fd;
#endif
if ((fd = socket(af, socktype, proto)) == INVALID_SOCKET) {
return -1;  // 0 is a valid descriptor on POSIX, so signal failure with -1.
}
//
// Set TCP connection timeout per-socket level.
// See [https://github.com/libuv/help/issues/54] for details.
//
#if defined( _WIN32 ) && !defined( __SYMBIAN32__ )
setsockopt(fd, IPPROTO_TCP, TCP_MAXRT, (char*)&conn_timeout_sec, sizeof(conn_timeout_sec));
#elif defined( MY_MACOSX )
// (billhoo) macOS uses TCP_CONNECTIONTIMEOUT for this.
setsockopt(fd, IPPROTO_TCP, TCP_CONNECTIONTIMEOUT, (char*)&conn_timeout_sec, sizeof(conn_timeout_sec));
#elif defined( MY_IOS )
// (billhoo) Nothing to do right now.
#else // Linux like systems
uint32_t conn_timeout_ms = conn_timeout_sec * 1000;
setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, (char*)&conn_timeout_ms, sizeof(conn_timeout_ms));
#endif
return (int)fd;
}
//
// Init and run uv loop in a separate thread.
//
void Init() {
main_loop = new uv_loop_t;
uv_loop_init(main_loop);
std::thread([] {
while (true) {
uv_run(main_loop, UV_RUN_NOWAIT);
std::this_thread::sleep_for(std::chrono::milliseconds(1));
}
}).detach();
}
void Connect() {
tcp_stream = new uv_tcp_t;
uv_tcp_init(main_loop, tcp_stream);
int fd = GetSocket(PF_INET, SOCK_STREAM, IPPROTO_TCP, 30);
uv_tcp_open(tcp_stream, fd);
sockaddr* unified_addr = nullptr;
sockaddr_in ipv4_addr;
uv_ip4_addr("1.2.3.4", 9999, &ipv4_addr);
unified_addr = reinterpret_cast<sockaddr*>(&ipv4_addr);
uv_connect_t* connector = new uv_connect_t;
uv_tcp_connect(connector, tcp_stream, unified_addr, [](uv_connect_t* connector, int status) {
// Now it takes 30 seconds to get here.
delete connector;
});
// No more app-level timeout timer is needed.
}
int main(int argc, char **argv) {
Init();
Connect();
// Keep main alive.
while (true) {
std::this_thread::sleep_for(std::chrono::milliseconds(1));
}
}
This is a nice workaround and it works on pretty much all major platforms. May I ask why this is not included in libuv yet?