Request Timeout should start after the connection is established
Vert.x supports a timeout on the request, but that timeout starts as soon as the method is invoked. I believe Vert.x should support a 'requestTimeout': the timeout should apply only after the connection is established and the request starts. The following snippet illustrates this:
final HttpClientRequest req = client.request(HttpMethod.GET, 80, "localhost", "/");
req.connectionHandler(context -> System.out.println("connection id -> " + context.hashCode()));
req.setTimeout(timeout); // timer started
vertx.setTimer(timeout * 2, id -> { // waiting before requesting the data, to trigger the timeout above
  req.toObservable().
    flatMap(resp -> {
      if (resp.statusCode() != 200) {
        throw new RuntimeException("Wrong status code " + resp.statusCode());
      }
      return resp.toObservable();
    }).
    subscribe(data -> System.out.println(data.toString()),
      error -> System.out.println("handled error: " + error.getMessage())); // I expect this error in case of a timeout
  req.end();
});
@aleqi200 you can use HttpClientRequest#sendHead(Handler<HttpVersion>) and set the timeout at that moment if you need this.
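A minimal sketch of that suggestion, assuming a Vert.x 3.x HttpClient named client and a timeout value in milliseconds (the handlers are only illustrative):

HttpClientRequest req = client.request(HttpMethod.GET, 80, "localhost", "/");
req.handler(resp -> System.out.println("status: " + resp.statusCode()));
req.exceptionHandler(err -> System.out.println("request failed: " + err.getMessage()));
req.sendHead(version -> {
  // The request head has been written, so a connection has been obtained;
  // only now start counting the timeout.
  req.setTimeout(timeout);
  req.end();
});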
@vietj requestTimeout should start after the connection is established and the request begins. I'm not sure how HttpClientRequest#sendHead(Handler<HttpVersion>) resolves this issue or is relevant to it. If requestTimeout is enabled, it should be possible to set it before the request is sent, not after.
Using HttpClientRequest#sendHead(Handler<HttpVersion>) also forces the headers of the request to be sent before the data of the request, changing how requests are sent. That shouldn't be required just to set requestTimeout.
I would also opt for such a request timeout value.
Consider this example:
vertx.createHttpServer().requestHandler(req -> {
  logger.info("serverside: got req " + req.path());
  if (req.path().contains("slow")) {
    vertx.setTimer(3000, h -> req.response().end("foo"));
  } else {
    req.response().end("foo");
  }
}).listen(9090);

HttpClient client = vertx.createHttpClient();
for (int i = 0; i < HttpClientOptions.DEFAULT_MAX_POOL_SIZE; i++) {
  final int id = i;
  HttpClientRequest slowRequest = client.get(9090, "localhost", "/slow/" + id, resp -> {});
  slowRequest.setTimeout(2000);
  slowRequest.exceptionHandler(ex -> logger.info("got exception for /slow/" + id + ": " + ex));
  slowRequest.end();
}

HttpClientRequest fastRequest = client.get(9090, "localhost", "/fast", resp -> {});
fastRequest.setTimeout(2000);
fastRequest.exceptionHandler(ex -> logger.info("got exception for /fast" + ": " + ex));
fastRequest.end();
Here, the fastRequest gets a TimeoutException simply because all pooled connections are busy: the fastRequest has not even been sent yet. This is counterintuitive if what you really want is to protect yourself against requests that take too long on the remote side, and it is not obvious at all if you don't know about HttpClient's default maxPoolSize.
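One way to confirm that the pool is the culprit in the example above (a sketch under the same assumptions as that example): raise maxPoolSize so the /fast request gets its own connection, and the spurious timeout disappears even though no timeout value changed.

HttpClient client = vertx.createHttpClient(
    new HttpClientOptions().setMaxPoolSize(HttpClientOptions.DEFAULT_MAX_POOL_SIZE + 1));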
Therefore, there should be a separate request (or socket) timeout, independent of the time it takes to initialize or obtain a connection. For comparison, see Apache HttpClient with its three timeout settings for a request [1][2].
[1] https://dzone.com/articles/httpclient-4x-timeout
[2] http://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.html
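For reference, those three settings in Apache HttpClient 4.x look roughly like this (a sketch based on [2]; the values are arbitrary):

RequestConfig config = RequestConfig.custom()
    .setConnectTimeout(2000)            // time to establish the TCP connection
    .setConnectionRequestTimeout(2000)  // time to obtain a connection from the pool
    .setSocketTimeout(2000)             // max inactivity between data packets
    .build();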
We are facing the same issue. In my tests, an API with a 10ms response time is timing out from the WebClient with a 100ms requestTimeout set.
Does Vert.x provide any guarantee on the time difference between scheduling an API call and it getting executed, so we can pad our requestTimeout appropriately?
The following is the test code I have written:
@Test
public void connectionTimeOut() throws InterruptedException {
  WebClient client = WebClient.create(Vertx.vertx());
  client.request(HttpMethod.GET, 8000, "localhost", "/scripts/").timeout(100).send(ar -> {
    if (ar.succeeded()) {
      System.out.print("Request succeeded");
    } else {
      ar.cause().printStackTrace();
    }
  });
  TimeUnit.MINUTES.sleep(20);
}
java.util.concurrent.TimeoutException: The timeout period of 100ms has been exceeded while executing GET /scripts/ for host localhost
    at io.vertx.core.http.impl.HttpClientRequestBase.timeout(HttpClientRequestBase.java:183)
    at io.vertx.core.http.impl.HttpClientRequestBase.handleTimeout(HttpClientRequestBase.java:168)
    at io.vertx.core.http.impl.HttpClientRequestBase.lambda$setTimeout$0(HttpClientRequestBase.java:126)
    at io.vertx.core.impl.VertxImpl$InternalTimerHandler.handle(VertxImpl.java:870)
    at io.vertx.core.impl.VertxImpl$InternalTimerHandler.handle(VertxImpl.java:829)
    at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:344)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
The same test works if I increase the timeout to 1000ms. Please let us know if there is any workaround for this.
I agree that the timeout should only apply once the connection is established; I think it could be reinitialized when the connection occurs.
Can you please tell me if there is a similar workaround (like sendHead) for WebClient? I am looking at the API methods of WebClient and HttpRequest but can't find a similar method. @vietj
Hi. This looks like an old issue and milestones are being moved. We are facing the same problem in production and are planning to switch away from Vert.x. Any chance of this getting fixed soon?
Hi, is there any update on this issue? :D
With Vert.x 4 you can control this more easily: you can set the timer once you have obtained a connection. Something like this will achieve what you want:
client.request(options).compose(request -> {
  // Obtained a connection from the pool
  request.setTimeout(XXX);
  return request.send().compose(resp -> resp.body());
});
For Vert.x 3 this would be a breaking change (i.e. some people would expect the request to time out while it is in the waiting queue to obtain a connection). We could improve this by resetting the timer once the connection has been obtained from the pool. WDYT?
The behavior now is to start the timeout when data is written to the request, so creating a request with RxJava as explained in the example should work. The timeout will start after the request has been subscribed to.
Hi @vietj, in 4.0.0-milestone4 I cannot see the compose method you show :( I can do a compose over the observable, but I get the same behavior.
I'm working with
import io.vertx.reactivex.ext.web.client.WebClient
and also tried with
import io.vertx.ext.web.client.WebClient
return request
  .rxSend()
  .compose { requestBuffer ->
    requestBuffer.timeout(readTimeout, TimeUnit.MILLISECONDS) // this does not start the timeout only after the connection has been obtained from the pool
  }
  .doOnSuccess { logger.info("Success on GET with url: $absoluteUri, with status code: ${it.statusCode()}") }
  .doOnError { logger.error("Error on GET with url: $absoluteUri and error: {}", it) }
  .flatMap { it.process() }
  .map { it.toJsonObject() }
There should be multiple types of timeouts rather than resetting the timer once the connection has been obtained; we should have at least a read timeout (socket timeout), a connect timeout, and a pool timeout.
Thank you for the reply! :D
For timeouts, I concur with you; however, we need several different kinds of timeouts. The current pool implementation does not yet have a timeout feature.
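For context, a sketch of what is already configurable on HttpClientOptions (assuming Vert.x 3.x/4.x; the values are arbitrary), versus the per-request timeout this issue asks for:

HttpClientOptions options = new HttpClientOptions()
    .setConnectTimeout(2000) // connect timeout: time allowed for establishing the TCP connection (ms)
    .setIdleTimeout(10);     // idle timeout: closes connections that see no activity (seconds)
// Still missing: a per-request timeout that starts counting only once a pooled
// connection has actually been obtained, which is what this issue asks for.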
WebClient is different; I thought you were using HttpClient, which is part of vertx-core, where this issue is reported (WebClient is in vertx-web-client).
Yes, agreed with you @vietj, thanks for the reply. I will try HttpClient from core; sorry for my misunderstanding.
This will not be changed in 3.9 because it might create a regression for existing users.
In 4.0 you get the HttpClientRequest when the connection has been created and the requester has been allocated a slot for using this connection.
Revisiting this issue: in SharedClientHttpStreamEndpoint.Request we apparently apply the same timeout logic both on enqueuing and on connecting.
Hello @vietj - is there a plan to fix this any time soon? Is it possible to somehow override the behavior below through some custom logic? At least if we could get a way to differentiate the timeouts that happen 'before initiating a request' vs. 'after initiating a request', it would be helpful.
@Override
public void onEnqueue(PoolWaiter<HttpClientConnection> waiter) {
  onConnect(waiter);
}

@Override
public void onConnect(PoolWaiter<HttpClientConnection> waiter) {
  if (timeout > 0L && timerID == -1L) {
    timerID = context.setTimer(timeout, id -> {
      pool.cancel(waiter, ar -> {
        if (ar.succeeded() && ar.result()) {
          handler.handle(Future.failedFuture(new NoStackTraceTimeoutException("The timeout of " + timeout + " ms has been exceeded when getting a connection to " + connector.server())));
        }
      });
    });
  }
}
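A minimal sketch of the separation being asked for here (hypothetical, not the actual fix; see the PR linked at the end of the thread): keep the timer logic in onConnect as it is, but stop starting it from onEnqueue, so time spent waiting in the pool queue does not count against the request timeout.

@Override
public void onEnqueue(PoolWaiter<HttpClientConnection> waiter) {
  // do not start the request timeout while merely queued; a separate
  // (hypothetical) pool-wait timeout could be started here instead
}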
I will have a look @yaduvesh
having a look at this for 4.5.0
See https://github.com/eclipse-vertx/vert.x/pull/4937