k6-docs
gRPC connection doesn't respect Timeout parameter
Brief summary
gRPC k6 tests, specifically those using stream connections, are a bit special: we can't really use the k6 test iterator, and all test execution happens in a single test iteration. The test iteration only completes when the gRPC connection is closed, which means all the test workload has to happen inside the JS iteration (inside a for loop, for example). This makes the test duration non-deterministic, and the test run completes with a never-finished iteration.
Unfortunately, the timeout parameter in the connect method doesn't work, and there seems to be no way to gracefully finish the test.
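For context, here is a minimal sketch of the streaming pattern described above, using the k6 grpc.Stream API; the SayHelloStream method name and server address are illustrative, not taken from the report. The iteration only finishes once the stream ends and the connection is closed:

import grpc from "k6/net/grpc";

const client = new grpc.Client();
client.load(["./src/support/echo/server"], "helloworld.proto");

export default function () {
  client.connect("127.0.0.1:9999", { plaintext: true });

  // Hypothetical server-streaming method, for illustration only.
  const stream = new grpc.Stream(client, "helloworld.Greeter/SayHelloStream");

  stream.on("data", (msg) => {
    console.log(`Received: ${JSON.stringify(msg)}`);
  });

  stream.on("end", () => {
    // The iteration can only complete after the connection is closed,
    // so all the test workload has to happen before this point.
    client.close();
  });

  stream.write({ name: "Tiko" });
  stream.end(); // signal we are done sending
}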
k6 version
v0.56.0
OS
macOS
Docker version and image (if applicable)
No response
Steps to reproduce the problem
- Have a timeout set in the gRPC client connect:
import { check } from "k6";
import grpc from "k6/net/grpc";

const client = new grpc.Client();
client.load(["./src/support/echo/server"], "helloworld.proto");

export default function () {
  // timeout: 1 is expected (by this report) to bound the request
  client.connect("127.0.0.1:9999", { plaintext: true, timeout: 1 });

  const response = client.invoke("helloworld.Greeter/SayHello", {
    name: "Tiko",
  });

  check(response, {
    "status is OK": (res) => res && res.status === grpc.StatusOK,
    "response message exists": (res) => res && res.message,
  });

  console.log(`Response: ${JSON.stringify(response.message)}`);
  client.close();
}
- Run the test:
(base) tiko@tiko-workstation k6 % ./k6 run 1.test.js
         /\      Grafana   /‾‾/
    /\  /  \     |\  __   /  /
   /  \/    \    | |/ /  /   ‾‾\
  /          \   |   (  |  (‾)  |
 / __________ \  |_|\_\  \_____/
execution: local
script: 1.test.js
output: -
scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
* default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)
INFO[0002] Response: {"message":"Hello Tiko"} source=console
✓ status is OK
✓ response message exists
checks...............: 100.00% 2 out of 2
data_received........: 153 B 76 B/s
data_sent............: 270 B 135 B/s
grpc_req_duration....: avg=2s min=2s med=2s max=2s p(90)=2s p(95)=2s
iteration_duration...: avg=2s min=2s med=2s max=2s p(90)=2s p(95)=2s
iterations...........: 1 0.499242/s
vus..................: 1 min=1 max=1
vus_max..............: 1 min=1 max=1
running (00m02.0s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs 00m02.0s/10m0s 1/1 iters, 1 per VU
(base) tiko@tiko-workstation k6 % ./k6 --version
k6 v0.56.0 (commit/bf9c8a4d86-dirty, go1.23.0, darwin/amd64)
Expected behaviour
The gRPC connection should time out before the server responds. Notice the response takes 2s.
Actual behaviour
The timeout parameter is not considered. There is no way to make the client terminate the connection so that the test iteration can complete.
Hey @tikolakin, thanks for your report. 🙇
This seems to be expected. The documentation explicitly states that the timeout parameter on the connect function applies only to the connection request; it doesn't affect unary/stream requests.
If you want to set a custom timeout on the requests, you have to use the dedicated parameter on the specific function. Please check the client.invoke function's documentation and its related params for the expected syntax.
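For illustration, a per-request timeout set via the params argument of client.invoke would look like this (the '1s' value is arbitrary):

const response = client.invoke(
  "helloworld.Greeter/SayHello",
  { name: "Tiko" },
  { timeout: "1s" } // per-request timeout, independent of connect()
);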
I expect you checked the documentation before. Do you think the documentation is not clear enough? Do you have a suggestion for improving the text?
@codebien, thanks for the links to the documentation, they are helpful. I just wanted to note that I, too, was confused by the connect parameter, because the testing guide says:
Next, it invokes the remote procedure, using the syntax <package>.<Service>/<Method>, as described in the proto file. This call is made synchronously, with a default timeout of 60000 ms (60 seconds). To change the timeout, add the key timeout to the config object of .connect() with the duration as the value, for instance '2s' for 2 seconds.
Maybe different wording there, with a link to client.invoke, would cause less confusion.
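A sketch of the distinction the docs could draw, with arbitrary timeout values: the connect timeout bounds only establishing the connection, while the invoke timeout bounds the request itself.

// Bounds only the connection establishment:
client.connect("127.0.0.1:9999", { plaintext: true, timeout: "5s" });

// Bounds the unary request itself:
client.invoke(
  "helloworld.Greeter/SayHello",
  { name: "Tiko" },
  { timeout: "2s" }
);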
Hey @sukhmel, thanks for sharing your feedback, it's very appreciated! 🙇 The k6 docs always welcome contributions. Would you be open to checking them out and contributing? Here's the link: https://github.com/grafana/k6-docs.
Now that we have identified an actionable improvement, I'm transferring this issue there as it fits better.
@codebien I'll try to make a change, thanks for the link