Controls for granular benchmarking
Downstream libraries should be able to easily profile requests.
Which pieces are desired?
- DNS time?
- Connect time
- Write time
- Read time
Probably all of those in addition to how long hyper itself takes for things like parsing.
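To make that concrete, here's a minimal sketch of the kind of per-request timing data a downstream library might want handed back. None of these names exist in hyper; they're hypothetical and just mirror the list above:

```rust
use std::time::Duration;

/// Hypothetical container for per-request timings (not an existing hyper type).
#[derive(Debug, Default, Clone, Copy)]
pub struct RequestTimings {
    /// Time spent resolving the host name, if DNS was involved.
    pub dns: Option<Duration>,
    /// Time spent establishing the TCP (and possibly TLS) connection.
    pub connect: Option<Duration>,
    /// Time spent writing the request head and body.
    pub write: Option<Duration>,
    /// Time spent reading and parsing the response.
    pub read: Option<Duration>,
}
```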
I think we want the ability to measure all of the applicable properties in https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming/Overview.html .
For reference:
interface PerformanceTiming {
    readonly attribute unsigned long long navigationStart;
    readonly attribute unsigned long long unloadEventStart;
    readonly attribute unsigned long long unloadEventEnd;
    readonly attribute unsigned long long redirectStart;
    readonly attribute unsigned long long redirectEnd;
    readonly attribute unsigned long long fetchStart;
    readonly attribute unsigned long long domainLookupStart;
    readonly attribute unsigned long long domainLookupEnd;
    readonly attribute unsigned long long connectStart;
    readonly attribute unsigned long long connectEnd;
    readonly attribute unsigned long long secureConnectionStart;
    readonly attribute unsigned long long requestStart;
    readonly attribute unsigned long long responseStart;
    readonly attribute unsigned long long responseEnd;
    readonly attribute unsigned long long domLoading;
    readonly attribute unsigned long long domInteractive;
    readonly attribute unsigned long long domContentLoadedEventStart;
    readonly attribute unsigned long long domContentLoadedEventEnd;
    readonly attribute unsigned long long domComplete;
    readonly attribute unsigned long long loadEventStart;
    readonly attribute unsigned long long loadEventEnd;
};
I imagine checking the clock several times when you don't want it would cause a slowdown. I'm thinking these could be events that can receive closures to do whatever you want with the timings. By default, there's no closure, so you only pay for what you use.
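As a rough sketch of that "pay only for what you use" idea (the names here are invented for illustration, not an actual hyper API), the client could hold an optional callback and read the clock only when one has been registered:

```rust
use std::time::{Duration, Instant};

/// Hypothetical timing events, loosely following the Navigation Timing phases.
pub enum TimingEvent {
    DnsLookup(Duration),
    Connect(Duration),
    RequestWritten(Duration),
    ResponseRead(Duration),
}

/// Hypothetical hook storage: if no closure is registered, nothing is timed.
#[derive(Default)]
pub struct TimingHooks {
    on_event: Option<Box<dyn Fn(TimingEvent) + Send + Sync>>,
}

impl TimingHooks {
    /// Run `f`, reporting its duration only if a callback was registered.
    pub fn measure<T>(
        &self,
        wrap: impl FnOnce(Duration) -> TimingEvent,
        f: impl FnOnce() -> T,
    ) -> T {
        match &self.on_event {
            Some(cb) => {
                let start = Instant::now();
                let out = f();
                cb(wrap(start.elapsed()));
                out
            }
            // Default path: no closure, so no clock reads at all.
            None => f(),
        }
    }
}
```

A caller that never registers a callback pays nothing; one that does pays roughly one or two clock reads per event.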
For reference, my simple benchmark of time::precise_time_ns gives an average of 24ns per invocation:
extern crate test;
extern crate time;

use test::Bencher;

#[bench]
fn bench_precise_time_ns(b: &mut Bencher) {
    b.iter(|| time::precise_time_ns());
}
On my computer, the Hyper benchmark takes 137,963 ns ± 63,387 ns. So you'd have to check the clock about 50 times to see a 1% increase (50 × 24 ns ≈ 1,200 ns, just under 1% of 137,963 ns). Is that low enough to be acceptable on every invocation? There are 21 timings listed in the W3C document, but several of them seem irrelevant here (e.g. the DOM-related timings).
Of course, the right way to answer this question is with real benchmarks of the timing code in Hyper, but that would be slightly more work :smiley:
I'm interested because I'd like to implement HTTP archive output for Servo, which depends to an extent on being able to pull this information out of Hyper.
If nobody works on this issue before I finish up the other parts of that task, I'd be interested in taking it on. But in the meantime, anyone else should feel free to claim it.
Oh, that's not much time. I thought at some point someone told me that getting the date for the Date header was showing up in profiles.
Similar benchmarks show time::now_utc() at around 75ns.
Perhaps it's the actual formatting step that's taking longer. That part would be pretty easy to do only on demand.
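If the formatting is indeed the slow part, one way to pay for it only when it's actually needed (a sketch of the general idea, not how hyper implements its Date header) is to cache the rendered string and refresh it at most once per second:

```rust
use std::time::{Duration, SystemTime};

/// Hypothetical cache for a formatted `Date` header value.
struct CachedDate {
    rendered: String,
    rendered_at: SystemTime,
}

impl CachedDate {
    /// Return the cached string, re-formatting only when it's at least a
    /// second old, so the formatting cost is amortized over many responses.
    fn get(&mut self, format: impl Fn(SystemTime) -> String) -> &str {
        let now = SystemTime::now();
        let stale = now
            .duration_since(self.rendered_at)
            .map(|age| age >= Duration::from_secs(1))
            .unwrap_or(true);
        if stale {
            self.rendered = format(now);
            self.rendered_at = now;
        }
        &self.rendered
    }
}
```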
I would love to use hyper for HTTP load testing and would want these metrics as part of the results.
Closures or a trait impl; a trait impl might be nicer if you're doing something like sending the gathered metrics off to another metrics recorder service.
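Something along these lines, for example (the trait and method names are made up for illustration, not a proposed hyper API), would let an application forward whatever hyper measures to the metrics backend it already uses:

```rust
use std::time::Duration;

/// Hypothetical trait a downstream application could implement to receive timings.
pub trait TimingRecorder: Send + Sync {
    fn record_dns(&self, elapsed: Duration);
    fn record_connect(&self, elapsed: Duration);
    fn record_write(&self, elapsed: Duration);
    fn record_read(&self, elapsed: Duration);
}

/// Example implementation that just prints measurements; a real one might
/// forward them to statsd, OpenTelemetry, or an in-house metrics service.
struct PrintlnRecorder;

impl TimingRecorder for PrintlnRecorder {
    fn record_dns(&self, elapsed: Duration) {
        println!("dns_ms={}", elapsed.as_millis());
    }
    fn record_connect(&self, elapsed: Duration) {
        println!("connect_ms={}", elapsed.as_millis());
    }
    fn record_write(&self, elapsed: Duration) {
        println!("write_ms={}", elapsed.as_millis());
    }
    fn record_read(&self, elapsed: Duration) {
        println!("read_ms={}", elapsed.as_millis());
    }
}
```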
I would love to be able to get access to more granular timing information as well for a project I'm working on.
I believe the Tokio Trace proposal is relevant here.
Any update on this?
If it helps, here is the set of metrics we'd like to be able to emit from the AWS SDK for Rust. Obviously what hyper emits won't map one-to-one, but this is some of what we have in mind, in case it helps in figuring out the kind of telemetry data users are interested in.
| Metric Name | Unit | Type | Description | Attributes (Dimensions) |
|---|---|---|---|---|
| client.http.connections.acquire_duration | s | Histogram | The time it takes a request to acquire a connection | |
| client.http.connections.limit | {connection} | [Async]UpDownCounter | The maximum open connections allowed/configured for the HTTP client | |
| client.http.connections.usage | {connection} | [Async]UpDownCounter | The current state of the connection pool | state: idle \| acquired |
| client.http.connections.uptime | s | Histogram | The amount of time a connection has been open | |
| client.http.requests.usage | {request} | [Async]UpDownCounter | The current state of HTTP client request concurrency | state: queued \| in-flight |
| client.http.requests.queued_duration | s | Histogram | The amount of time a request spent queued waiting to be executed by the HTTP client | |
| client.http.bytes_sent | By | MonotonicCounter | The total number of bytes sent by the HTTP client | server.address |
| client.http.bytes_received | By | MonotonicCounter | The total number of bytes received by the HTTP client | server.address |