athletic
Set desired running time
It would be nice to have annotations that set a "desired running time". E.g.
"this" method should preferably run below "0.0002" on average.
This way, the benchmarking could also serve as a form of testing: we would get quick feedback on methods not running within the preferred time window.
<?php
class TestEvent {
    /**
     * @iterations 1000
     * @maxTime 0.05
     * @avgTime 0.002
     */
    public function benchmarkSomething() {
        // code under benchmark
    }
}
This would translate as:
the method should run for 1000 iterations, with no single run allowed to exceed the maximum time limit, and the average time should be less than 0.002 seconds.
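Those semantics could be enforced roughly like this. A hypothetical sketch, not athletic's actual API: the function name and the use of wall-clock microtime() are illustrative assumptions.

```php
<?php
// Sketch: enforce @iterations / @maxTime / @avgTime semantics with wall-clock timing.
// checkTiming() is a hypothetical helper, not part of athletic.
function checkTiming(callable $method, int $iterations, float $maxTime, float $avgTime): bool
{
    $total = 0.0;
    for ($i = 0; $i < $iterations; $i++) {
        $start = microtime(true);
        $method();
        $elapsed = microtime(true) - $start;
        if ($elapsed > $maxTime) {
            return false; // a single run exceeded the hard per-run limit
        }
        $total += $elapsed;
    }
    // the average over all runs must stay under the soft limit
    return ($total / $iterations) < $avgTime;
}
```

A benchmark runner could then fail the build whenever checkTiming() returns false for an annotated method.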
That would be interesting, but execution time varies greatly from machine to machine. The only meaningful result of a benchmark is comparing A to B. A and B could be different (competing?) libraries/frameworks, or different versions of the same code (to track performance regressions).
I see your point, but if you set the boundaries based on the minimal hardware requirements, that won't be a problem. The better the hardware, the better the speed. But in a language such as PHP, chasing raw performance is futile anyway: even a good algorithm may perform thousands of times worse than one written in C as an extension.
I like this in theory, but I doubt it would be useful in practice. For example, we run CI tests on all our different codebases at my job. The timeouts need to be set ludicrously high because, depending on what other jobs are building/running at the same time, performance profiles vary wildly. This is on code that normally executes in 100ms, but sometimes takes up to 15 seconds due to resource contention.
Disk I/O is the usual culprit, since that is very difficult to regulate even in tightly controlled VMs. CPU can sometimes do it too.
I'm just not sure how practical it would be for someone to base tests on performance metrics which can be wildly variable, even on a single machine.
I was quite interested in this feature too, knowing it is hard as hell to implement from the start. However, I've found a couple of things that could turn athletic into a test-like utility:
- getrusage() returns the CPU time the processor has actually spent on the script. Judging from the comments, it will be a little tricky to ensure the timings are correct, but this may help eliminate latency problems; also, good testing should always use mocks (so there would be no latency problem at all). As I've been told, this doesn't work on Windows.
- An even more hardcore solution:
perf stat -e instructions -e cycles %your_script%
will give the real number of CPU instructions and cycles (though I'm not exactly sure what the difference between them is). Those values tend to vary by something like 5% for cycles and less than 1% for instructions on my notebook, so I guess it's possible to set thresholds close to the real values. And yep, this too is a Linux-only solution, and it requires installing extra utility packages (linux-tools-generic), but it could be adapted to ensure cross-Linux script performance.
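For reference, the getrusage() approach from the first bullet can be sketched like this. The 'ru_utime' keys are POSIX-specific (as noted above, this is unavailable on Windows), and cpuTime() is a hypothetical helper name:

```php
<?php
// Sketch: measure CPU time (not wall-clock) with getrusage(), so that
// resource contention on the host inflates the result far less.
function cpuTime(): float
{
    $usage = getrusage();
    // ru_utime = user-mode CPU time consumed by this process, split into
    // whole seconds and microseconds.
    return $usage['ru_utime.tv_sec'] + $usage['ru_utime.tv_usec'] / 1e6;
}

$start = cpuTime();
for ($i = 0; $i < 100000; $i++) {
    sqrt($i); // workload under measurement
}
$elapsed = cpuTime() - $start;
printf("CPU time: %.6f s\n", $elapsed);
```

Because sleeping or waiting on I/O consumes no user CPU time, thresholds set on this value should be much more stable across runs than wall-clock limits.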