pytest-benchmark
How to implement a pedantic setup function that generates extra data for validation
Hello,
I am new to this wonderful plugin and my goal is to benchmark my implementations of LeetCode problems, for example #26 Remove Duplicates from Sorted Array.
Previously I generated some sorted arrays and did the validation as follows (with pytest but without the benchmark plugin):
from random import randint

def test_multi():
    for _ in range(1000):  # how many rounds to test
        a = []
        no_dup = randint(2, 100)  # length of the de-duplicated array
        for i in range(no_dup):
            repeat = randint(1, 5)
            a.extend([i] * repeat)  # append each value 1-5 times
        assert solution(a) == no_dup
        assert a[:no_dup] == list(range(no_dup))
Then I tried to benchmark my implementation.
I thought pedantic mode might fit my requirement, but now I wonder how I could reorganize the simulation code into a setup function.
The problem is that my simulator not only generates the sorted array for each round as the input argument, but also produces an associated value (the variable no_dup) that I need later for pytest assert validation.
In this case, does pedantic mode really fit my use case, or is there a better solution?
Best regards, wushuzh
What you're asking for is a cleanup callback in pedantic mode - but we don't have that. To make assertions you'd currently need to use closure variables to store the inputs and results for later validation, as in the sketch below.
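A minimal sketch of that workaround, assuming a test named test_remove_duplicates and rounds=50 (both arbitrary). In pedantic mode the setup callable is invoked before every round (iterations must stay at 1 when setup is used) and must return an (args, kwargs) pair for the target; the closure lists expected and results survive across rounds so the assertions can run after benchmark.pedantic returns. The solution here is a stand-in reference implementation of LeetCode #26, not your code:

from random import randint

def solution(nums):
    # Stand-in for your own code: the classic two-pointer answer to
    # LeetCode #26 (remove duplicates from a sorted array in place).
    k = 0
    for x in nums:
        if k == 0 or x != nums[k - 1]:
            nums[k] = x
            k += 1
    return k

def test_remove_duplicates(benchmark):
    # Closure variables: setup records each round's expected answer,
    # and the wrapped target records the corresponding result.
    expected = []
    results = []

    def setup():
        a = []
        no_dup = randint(2, 100)
        for i in range(no_dup):
            a.extend([i] * randint(1, 5))
        expected.append(no_dup)
        # pedantic's setup must return an (args, kwargs) pair for the target.
        return (a,), {}

    def target(a):
        results.append((solution(a), a))

    # setup runs before each of the 50 rounds; only the target is timed.
    benchmark.pedantic(target, setup=setup, rounds=50)

    # Validate every round after the timing has finished.
    for no_dup, (ret, a) in zip(expected, results):
        assert ret == no_dup
        assert a[:no_dup] == list(range(no_dup))

The validation loop adds nothing to the measured time because it runs after the benchmark completes; only the overhead of the two results.append calls leaks into the timing.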