big_O
Use GitHub Actions to run tests
- add .github/workflows/run-test.yml
- define config in the yml file and run tests
- remove .travis.yml
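For reference, a minimal sketch of what such a workflow file could look like. The workflow name, Python versions, action versions, and test command below are assumptions for illustration, not the exact contents of this PR:

# .github/workflows/run-test.yml -- illustrative sketch only
name: run-test

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # actions/setup-python only ships a limited set of interpreter versions
        python-version: ["3.8", "3.9", "3.10"]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -e .
      - name: Run tests
        run: python -m pytest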
Thank you for your contribution!
The workflow is failing at the moment: https://github.com/Abdullah-Majid/big_O/actions/runs/3237399647/workflow
Got a working version, but I think the number of Python versions we can use is limited to a select few - apologies for all the commits!
After the change is made, it would be great to squash all commits into one, since the changes are limited
Looking good, could you please squash the 11 commits down to 1?
Yep, will do after work today - apologies, I've had a busy past couple of weeks. I'll start work on the readme issue too.
No worries, this is open source after all
Python 3.10 seems to be failing in the pipeline - I rebased off the latest master, so I'm not sure why only 2/3 are passing: https://github.com/Abdullah-Majid/big_O/actions/runs/3284266596/jobs/5410032754
Maybe some optimizations in 3.10 make the function np.sort run too fast to reliably measure its complexity. As the comment in the test says, "Numpy sorts are fast enough that they are very close to linear".
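To see this locally, a quick check along these lines (the data generator and parameters are illustrative assumptions, not code from the repository) shows which class big_o fits for np.sort:

import numpy as np
import big_o

# Illustrative check: ask big_o which complexity class best fits np.sort
# when timed on random float arrays of growing size.
best, fitted = big_o.big_o(
    np.sort,
    lambda n: np.random.rand(n),  # data generator: array of length n
    min_n=100, max_n=100000,
    n_measures=10, n_repeats=3)
print(best)  # on a fast interpreter this may well report Linear rather than Linearithmic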
I suggest adding a new dummy linearithmic function in test_big_o.py (below the other dummy functions).
def dummy_linearithmic_function(n):
    # Dummy operation with linearithmic complexity.
    # Constant component of the linearithmic function.
    dummy_constant_function(n)
    x = 0
    log_n = int(np.log(n))
    for i in range(n):
        for j in range(log_n):
            for k in range(20):
                x += 1
    return x // 20
I get reliable tests on 3.10 with this modified version of test_big_o:
def test_big_o(self):
    # Each test case is a tuple
    # (function_to_evaluate, expected_complexity_class, range_for_n)
    desired = [
        (dummy_constant_function, compl.Constant, (1000, 10000)),
        (dummy_linear_function, compl.Linear, (100, 5000)),
        (dummy_quadratic_function, compl.Quadratic, (1, 100)),
        (dummy_linearithmic_function, compl.Linearithmic, (10, 5000)),
    ]
    for func, class_, n_range in desired:
        res_class, fitted = big_o.big_o(
            func, datagen.n_,
            min_n=n_range[0],
            max_n=n_range[1],
            n_measures=25,
            n_repeats=1,
            n_timings=10,
            return_raw_data=True)
        residuals = fitted[res_class]
        if residuals > 5e-4:
            if isinstance(res_class, class_):
                err_msg = "(but test would have passed)"
            else:
                err_msg = "(and test would have failed)"
            # The residual value is too high. This is likely caused by the CPU
            # being too noisy with other processes, which prevents clean
            # timing results.
            self.fail(
                "Complexity fit residual ({:f}) is too high to be reliable {}"
                .format(residuals, err_msg))
        sol_class, sol_residuals = next(
            (complexity, residuals)
            for complexity, residuals in fitted.items()
            if isinstance(complexity, class_))
        self.assertIsInstance(
            res_class, class_,
            msg="Best matched complexity is {} (r={:f}) when {} (r={:f}) was expected"
                .format(res_class, residuals, sol_class, sol_residuals))
Would you mind doing these changes? Thanks!
Closing in favor of #56