Ideas for testing
This issue suggests some ideas you might use to improve your testing. Writing tests is normally a very time-consuming task, and a large body of tests is crucial for good coverage.
- check that construction fails for invalid hyper-parameters (sketched after this list)
- check the construction of a model with a well-known dataset
- check that linearly separable data yields optimal accuracy (also sketched after this list)
- check special qualities of your algorithm, e.g. that it can detect/is robust against outliers, constructs sparse solutions, etc.
- use a real-world dataset and compare the performance with similar implementations
- look into scikit-learn and how they perform their testing
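For concreteness, here is a hedged, self-contained sketch of the first and third ideas. It uses a stand-in one-dimensional threshold classifier so it runs without any dependencies; `ThresholdParams` and `ThresholdModel` are purely illustrative and not part of linfa's API:

```rust
/// Purely illustrative stand-in: a 1-D classifier that thresholds on a
/// single feature. Real linfa tests would fit an actual model instead.
struct ThresholdParams {
    /// Minimum gap required between the class means
    /// (a made-up hyper-parameter for the sake of the example).
    min_gap: f64,
}

struct ThresholdModel {
    cutoff: f64,
}

impl ThresholdParams {
    fn new(min_gap: f64) -> Self {
        ThresholdParams { min_gap }
    }

    /// Fails for invalid hyper-parameters instead of silently misbehaving.
    fn fit(&self, records: &[f64], targets: &[bool]) -> Result<ThresholdModel, String> {
        if self.min_gap < 0.0 {
            return Err("min_gap must be non-negative".into());
        }
        // Place the cutoff halfway between the two class means.
        let mean = |class: bool| {
            let xs: Vec<f64> = records
                .iter()
                .zip(targets)
                .filter(|&(_, &t)| t == class)
                .map(|(&x, _)| x)
                .collect();
            xs.iter().sum::<f64>() / xs.len() as f64
        };
        let (lo, hi) = (mean(false), mean(true));
        Ok(ThresholdModel { cutoff: (lo + hi) / 2.0 })
    }
}

impl ThresholdModel {
    fn predict(&self, x: f64) -> bool {
        x > self.cutoff
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn rejects_invalid_hyper_parameters() {
        // A negative gap makes no sense; construction should fail early.
        let params = ThresholdParams::new(-1.0);
        assert!(params.fit(&[0.0, 1.0], &[false, true]).is_err());
    }

    #[test]
    fn perfect_accuracy_on_linearly_separable_data() {
        // Two well-separated clusters on the real line: any reasonable
        // classifier should reach 100% training accuracy here.
        let records = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2];
        let targets = [false, false, false, true, true, true];
        let model = ThresholdParams::new(1.0).fit(&records, &targets).unwrap();
        let correct = records
            .iter()
            .zip(&targets)
            .filter(|&(&x, &t)| model.predict(x) == t)
            .count();
        assert_eq!(correct, targets.len());
    }
}
```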
If you have any specific test idea for any algorithm in the linfa ecosystem, please add a comment below :tada:
Would it be ok to write tests referencing the datasets in the dataset folder (like iris) to try and replicate scikit-learn's tests?
Good point, I have created a PR, https://github.com/rust-ml/linfa/pull/72, which introduces linfa-datasets for this purpose.
There is now a small section at the end of the CONTRIBUTE file explaining how to use linfa-datasets.
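For anyone finding this later, here is a rough sketch of what loading a bundled dataset looks like. It assumes the published linfa-datasets crate with the `iris` feature enabled and the `Records` trait from linfa's prelude; the CONTRIBUTE file remains the authoritative reference:

```rust
// Assumed Cargo.toml entries:
//   linfa = "*"
//   linfa-datasets = { version = "*", features = ["iris"] }

use linfa::prelude::*;

fn main() {
    // Loads the bundled iris data as a `Dataset` (records + targets),
    // so tests don't need to download or parse anything themselves.
    let dataset = linfa_datasets::iris();
    println!(
        "{} samples with {} features",
        dataset.nsamples(),
        dataset.nfeatures()
    );
}
```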
Thank you!
Can we have benchmarks that measure the algorithms' accuracy rather than their runtime performance? For example, for clustering algorithms we could measure the sum of squared distances to the nearest centroid as an accuracy metric.
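For concreteness, here is a minimal sketch of that metric, often called inertia: the sum over all points of the squared distance to the nearest centroid. Plain slices are used for illustration; a real benchmark would take the centroids from the fitted model:

```rust
/// Sum of squared distances from each 2-D point to its nearest centroid.
fn inertia(points: &[[f64; 2]], centroids: &[[f64; 2]]) -> f64 {
    points
        .iter()
        .map(|p| {
            // Squared Euclidean distance to the closest centroid.
            centroids
                .iter()
                .map(|c| (p[0] - c[0]).powi(2) + (p[1] - c[1]).powi(2))
                .fold(f64::INFINITY, f64::min)
        })
        .sum()
}

fn main() {
    let points = [[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]];
    let centroids = [[0.0, 0.5], [10.0, 10.5]];
    // Each point is 0.5 away from its centroid, so inertia = 4 * 0.25 = 1.0.
    println!("inertia = {}", inertia(&points, &centroids));
}
```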