Arraymancer
Critical: Add tests to autograd, nn_primitives and nn
Autograd, nn_primitives and nn are in master. To bring them to the same standard as the core tensor library, they need tests:
- Unit tests for individual pieces (like the derivative of a linear layer: Weight * input + bias).
  - Derivatives and cost functions in particular are critical.
- Full-pipeline tests (learning XOR, or a small dataset like Iris or dogs vs. cats). Note: loading those datasets should not depend on modules that themselves depend on Arraymancer.
  - Tests should catch convergence regressions.
  - A performance and memory benchmark would be nice to have, to profile the library regularly. Continuous integration benchmarking would be terrific, but a file in the "benchmarks" folder with a history of commit + CPU/GPU + compilation flags would be a great start.
- End-to-end integration tests with IO libraries like arraymancer-vision (and later CSV loading, etc.).
  - Make sure we don't silently break Arraymancer for them.
  - Make sure they don't break assumptions we rely on, like color channels being in CHW order.
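The derivative unit tests above are usually grounded with a finite-difference gradient check: compare the hand-derived backward pass against central differences of the forward pass. The real tests would be written in Nim against Arraymancer's autograd; this is a language-agnostic sketch in Python/NumPy, with illustrative function names, showing the technique for the linear layer (Weight * input + bias):

```python
import numpy as np

def linear(W, x, b):
    # Forward pass of a linear layer: Weight * input + bias
    return W @ x + b

def analytic_grads(W, x, b, g):
    # Hand-derived backward pass, given upstream gradient g
    # (same shape as the output): dW = g x^T, dx = W^T g, db = g
    return g @ x.T, W.T @ g, g

def numeric_grad(f, p, g, eps=1e-6):
    # Central finite differences of sum(f(p) * g) w.r.t. each entry of p
    grad = np.zeros_like(p)
    it = np.nditer(p, flags=['multi_index'])
    for _ in it:
        i = it.multi_index
        old = p[i]
        p[i] = old + eps
        up = np.sum(f(p) * g)
        p[i] = old - eps
        down = np.sum(f(p) * g)
        p[i] = old  # restore before moving on
        grad[i] = (up - down) / (2 * eps)
    return grad

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
x = rng.standard_normal((4, 1))
b = rng.standard_normal((3, 1))
g = rng.standard_normal((3, 1))  # upstream gradient

dW, dx, db = analytic_grads(W, x, b, g)
# Analytic and numeric gradients must agree to finite-difference accuracy.
assert np.allclose(dW, numeric_grad(lambda W: linear(W, x, b), W, g), atol=1e-4)
assert np.allclose(dx, numeric_grad(lambda x: linear(W, x, b), x, g), atol=1e-4)
assert np.allclose(db, numeric_grad(lambda b: linear(W, x, b), b, g), atol=1e-4)
```

The same check generalizes to cost functions: treat the loss as `f` with a scalar output and `g = 1`.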
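For the full-pipeline/convergence-regression idea, a sketch of what such a test could look like, again in Python/NumPy rather than Nim (a hypothetical hand-rolled 2-8-1 MLP, not Arraymancer's API): generate XOR inline so the test has no external data dependency, train, and assert that the loss actually went down. A real suite would pin a tighter threshold once a known-good baseline is recorded.

```python
import numpy as np

def xor_pipeline(seed=0, hidden=8, lr=0.5, epochs=5000):
    rng = np.random.default_rng(seed)
    # XOR dataset, generated inline so the test does not depend on
    # any module that itself depends on the library under test.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.standard_normal((2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)); b2 = np.zeros(1)

    losses = []
    for _ in range(epochs):
        # Forward: tanh hidden layer, sigmoid output, MSE loss
        h = np.tanh(X @ W1 + b1)
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        losses.append(float(np.mean((out - y) ** 2)))

        # Backward: hand-derived gradients for full-batch gradient descent
        d_out = 2 * (out - y) / len(X) * out * (1 - out)
        dW2 = h.T @ d_out; db2 = d_out.sum(axis=0)
        d_h = (d_out @ W2.T) * (1 - h ** 2)
        dW1 = X.T @ d_h; db1 = d_h.sum(axis=0)

        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return losses

losses = xor_pipeline()
# Convergence-regression guard: training must actually reduce the loss.
# A production test would assert against a recorded baseline threshold
# (e.g. losses[-1] < 0.01) so silent regressions fail the build.
assert losses[-1] < losses[0]
```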
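The CHW assumption could be enforced on both sides of the integration boundary with a small layout guard. A sketch of such a hypothetical helper (Python/NumPy for illustration; `assert_chw` is not an existing Arraymancer or arraymancer-vision function):

```python
import numpy as np

def assert_chw(img: np.ndarray, channels=(1, 3, 4)):
    # Guard for the layout assumption: color images are C x H x W,
    # i.e. the channel dimension comes first (not H x W x C).
    assert img.ndim == 3, f"expected 3 dims, got {img.ndim}"
    assert img.shape[0] in channels, \
        f"channel dim must come first, got shape {img.shape}"

# A 3 x 32 x 32 RGB image in CHW order passes the check.
assert_chw(np.zeros((3, 32, 32)))
```

Running this guard in both projects' test suites would surface a layout change on either side before it silently corrupts downstream results.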