[devops] TravisCI tests fail more often than not - Need to simplify testing and mock network requests
Dear ml5 community,
I'm submitting a new issue. Please see the details below.
→ Step 1: Describe the issue 📝
Did you find a bug? Want to suggest an idea for a feature?
- Want to suggest an idea
Currently the TravisCI tests fail more often than they pass. This is mostly due to network requests: fetching images for analysis, instantiating models that make big requests to external pretrained models hosted on Google or GitHub servers, and so on.
It would be wonderful if we could get this under control so that we can also start building a comprehensive set of tests for our features.
Making a note that this is partially improved with #652, but there is still lots of room for improvement, and actually adding more tests of functionality is necessary.
Also noting that we might consider running mocked tests in CI while running the full tests locally.
We should be mocking our async network model requests -- e.g. https://jestjs.io/docs/en/mock-functions. I would definitely add this to our devOps and testing to-do list.
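For example, here's a rough sketch of what that could look like with Jest. The `../src/utils/loadModel` module path is hypothetical, standing in for whichever ml5 helper actually fetches the pretrained weights:

```js
// __tests__/loadModel.test.js -- a minimal sketch, not actual ml5 code.
// jest.mock replaces the module that would hit Google/GitHub servers.
jest.mock('../src/utils/loadModel');
const loadModel = require('../src/utils/loadModel');

test('model loads without touching the network', async () => {
  // Resolve with a fake model instead of downloading pretrained weights
  loadModel.mockResolvedValue({
    predict: () => [{ label: 'cat', confidence: 0.9 }],
  });

  const model = await loadModel('MobileNet');
  expect(model.predict()[0].label).toBe('cat');
  expect(loadModel).toHaveBeenCalledTimes(1);
});
```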
Revisiting some ideas here that I'll jot down:
- Ideas around moving from mocha/chai to Jest?
- Employing snapshot tests as a first pass -- but this requires that we mock our model requests (a sketch follows this list)
- Thinking about: since models are basically functions -- input/output -- we can make some mock models that might be broadly applied for testing purposes, something like `mockClassificationModel` and `mockRegressionModel` (see the first sketch after this list).
- Along the same lines of mocking for tests: maybe something like a `mockIO` for mocking out expected inputs and expected outputs, since one of ml5's main "special sauces" is how we make it easy to pass a variety of inputs and then structure our outputs in a friendly way.
- Maybe we also make a tutorial on mocking async functions in ml5 and mocking external model requests 😬, since this is likely a reason for our testing timeouts as well as a barrier for contributors exploring our test code. How wonderful would it be if we could help reduce the intimidation of writing tests! 😍
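To make the mock-model idea concrete, here's a minimal sketch. `mockClassificationModel`, `mockRegressionModel`, and `mockIO` are hypothetical helpers that don't exist in ml5 yet, and the output shapes are assumptions:

```js
// mockModels.js -- hypothetical test helpers, not current ml5 code.
// A model is treated as a plain async input -> output function, so one
// fake can stand in for any real classifier or regressor in tests.
const mockClassificationModel = (labels = ['cat', 'dog']) => ({
  classify: async () =>
    labels.map((label, i) => ({ label, confidence: 1 / (i + 1) })),
});

const mockRegressionModel = (value = 0.5) => ({
  predict: async () => [{ value }],
});

// mockIO pairs an expected input with the output a model should produce,
// mirroring ml5's "many inputs in, friendly outputs out" design.
const mockIO = (input, output) => ({ input, output });

module.exports = { mockClassificationModel, mockRegressionModel, mockIO };
```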
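And a sketch of a snapshot test as a first pass over those mocks (file paths are hypothetical):

```js
// __tests__/mockModels.test.js -- hypothetical path
const { mockClassificationModel } = require('../mockModels');

test('classification output shape stays stable', async () => {
  const model = mockClassificationModel(['bird']);
  const results = await model.classify();
  // The snapshot locks in the output structure without any network I/O
  expect(results).toMatchSnapshot();
});
```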
Noting that I've got some PRs to start this work:
- Sets up ml5-library with Jest
  - https://github.com/ml5js/ml5-library/pull/1294
- Removes the other testing library references
  - https://github.com/ml5js/ml5-library/pull/1295
- Begins refactoring tests
  - https://github.com/ml5js/ml5-library/pull/1296
Some notes:
- We should set up CI to run these tests
- TS types have been super helpful for testing -- I definitely think adding types to ml5 would be great. Maybe we can bring type checking in here: https://github.com/dejavu1987/ml5-typescript-webpack (a small sketch follows)
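As a low-friction starting point (before a full TypeScript build like the repo above), the TypeScript compiler can also type-check plain JS via JSDoc annotations. A hypothetical sketch, not existing ml5 code:

```js
// @ts-check
// Hypothetical sketch: JSDoc annotations let the TypeScript compiler
// type-check plain JS files, a low-friction step before a full TS port.

/**
 * @typedef {Object} ClassificationResult
 * @property {string} label
 * @property {number} confidence
 */

/**
 * Hypothetical helper: wraps a raw score in ml5's friendly output shape.
 * @param {string} label
 * @param {number} confidence
 * @returns {ClassificationResult}
 */
function makeResult(label, confidence) {
  return { label, confidence };
}

module.exports = { makeResult };
```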