Automate testing / version upgrades / docker image uploads
To ensure that we don't break the app with future changes, we should automate some basic testing/verification tasks. I suggest the following:
- fresh git clone of the repo
- fresh Python setup: virtual env + `pip install -r requirements.txt`
- `python3 runbenchmark.py constantpredictor test`
- `python3 runbenchmark.py constantpredictor test -m aws`
- for each framework: `python3 runbenchmark.py <framework> test -m docker`
and for each run, verify that it produces successful results.
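
A rough sketch of the fresh-setup and smoke-test part of such a script could look like this (a sketch only: the repo URL, directory names, and the exact success check are assumptions, not a final implementation):

```bash
#!/usr/bin/env bash
# Sketch of the fresh-setup + smoke-test steps; abort on the first failure.
set -euo pipefail

git clone https://github.com/openml/automlbenchmark.git amlb-verify   # fresh clone
cd amlb-verify
python3 -m venv venv
source venv/bin/activate                                               # fresh virtual env
pip install -r requirements.txt

python3 runbenchmark.py constantpredictor test            # fast local sanity check
python3 runbenchmark.py constantpredictor test -m aws     # basic AWS support only

# A complete script would also inspect the generated results to make sure
# every fold produced an actual result and no error was recorded.
```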
The first local run is very fast and will immediately detect basic broken features. The AWS run is also relatively fast, as we just want to test basic AWS support: there is no need to run all frameworks there. Running docker mode for each framework, however, is pretty slow and can't be parallelized on a single machine as it is CPU intensive (it would require multiple machines, not worth it), but it would properly test each framework's setup and run it against the simple test benchmark.
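
The per-framework docker loop itself is simple; the framework names below are only illustrative, as a real script would derive the list from the framework definitions shipped with the repo:

```bash
# Framework names are illustrative; derive the real list from the framework definitions.
frameworks="constantpredictor RandomForest autosklearn H2OAutoML TPOT"
for framework in ${frameworks}; do
  echo ">>> ${framework}: docker setup + test benchmark"
  # builds/reuses the framework's docker image and runs the test benchmark inside it
  python3 runbenchmark.py "${framework}" test -m docker
done
```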
This kind of script, at the cost of a few additional parameters, could also be used to "release" the app on success (see the sketch after this list):
- tag the branch (new version + `stable`).
- push the new tags.
- push the new docker images to the docker repo.
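
A rough sketch of those release steps (the version number, remote name, and docker image filter are assumptions):

```bash
VERSION=1.2.3                         # illustrative version number
git tag "v${VERSION}"                 # tag the branch with the new version
git tag -f stable                     # move the stable tag to this commit
git push origin "v${VERSION}"         # push the new tags
git push -f origin stable

# push the freshly built docker images (the naming filter is an assumption)
for image in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep automlbenchmark); do
  docker push "${image}"
done
```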
Update: releases made with a `v*` tag will automatically have their `__version__` set and will be tagged as `stable` if it is the most current `v*` release.
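
The tag-driven part could look roughly like the sketch below; the version file location and the CI context are assumptions, it only illustrates the rule described above:

```bash
TAG="$(git describe --tags --exact-match 2>/dev/null || true)"   # e.g. v2.0.3 on a release commit
case "${TAG}" in
  v*)
    VERSION="${TAG#v}"
    # write the version into the package (the actual file/location may differ)
    echo "__version__ = \"${VERSION}\"" > amlb/version.py
    # only move the stable tag if this is the most current v* release
    LATEST="$(git tag -l 'v*' | sort -V | tail -n 1)"
    if [ "${TAG}" = "${LATEST}" ]; then
      git tag -f stable "${TAG}"
      git push -f origin stable
    fi
    ;;
esac
```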
Building, testing and publishing docker images is not yet automated (though we do have CI for local runs).