stable-diffusion-webui
CI tests with github-actions and some improvements to testing
Describe what this pull request is trying to achieve.
The endgame is to prepare the tests for a github-actions workflow that runs on every pull request. Changes:
- Added an empty checkpoint in the test_files dir to be used in tests by default if no other checkpoint is specified
- Server polling now checks whether the sdwebui process is still running and cancels polling immediately on a crash instead of waiting for the timeout (a sketch of this is shown after the list)
- DDIM added to vanilla sampler txt2img test
- options_write utility test temporarily disabled because the functionality is broken at the moment
- Typo fixed in the txt2img-with-tiling test name
- Tests separated into two folders. basic_features tests require no additional downloads and can be run on CPU quite quickly. advanced_features tests should check correctness of results, so they require the CLIP model, SD checkpoint, interrogation, upscaling, face restoration models, etc. Some of them will probably require a CUDA device as well.
- Tests are run with the --tests argument as before, but now it is possible to specify a directory, e.g. --tests advanced_features. If no directory is provided, all tests are run.
- Autotesting with github-actions will use the basic_features tests for now; the output of the main program is available as an uploaded artifact under the job.
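For reference, here is a minimal sketch of the crash-aware polling mentioned above, assuming the webui process is started with subprocess.Popen and its API is reachable on localhost:7860 (the URL, timeout, and function name are illustrative, not the actual code):

```python
# Minimal sketch: poll the server until it responds, but stop immediately
# if the webui process has already exited instead of waiting for the timeout.
import subprocess
import time

import requests


def wait_for_server(proc: subprocess.Popen, url="http://127.0.0.1:7860", timeout=300):
    start = time.time()
    while time.time() - start < timeout:
        if proc.poll() is not None:
            # Process crashed or exited: cancel polling right away.
            raise RuntimeError(f"webui exited with code {proc.returncode}")
        try:
            if requests.get(url, timeout=2).status_code == 200:
                return  # server is up
        except requests.exceptions.ConnectionError:
            pass
        time.sleep(1)
    raise TimeoutError("webui did not start within the timeout")
```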
Additional notes and description of your changes
As I stated, the goal is to run autotesting on every PR/commit, so it will be on CPU.
Currently it is possible to run the tests on CPU like this, and it's quite fast:
```
python launch.py --tests basic_features --no-half --disable-opt-split-attention --use-cpu all --skip-torch-cuda-test --clip-models-path ./test/test_files/empty.pt
```
Face restoration can't be done on CPU for some reason; I did not investigate it further. The face restoration model is probably loaded on the CUDA device regardless of the --use-cpu option.
Environment this was tested in
- OS: Windows 10
- Browser: Chrome
- Graphics card: NVIDIA GTX 1060 6GB
~~Well. Looks like localhost connections are refused within github-actions without some trickery.~~ Also I need to propagate the failure exit code to the testing process.
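A rough sketch of what propagating that exit code could look like, assuming the tests are discovered with the standard unittest loader from a test/<dir> layout (the directory layout and function name are assumptions, not the actual implementation):

```python
# Hypothetical sketch: run a test subdirectory and exit non-zero on failure,
# so a CI job such as github-actions marks the run as failed.
import sys
import unittest


def run_tests(subdir=None):
    test_dir = "test" if subdir is None else f"test/{subdir}"
    suite = unittest.TestLoader().discover(test_dir)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # Propagate failures and errors as the process exit code.
    sys.exit(0 if result.wasSuccessful() else 1)


if __name__ == "__main__":
    run_tests(sys.argv[1] if len(sys.argv) > 1 else None)
```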
Also @mezotaken, not sure how easy it is to add, but I thought it might be fun to also add code coverage to the report. On an older project with Jenkins I used this flow:
```groovy
sh "pip install coverage"
sh "python -m coverage run -m unittest -v"
sh "python -m coverage xml"
publishCoverage adapters: [cobertura('tests/unittest/coverage.xml')], calculateDiffForChangeRequests: true, sourceFileResolver: sourceFiles('STORE_LAST_BUILD')
```
Link to coverage.py
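For comparison, roughly the same flow using the coverage.py Python API instead of the CLI (a sketch only; the test directory and output path are placeholders):

```python
# Sketch: collect coverage while running the tests and write a Cobertura-style
# XML report that a CI coverage plugin can publish.
import unittest

import coverage

cov = coverage.Coverage()
cov.start()

suite = unittest.TestLoader().discover("test/basic_features")  # placeholder path
unittest.TextTestRunner(verbosity=2).run(suite)

cov.stop()
cov.save()
cov.xml_report(outfile="coverage.xml")
```

Note this only measures the process the tests run in; covering the server process would require starting coverage inside sdwebui itself.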
@salco I'll look into it. I know what it is, but I'm not sure how it will work in this exact case, where I'm testing literally everything through the API. Plus, while doing it I realised that the ideal way would be to run sdwebui in a Docker container as a service and run the tests from the command line, without server polling from a .py file.
If it's easy and there are no issues, add it; if you find it's too complicated, just post the situation and we (anyone in the community) could add it in another pull request based on your report. :smile: