Integration tests: restart with faked task pool
Built atop #3654 (see the last three commits only). Addresses the Python side of #3591 (a bash port is needed to close the issue).
Allow suites in the integration test battery to be started (well, restarted actually) with a database containing mocked task proxies.
This has many advantages:
- Cuts out the vast amounts of time our tests spend getting suites into the right state before any testing can be performed.
- Cuts out the timing issues which make our tests flaky and, in turn, cause us to waste vast quantities of dev time fixing tests rather than working.
- Enables us to put flows into states that are hard, or even impossible, to reproduce otherwise.
At the moment this functionality is only available to the integration tests; however, the functional tests need it more. There are two options:
- You can actually write tests in the functional test battery (`tests/f`) in Python using the integration testing framework.
  - They will run with `pytest tests/f`.
  - When I finally get around to finishing off my pytest extension we will run the bash functional tests with `pytest` anyway.
- We can create some bash bindings for this functionality.
Requirements check-list
- [x] I have read `CONTRIBUTING.md` and added my name as a Code Contributor.
- [x] Contains logically grouped changes (else tidy your branch by rebase).
- [x] Does not contain off-topic changes (use other PRs for other changes).
- [x] Appropriate tests are included (unit and/or functional).
- [x] No change log entry required (why? e.g. invisible to users).
- [x] No documentation update required.
Knocked together a CLI interface and stuck it into the `test_header` for use in bash tests.
Rebased, but there is one issue: the `satisfied` field of the `task_pool` table is very hard to mock, as it requires information about the suite's graph in order to work out what the dependencies of the task are. Providing an empty dictionary, you get the following `KeyError`:
`cylc/flow/task_pool.py`:

```python
# TODO (from Oliver's PR review):
#     Wait, what, the keys to a JSON dictionary are themselves JSON
#     :vomiting_face:!
#     This should be converted to its own DB table pre-8.0.0.
for pre in itask.state.prerequisites:
    for k, v in pre.satisfied.items():
        pre.satisfied[k] = sat[k]
```

```
KeyError: ('foo', '2000', 'succeeded')
```
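To see why the empty dictionary blows up, here is a minimal, self-contained sketch of the keys-are-themselves-JSON hack; the data and variable names are illustrative, not the real cylc schema:

```python
import json

# Illustrative in-memory prerequisite state: keys are
# (task name, cycle point, output) tuples (made-up data).
satisfied = {("foo", "2000", "succeeded"): False}

# The hack: tuple keys can't be JSON object keys, so each key is
# itself JSON-encoded before the whole mapping is serialised.
db_value = json.dumps({json.dumps(k): v for k, v in satisfied.items()})

# Restoring on restart: decode the outer JSON, then decode each key.
sat = {tuple(json.loads(k)): v for k, v in json.loads(db_value).items()}
assert sat == satisfied

# Mocking the DB field as an empty dict means the restart loop's
# `sat[k]` lookup has nothing to find:
try:
    for k in satisfied:
        satisfied[k] = {}[k]  # empty mock standing in for `sat`
except KeyError as exc:
    print(f"KeyError: {exc.args[0]}")
```

So the faked `satisfied` field has to contain one entry per prerequisite key, which is exactly the information only the graph knows.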
How best to get around this? Is it safe to stick a try/except around this logic and expect the task_pool/task_proxy to rebuild it for us?
> How best to get around this? Is it safe to stick a try/except around this logic and expect the task_pool/task_proxy to rebuild it for us?
Hmm, I think not: prerequisites are satisfied once, by events, now. Pre-SoD task proxies (and the DB task pool table) stored their own outputs, and downstream prerequisites could be re-satisfied by dependency matching all over again.
So the DB faker might need to parse the graph, or use parsed-graph information. Can we expect the faker-user to specify which prerequisites are satisfied already in the task pool?
> So the DB faker might need to parse the graph, or use parsed-graph information.
🤮, dammit, 'twas so simple too.
> Can we expect the faker-user to specify which prerequisites are satisfied already in the task pool?
I don't think so; the amount of information required would be much higher, and messy too.
Would it be possible to steal the old dependency-matcher code?
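For context, the gist of what a dependency matcher does, in miniature. This is a toy sketch with made-up names and data, not the old cylc code: given parsed graph edges and the set of completed outputs, it re-derives the satisfied mapping for a task:

```python
# Toy graph: edges map an upstream (task, point, output) key to the
# downstream task that depends on it. All names are made up.
edges = {
    ("foo", "2000", "succeeded"): "bar",
    ("baz", "2000", "succeeded"): "bar",
}

# Outputs the (faked) database says have already been completed.
completed = {("foo", "2000", "succeeded")}

def match_prerequisites(task, edges, completed):
    """Rebuild the satisfied mapping for `task` by dependency matching."""
    return {
        key: key in completed
        for key, downstream in edges.items()
        if downstream == task
    }

print(match_prerequisites("bar", edges, completed))
# {('foo', '2000', 'succeeded'): True, ('baz', '2000', 'succeeded'): False}
```

The point being: the keys themselves come from the graph, so a matcher-style faker would still need parsed-graph information, just packaged differently.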
Wait on fixing the DB json key hack from SoD. (@oliver-sanders might need to help me with this).
> Wait on fixing the DB json key hack from SoD
Was this #3863?
It was indeed. Now to work out how to fake the prereqs table without requiring graph-parsing logic.
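With prerequisites in their own table, faking them might look something like this. This is an illustrative in-memory sqlite sketch; the table name, columns, and values are assumptions for the sake of the example, not the real cylc-flow schema:

```python
import sqlite3

# Hypothetical per-prerequisite table: one self-describing row per
# prerequisite, so no graph parsing is needed to reconstruct keys.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE task_prerequisites (
        name TEXT, cycle TEXT,
        prereq_name TEXT, prereq_cycle TEXT,
        prereq_output TEXT, satisfied TEXT
    )
""")

# Faking a restart state: `bar.2000` has its dependence on
# `foo.2000:succeeded` already satisfied.
conn.execute(
    "INSERT INTO task_prerequisites VALUES (?, ?, ?, ?, ?, ?)",
    ("bar", "2000", "foo", "2000", "succeeded", "satisfied naturally"),
)

rows = conn.execute("SELECT * FROM task_prerequisites").fetchall()
print(rows)
```

Each row carries its own key, so a test could write whatever restart state it needs straight into the table.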
Closing this for now, hope to come back to it one day...