Testing
I did the setup.py before writing tests and then realised I was getting ahead of myself. I had planned to look into using tox to test against Python 3.5, 3.6, and 3.7, but that may be overkill.
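If we ever do revisit tox, a minimal config could be as small as the sketch below (assuming we stay with unittest plus coverage.py; the env list is just the 3.5–3.7 idea above):

```ini
# Minimal tox.ini sketch, not part of this PR
[tox]
envlist = py35, py36, py37

[testenv]
deps = coverage
commands = coverage run -m unittest discover
```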
TODO:
- [x] Add coverage.py dependency
- [x] Write all tests (100% coverage?)
    - [x] Tracers
        - [x] Array1D
        - [x] Array2D
        - [x] Graph
        - [x] Log
        - [x] Tracer
    - [x] Commander
    - [x] Layouts
    - [x] Randomize
    - [x] JSON output
    - [x] API request
- [x] Tracers
    - [x] Add helper function to drastically reduce repeated code in tests
        - Need to find a way to handle discrepancy in arguments between Commander instances and their keys in command JSON objects.
Regarding the helper function, there are two approaches I can think of, roughly:
```python
from typing import Optional, Union

# Serializable, Undefined, and Commander are from the package under test.
def assertCommandEqual(
    self,
    method: str,
    *args: Union[Serializable, Undefined],
    key: Optional[str] = None
):
    cmd = Commander.commands[-1]
    expected_cmd = {
        "key": key,
        "method": method,
        "args": list(args),
    }
    self.assertEqual(expected_cmd, cmd)
```
or, more comprehensively:
```python
from typing import Callable, Optional, Union

# Serializable, Undefined, UNDEFINED, and Commander are from the package under test.
def test_command(
    self,
    command: Callable,
    *args: Union[Serializable, Undefined],
    key: Optional[str] = UNDEFINED
):
    if key is UNDEFINED:
        key = command.__self__.key
    # Replace Commander instances with their keys (this would need to be done recursively).
    args = [arg.key if isinstance(arg, Commander) else arg for arg in args]
    cmd = Commander.commands[-1]
    expected_cmd = {
        "key": key,
        "method": command.__name__,
        "args": args,
    }
    self.assertEqual(expected_cmd, cmd)
```
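For example, a typical tracer test would then collapse to something like this; the Array1DTracer constructor, its select signature, and the base test class are just guesses for illustration:

```python
# Hypothetical usage of the second helper; Array1DTracer("array") and
# select(0) are assumed signatures, and TracerTestCase is an assumed
# base class that provides test_command.
class Array1DTracerTest(TracerTestCase):
    def test_select(self):
        tracer = Array1DTracer("array")
        tracer.select(0)
        self.test_command(tracer.select, 0)
```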
And really it could be taken a step further by looping over each tracer and calling the latter helper function for each function in a tracer. Suitable arguments for testing each function could be generated automatically based on type annotations lol. Stupid idea, I know, to rely on function definitions for testing like that.
But I am concerned that the second helper function (and especially that idea at the end) would be a bad approach because it'd effectively remove the "explicitness" of tests. But still, can't help but feel there's so much repetition in the tests even with the first helper function just because the tracers are such thin wrappers.
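For what it's worth, that annotation-driven idea would look roughly like the sketch below; the type-to-sample mapping, the helper name, and the method discovery are purely illustrative:

```python
import inspect
import random
import typing

# Illustrative only: map annotated parameter types to sample values.
SAMPLE_FACTORIES = {
    int: lambda: random.randint(0, 9),
    float: lambda: random.random(),
    str: lambda: "x",
}


def sample_args(method):
    """Build one sample argument per annotated parameter of *method*."""
    hints = typing.get_type_hints(method)
    return [
        SAMPLE_FACTORIES[hints[name]]()
        for name in inspect.signature(method).parameters
        if hints.get(name) in SAMPLE_FACTORIES
    ]


# Inside a test this could then drive the second helper, e.g.:
#     for name, method in inspect.getmembers(tracer, inspect.ismethod):
#         if not name.startswith("_"):
#             args = sample_args(method)
#             method(*args)
#             self.test_command(method, *args)
```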
I like your idea of randomizing the arguments to test it. 😂 It actually sounds better than the current (manual) way of testing tracers.js.
I was also concerned about the same issue, that the tests are kinda meaningless at this moment. Should we solely (and more thoroughly) test commander.py and randomize.py? Like testing whether it correctly serializes different types of random values, whether it prints commands out to visualization.json correctly, etc.
I don't know what the best way to do this is; I am not really familiar with best practices and methodologies for testing. I do agree there should be some testing of the visualization.json file. As for more thorough testing, I don't know what more can be done for commander.py and randomize.py. It seems to me that testing serialisation of various data types doesn't make sense, since what we'd really be doing is testing Python's standard library json module instead of our own code, and that wouldn't be necessary unless we want to be paranoid and not trust Python to ship with a working and tested json module.
The problem with my suggestion of generating parameters from function definitions is that the tests would then depend on the very definitions they're supposed to verify. If a function definition were changed to something it's not meant to be, the test wouldn't catch the mistake. I feel like right now the only real point of all the tracer tests is to check the function definitions.
Hmm, good points. I think for now we can stick with the current way of testing. Maybe once we have plenty of algorithms translated from js to java/cpp/py, we can run tests on them (ones that don't use randomized input) to see if they produce the same visualization.json files?
Yes, I think some practical usage tests would be good to have. I think we could get away with random inputs if we just seed the RNG.
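A seeded regression test along those lines could look roughly like this; the import path, the translated example module, and the golden-file location are all placeholders, and commands containing package-specific values (e.g. Undefined) would need the package's own serialisation before comparing:

```python
import json
import random
import unittest

from tracers import Commander  # assumed import path


class BubbleSortRegressionTest(unittest.TestCase):
    def test_matches_recorded_visualization(self):
        random.seed(42)             # make the "random" input reproducible
        Commander.commands.clear()  # start from an empty command list

        import examples.bubble_sort  # hypothetical translated algorithm

        # Golden file recorded from a known-good run of the same script.
        with open("tests/golden/bubble_sort.visualization.json") as f:
            self.assertEqual(json.load(f), Commander.commands)
```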
Could use ideas for how to write a test for get_url. I can test for the return type easily, but I'm not sure what else.
I think testing if the URL returned from get_url responds with 200 would be good.
I think I came up with something decent by mocking some things. I didn't want to test that it returns 200 because that's effectively testing the API, which is out of scope.
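For reference, the mocking approach could be sketched roughly as below; the import path, the patch target, and the response shape are assumptions rather than what the actual test does:

```python
import unittest
from unittest import mock

from tracers import get_url  # assumed import path


class GetUrlTest(unittest.TestCase):
    # Assumes get_url POSTs the commands with `requests`; if it uses a
    # different HTTP layer, the patch target and response shape change.
    @mock.patch("tracers.commander.requests.post")
    def test_returns_url_from_api_response(self, mock_post):
        mock_post.return_value.text = (
            "https://algorithm-visualizer.org/scratch-paper?id=abc123"
        )

        url = get_url()

        mock_post.assert_called_once()  # no real network traffic
        self.assertIsInstance(url, str)
        self.assertTrue(url.startswith("https://algorithm-visualizer.org/"))
```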
This is ready to be reviewed now. I think we can work on tox and automated testing when we're working on setting up CI/CD and packaging.
It seems great to me. I am no expert in Python, so I might have missed something; I would appreciate it if anyone else reading this thread could review it once more.