rgsimulator
From simulating to automated tests
Wow, kudos for this nice tool!
Do you have a roadmap? I can imagine that once you have done a bunch of simulations, you would like to store them and run them over and over again, so that you know that if you change your code somewhere, the intended behavior (that you defined during simulation) stays the same.
I've written something similar by hand @ https://gist.github.com/gwillem/127fd0bf94a549d43a00, but it's very tedious to create tests (entering coordinates by hand) and also to maintain them without visualisation.
So perhaps your simulator could have:
- [ ] an export button, to export the current scenario (field + desired outcome) to a scenario file
- [ ] an import command line option, to load and visualize existing scenario files
These scenario files could also be parsed by a test runner (like nosetests), that you could have running in a separate window while you're working on your code.
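As a sketch of what a parseable scenario file and its check could look like (the JSON layout, field names, and the `forbid` convention are all invented for illustration; rgsimulator defines no such format yet), something like this could be picked up by a test runner:

```python
import json

# A purely illustrative scenario file: a board state plus the outcome we
# want to assert. None of these field names exist in rgsimulator today.
SCENARIO = """
{
    "name": "dont_step_into_spawn",
    "turn": 90,
    "robots": [{"player_id": 0, "hp": 50, "location": [8, 8]}],
    "expected": {"forbid": [["move", [8, 9]]]}
}
"""

def load_scenario(text):
    return json.loads(text)

def normalize(action):
    # Bots may return tuples while scenario files hold JSON lists;
    # round-tripping through JSON makes both compare equal.
    return json.loads(json.dumps(action))

def action_allowed(action, expected):
    """Pass unless the action matches a forbidden one -- the
    'do anything you want EXCEPT move there' style of test."""
    return normalize(action) not in expected.get("forbid", [])

scenario = load_scenario(SCENARIO)
assert not action_allowed(("move", (8, 9)), scenario["expected"])
assert action_allowed(("guard",), scenario["expected"])
```

A runner could glob a directory of such files and generate one test per scenario, so the suite grows as you export more situations from the simulator.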
I would take a stab at it, but it will probably take some time :)

Sounds very good!
These are some awesome ideas!
The first feature I'll add is saving situations into a file to be loaded later. Then I'll add the possibility to specify desired actions as well and run the scenario as a test. Automated testing would then be easy to add. I could imagine something like `./rgsimulator.py liquid.py --test liquid_test*`
I will probably have to restructure the code - right now it is a spaghetti of UI, model and controller.
I've adopted gwillem's practice of writing the same kinds of unit tests by hand, and I concur -- building these things by hand, then revisiting them when they fail, is a huge hassle.
I would describe my ideal testing story as follows:
- Use regression tests to ensure no new stupidity creeps in with code changes
- Use this tool to compose new tests (e.g. when observing mistakes in replays) and capture CURRENT behaviour
- Hand-code expected behaviour - expected test results need not always be precise (often they're more of the form "do anything you want EXCEPT move there"; expecting the exact same moves every time is unreasonable, imo)
- Use this tool to quickly load and review failed tests and see what we DID do.
One possible design (which you can take or leave with my blessing) is to capture all the setups in a text file, with all the expected results in a unit tests file (referencing the setups by name). When unit tests fail, the names of the failing tests can be captured, and (only) the inputs of those failing tests can be imported into this tool for review.
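A minimal sketch of that design -- named setups in one file, tests referencing them by name, and the names of failures collected for re-import into the simulator (the setup format, names, and the `bot_act` stand-in are all hypothetical):

```python
import json

# Invented setups file: scenarios keyed by name (format is illustrative).
SETUPS = json.loads("""
{
    "corner_trap":   {"turn": 10, "robots": [{"location": [1, 1], "hp": 10}]},
    "spawn_squeeze": {"turn": 90, "robots": [{"location": [8, 8], "hp": 50}]}
}
""")

def bot_act(setup):
    # Stand-in for invoking the real bot on this setup; a real runner
    # would build the game state and call the bot's act() here.
    return ("guard",)

def run_named_tests(tests):
    """tests maps setup name -> predicate on the bot's chosen action.
    Returns the names of failing setups, ready to be fed back into the
    simulator for visual review."""
    return [name for name, check in tests.items()
            if not check(bot_act(SETUPS[name]))]

failing = run_named_tests({
    "corner_trap":   lambda a: a[0] != "suicide",
    "spawn_squeeze": lambda a: a[0] in ("move", "guard"),
})
assert failing == []
```

Keeping the expectations as predicates rather than exact action lists fits the "anything EXCEPT move there" style of assertion described above.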
Fortunately I'm going to have some free time this week and I hope to add these features. However, testing doesn't combine well with simulating, especially now that you can progress turns as well. So I decided to create a new tool for testing; it will share its UI with the simulator, and it will be possible to export a scenario from a simulation if the way the bot behaves in it is desirable.
I am also adding a similar feature to rgfiddle, which I expect to make good progress on tomorrow.
https://github.com/bsuh/rgfiddle/issues/8 https://github.com/bsuh/rgfiddle/issues/milestones?state=open