
From simulating to automated tests

gwillem opened this issue 12 years ago • 5 comments

Wow, kudos for this nice tool!

Do you have a roadmap? I can imagine that once you have done a bunch of simulations, you would like to store them and run them over and over again. That way you know that if you change your code somewhere, the intended behavior (that you defined during simulation) stays the same.

I've written something similar by hand @ https://gist.github.com/gwillem/127fd0bf94a549d43a00, but it's very tedious to create tests (entering coordinates by hand) and to maintain them without visualisation.

So perhaps your simulator could have:

  • [ ] an export button, to export the current scenario (field + desired outcome) to a scenario file
  • [ ] an import command line option, to load and visualize existing scenario files

These scenario files could also be parsed by a test runner (like nosetests), that you could have running in a separate window while you're working on your code.
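To sketch the idea: a scenario file could hold the field setup plus the set of acceptable actions, and a test runner would load it and check the bot's choice against them. Everything below is illustrative (the field keys, action shapes, and `run_scenario` are made-up names, not anything rgsimulator defines):

```python
import json

# Hypothetical scenario format: robot position, enemy positions, and the
# actions the author considers acceptable in this situation.
scenario = {
    "robot_loc": [9, 9],
    "enemies": [[9, 10]],
    "allowed_actions": [["attack", [9, 10]], ["guard"]],
}

def run_scenario(scenario, act):
    """Return True if the bot's chosen action is one the scenario allows."""
    action = act(scenario["robot_loc"], scenario["enemies"])
    return list(action) in scenario["allowed_actions"]

# A stand-in bot for demonstration: attack an adjacent enemy, else guard.
def simple_act(loc, enemies):
    for e in enemies:
        if abs(e[0] - loc[0]) + abs(e[1] - loc[1]) == 1:
            return ["attack", e]
    return ["guard"]

# Round-trip through JSON, exactly as an exported scenario file would be.
loaded = json.loads(json.dumps(scenario))
print(run_scenario(loaded, simple_act))  # → True
```

A nosetests (or unittest) wrapper could then generate one test per scenario file found on disk.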

I would take a stab at it, but it will probably take some time :)

screenshot - 28-11-13 - 20 27 26

gwillem avatar Nov 28 '13 19:11 gwillem

Sounds very good!

mueslo avatar Nov 28 '13 19:11 mueslo

These are some awesome ideas!

The first feature I'll add is saving situations into a file to be loaded later. Then I'll add the possibility to define desired actions as well and run the scenario as a test. Automated testing would then be easy to add. I could imagine something like

./rgsimulator.py liquid.py --test liquid_test*
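A command line like that could be parsed with `argparse`; the shell expands the `liquid_test*` glob into separate arguments. This is just a sketch of the proposed interface, not rgsimulator's actual code:

```python
import argparse

# Minimal sketch of the CLI proposed above; only the flag names come from
# the suggestion, the rest is hypothetical.
parser = argparse.ArgumentParser(prog="rgsimulator.py")
parser.add_argument("bot", help="bot file, e.g. liquid.py")
parser.add_argument("--test", nargs="*", metavar="SCENARIO",
                    help="scenario files to run as tests (glob expanded by the shell)")

# Simulate: ./rgsimulator.py liquid.py --test liquid_test1 liquid_test2
args = parser.parse_args(["liquid.py", "--test", "liquid_test1", "liquid_test2"])
print(args.bot)   # → liquid.py
print(args.test)  # → ['liquid_test1', 'liquid_test2']
```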

I will probably have to restructure the code - right now it is a spaghetti of UI, model and controller.

mpeterv avatar Nov 28 '13 19:11 mpeterv

I've adopted gwillem's practice of writing these kinds of unit tests by hand and I concur -- building them by hand, then revisiting them when they fail, is a huge hassle.

I would describe my ideal testing story as follows:

  • Use regression tests to ensure no new stupidity creeps in with code changes
  • Use this tool to compose new tests (e.g. when observing mistakes in replays) and capture CURRENT behaviour
  • Hand-code expected behaviour - expected test results need not always be precise (often, more of the form "do anything you want EXCEPT move there"; expecting the exact same moves every time is unreasonable imo)
  • Use this tool to quickly load and review failed tests and see what we DID do.
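The "do anything you want EXCEPT move there" expectation from the list above maps naturally onto a negative assertion rather than an exact-move check. A tiny hypothetical helper (action format assumed to be `["move", (x, y)]`-style lists, as in robotgame bots):

```python
# Passes as long as the bot does NOT move onto the forbidden square;
# any other action (guard, attack, moving elsewhere) is acceptable.
def not_move_to(action, forbidden_loc):
    return not (action[0] == "move" and tuple(action[1]) == tuple(forbidden_loc))

print(not_move_to(["move", (10, 9)], (9, 9)))  # → True, moved elsewhere
print(not_move_to(["move", (9, 9)], (9, 9)))   # → False, stepped on the forbidden square
print(not_move_to(["guard"], (9, 9)))          # → True, guarding is fine
```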

One possible design (which you can take or leave with my blessing) is to capture all the setups in a text file, with all the expected results in a unit tests file (referencing the setups by name). When unit tests fail, the names of the failing tests can be captured, and (only) the inputs of those failing tests can be imported into this tool for review.
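One possible shape for that design, with named setups in one place and expectations referencing them by name (all names and the stand-in bot here are made up for illustration):

```python
import unittest

# Setups could live in their own file; tests reference them by key.
SETUPS = {
    "cornered": {"robot_loc": (0, 9), "enemies": [(0, 10), (1, 9)]},
    "open_field": {"robot_loc": (9, 9), "enemies": []},
}

def act(setup):
    # Stand-in for the real bot: guard when outnumbered, else move right.
    if len(setup["enemies"]) >= 2:
        return ["guard"]
    x, y = setup["robot_loc"]
    return ["move", (x + 1, y)]

class ScenarioTests(unittest.TestCase):
    def test_cornered(self):
        self.assertEqual(act(SETUPS["cornered"]), ["guard"])

    def test_open_field(self):
        self.assertEqual(act(SETUPS["open_field"])[0], "move")

# The names of failing tests are available on the result object, so only
# those setups would need to be re-imported into the simulator for review.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ScenarioTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(sorted(t.id() for t, _ in result.failures))  # → []
```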

csj avatar Dec 13 '13 12:12 csj

Fortunately I'm going to have some free time this week and I hope to add these features. However, testing doesn't combine well with simulating, especially now that you can progress turns as well. So I decided to create a new tool for testing; it will share its UI with the simulator, and it will be possible to export a scenario from a simulation when the bot's behaviour in it is what you want to lock in.

mpeterv avatar Dec 14 '13 14:12 mpeterv

I am also adding a similar feature to rgfiddle, which I expect to make good progress on tomorrow.

https://github.com/bsuh/rgfiddle/issues/8 https://github.com/bsuh/rgfiddle/issues/milestones?state=open

bsuh avatar Dec 14 '13 21:12 bsuh