Create test suite for testing Pony programs in their process environment
On multiple occasions (e.g. #3127 or #2871) it was discovered that stdin interaction was not quite working as expected. Since there is currently no test coverage of this part of the runtime and stdlib, the fixes can only be verified manually.
We want an additional test suite that compiles a Pony program and tests its interaction with the process environment.
That is (non-exclusively):
- reading from stdin (redirected file, tty, pipe, fifo)
- writing to stdout/stderr (redirected file, tty, pipe, fifo)
- handling environment variables (empty variables, very long variables, high amount of variables)
- signal handling
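As a rough illustration of what the pipe case from the list above could look like in practice, here is a minimal sketch using only the Python stdlib. `cat` stands in for the compiled Pony echo program under test (a real suite would invoke `ponyc` first and run the resulting binary):

```python
import subprocess

def run_with_pipe(program: str, data: bytes) -> bytes:
    # Feed `data` on stdin through a pipe and capture stdout.
    result = subprocess.run(
        [program],
        input=data,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        timeout=10,
    )
    assert result.returncode == 0, result.stderr
    return result.stdout

# "cat" is a stand-in for the test binary; it should echo stdin to stdout.
out = run_with_pipe("cat", b"hello from a pipe\n")
assert out == b"hello from a pipe\n"
print("pipe round-trip ok")
```

The redirected-file, fifo, and environment-variable cases would follow the same pattern, swapping out how stdin/stdout are wired up or passing an `env=` dict to `subprocess.run`.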
There is no particular preference over a certain test framework or programming language to use. It just needs to be able to express all the cases above in a fashion that is not too verbose and remains maintainable. Using the current gtest framework that is used for testing libponyc and libponyrt (in C++) would have the benefit of not introducing another tool, but writing the tests mentioned above in C++ might incur some unwanted verbosity.
If I were to do it (which I might), I would do it in Python, since its stdlib offers enough low-level APIs to do all the things listed above without requiring any 3rd-party packages, and the actual test code still remains readable. And I have some experience using Python...
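For instance, the tty case (where the stdin bugs were actually observed) can be exercised with the stdlib `pty` module, so the child program sees a real terminal rather than a pipe. This is a sketch only; `cat` again stands in for the compiled Pony program, and a real suite would build and run the test binary instead:

```python
import os
import pty
import subprocess

# Allocate a pseudo-terminal pair: master_fd is our end,
# slave_fd becomes the child's controlling terminal.
master_fd, slave_fd = pty.openpty()
proc = subprocess.Popen(
    ["cat"],  # stand-in for the Pony test binary
    stdin=slave_fd,
    stdout=slave_fd,
    stderr=slave_fd,
    close_fds=True,
)
os.close(slave_fd)  # the child holds its own copy of the slave end

os.write(master_fd, b"hello tty\n")
# Read back what arrives on the master side (tty echo and/or
# the program's own output).
data = os.read(master_fd, 1024)
os.write(master_fd, b"\x04")  # Ctrl-D at line start: EOF, so cat exits
proc.wait(timeout=10)
os.close(master_fd)

assert b"hello tty" in data
print("tty round-trip ok")
```

A Python-based suite would wrap each of these wiring variants (pipe, pty, fifo, redirected file) in ordinary test functions under `unittest` or similar.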
I might have a preference for using something like bats, which could avoid requiring Python to be present and only require Bash.
EDIT: However, it could be that requiring Python is preferable on Windows compared to Bash? Would we expect to use the same process test suite on both Unix and Windows?
What about using Pony itself, and/or requiring the previous ponyc release to be present for testing?
Looking at doing this since it's still open. Any suggestions before I start?
Is anyone aware of any Unix testing framework/harness for these types of tests? I'd prefer an existing solution (probably) to rolling our own. My assumption, though, is that "the existing solution" would be relatively lightweight.
It looks like a very open-ended requirement. I think understanding, at least at a high level, what you think needs testing is probably where we could start? @SeanTAllen's suggestion of seeking out an existing framework (if one exists) would have the added advantage that the question of what to test has already been worked out.
Thank you for raising your hand to help with this.
Tagging @greatmazinger -- Github may not have notified you of my answer to your question above, so tagging you just in case.
Ah thanks for the heads up. Will start on this this week.