Automated testing
Would be really nice to integrate jenkins-ci or something like that.
Would require a flag for reading the data from stdin instead of running xrandr.
I don't think that a flag is required. The script runs xrandr from $PATH, so
#!/bin/sh
#
# Put a wrapper script ahead of the real xrandr in $PATH, so the test
# can intercept (or fake) its output.
TEST_DIRECTORY=$(mktemp -d)
[ -n "${TEST_DIRECTORY}" ] && [ -d "${TEST_DIRECTORY}" ] || exit 1
cat > "${TEST_DIRECTORY}/xrandr" <<EOF
#!/bin/sh
echo "Test script, could output a fixture instead of calling '\$*'" >&2
exec /usr/bin/xrandr "\$@"
EOF
chmod a+x "${TEST_DIRECTORY}/xrandr"
export PATH="${TEST_DIRECTORY}:${PATH}"
./autorandr.py
rm -rf "${TEST_DIRECTORY}"
should work fine.
Automated testing certainly is nice to have, but frankly I don't think we'd benefit much from it here. autorandr doesn't have overly complex logic, xrandr output has been stable for at least some years, and beyond that there are mostly syntax errors to test for. That being said, if you're willing to write some useful (separate!) tests and a script that runs through them, we can of course set up a CI service (I think I'd prefer Travis) to run it automatically.
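Such a runner wouldn't need to be fancy. A minimal sketch, assuming a hypothetical tests/ directory where each test ships a run.sh that exits non-zero on failure (the layout is an assumption, nothing like it exists in the repo yet):

#!/bin/sh
# Minimal test-runner sketch. The tests/*/run.sh convention is
# hypothetical, purely for illustration.
FAILED=0
for test in tests/*/; do
    name=$(basename "$test")
    if sh "${test}run.sh" > "${test}output.log" 2>&1; then
        echo "ok: $name"
    else
        echo "FAIL: $name (see ${test}output.log)" >&2
        FAILED=1
    fi
done
exit $FAILED

A CI service would then only need to invoke this one script and check its exit status.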
Here's a script that builds all versions of the xrandr frontend, should be useful for testing: https://gist.github.com/phillipberndt/57e317f0b9619943f6d3
On my system, every version (except for 1_0_2, but I doubt that's still in use anywhere) works with autorandr.
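For instance, something along these lines could exercise autorandr against each build (the xrandr-versions/ path is an assumption about where the gist's build script leaves its binaries; --fingerprint is just used here as a cheap invocation that forces autorandr to parse the xrandr output, substitute whatever call you want to test):

#!/bin/sh
# Run autorandr against every locally built xrandr version.
# The xrandr-versions/*/ layout is assumed; adjust to your build paths.
for dir in xrandr-versions/*/; do
    version=$(basename "$dir")
    echo "Testing against xrandr $version"
    PATH="$dir:$PATH" ./autorandr.py --fingerprint \
        || echo "FAIL with xrandr $version" >&2
done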
See https://gist.github.com/phillipberndt/cc3e7e38d8c77b564aef for a more complete test environment.
After your latest PR, I'm starting to get your point regarding the need for more testing. My gist doesn't suffice, though, since it only tests whether autorandr parses the configurations correctly. We'd also need to be able to simulate changes in configurations.
So, IMO, what we need is a state-preserving test version of xrandr. Each test would consist of an initial state, an autorandr call, and an expected final state that is compared against the state the fake xrandr ends up in. It must also be configurable to generate error conditions, for example the "you cannot change more than two screens in one call" case, or "the WM crashes if all screens are disabled". Most importantly, generating such tests needs to be automated to some extent, so that users can produce tests themselves without knowing much about the internals if something doesn't work as expected for them.
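As a first sketch of such a test double, the fake xrandr could keep its state in a plain file: queries print the recorded configuration, and change requests are applied to (or at least logged against) that file. Everything below (the XRANDR_STATE variable, the file format) is an illustrative assumption, not a finished design:

#!/bin/sh
# Hypothetical state-preserving xrandr stand-in for tests.
STATE="${XRANDR_STATE:-/tmp/xrandr-state}"
case "$1" in
    ""|-q|--query|--verbose)
        # A query prints the current (fake) screen configuration.
        cat "$STATE"
        ;;
    --output)
        # Log the requested change; a real test double would rewrite
        # the state file accordingly, and this is also the place to
        # inject error conditions such as "cannot change more than two
        # screens in one call".
        echo "$@" >> "${STATE}.log"
        ;;
    *)
        echo "fake xrandr: unhandled arguments: $*" >&2
        exit 1
        ;;
esac

A test would then seed the state file with the initial configuration, run autorandr with this script first in $PATH, and diff the resulting state against the expected one.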