
Tests should run with or without the data from vcr's fixtures

topher200 opened this issue on Oct 07 '17 · 8 comments

vcr seems like a great tool for rapid and offline testing! However, it seems to be masking changes to the underlying APIs that the plugins depend on. If I clear out its fixtures, many tests start failing for me. Here are my test results:

test/test_cmd.py ..
test/test_limbo.py ......................
test/test_plugins/test_banner.py ..
test/test_plugins/test_calc.py ..
test/test_plugins/test_commit.py F
test/test_plugins/test_dog.py .
test/test_plugins/test_flip.py ....
test/test_plugins/test_gif.py F.
test/test_plugins/test_github.py F
test/test_plugins/test_google.py .....
test/test_plugins/test_help.py ....
test/test_plugins/test_image.py F.
test/test_plugins/test_map.py .
test/test_plugins/test_stock.py F..F.
test/test_plugins/test_stockphoto.py ..
test/test_plugins/test_tag.py F
test/test_plugins/test_weather.py FF
test/test_plugins/test_wiki.py F.
test/test_plugins/test_youtube.py F.

(my stockphoto.py isn't failing because of #122)


Steps to reproduce:

  1. Run make test ✅ tests pass!
  2. Run rm test/fixtures/*.yaml
  3. Run make test ❌ observe many failing tests

topher200 · Oct 07 '17 03:10

Obviously we should fix the failing tests.

The second part of the issue is that the fixtures are masking problems with the tests and plugins. I thought about adding a Makefile target to clear out the fixtures, but the more I thought about it... should they be committed at all? If we don't commit vcr's output (and add it to .gitignore instead), TravisCI will test against the real APIs on each run.

topher200 · Oct 07 '17 05:10

The fixtures are there to avoid testing against the real APIs on each run; ideally a test run would not hit the network at all. (Right now I think a couple of the tests do, and I've been too lazy to fix them.)
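
(For what it's worth, vcrpy's record modes can make that guarantee explicit. A minimal sketch, not current limbo behavior: the test name is hypothetical, and passing record_mode is my addition.)

    import vcr

    from commit import on_message  # imported as in test/test_plugins/test_commit.py

    def test_commit_offline():
      # record_mode='none' forbids recording: a request that isn't already in
      # the cassette raises CannotOverwriteExistingCassetteException instead
      # of touching the network, so a passing run was fully offline.
      with vcr.use_cassette('test/fixtures/commit.yaml', record_mode='none'):
        ret = on_message({"text": u"!commit"}, None)
        assert ret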

llimllib · Oct 07 '17 18:10

Having each test run NOT hit the real APIs is awesome and a great feature for this project. vcr is a really elegant solution to the problem - I'd never heard of it before and I've been loving it for local dev/testing.

I didn't realize until now that some of the tests were written to check specifically against the output that the fixture happened to save. Take test_commit.py for example. If I remove the fixture and run pytest -s test/test_plugins/test_commit.py, I get this error:

    def test_commit():
      with vcr.use_cassette('test/fixtures/commit.yaml'):
        ret = on_message({"text": u"!commit"}, None)
>       assert 'stuff' in ret
E       AssertionError: assert 'stuff' in 'derp, helper method rename\n'

test/test_plugins/test_commit.py:15: AssertionError

I feel like that's selling the tests short! There's nothing wrong with the plugin, the API, or the test setup; the API simply returns different content now than it did when the cassette was recorded, so an assertion pinned to specific output can only ever match the recording. The test should pass. If it doesn't, we can never use our plugin tests to check for changes to external APIs, and issues like #122 may stay hidden forever.


Could a good balance be...

  • fixtures are committed to the project (current behavior)
  • tests should pass if the fixtures are deleted before the test run (new behavior! see the note below)

If yes, I'll make the necessary modifications to the current tests.
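
(A note on that second bullet, since the thread doesn't spell it out: vcr's default record mode is "once", meaning a cassette that exists is replayed offline, while a missing cassette causes vcr to hit the real API and record a fresh one. Deleting the fixtures therefore doesn't disable vcr; it only breaks assertions pinned to the old recordings, which is what the change proposed below addresses.)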

topher200 · Oct 09 '17 09:10

Let's take the commit plugin test as an example; if we don't use a fixture to specify the return value of the commit API, what can we test? Just that the website returned a 200? How do we know that the plugin worked correctly?

llimllib · Oct 09 '17 14:10

Great discussion question!

I'd do this:

diff --git a/test/test_plugins/test_commit.py b/test/test_plugins/test_commit.py
index 9e44733..a5e725f 100644
--- a/test/test_plugins/test_commit.py
+++ b/test/test_plugins/test_commit.py
@@ -2,6 +2,7 @@
 import os
 import sys
 
+import six
 import vcr
 
 DIR = os.path.dirname(os.path.realpath(__file__))
@@ -12,4 +13,5 @@ from commit import on_message
 def test_commit():
   with vcr.use_cassette('test/fixtures/commit.yaml'):
     ret = on_message({"text": u"!commit"}, None)
-    assert 'stuff' in ret
+    assert isinstance(ret, six.string_types), ret
+    assert len(ret) > 0

That code answers three questions:

  1. did the plugin correctly match on our input text?
  2. did the plugin return some kind of text response?
  3. does that text response have something in it?
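
(In case six.string_types is unfamiliar: it's (basestring,) on Python 2 and (str,) on Python 3, so the isinstance check accepts both unicode and plain str return values. A quick illustration:)

    import six

    # Both a unicode literal and a plain str satisfy the check, so the
    # assertion is portable across Python 2 and 3.
    assert isinstance(u'derp, helper method rename\n', six.string_types)
    assert isinstance('any plain string', six.string_types)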

topher200 · Oct 09 '17 17:10

To ask the question differently, what is the expected path to find issues like #122? How do we know when an API has changed?

topher200 · Oct 09 '17 18:10

OK, that's a good question, and one I'd thought about but not deeply enough yet.

I think the right answer is to have a separate "network" test suite. So, if you run make test, you get all the non-network tests, but if you run make test-network you get the network tests. If you run make test-all, you get both.
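
(A sketch of how the split might be wired up with pytest markers; the marker name, the test name, and the exact pytest invocations are assumptions rather than settled decisions:)

    import pytest
    import six

    from commit import on_message  # imported as in the existing tests

    # A custom 'network' marker would let the Makefile targets map to:
    #   make test          ->  pytest -m 'not network' test/
    #   make test-network  ->  pytest -m network test/
    #   make test-all      ->  pytest test/
    @pytest.mark.network
    def test_commit_live():
      # No cassette here: this hits the real API, which is what would surface
      # upstream changes like #122.
      ret = on_message({"text": u"!commit"}, None)
      assert isinstance(ret, six.string_types) and len(ret) > 0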

I'm currently not convinced that Travis should run the network tests, simply because they'll be much slower than the non-network tests.

How does this solution strike you?

llimllib · Oct 12 '17 14:10

That solution sounds great! I'll work on it when I get the chance.

topher200 · Oct 12 '17 18:10