python-zulip-api

Bot tests are not being fully executed

Open neiljp opened this issue 6 years ago • 7 comments

I first noticed this while exploring the use of pytest, which should work with unittest.TestCase, but pytest couldn't find a bot dependency (tweepy for twitpost). Then I noticed that this bot was not even being tested!
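
(For reference, pytest collects plain unittest.TestCase subclasses out of the box, so in principle no test changes are needed; collection only breaks when importing the test module fails, e.g. on a missing dependency like tweepy. A minimal sketch of the kind of test involved; the names are made up:)

```python
# Hypothetical minimal bot test module; pytest discovers unittest.TestCase
# subclasses automatically, so `pytest` and `python -m unittest` both run it.
# Collection fails only if a module-level import (e.g. tweepy) is missing.
import unittest


class ExampleBotTest(unittest.TestCase):
    def test_echo_reply(self) -> None:
        # Stand-in assertion; a real bot test would exercise handle_message.
        self.assertEqual("echo: hi", "echo: " + "hi")


if __name__ == "__main__":
    unittest.main()
```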

Cases so far:

  • tools/test-bots: passes OK with 270 tests <-- what is seen in CI, presumably?
  • tools/test-bots run separately for each bot (scripted; see the sketch after this list): fails 5 times, with 283 tests in total
  • Some bots are not being tested at all, and no error/failure is indicated.
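
The per-bot runs above were scripted; a rough sketch of the idea (a hypothetical script, assuming test-bots accepts a single bot name as an argument, as used later in this thread):

```python
#!/usr/bin/env python3
# Hypothetical sketch: run tools/test-bots once per bot directory and
# report which bots fail. Paths are illustrative of this repo's layout.
import os
import subprocess
import sys

BOTS_DIR = "zulip_bots/zulip_bots/bots"

failing_bots = []
for bot in sorted(os.listdir(BOTS_DIR)):
    if bot.startswith("__") or not os.path.isdir(os.path.join(BOTS_DIR, bot)):
        continue
    result = subprocess.run(["tools/test-bots", bot])
    if result.returncode != 0:
        failing_bots.append(bot)

print("failing bots:", ", ".join(failing_bots) or "none")
sys.exit(1 if failing_bots else 0)
```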

neiljp avatar May 26 '18 03:05 neiljp

@zulipbot claim

neiljp avatar May 26 '18 03:05 neiljp

With some simplification and common methods for test discovery in test-bots, both test-bots [bot-names-in-bots-dir] and test-bots currently find 295 tests, with 14 failures.

This is after adding __init__.py files to many bot directories; without them, the current discovery misses those bots entirely.

WIP PR incoming for this, likely tomorrow, though more work will be needed to fix the tests.
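
For reference, this matches how unittest's default discovery behaves: it only descends into directories that are importable packages, so a bot directory without an __init__.py is skipped silently rather than reported. A minimal sketch (paths are illustrative of this repo's layout):

```python
# Minimal sketch: unittest discovery skips directories that are not
# packages (no __init__.py), so their tests vanish without any error.
import unittest

loader = unittest.defaultTestLoader
suite = loader.discover(
    start_dir="zulip_bots/zulip_bots/bots",  # illustrative path
    pattern="test_*.py",
    top_level_dir="zulip_bots",
)
print("collected", suite.countTestCases(), "tests")
```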

neiljp avatar May 26 '18 06:05 neiljp

Awesome, thanks for discovering this! I wonder what the right tooling is to prevent folks from adding bots without __init__.py files...
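
One option would be a small CI check that fails whenever a bot directory is missing its __init__.py. A hedged sketch (the script name and exact path are made up):

```python
#!/usr/bin/env python3
# Hypothetical CI check: fail if any bot directory lacks an __init__.py,
# so new bots can't land in a state where discovery silently skips them.
import os
import sys

BOTS_DIR = "zulip_bots/zulip_bots/bots"

missing = [
    bot
    for bot in sorted(os.listdir(BOTS_DIR))
    if os.path.isdir(os.path.join(BOTS_DIR, bot))
    and not bot.startswith("__")
    and not os.path.exists(os.path.join(BOTS_DIR, bot, "__init__.py"))
]

if missing:
    print("Bots missing __init__.py:", ", ".join(missing))
    sys.exit(1)
```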

timabbott avatar May 26 '18 14:05 timabbott

I'm currently working on fixing the bot tests.

I have an alternative solution for the test discovery which seems simpler in terms of identifying tests, though with the other issues found I will explore whether it is necessary. Update: my solution does at least error out when an explicitly named bot's tests can't be found, and naming bots explicitly is likely how test-bots gets used during bot development, so that may help?

Agreed that detecting bots with no __init__.py file is definitely important to remedy!
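
The error-out behaviour could look roughly like this (a sketch, not the actual test-bots code; names and paths are illustrative):

```python
# Sketch of validating explicitly named bots before running any tests, so a
# typo or a bot whose tests can't be found fails loudly instead of silently
# running zero tests. Path and entry point are illustrative.
import os
import sys

BOTS_DIR = "zulip_bots/zulip_bots/bots"


def validate_bot_names(names):
    available = {
        d
        for d in os.listdir(BOTS_DIR)
        if os.path.isdir(os.path.join(BOTS_DIR, d)) and not d.startswith("__")
    }
    unknown = [name for name in names if name not in available]
    if unknown:
        print("Unknown bot(s):", ", ".join(unknown))
        sys.exit(1)


if __name__ == "__main__":
    validate_bot_names(sys.argv[1:])
```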

neiljp avatar May 26 '18 20:05 neiljp

So master has the following, with different test discovery for each:

  • test-bots -> 270 tests pass
  • test-bots `ls zulip_bots/zulip_bots/bots | grep -v __ | grep -v bridge | grep -v fifteen` -> 287 tests run, 5 errors

So far my branch:

  • fixes fifteen naming (for consistency, in case we want names to match, and to add _s)
  • adds 3 missing __init__.py files for bots with good tests -> 288 tests from test-bots
  • adds __init__.py for twitpost & adds the dependency to requirements.txt -> 292 tests from test-bots
  • adds __init__.py for witai and fixes test issues -> 295 tests from test-bots
  • amends test discovery -> only finds 293 tests (3 failing chess ones)
  • renames chess to chessbot (provisional name), to avoid it conflicting with the pypi module name -> 293 passing tests from test-bots
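
On the chess rename: a locally importable package named chess can shadow (or be shadowed by) the PyPI chess library, depending on sys.path order, so tests may import the wrong one. A rough illustration of the general pitfall (the path is illustrative):

```python
# Rough illustration of the name-clash pitfall: whichever "chess" appears
# first on sys.path wins, so a local bot package named "chess" can shadow
# the PyPI python-chess library (or vice versa).
import sys

sys.path.insert(0, "zulip_bots/zulip_bots/bots")  # illustrative path

import chess  # whichever "chess" is found first on sys.path wins

print(chess.__file__)  # shows which package was actually imported
```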

There is now "just":

  • the issue of which 2 tests aren't being run in my updated test-bots
  • ensuring bots have a __init__.py file during tests
  • fixing the merels bot tests; if __init__.py is added and library imports resolved, this leads to 334 tests, but 24 failures (this one should maybe be spun out into a separate issue)

neiljp avatar May 26 '18 22:05 neiljp

Splitting merels bot tests as a separate issue seems very reasonable.

I merged all but the last commit, since they seemed reasonable and worked for me. Thanks for cleaning this up!

timabbott avatar May 28 '18 06:05 timabbott

Hello @neiljp, you have been unassigned from this issue because you have not updated this issue or any referenced pull requests for over 14 days.

You can reclaim this issue or claim any other issue by commenting @zulipbot claim on that issue.

Thanks for your contributions, and hope to see you again soon!

zulipbot avatar Jun 07 '18 12:06 zulipbot