Overcome monkeypatch for testing
I think we should start thinking about how to move away from the current testing methodology based on monkeypatches.
The motivations I found for this are:
- overall testing capabilities and stability: we would be able to test each analyzer more extensively, resulting in a more stable and reliable project
- we would be able to test the actual implementation of more complex analyzers in depth
- a better experience for first-time contributors who already know how to use `unittest` and don't have to learn a new testing methodology
Of course, since most of the analyzers are now based on monkeypatches, we can proceed in three ways:
- We can remove monkeypatch entirely and rewrite all tests for each plugin using `unittest`; this would require quite a lot of work, but the result would be much cleaner (see the sketch after this list)
- We can stop using monkeypatch in new plugins and keep the existing ones as they are now; each new plugin should then have its tests written with `unittest`
- We can keep using monkeypatch in really simple plugins that only rely on external APIs and don't involve much computation (we already have some of them), and test only the more complex ones
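As a rough illustration of the first option, a per-plugin test written with plain `unittest` could mock only the analyzer's external HTTP call. Everything below (the `MyAnalyzer` class, the example API URL) is a hypothetical stand-in, not actual IntelOwl code:

```python
import unittest
from unittest.mock import MagicMock, patch

import requests


class MyAnalyzer:
    """Hypothetical stand-in for a simple observable analyzer querying an external API."""

    def run(self, observable: str) -> dict:
        response = requests.get(f"https://api.example.com/lookup/{observable}")
        response.raise_for_status()
        return {"score": response.json()["score"]}


class MyAnalyzerTestCase(unittest.TestCase):
    @patch("requests.get")
    def test_run_parses_response(self, mock_get):
        # the mock lives only in this test module, not in the analyzer code
        mock_get.return_value = MagicMock(
            status_code=200, json=lambda: {"score": 42}
        )
        result = MyAnalyzer().run("8.8.8.8")
        self.assertEqual(result, {"score": 42})
        mock_get.assert_called_once()


if __name__ == "__main__":
    unittest.main()
```

The key point is that the mocking lives entirely in the test, so the production module carries no test-only code.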
This change may also affect the CI pipeline inside `.github/`, so we should consider updating it if necessary.
I just wrote down some ideas, feel free to participate and add other ideas or suggestions 😄
I agree that the currently implemented test framework has some limitations.
At this time, it is already possible to write custom tests for each analyzer if someone wants to. One of the few examples can be found here: https://github.com/intelowlproject/IntelOwl/blob/master/tests/api_app/analyzers_manager/observable_analyzers/test_nvd_cve.py
However, at the moment this requires duplicating the mocking, which must appear both in the main module and in the test module to work properly.
Ideally, it would make sense to create a custom Python file for each analyzer, containing its own patch and execution, and to remove the old monkeypatches from the code.
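A hedged sketch of what such a per-analyzer test file could look like, assuming a simple analyzer that wraps one HTTP call (the names and response payload below are illustrative only): the patch is started in `setUp` and undone via `addCleanup`, so the analyzer module itself stays free of mocking code.

```python
import unittest
from unittest.mock import MagicMock, patch

import requests


class MyAnalyzerFileTestCase(unittest.TestCase):
    def setUp(self):
        # start the patch here and guarantee it is undone after every test,
        # instead of shipping a monkeypatch inside the analyzer module itself
        patcher = patch(
            "requests.get",
            return_value=MagicMock(status_code=200, json=lambda: {"honeypot": 0.5}),
        )
        self.mock_get = patcher.start()
        self.addCleanup(patcher.stop)

    def test_mock_is_active(self):
        # in a real test file this would instantiate the analyzer and call
        # its start()/run(); here we only show that the patch is in effect
        response = requests.get("https://api.example.com/honeyscore/1.2.3.4")
        self.assertEqual(response.json(), {"honeypot": 0.5})


if __name__ == "__main__":
    unittest.main()
```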
One of the advantages of the monkeypatch system was not only the reduction of the code to write: thanks to the `test_subclasses`, we also executed jobs for each type of analyzable, covering every analyzer. By removing the monkeypatches from the main modules, this would no longer be possible.
Following unittest principles, this makes sense because we should first test each part of the project independently from the others. At the same time, such "more integrated" tests would still be useful to verify that the main pieces work together. So it could make sense to rewrite these `test_subclasses` for `ObservableAnalyzerTestCase` and `FileAnalyzerTestCase`: the solution could be to test the plugin `start` and related methods more appropriately (see the sketch below).
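For instance, a rewritten `ObservableAnalyzerTestCase` could iterate over the analyzer subclasses inside the suite itself, using `subTest` so every analyzer is still exercised while failures are reported individually. This is a minimal self-contained sketch; the classes below stand in for the real hierarchy in `api_app.analyzers_manager` and are not its actual code:

```python
import unittest


class ObservableAnalyzer:
    """Stand-in for the real base class in api_app.analyzers_manager."""

    def run(self, observable: str) -> dict:
        raise NotImplementedError


class IpReputation(ObservableAnalyzer):
    def run(self, observable):
        return {"malicious": False}


class DomainAge(ObservableAnalyzer):
    def run(self, observable):
        return {"age_days": 100}


class ObservableAnalyzerTestCase(unittest.TestCase):
    def test_every_registered_analyzer_runs(self):
        # mirror the old test_subclasses guarantee: every analyzer subclass
        # gets exercised, but failures are now reported per analyzer
        for analyzer_cls in ObservableAnalyzer.__subclasses__():
            with self.subTest(analyzer=analyzer_cls.__name__):
                report = analyzer_cls().run("8.8.8.8")
                self.assertIsInstance(report, dict)


if __name__ == "__main__":
    unittest.main()
```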
Given that the current test_subclasses mechanism ensures that each analyzer is tested for all supported observable types, how can we design a new testing strategy using unittest.mock that maintains this level of coverage while improving test isolation and maintainability? Would using a test factory or parameterized tests be a viable approach?
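One way parameterized tests could work, sketched under the assumption of a static coverage matrix (the `ANALYZER_MATRIX` mapping, the analyzer names, and the mocked `requests.get` are all illustrative): a small factory generates one test method per (analyzer, observable type) pair, preserving the per-type coverage of `test_subclasses` while keeping each case isolated.

```python
import unittest
from unittest.mock import MagicMock, patch

# hypothetical coverage matrix: every analyzer paired with each observable
# type it supports, mirroring what test_subclasses used to guarantee
ANALYZER_MATRIX = {
    "MyIpAnalyzer": ["ip"],
    "MyGenericAnalyzer": ["ip", "domain", "url", "hash"],
}


def make_test(analyzer_name: str, observable_type: str):
    @patch("requests.get")
    def test(self, mock_get):
        mock_get.return_value = MagicMock(status_code=200, json=lambda: {})
        # a real implementation would look up the analyzer class by name,
        # build a sample observable of the right type, and call start()
        self.assertIn(observable_type, ("ip", "domain", "url", "hash"))

    test.__name__ = f"test_{analyzer_name}_{observable_type}"
    return test


class AnalyzerCoverageTestCase(unittest.TestCase):
    pass


# attach one generated test method per (analyzer, observable type) pair,
# so the standard unittest runner discovers and reports each one separately
for _name, _types in ANALYZER_MATRIX.items():
    for _obs_type in _types:
        _test = make_test(_name, _obs_type)
        setattr(AnalyzerCoverageTestCase, _test.__name__, _test)


if __name__ == "__main__":
    unittest.main()
```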
This has been merged into `develop`.