[Good First Issue] [NNCF] Cover NNCF code with the least coverage percentage with unit tests to at least 80%
Context
NNCF uses `coverage` to run its precommit checks and currently has an admirable coverage rate of 90% across the repository. However, there are still some files with <80% coverage, and some with no coverage at all. Some of these are located in the `experimental` folder and do not have a strict coverage requirement, and some are special files such as `__init__.py` or `setup.py` which are not easily unit-tested, but most contain actual production code, and many of those have exactly 0.00% coverage. We need such files covered by precommit-scope unit tests so that potential bugs do not slip through.
Task
- Review the current per-file coverage percentage at Codecov (sort by coverage): https://app.codecov.io/gh/openvinotoolkit/nncf?search=&displayType=list
- For the following files in the list (click each to see its per-line coverage on Codecov), add unit tests (in `pytest` style) under the `tests` directory, in a subdirectory corresponding to the tested backend: e.g. a test for `nncf/torch/foo.py` would go to `tests/torch/test_foo.py`, and a test for `nncf/common/bar.py` to `tests/common/test_bar.py` (a minimal layout sketch is shown after this task list).
- [ ] nncf/common/utils/decorators.py
- [ ] nncf/common/utils/os.py
- [ ] nncf/common/utils/patcher.py
- [ ] nncf/common/utils/timer.py
- [ ] nncf/config/schema.py
- [ ] nncf/openvino/quantization/quantize_model.py
- [ ] nncf/quantization/algorithms/accuracy_control/algorithm.py
- [ ] nncf/quantization/algorithms/accuracy_control/evaluator.py
- [ ] nncf/quantization/algorithms/accuracy_control/openvino_backend.py
- [ ] nncf/quantization/algorithms/accuracy_control/ranker.py
- [ ] nncf/quantization/algorithms/accuracy_control/subset_selection.py
- [ ] nncf/quantization/algorithms/hyperparameter_tuner/algorithm.py
- [ ] nncf/quantization/algorithms/hyperparameter_tuner/param_grid.py
- [ ] nncf/quantization/algorithms/post_training/algorithm.py
- [ ] nncf/telemetry/wrapper.py
- [ ] nncf/tensorflow/helpers/model_creation.py
- [ ] nncf/tensorflow/pruning/filter_pruning/functions.py
- [ ] nncf/tensorflow/tensor_statistics/statistics.py
- [ ] nncf/tensorflow/tf_internals.py
- [ ] nncf/tensorflow/utils/node.py
- [ ] nncf/tensorflow/utils/scopes_handle.py
- [ ] nncf/torch/accuracy_aware_training/utils.py
- [ ] nncf/torch/binarization/binarize_functions.py
- [ ] nncf/torch/binarization/extensions.py
- [ ] nncf/torch/pruning/export_utils.py
- [ ] nncf/torch/pruning/operations.py
- [ ] nncf/torch/quantization/base_ctrl.py
- [ ] nncf/torch/quantization/precision_init/autoq_init.py
- [ ] nncf/torch/quantization/statistics.py
- [ ] nncf/torch/sparsity/const/algo.py
- [ ] nncf/torch/sparsity/rb/algo.py
- [ ] nncf/torch/sparsity/rb/loss.py
- Open a PR into the NNCF repo, wait for the automatic pre-commit checks to pass and for Codecov to update its automatic status comment with the coverage difference introduced by the PR. Make sure that the Codecov status comment shows an increase of coverage in the given code file (or files) to at least 80% coverage.
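To make the expected layout concrete, here is a minimal, hypothetical sketch of such a test. The module `nncf/common/utils/bar.py` and its `flatten` helper are made up for illustration only; the real files to cover are the ones in the checklist above. In the repository, this test would go to `tests/common/test_bar.py`.

```python
# tests/common/test_bar.py -- hypothetical example, not an actual NNCF module
import pytest

from nncf.common.utils.bar import flatten  # hypothetical helper under test


@pytest.mark.parametrize(
    "nested, expected",
    [
        ([[1], [2, 3]], [1, 2, 3]),  # regular case
        ([], []),                    # edge case that may exercise otherwise-missed branches
    ],
)
def test_flatten(nested, expected):
    assert flatten(nested) == expected
```

Parametrizing the inputs keeps each covered branch as a separate, clearly named test case, which also fits the tip below about extending existing parametrized tests.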
Tips
- Start with the files having the least amount of coverage.
- Make sure that the test cases you are adding actually execute the lines that are marked as "missed" in the Codecov per-line display (for instance, for nncf/quantization/algorithms/accuracy_control/ranker.py). One way to check this locally is sketched after these tips.
- If at all possible, try to extend existing parametrized tests with parameters that trigger the missed-line logic.
- Try to add new test cases to the existing test files that contain the tests for the tested component; otherwise, create a new test file in a proper subdirectory with a proper name, using existing test files as a guideline.
- If you encounter dead code (for example, helper functions that are only defined and never used anywhere), delete it instead of covering it with tests. This will raise the coverage of the corresponding file better than anything else.
- Follow the general coding guidelines in PyGuide.md when preparing the test code.
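One possible way to verify locally that the new tests actually hit the previously missed lines, before Codecov runs on the PR, is to run pytest under the pytest-cov plugin. This is only a sketch: it assumes pytest-cov is installed in your environment, the paths and keyword filter are hypothetical, and the precommit CI may use a different invocation.

```python
# Local coverage check -- equivalent to running from the repository root:
#   pytest tests/common -k bar --cov=nncf/common/utils --cov-report=term-missing
import pytest

pytest.main(
    [
        "tests/common",               # backend-specific test subdirectory
        "-k", "bar",                  # hypothetical keyword filter for the new tests
        "--cov=nncf/common/utils",    # package whose coverage we want to inspect
        "--cov-report=term-missing",  # prints the line numbers that are still uncovered
    ]
)
```

The `term-missing` report lists the uncovered line numbers per file, which can be compared against the "missed" lines shown in the Codecov per-line view.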
Feel free to split this task into multiple tasks, e.g. by implementing it per file. We can help to create sub-tasks in that case.
@vshampor @MaximProshin I'd like to be assigned this issue. Also, since this will be my first time contributing, do you have any advice for me? And, is there a discord or other means for communication for the community I can join?
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
And, is there a discord or other means for communication for the community I can join?
yes sir! Discord
@Saharsh365 , thanks for your interest! The description is quite detailed and there are also tips. My recommendation would be to start with an analysis of the current Codecov results, select one of the weakly covered files, and implement a first simple test. If that works, the rest can be done by analogy. @vshampor will support you in case of questions. Don't hesitate to ask them.
@vshampor The code coverage analysis reports are not working for me anymore. Could you please help me solve this issue?
@Saharsh365 we won't be able to help you unless you describe the problem you are encountering in a more specific fashion. I checked that the reports are visible for non-authenticated viewers.
Oh, I'm not sure what happened, but I believe that from Friday evening to yesterday the reports were not being displayed. I can see them now, however, and I believe that may have been due to the weekend? Apologies for taking up your time.
I would like to drop this issue since I am not sure how to proceed with this task. I used multiple resources to research how to write code coverage tests, and I am still unsure how to apply that knowledge to this codebase.
Hello @Saharsh365, I'm unassigning you, but feel free to re-pick the task if you wish to - we're here to answer questions; perhaps asking them on the Intel DevHub channel would be easier.
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
Hello! I am a first-time contributor and I'm excited to help!
I have written unit tests for nncf/common/utils/decorators.py and nncf/common/utils/os.py.
May I know if there are other ways to check the code coverage before making the PR, other than coverage.py? Would it be advised for me to open a PR for just these 2 files, to check if I am on the right track? :)
I'm also partway through making tests for nncf/common/utils/patcher.py and I'd like to ask some questions on it. In particular, I'm not clear about the `patch` method.
For the `patch` method, it seems that there are 3 separate cases: module-, class-, and instance-level functions. However, I'm not sure when the instance-level case should be triggered: when an instance method is passed as the 2nd argument to `patch`, the class-level case is still triggered, because line 44, `obj_cls, fn_name = self.import_obj(obj_cls)`, turns `obj_cls` into a class object and line 55 then always evaluates to True.
Could this method possibly be bugged, or am I understanding it incorrectly? Thank you for reading, I really appreciate any help.
Greetings!
The implementation may have subtle bugs, in particular because the coverage is missing. I suggest that you add a test case for it. Here's the scenario that the "instance method" patching is supposed to start from:
```python
from nncf.common.utils.patcher import Patcher  # assuming the class lives in the file under discussion


class Model:
    def forward(self, x):
        return x

model = Model()
assert model.forward(7) == 7  # `forward` is looked up on Model.__dict__, i.e. the class object

# now someone (not us) patches the forward with their own implementation:
def new_forward(x):
    return 0

model.forward = new_forward  # creates a new *instance* attribute `forward` which masks the original function definition from the class
assert model.forward(7) == 0

# now `model.forward` is what we call here an "instance method", and now we want to patch it
def our_forward(x):
    return 42

patcher = Patcher()
patcher.patch(model.forward, our_forward)
assert model.forward(7) == 42
```
Try implementing that test case, maybe debugging the flow of the patcher for this case to see if it works as expected (or not). If you have further questions, feel free to post these. I think @nikita-savelyevv might also pitch in with some help on these.
Hi @Candyzorua. Regarding your question: if you call `patch()` on a method of an instantiated class object, then it should fall into the branch you mention. This will happen, for example, if you call `PATCHER.patch(test_obj.assert_wrapper_stack_method, wrapper1)` for the `test_obj` which is created at https://github.com/openvinotoolkit/nncf/blob/develop/tests/common/test_monkey_patching.py#L51 .
@vshampor @nikita-savelyevv Thank you both for your input. Duly noted and I will implement the ideas you have mentioned at my earliest availability.
Hello, I wrote unit tests for the first 5 files and have made the following PR, am continuing with the rest of the tests :)
https://github.com/openvinotoolkit/nncf/pull/2468 has been merged
Hello again,
I have been writing tests for nncf/quantization recently and encountered an issue while making tests for nncf/quantization/algorithms/accuracy_control/algorithm.py.
The function I am testing is `_collect_original_biases_and_weights`. The test code is as such: I get a synthetic linear model from nncf/tests/openvino/native/models.py and quantize it. However, when I pass it to `_collect_original_biases_and_weights`, I get an error because the nodes of the quantized graph have None for layer attributes.
I am not sure if the nodes of the quantized graph having None for layer attributes is a bug, or if I have done something incorrect in the testing setup. Any help would be appreciated!
In addition, I would like to clarify whether the method through which I am obtaining synthetic models and quantizing them is good practice, or whether I should hardcode quantized models.
@andrey-churkin to advise on the accuracy control algorithm requirements
Hello, I would like to follow-up on this, please :)
Hi @Candyzorua, sorry for the delay in response. An instance of the `QuantizationAccuracyRestorer` class works with the graph of a quantized model to which weight compression was not applied. The `nncf.quantize()` method compresses weights by default for the OpenVINO backend. Therefore, to resolve the issue, we should turn off weight compression. To do this, please use the following code snippet:
```python
from nncf.openvino.quantization.backend_parameters import BackendParameters

quantized_model = nncf.quantize(
    initial_model,
    dataset,
    advanced_parameters=nncf.AdvancedQuantizationParameters(
        backend_params={BackendParameters.COMPRESS_WEIGHTS: False}
    ),
)
```
Please let me know if it works for you.
Hi @andrey-churkin, thank you very much for your help. With your suggestion, I was able to test `_collect_original_biases_and_weights`. 😄 Most functions in `algorithm.py` are now covered; however, I am finding it quite difficult to test the longer and more complex `apply` function, so I am leaving it out for now.
I have opened a PR with tests for 5 more files regarding quantization and accuracy control functions.
Hello! @Candyzorua Are all tests/tasks covered? If not, can I pick some? @vshampor I wish to work on this issue :). As @MaximProshin said, we can split this into sub-tasks. Thank you.
Hi @RitikaxShakya, apologies for the late reply. I haven't covered all the files. Here is a list showing which files have already been covered; a ticked checkbox means the file is covered.
- [x] nncf/common/utils/decorators.py
- [x] nncf/common/utils/os.py
- [x] nncf/common/utils/patcher.py
- [x] nncf/common/utils/timer.py
- [x] nncf/config/schema.py
- [ ] nncf/openvino/quantization/quantize_model.py
- [x] nncf/quantization/algorithms/accuracy_control/algorithm.py
- [x] nncf/quantization/algorithms/accuracy_control/evaluator.py
- [x] nncf/quantization/algorithms/accuracy_control/openvino_backend.py
- [x] nncf/quantization/algorithms/accuracy_control/ranker.py
- [x] nncf/quantization/algorithms/accuracy_control/subset_selection.py
- [ ] nncf/quantization/algorithms/hyperparameter_tuner/algorithm.py
- [ ] nncf/quantization/algorithms/hyperparameter_tuner/param_grid.py
- [ ] nncf/quantization/algorithms/post_training/algorithm.py
- [ ] nncf/telemetry/wrapper.py
- [ ] nncf/tensorflow/helpers/model_creation.py
- [ ] nncf/tensorflow/pruning/filter_pruning/functions.py
- [ ] nncf/tensorflow/tensor_statistics/statistics.py
- [ ] nncf/tensorflow/tf_internals.py
- [ ] nncf/tensorflow/utils/node.py
- [ ] nncf/tensorflow/utils/scopes_handle.py
- [ ] nncf/torch/accuracy_aware_training/utils.py
- [ ] nncf/torch/binarization/binarize_functions.py
- [ ] nncf/torch/binarization/extensions.py
- [ ] nncf/torch/pruning/export_utils.py
- [ ] nncf/torch/pruning/operations.py
- [ ] nncf/torch/quantization/base_ctrl.py
- [ ] nncf/torch/quantization/precision_init/autoq_init.py
- [ ] nncf/torch/quantization/statistics.py
- [ ] nncf/torch/sparsity/const/algo.py
- [ ] nncf/torch/sparsity/rb/algo.py
- [ ] nncf/torch/sparsity/rb/loss.py
I think it's a great idea to split up the work. Perhaps you can work on the remaining quantization-related files or the TensorFlow ones? I am continuing work on the PyTorch ones for now. Please let me know, thank you!
@Candyzorua Hello! Thank you for replying :). That's great! I was already looking to pick up the TensorFlow and quantization-related files, so I will start working on them! Thank you!
Hi, so if @RitikaxShakya takes the TensorFlow and quantization files, I'll cover the remaining ones.
Hi @DaniAffCH, since I have started on nncf/torch/quantization, I think it would be awesome if you could work on nncf/torch/sparsity, nncf/torch/pruning, as well as nncf/telemetry. I think nncf/torch/binarization has been removed. Please let me know what you think, thanks!