MART
Automatically check training and eval mode differences
Model authors sometimes use nn.Module.training
to change the control flow of their model. This is problematic because we often assume that a model in training mode produces more or less the same result as in eval mode. We should detect when this is not the case and warn the user so they can take appropriate action!
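A minimal sketch of what such a check could look like (the helper name `check_train_eval_consistency` is hypothetical, and this assumes the model returns a single tensor; note that layers like Dropout and BatchNorm legitimately behave differently between modes, so a real implementation would need to account for or disable them to avoid false positives):

```python
import warnings

import torch


def check_train_eval_consistency(model, sample_input, atol=1e-6):
    """Warn if a model's output differs between train() and eval() mode."""
    was_training = model.training

    # Run the same input through the model in both modes.
    model.train()
    with torch.no_grad():
        train_output = model(sample_input)

    model.eval()
    with torch.no_grad():
        eval_output = model(sample_input)

    # Restore whatever mode the model was in before the check.
    model.train(was_training)

    if not torch.allclose(train_output, eval_output, atol=atol):
        warnings.warn(
            "Model output differs between train() and eval() mode; "
            "control flow may depend on nn.Module.training."
        )
```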
I will note that it is also possible to change control flow based on other things, like whether ground truth is present or not. Detecting this kind of behavior should probably be in scope too. This is done, for example, in this implementation of YOLOv4: https://github.com/AlexeyAB/Yet-Another-YOLOv4-Pytorch/blob/d80d6a20372598b6306b37218cb61533e8bd9592/model.py#L893
Thankfully, that code doesn't actually change anything about the output, just whether it computes a loss or not.
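For the ground-truth case, a similar hypothetical probe could compare predictions with and without targets. This sketch assumes the model accepts an optional targets argument and, like the YOLOv4 code linked above, returns its predictions either directly or as the last element of a tuple alongside a loss:

```python
import warnings

import torch


def check_target_dependence(model, sample_input, sample_targets):
    """Warn if supplying ground truth changes the model's predictions,
    rather than just adding a computed loss to the return value."""
    model.eval()
    with torch.no_grad():
        preds_without = model(sample_input)
        preds_with = model(sample_input, sample_targets)

    # Some models return (loss, predictions) when targets are given;
    # compare only the prediction tensors here.
    if isinstance(preds_without, tuple):
        preds_without = preds_without[-1]
    if isinstance(preds_with, tuple):
        preds_with = preds_with[-1]

    if not torch.allclose(preds_without, preds_with):
        warnings.warn(
            "Model predictions change when ground truth is supplied; "
            "control flow may depend on the presence of targets."
        )
```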