Evaluation during training and its statistics
Is your feature request related to a problem? Please describe.
I have two scenes, one for training and the other for testing. During training, I would like to evaluate the model in the testing scene every 20k steps, then collect the statistics and plot them on TensorBoard, so that I can see whether the model is really learning something rather than overfitting the training scene. Is this possible? Any suggestions for doing this?
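For context, a minimal sketch of the desired loop. Only `torch.utils.tensorboard.SummaryWriter` is a real API here; the gym-style `env` interface, the `policy` callable, and the log directory are placeholders for whatever your setup provides:

```python
# Hypothetical sketch: evaluate in a separate test environment every 20k
# steps and log the mean return to TensorBoard. `env` and `policy` are
# placeholders (gym-style interface assumed), not ML-Agents API.
import numpy as np
from torch.utils.tensorboard import SummaryWriter

EVAL_INTERVAL = 20_000   # steps between evaluations
EVAL_EPISODES = 10       # episodes averaged per evaluation

writer = SummaryWriter(log_dir="results/my-run/eval")  # placeholder path

def evaluate(env, policy, n_episodes=EVAL_EPISODES):
    """Run the current policy in the test scene; return the mean episode return."""
    returns = []
    for _ in range(n_episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
        returns.append(total)
    return float(np.mean(returns))

# Inside the training loop:
# if step % EVAL_INTERVAL == 0:
#     writer.add_scalar("Eval/MeanReward", evaluate(test_env, policy), step)
```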
Describe the solution you'd like
Being able to easily switch between training and testing phases by controlling some environment parameters.
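One way to get that switch with the existing `EnvironmentParametersChannel` from `mlagents_envs` is sketched below; the parameter name `test_mode`, the build path, and the C#-side handling are assumptions:

```python
# Sketch: toggle a train/test phase via environment parameters.
# EnvironmentParametersChannel is real mlagents_envs API; "test_mode" is a
# made-up key and must match what the scene reads on the Unity side, e.g.
# Academy.Instance.EnvironmentParameters.GetWithDefault("test_mode", 0f).
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.environment_parameters_channel import (
    EnvironmentParametersChannel,
)

params = EnvironmentParametersChannel()
env = UnityEnvironment(file_name="MyBuild", side_channels=[params])  # placeholder build

params.set_float_parameter("test_mode", 0.0)  # training phase
env.reset()
# ... train ...

params.set_float_parameter("test_mode", 1.0)  # testing phase
env.reset()  # the scene reads the new value on reset
# ... evaluate ...
```

The scene would then load/configure the test layout whenever `test_mode` is 1.0, so a single build can serve both phases.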
Hi @chenzhutian,
This is a useful feature. I logged it internally under MLA-2465 and we will prioritize accordingly.
Hi @maryamhonari, any pointers if I want to put together a quick hack for this feature? (working on a paper submission...)
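One possible quick hack, not an official recipe: drive the test build from a standalone script using the latest exported `.onnx` checkpoint. A sketch assuming a single vector observation and continuous actions; the checkpoint path is a placeholder, and the ONNX tensor names (`obs_0`, `continuous_actions`) vary across ML-Agents versions, so inspect `sess.get_inputs()` / `sess.get_outputs()` first:

```python
# Sketch: run the latest exported policy in the test scene and report the
# mean episode return. Assumes one vector observation and continuous
# actions; tensor names may differ by ML-Agents version.
import numpy as np
import onnxruntime as ort
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.base_env import ActionTuple

sess = ort.InferenceSession("results/my-run/MyBehavior.onnx")  # placeholder path
env = UnityEnvironment(file_name="TestSceneBuild")             # placeholder build
env.reset()
behavior = list(env.behavior_specs)[0]

returns = {}   # agent_id -> cumulative reward of the ongoing episode
finished = []  # returns of completed episodes

for _ in range(1000):  # fixed evaluation budget in environment steps
    decision, terminal = env.get_steps(behavior)
    for agent_id, r in zip(terminal.agent_id, terminal.reward):
        finished.append(returns.pop(agent_id, 0.0) + r)  # episode ended
    for agent_id, r in zip(decision.agent_id, decision.reward):
        returns[agent_id] = returns.get(agent_id, 0.0) + r
    if len(decision) > 0:
        obs = decision.obs[0].astype(np.float32)
        action = sess.run(["continuous_actions"], {"obs_0": obs})[0]
        env.set_actions(behavior, ActionTuple(continuous=action))
    env.step()

print("mean episode return:", np.mean(finished) if finished else float("nan"))
env.close()
```

The printed value could be written to TensorBoard with the same `SummaryWriter` call as in the first sketch, keyed by the training step the checkpoint was exported at.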