torchmetrics
Total Variation Loss
What does this PR do?
This PR adds Total Variation Loss. Fixes #972
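For context, here is a minimal sketch of what anisotropic total variation measures for a batch of images, assuming NCHW tensors; the function name and reduction are illustrative and not necessarily the API added by this PR:

```python
import torch


def total_variation(img: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation per image for an NCHW batch.

    Sums absolute differences between neighbouring pixels along the
    height and width dimensions, returning one value per image.
    """
    diff_h = (img[..., 1:, :] - img[..., :-1, :]).abs().sum(dim=(1, 2, 3))
    diff_w = (img[..., :, 1:] - img[..., :, :-1]).abs().sum(dim=(1, 2, 3))
    return diff_h + diff_w


# Example: per-image TV for a random batch of 4 RGB images.
print(total_variation(torch.rand(4, 3, 16, 16)).shape)  # torch.Size([4])
```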
Before submitting
- [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
- [x] Did you read the contributor guideline, Pull Request section?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?
PR review
Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃
Codecov Report
Merging #978 (49c32b3) into master (4aeb6cb) will decrease coverage by 48%. The diff coverage is 98%.
Additional details and impacted files
```
@@            Coverage Diff             @@
##           master     #978      +/-   ##
==========================================
- Coverage      86%      38%      -48%
==========================================
  Files         193      195        +2
  Lines       11404    11461       +57
==========================================
- Hits         9778     4303     -5475
- Misses       1626     7158     +5532
```
@ragavvenkatesan how is it going here? :rabbit: We are considering a new feature release next week; do you think we can finish this addition by then so it will be included? :pray:
So, I was waiting on you folks for some feedback on the initial code. I am working on tests this weekend; I can prioritize this for the next release since you brought it up.
So, @Borda, I think the metric is implemented. I am working on unit tests (taking suggestions for tests now). I have tested this on actual model training and it seems to be working as expected. Let me know how this looks.
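As an illustration of the kind of "tested on actual model training" check mentioned above, here is a hedged sketch of using a total-variation penalty as a regularizer; `model`, `images`, and the 1e-4 weight are placeholders rather than anything from this PR:

```python
import torch


def training_step(model: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Hypothetical training step: reconstruction loss plus a TV penalty."""
    recon = model(images)
    recon_loss = torch.nn.functional.mse_loss(recon, images)
    # Anisotropic total variation of the reconstruction (NCHW layout).
    tv = (
        (recon[..., 1:, :] - recon[..., :-1, :]).abs().sum()
        + (recon[..., :, 1:] - recon[..., :, :-1]).abs().sum()
    )
    # The 1e-4 weight is an arbitrary placeholder; tune it per task.
    return recon_loss + 1e-4 * tv / images.shape[0]
```

Watching the TV term shrink on reconstructions is a quick sanity check that the metric behaves as a smoothness penalty during training.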
Addressed all comments, @SkafteNicki. Anything on tests?
@ragavvenkatesan do we have some kind of reference implementation of this metric (sklearn, scipy, something else)? That is normally what we need for testing to ensure that the implementation is correct.
I don't think there is an easy reference implementation available anywhere. For reference, here is the hacky way that TensorFlow tests a similar loss in their pull request for this loss.
Note: their implementation differs slightly, as they only use the denoising loss, but I could do something similar. https://github.com/tensorflow/tensorflow/pull/6662/files#diff-33c59c5ad653d2f5e292da5968484cd7e3e0320dc453f92eb51cc8e1d21ec0d4R2310
Hi @ragavvenkatesan, It seems like we can compare against: https://kornia.readthedocs.io/en/latest/_modules/kornia/losses/total_variation.html
This is a better idea. I will write tests comparing against this. Thanks.
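A rough sketch of the kornia-based comparison test being proposed here, assuming a recent kornia release; `my_total_variation` stands in for the metric under test, and both sides are reduced to scalars because kornia's per-sample reduction has changed across versions:

```python
import torch
from kornia.losses import total_variation as kornia_tv


def my_total_variation(img: torch.Tensor) -> torch.Tensor:
    """Stand-in for the metric under test: per-image anisotropic TV (NCHW)."""
    diff_h = (img[..., 1:, :] - img[..., :-1, :]).abs().sum(dim=(1, 2, 3))
    diff_w = (img[..., :, 1:] - img[..., :, :-1]).abs().sum(dim=(1, 2, 3))
    return diff_h + diff_w


def test_against_kornia_reference() -> None:
    img = torch.rand(4, 3, 32, 32)
    # Compare total sums so the check is insensitive to output shape/reduction.
    torch.testing.assert_close(my_total_variation(img).sum(), kornia_tv(img).sum())
```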
@ragavvenkatesan how is it going? :)
Hi @Borda, I apologize, I went on vacation. Getting back to this and prioritizing it now.
@Borda can you make a push into the source branch again please? I am getting a merge conflict when I push. Thanks.
all shall be resolved now :)
@SkafteNicki, any idea why the variable type fails?
@ragavvenkatesan, could you pls check and resolve conflicts? :otter:
It seems to be failing for all the latest tests; in fact, it hangs and dies...