stylegan2-pytorch
What is difference between "conv2d_gradfix" and "nn.Conv2d"?
I get a warning:
/home/xxxxxxxxx/stylegan2-pytorch/op/conv2d_gradfix.py:89: UserWarning: conv2d_gradfix not supported on PyTorch 1.6.0. Falling back to torch.nn.functional.conv2d().
I am using PyTorch 1.6.0.
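For context, this kind of warning typically comes from a version gate at the top of the module. Here is a hedged sketch of what such a gate might look like (the supported-version list and the dispatch target are assumptions, not the repo's actual code):

```python
import warnings

import torch
import torch.nn.functional as F

# Assumed list of versions the custom op was written against; the real
# implementation may check differently.
SUPPORTED_PREFIXES = ("1.7.", "1.8.")


def conv2d(input, weight, **kwargs):
    if torch.__version__.startswith(SUPPORTED_PREFIXES):
        # The real code would dispatch to a custom autograd.Function here;
        # F.conv2d stands in as a placeholder for this sketch.
        return F.conv2d(input, weight, **kwargs)
    warnings.warn(
        f"conv2d_gradfix not supported on PyTorch {torch.__version__}. "
        "Falling back to torch.nn.functional.conv2d()."
    )
    return F.conv2d(input, weight, **kwargs)


x = torch.randn(1, 3, 8, 8)
w = torch.randn(4, 3, 3, 3)
y = conv2d(x, w, padding=1)
```

Either branch produces the same forward result; only the backward-pass machinery (and the warning) differs.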
From NVIDIA's repo: "Custom replacement for torch.nn.functional.conv2d that supports arbitrarily high order gradients with zero performance penalty."
Does NVIDIA have an official StyleGAN2 implementation in PyTorch? Can you explain what "cond2d_fixgrad" is?
Yeah this is it right here: https://github.com/NVlabs/stylegan2-ada-pytorch
Can you explain what "cond2d_fixgrad" is?
I'm not sure what you mean. Is that a typo, or is there actually something called that specifically? conv2d_gradfix, I think, has something to do with the PPL loss function. I'm working on my own implementation, and when I tried to use normal conv2d layers in a basic configuration I got an error about gradients that weren't used in my graph. With conv2d_gradfix you can use a flag that, I think, stops gradients from being calculated in certain places.
Check the version they use in their implementation btw. I think the problem you're having is version related.
So why do we need it? In this repo, in the "op" directory, you can find "conv2d_gradfix". I just want to know why we don't use nn.Conv2d directly.
The main point of conv2d_gradfix is to skip the gradient computation for the weights when it is not needed, for example in gradient penalty computations. But I found that the difference is not very significant.
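To illustrate the gradient-penalty case, here is a minimal R1-style penalty on a toy one-layer "discriminator" (a sketch, not the repo's training code): only the gradient with respect to the image is requested, yet a plain double backward through F.conv2d still does work for the weight-gradient path that conv2d_gradfix can skip.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
img = torch.randn(2, 3, 8, 8, requires_grad=True)
weight = torch.randn(4, 3, 3, 3, requires_grad=True)

# Toy "discriminator" score: a single conv layer, summed to a scalar.
score = F.conv2d(img, weight, padding=1).sum()

# R1-style penalty: only the gradient w.r.t. the input image is needed.
# create_graph=True so the penalty itself can be backpropagated.
(grad_img,) = torch.autograd.grad(score, inputs=img, create_graph=True)
r1_penalty = grad_img.pow(2).reshape(grad_img.shape[0], -1).sum(1).mean()
r1_penalty.backward()
```

Note that `inputs=img` already restricts which gradients autograd returns; the point of conv2d_gradfix is to avoid even building the unneeded weight-gradient work inside the double backward.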
Does anyone have a suggestion for a suitable alternative to upfirdn2d? I am struggling to build the C++ extension and would like to switch to native PyTorch instead. Would anyone happen to have any knowledge on this matter?
Thank you in advance.
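Not the author, but one pure-PyTorch fallback is to perform the upsample-pad-filter-downsample steps with native ops. This is a sketch under assumptions: the function and argument names are mine, and the padding here is symmetric, whereas the CUDA op allows asymmetric pads.

```python
import torch
import torch.nn.functional as F


def upfirdn2d_native(x, kernel, up=1, down=1, pad=(0, 0)):
    # x: (N, C, H, W); kernel: 2-D FIR filter shared across channels.
    n, c, h, w = x.shape
    # 1) Upsample by zero-insertion.
    x = x.view(n, c, h, 1, w, 1)
    x = F.pad(x, [0, up - 1, 0, 0, 0, up - 1])
    x = x.view(n, c, h * up, w * up)
    # 2) Pad the upsampled image (symmetric in this sketch).
    x = F.pad(x, [pad[0], pad[1], pad[0], pad[1]])
    # 3) Apply the FIR filter as a depthwise convolution
    #    (flip the kernel so this is a true convolution, not correlation).
    k = kernel.flip([0, 1]).view(1, 1, *kernel.shape).repeat(c, 1, 1, 1)
    x = F.conv2d(x, k, groups=c)
    # 4) Downsample by striding.
    return x[:, :, ::down, ::down]


x = torch.randn(1, 3, 4, 4)
k = torch.ones(3, 3) / 9.0  # simple box filter for demonstration
y = upfirdn2d_native(x, k, up=2, pad=(1, 1))
```

This will be slower than the fused CUDA kernel, but it keeps everything in stock PyTorch, which avoids the extension build entirely.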