bias-loss-skipblocknet
Hi, how can I add the bias loss function to MobileNetV3?
If you are using the torchvision implementation, you need to remove the last conv from self.features and define it as a new member. For example:
Original:

```python
lastconv_input_channels = inverted_residual_setting[-1].out_channels
lastconv_output_channels = 6 * lastconv_input_channels
layers.append(
    Conv2dNormActivation(
        lastconv_input_channels,
        lastconv_output_channels,
        kernel_size=1,
        norm_layer=norm_layer,
        activation_layer=nn.Hardswish,
    )
)
self.features = nn.Sequential(*layers)
```
Modified:

```python
self.features = nn.Sequential(*layers)
self.last_conv = Conv2dNormActivation(
    lastconv_input_channels,
    lastconv_output_channels,
    kernel_size=1,
    norm_layer=norm_layer,
    activation_layer=nn.Hardswish,
)
```
Then return the output of last_conv from the forward pass and pass it to the bias loss.
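A minimal sketch of what the matching forward could then look like, assuming the training loop needs both the last_conv output and the logits; the two-value return and the variable names are illustrative, not the repository's exact code:

```python
def forward(self, x: Tensor) -> Tuple[Tensor, Tensor]:  # Tuple from typing
    x = self.features(x)
    features = self.last_conv(x)   # feature map consumed by the bias loss
    x = self.avgpool(features)
    x = torch.flatten(x, 1)
    logits = self.classifier(x)
    return features, logits        # return both so the loss can see the features
```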
Thank you for your reply. I followed your advice; however, there is a new error in the code: RuntimeError: mat1 and mat2 shapes cannot be multiplied (16x160 and 960x1280)
As the error says, you are trying to multiply tensors with mismatched shapes; you need to debug your code.
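For reference, the shapes in the message point at a likely cause. In torchvision's MobileNetV3-Large, self.features ends with 160 channels and the moved-out last conv expands them to 960, which is what the classifier's first Linear (a 960x1280 weight) expects. Seeing (16x160) there suggests self.last_conv is never applied in the forward pass. An illustrative shape check, assuming the modified layout from above:

```python
x = self.features(x)     # -> (16, 160, 7, 7)
x = self.last_conv(x)    # -> (16, 960, 7, 7); skipping this step
                         #    reproduces the 16x160 error
x = self.avgpool(x)      # -> (16, 960, 1, 1)
x = torch.flatten(x, 1)  # -> (16, 960), matches Linear(960, 1280)
```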
There is a problem with how the parameters are passed: the argument "target" is missing. How should the parameter passing be modified? Thank you!
Error:

```
return forward_call(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'target'
```
Parameters are passed in the model's forward method:
```python
def forward(self, x: Tensor) -> Tensor:
    x = self.features(x)
    x = self.avgpool(x)
    # x = self.last_conv()
    x = torch.flatten(x, 1)
    x = self.classifier(x)
    return x
```
or in the bias loss's forward:
```python
def forward(self, features, output, target):  ######
    features_copy = features.clone().detach()
    features_dp = features_copy.reshape(features_copy.shape[0], -1)
    features_dp = torch.var(features_dp, dim=1)
    if self.norm_mode == 'global':
        variance_dp_normalised = self.norm_global(features_dp)
    else:
        variance_dp_normalised = self.norm_local(features_dp)
    weights = ((torch.exp(variance_dp_normalised * self.beta) - 1.) / 1.) + self.alpha
    loss = weights * self.ce(output, target)  ####
    loss = loss.mean()
    return loss
```
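Given that signature, the loss module has to be called with all three arguments; the TypeError above means it is being invoked with fewer (e.g. as criterion(output, target)). A minimal training-step sketch, assuming the model's forward has been modified to return (features, logits) as discussed above, and that model, optimizer, and train_loader already exist; BiasLoss and its constructor values are hypothetical names for the module whose forward is shown:

```python
bias_loss = BiasLoss(alpha=0.3, beta=0.3)  # hypothetical; alpha/beta match the attributes used above

for images, targets in train_loader:
    features, logits = model(images)             # forward returns both tensors
    loss = bias_loss(features, logits, targets)  # all three positional args
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```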