Reverse_Engineering_GMs

Parameter setting in deepfake detection

Open · wytcsuch opened this issue 3 years ago · 5 comments

Thank you very much for your contribution. In the deepfake detection module of the paper, the parameters lambda_1 through lambda_4 are set as shown in this screenshot [screenshot: parameter settings], which is inconsistent with the code:

loss1 = 0.05 * l1(low_freq_part, zero).to(device)
loss2 = -0.001 * max_value.to(device)
loss3 = 0.01 * l1(residual_gray, zero_1).to(device)
loss_c = 20 * l_c(classes, labels.type(torch.cuda.LongTensor))
loss5 = 0.1 * l1(y, y_trans).to(device)
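For reference, these terms seem to map onto the paper's named losses as follows; this mapping is my inference from the variable names and from the renamed code later in this thread, not something confirmed by the authors:

    # Inferred mapping (an assumption based on variable names, not authoritative):
    # loss1  -> spectrum loss    (low_freq_part vs. zero),   weight 0.05
    # loss2  -> repetitive loss  (-max_value),               weight 0.001
    # loss3  -> magnitude loss   (residual_gray vs. zero),   weight 0.01
    # loss_c -> cross-entropy    (classes vs. labels),       weight 20
    # loss5  -> energy loss      (y vs. y_trans),            weight 0.1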

Can you explain this discrepancy? Thank you.

wytcsuch · Jul 20 '21

Hi, the values in the code may have changed while we were carrying out various ablation studies to find the optimal parameters. To reproduce the experimental results in the paper, please follow the training details given in the paper. Thank you!!

vishal3477 · Jul 20 '21

@vishal3477 Thank you very much for your reply. I set the parameters according to the paper and trained on my own data; the training losses are shown in the screenshot below [screenshot: training losses]. I find that the repetitive loss is negative, so the training does not seem normal. Can you help me? To clarify the logic of the code, I only changed the variable names:

    low_freq, low_freq_k_part, max_value, low_freq_orig, fingerprint_res, low_freq_trans, fingerprint_gray = model_FEN(batch)
    outputs, features = model_CLS(fingerprint_res)
    _, preds = torch.max(outputs, dim=1)

    n = 25
    zero = torch.zeros([low_freq.shape[0], 2*n+1, 2*n+1], dtype=torch.float32).to(device)
    zero_1 = torch.zeros(fingerprint_gray.shape, dtype=torch.float32).to(device)

    Magnitude_loss = opt.lambda_1 * L2(fingerprint_gray, zero_1)   # magnitude loss
    Spectrum_loss = opt.lambda_2 * L2(low_freq_k_part, zero)       # spectrum loss
    Repetitive_loss = -opt.lambda_3 * max_value                    # repetitive loss (negated: to be maximized)
    Energy_loss = opt.lambda_4 * L2(low_freq, low_freq_trans)      # energy loss
    Cross_loss = opt.lambda_cros * L_cross(outputs, labels)        # cross-entropy loss

    loss = Spectrum_loss + Repetitive_loss + Magnitude_loss + Cross_loss + Energy_loss
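As a sanity check on the sign, here is a minimal runnable sketch (standard PyTorch; the dummy tensors and shapes are made up stand-ins for the model outputs above) showing that the total loss can legitimately go negative once the repetitive term dominates:

    import torch

    l2 = torch.nn.MSELoss()
    lambda_1, lambda_2, lambda_3, lambda_4 = 0.05, 0.001, 0.1, 1.0

    # Dummy stand-ins for the network outputs (shapes are assumptions).
    fingerprint_gray = 0.1 * torch.randn(4, 128, 128)
    low_freq_k_part = 0.1 * torch.randn(4, 51, 51)
    low_freq = torch.randn(4, 51, 51)
    low_freq_trans = low_freq + 0.01 * torch.randn_like(low_freq)
    max_value = torch.tensor(5.0)  # spectrum peak; training pushes this up

    Magnitude_loss = lambda_1 * l2(fingerprint_gray, torch.zeros_like(fingerprint_gray))
    Spectrum_loss = lambda_2 * l2(low_freq_k_part, torch.zeros_like(low_freq_k_part))
    Repetitive_loss = -lambda_3 * max_value  # negative by construction
    Energy_loss = lambda_4 * l2(low_freq, low_freq_trans)

    total = Magnitude_loss + Spectrum_loss + Repetitive_loss + Energy_loss
    print(total.item())  # < 0 once max_value outweighs the (small) L2 terms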

Parameter settings:

parser.add_argument('--lambda_1', default = 0.05, type = float)  #0.01
parser.add_argument('--lambda_2', default = 0.001, type = float) #0.05
parser.add_argument('--lambda_3', default = 0.1, type = float) #0.001
parser.add_argument('--lambda_4', default = 1.0, type = float)  #0.1
parser.add_argument('--lambda_cros', default = 1.0, type = float)
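To double-check which weights are actually in effect (the inline comments above suggest alternative values were tried), a quick check, assuming the argparse parser defined above:

    # Parse an empty argument list so only the defaults above are used.
    opt = parser.parse_args([])
    print(vars(opt))
    # -> {'lambda_1': 0.05, 'lambda_2': 0.001, 'lambda_3': 0.1,
    #     'lambda_4': 1.0, 'lambda_cros': 1.0}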

I'm looking forward to your reply.

wytcsuch · Jul 21 '21

The repetitive loss is negative as defined in the paper. [equation screenshot]
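Reading it off the code earlier in this thread (a reconstruction, not the paper's exact notation), the term is

$$\mathcal{L}_{\text{rep}} = -\lambda_3 \cdot \text{max\_value}$$

so minimizing the total loss maximizes max_value, and a negative printed value is expected rather than a sign of divergence.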

vishal3477 · Jul 21 '21

> @vishal3477 Thank you very much for your reply. […] (wytcsuch's full comment, quoted above)

Hi, did you solve this problem? Same here: my losses are very similar to yours, and the classification accuracy doesn't improve at all; it stays around 50%...

littlejuyan · Jan 05 '22

@littlejuyan Can you please share the losses you are getting? The printed repetitive loss is negative, as defined in the paper, because we want to maximize that term.
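If it helps with the comparison, here is a small logging sketch (variable names assumed from the renamed snippet earlier in this thread) that reports each term separately along with batch accuracy, so the negative repetitive term isn't read as divergence:

    # Names assumed from the renamed code earlier in the thread.
    print(f"mag={Magnitude_loss.item():.4f}  spec={Spectrum_loss.item():.4f}  "
          f"rep={Repetitive_loss.item():.4f}  energy={Energy_loss.item():.4f}  "
          f"cross={Cross_loss.item():.4f}  total={loss.item():.4f}")
    acc = (preds == labels).float().mean().item()  # per-batch accuracy
    print(f"batch acc={acc:.3f}")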

vishal3477 · Jun 17 '22