DeblurGANv2

Adding a fixed Auxiliary Classifier loss to the lossD or lossG part

Open vgg4resnet opened this issue 4 years ago • 6 comments

Hi, I am working on a face recognition project. Some of my images are badly motion-blurred, and my trained face recognition model, mobilefacenet (this trained model is fixed), gets bad results on them. I want to use the DeblurGAN model to improve the image quality, and I want to use the trained face recognition model as an auxiliary classifier to control the loss, so that the face recognition accuracy improves. The idea is: given some image pairs of the same person, the deblurring generator G produces new images for each pair, and the mobilefacenet model computes a new loss for the network. If this recognition loss drops, the quality of the selected image pairs has probably improved.

**My approach is to compute the cosine distance between the embeddings of each fixed image pair, sum these distances into a new loss A, and then add loss A to loss D or loss G. By decreasing loss A, the face recognition accuracy and the image quality should both improve. However, when I added loss A to the loss D part, the new loss A dropped very slowly and did not converge: it went from 14 to 13 and then stayed around 13. I guess I got something wrong. Can my idea work?**
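For reference, a minimal sketch of the idea as a differentiable loss term, assuming `face_model` is the frozen mobilefacenet and `deblurred_a` / `deblurred_b` are generator outputs already resized to the embedder's 112x112 input (the function and argument names are illustrative, not from this repo):

    import torch
    import torch.nn.functional as F

    def identity_pair_loss(face_model, deblurred_a, deblurred_b):
        '''Cosine-distance identity loss between two deblurred crops of the same person.

        face_model     : frozen face recognition net (e.g. mobilefacenet), in eval mode
        deblurred_a/_b : (N, 3, 112, 112) tensors produced by the deblurring generator,
                         still attached to the generator's autograd graph
        '''
        emb_a = face_model(deblurred_a)    # (N, 512) embedding; gradients pass through the
        emb_b = face_model(deblurred_b)    # frozen embedder back into the generator
        # 1 - cosine similarity is 0 when the two embeddings agree perfectly
        return (1.0 - F.cosine_similarity(emb_a, emb_b, dim=1)).mean()

    def freeze(face_model):
        # keep the auxiliary classifier fixed: only the generator is trained by this term
        for p in face_model.parameters():
            p.requires_grad_(False)
        return face_model.eval()

Because the embeddings stay tensors, a term like this can be added directly to loss_G.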

vgg4resnet avatar Jul 10 '20 07:07 vgg4resnet

    # assumes: import cv2; import numpy as np; from PIL import Image
    #          from sklearn.metrics.pairwise import cosine_similarity
    def infer_facepic(self):
        '''
        faces : list of (imageA_path, imageB_path) pairs of the same person
        target_embs : [n, 512] computed embeddings of faces in facebank
        names : recorded names of faces in facebank
        tta : test time augmentation (hflip, that's all)
        '''
        self.facemodel.eval()
        self.netG.eval()
        tta = False
        faces = self.findfacenames
        # images that fail face recognition because of heavy motion blur
        listSpeacialFiles = ["3_0005.jpg", "3_0007.jpg", "3_0010.jpg", "3_0012.jpg", "3_0013.jpg",
                             # "lja_0016.jpg", "lja_0017.jpg",
                             # "newmate_0018.jpg", "newmate_0019.jpg",
                             # "dexing_0032.jpg", "dexing_0056.jpg",
                             # "maoli_0015.jpg"
                             ]
        totalfaceloss = 0
        iSum = 0
        for i, imgpys in enumerate(faces):
            #### image A: deblur with netG, then embed with the fixed face model
            imgpy = imgpys[0]
            myimg = cv2.imread(imgpy)
            myimg = cv2.resize(myimg, (256, 256))
            pred = self.fpnpredictonline(myimg, self.maskface)
            img = Image.fromarray(pred)
            embs1 = self.facemodel(self.test_transform(img).to(self.devicess).unsqueeze(0)).detach().cpu().numpy().reshape(1, -1)

            #### image B: deblur with netG, then embed with the fixed face model
            imgpy = imgpys[1]
            myimg = cv2.imread(imgpy)
            myimg = cv2.resize(myimg, (256, 256))
            pred = self.fpnpredictonline(myimg, self.maskface)
            img = Image.fromarray(pred)
            embs2 = self.facemodel(self.test_transform(img).to(self.devicess).unsqueeze(0)).detach().cpu().numpy().reshape(1, -1)

            #### cosine distance between the A/B embeddings, accumulated as loss A
            totalfaceloss = totalfaceloss + 1 - cosine_similarity(embs1, embs2)[0][0]
            iSum = iSum + 1
            # np.savetxt(imgpy.replace(".jpg", ".88fd"), embs1)

        print("face pair loss:", totalfaceloss / iSum, totalfaceloss, iSum)
        self.lossReAutoGrad = totalfaceloss / iSum
        self.netG.train(True)
        myReAutoGrad = self.lossReAutoGrad

        min_idx, minimum = 0, 0
        # matches the `_, _, AutoReloss = self.infer_facepic()` call in _run_epoch below
        return min_idx, minimum, myReAutoGrad

The code above builds the new part of the G loss: it generates new image pairs with the deblurring netG and computes loss A (self.lossReAutoGrad). The files ["3_0005.jpg", "3_0007.jpg", "3_0010.jpg", "3_0012.jpg", "3_0013.jpg", ...] are images that fail in the face recognition step because they are heavily motion-blurred; `faces` holds the image pairs.
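For comparison, a sketch of the same pair loop written so that loss A stays a torch tensor; `fpnpredict_with_grad` is a hypothetical gradient-enabled variant of `fpnpredictonline` (sketched after the next snippet), and the resize is done with `F.interpolate` so it stays differentiable. These names and details are assumptions, not code from this repo:

    # assumes: import cv2; import torch.nn.functional as F
    def face_pair_loss(self):
        '''Loss A as a differentiable torch tensor: mean cosine distance over all face pairs.'''
        self.facemodel.eval()
        total = 0.0
        for img_a_path, img_b_path in self.findfacenames:          # pairs of the same person
            embs = []
            for imgpath in (img_a_path, img_b_path):
                myimg = cv2.resize(cv2.imread(imgpath), (256, 256))
                pred = self.fpnpredict_with_grad(myimg)            # (1, 3, H, W) tensor, gradients kept
                # differentiable resize down to the 112x112 input mobilefacenet expects;
                # any normalization the face model needs would also have to use tensor ops
                pred = F.interpolate(pred, size=(112, 112), mode='bilinear', align_corners=False)
                embs.append(self.facemodel(pred))                  # (1, 512), still in netG's graph
            total = total + (1.0 - F.cosine_similarity(embs[0], embs[1], dim=1)).mean()
        return total / len(self.findfacenames)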

vgg4resnet avatar Jul 10 '20 07:07 vgg4resnet

    def fpnpredictonline(self, img: np.ndarray, mask: Optional[np.ndarray], ignore_mask=True) -> np.ndarray:
        # pad/normalize the input, then run the generator without tracking gradients
        (img, mask), h, w = self._preprocess(img, mask)

        with torch.no_grad():
            inputs = [img.cuda()]
            if not ignore_mask:
                inputs += [mask]
            pred = self.netG(*inputs)
        return self._postprocess(pred)[:h, :w, :]
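`fpnpredictonline` runs the generator under `torch.no_grad()` and returns a post-processed numpy array. A minimal sketch of a gradient-enabled variant for training time (the name, dropping the mask, and reusing `self._preprocess` without `self._postprocess` are all assumptions):

    def fpnpredict_with_grad(self, img: np.ndarray) -> torch.Tensor:
        '''Like fpnpredictonline, but keeps the generator output in the autograd graph.'''
        (img, _mask), h, w = self._preprocess(img, None)
        # no torch.no_grad() here, so the forward pass is recorded and a loss computed
        # on the returned tensor can backpropagate into netG
        pred = self.netG(img.cuda())
        return pred[..., :h, :w]           # (1, 3, h, w) tensor, still attached to netG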

vgg4resnet avatar Jul 10 '20 07:07 vgg4resnet

Adding the loss to the network's G loss:

    def _run_epoch(self, epoch):
        self.metric_counter.clear()
        for param_group in self.optimizer_G.param_groups:
            lr = param_group['lr']

        epoch_size = config.get('train_batches_per_epoch') or len(self.train_dataset)
        tq = tqdm.tqdm(self.train_dataset, total=epoch_size)
        tq.set_description('Epoch {}, lr {}'.format(epoch, lr))
        i = 0
        for data in tq:
            inputs, targets = self.model.get_input(data)
            outputs = self.netG(inputs)

            # face recognition (auxiliary classifier) loss computed on the fixed image pairs
            _, _, AutoReloss = self.infer_facepic()
            self.reloss = AutoReloss
            loss_D = self._update_d(outputs, targets)
            self.optimizer_G.zero_grad()
            loss_content = self.criterionG(outputs, targets)
            loss_adv = self.adv_trainer.loss_g(outputs, targets)

            print("losses:", loss_content, self.adv_lambda, loss_adv, 27 * AutoReloss)
            ########### add the face recognition loss to the G loss
            loss_G = loss_content + self.adv_lambda * loss_adv + 0.7 * AutoReloss
            # loss_G = 0 * loss_content + 0 * self.adv_lambda * loss_adv + 27 * AutoReloss
            loss_G.backward()
            self.optimizer_G.step()
            self.metric_counter.add_losses(loss_G.item(), loss_content.item(), loss_D)

vgg4resnet avatar Jul 10 '20 07:07 vgg4resnet

I want to know how to decrease self.lossReAutoGrad (the face recognition loss, which I call loss A above), and whether my way of adding loss A to loss D or loss G works. Can this loss backpropagate or not? Thanks in advance.
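One quick way to check whether loss A can reach netG at all is to inspect `requires_grad` and `grad_fn` on the value that gets added to loss_G; a small sketch (`face_pair_loss` refers to the hypothetical differentiable variant sketched above):

    # if loss A was computed from .detach().cpu().numpy() values, it is just a python float:
    loss_A = torch.as_tensor(self.lossReAutoGrad)
    print(loss_A.requires_grad, loss_A.grad_fn)    # False, None -> no gradient path to netG
    # a differentiable version must stay a tensor built from operations on netG's output:
    # loss_A = self.face_pair_loss()
    # print(loss_A.requires_grad, loss_A.grad_fn)  # True, <grad_fn> -> backward() reaches netG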

vgg4resnet avatar Jul 10 '20 07:07 vgg4resnet

Can anyone help me?

vgg4resnet avatar Jul 14 '20 01:07 vgg4resnet

@KupynOrest @t-martyniuk can you help me, or give me some clues?

vgg4resnet avatar Jul 17 '20 01:07 vgg4resnet