
master_params and model_params

Open mark0620 opened this issue 6 years ago • 1 comment

Hi, I'm interested in your project.

BTW, I have a question about master_params and model_params.

I thought master_params are 32-bit and model_params are 16-bit, since you left this comment in train.py:

```python
def master_params_to_model_params(self, model_params, master_params):
    """
    Move FP32 master params to FP16 model params.
    """
    for model, master in zip(model_params, master_params):
        model.data.copy_(master.data)
```

However, in this code:

```python
if not hasattr(self, 'optimizer'):
    if self.fp16_mode:
        self.optimizer = optim.SGD(
            self.master_params, lr, momentum=0.9, weight_decay=5e-4)
    else:
        self.optimizer = optim.SGD(
            self.model.parameters(),
            lr,
            momentum=0.9,
            weight_decay=5e-4)
```

you use master_params in FP16 mode.

So which params are FP16: master or model?

Thanks for your kind reply.

mark0620, Aug 30 '19 10:08


I think the project is right. Only in FP16 mode do we have master_params; otherwise we only have model_params, which are in FP32 precision.
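
To make the whole cycle concrete, here is a minimal sketch of the pattern (the tiny model, data, and hyperparameters are placeholders of my own, not from this repo, and real mixed-precision code would also apply loss scaling before backward):

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-in model/data for illustration; assumes a CUDA
# device, since many FP16 ops are not implemented on CPU.
model = nn.Linear(8, 2).cuda().half()      # model params are FP16
inputs = torch.randn(4, 8).cuda().half()
targets = torch.randint(0, 2, (4,)).cuda()
loss_fn = nn.CrossEntropyLoss()

# FP32 "master" copies of the FP16 model params; the optimizer steps
# on these so small updates don't vanish in FP16.
master_params = [p.detach().clone().float() for p in model.parameters()]
for p in master_params:
    p.requires_grad = True

optimizer = optim.SGD(master_params, lr=0.1, momentum=0.9, weight_decay=5e-4)

# One step: forward/backward run in FP16 on the model params ...
loss = loss_fn(model(inputs), targets)
model.zero_grad()
loss.backward()

# ... the FP16 grads are cast into the FP32 masters' .grad ...
for master, p in zip(master_params, model.parameters()):
    master.grad = p.grad.detach().float()

optimizer.step()  # the weight update happens in FP32

# ... and the updated FP32 masters are copied back into the FP16
# model params; this copy is what master_params_to_model_params does.
for p, master in zip(model.parameters(), master_params):
    p.data.copy_(master.data)
```

So in FP16 mode the optimizer only ever sees the FP32 masters; the FP16 model params exist to run the forward/backward pass cheaply.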

luhang-HPU, Mar 25 '20 03:03