insightface
JMLR training code vs description in arXiv paper
Thanks for sharing your excellent work! I am looking through your code, and there seem to be some differences from the paper shared on arXiv (or I have misunderstood something). Are the data preparation, training, and configuration files the same as those used to train the pretrained model you share? Some examples:
- In `rec_builder.py` line 79, `cfg.input_size = 512`. My understanding is that this produces images of size 512, not 256 as described in the paper.
- In `train.py` line 245, we have `iter_loss.backward()`, but `iter_loss = dloss['Loss']`, which does not include the "bone_losses". I.e., we only have L_vert + L_land from Eq. 4 in the paper on arXiv.
Thanks in advance.
Hi,
- The training input size is 256; please see https://github.com/deepinsight/insightface/blob/master/reconstruction/jmlr/configs/s1.py#L18
- You are right; we will fix the code soon.
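For reference, a minimal sketch of the kind of fix discussed above: summing the bone losses into the backpropagated total so it matches Eq. 4 (L_vert + L_land plus the bone terms). The names `dloss`, `bone_losses`, `total_loss`, and the weight are illustrative assumptions, not the repo's actual API.

```python
import torch

def total_loss(dloss: dict, bone_losses: list, bone_weight: float = 1.0) -> torch.Tensor:
    # dloss['Loss'] is assumed to already hold L_vert + L_land,
    # as in the current train.py; the bone terms are added on top.
    loss = dloss['Loss']
    for bl in bone_losses:
        loss = loss + bone_weight * bl
    return loss

# Illustrative usage with dummy scalar losses:
dloss = {'Loss': torch.tensor(0.5, requires_grad=True)}
bone_losses = [torch.tensor(0.1), torch.tensor(0.2)]
iter_loss = total_loss(dloss, bone_losses)
iter_loss.backward()  # gradients now flow through all terms
```

With this change, `iter_loss.backward()` would propagate gradients from the bone losses as well, rather than from L_vert + L_land alone.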
Thanks for the reply. Yes, I checked the configuration files (s1.py and base.py); I was referring to this in `rec_builder.py`:
```python
if __name__ == "__main__":
    cfg = get_config('configs/s1.py')
    cfg.task = 0
    cfg.input_size = 512
```
I'll be looking forward to the edited code :)
The number 512 is just the size of the images stored in the rec file, not the training input size.
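In other words, the record file stores larger (512x512) crops, and the training pipeline resizes them down to `cfg.input_size` (256 per configs/s1.py) at load time. A minimal sketch of that relationship, assuming a real loader would use `cv2.resize` or torchvision transforms; plain 2x subsampling stands in for the resize here:

```python
import numpy as np

REC_SIZE = 512    # size of images stored in the rec file (rec_builder.py)
TRAIN_SIZE = 256  # cfg.input_size used for training (configs/s1.py)

def load_for_training(img: np.ndarray) -> np.ndarray:
    # Assumed shape: (REC_SIZE, REC_SIZE, channels) as stored in the rec file.
    assert img.shape[:2] == (REC_SIZE, REC_SIZE)
    stride = REC_SIZE // TRAIN_SIZE  # 2: crude stand-in for a proper resize
    return img[::stride, ::stride]

img = np.zeros((512, 512, 3), dtype=np.uint8)
out = load_for_training(img)
# out has spatial size 256x256, matching the training input size
```

Storing images larger than the training size is a common choice: it keeps the rec file reusable if the input size changes and avoids losing detail before augmentation.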