SimSwap
                                
                        Request for guidance regarding training
Hello,
I trained a 512 model on VGGFace2 for 390k steps and got the following video output. I have also attached one of the samples obtained at step 390k. Do you know where I might be messing up?

I used the SimSwap colab.ipynb.
Hello! Maybe there is a missing fix in the code? Can you share your checkpoints?
Did you set --Gdeep True? Even though 400k steps should be the minimum, you should have seen semi-normal results at 390k. Make sure your weights have updated properly and that you are pointing to them when initiating testing. I believe they will get better with more iterations, but the current output does not make sense.
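One generic thing worth double-checking when a flag like --Gdeep True seems to have no effect: if a script declares the option with argparse's plain type=bool, then passing the string "False" still parses as True, because any non-empty string is truthy. This is a common argparse pitfall, not a claim about SimSwap's actual parser; the str2bool helper below is a sketch of the usual fix.

```python
import argparse

def str2bool(v):
    """Parse common truthy/falsy strings. Plain type=bool would treat
    ANY non-empty string (including "False") as True."""
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "y", "1"):
        return True
    if v.lower() in ("no", "false", "f", "n", "0"):
        return False
    raise argparse.ArgumentTypeError(f"boolean value expected, got {v!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--Gdeep", type=str2bool, default=False)

print(parser.parse_args(["--Gdeep", "True"]).Gdeep)   # True
print(parser.parse_args(["--Gdeep", "False"]).Gdeep)  # False
print(bool("False"))  # True -- why a plain type=bool silently misparses
```

If the training run used a parser with this bug, --Gdeep True and --Gdeep False would both produce the deep generator flag set to True, so the symptom would show up at test time as an architecture mismatch.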
I think this is still a problem with the code: it seems that some layers do not load properly. Even after 80,000 iterations I already had a result; it was bad, but the faces did change, albeit in poor quality. Unfortunately, without a checkpoint it is impossible to test it against the code and say for sure, or to help...
I see. The training code seems very straightforward; I lack knowledge in this particular area, but the generator functions may not be set up properly. I really need to learn this next. There's just so much to learn 😭.
omg man, can you share how you trained VGGFace2 at 512? The dataset is 100 GB, and my Colab only has 70 GB free.
I do not think he will share his checkpoints or dataset. During my time watching this repository, not many people have contributed to the SimSwap community. Of course, really cool guys like @ftaker887, @instant-high, @woctezuma and others helped solve problems, wrote GUIs, and made suggestions for improving the code (sorry if I forgot to mention someone). But unfortunately, most users are used to receiving help or resources and prefer not to contribute anything in return. This is my opinion; correct me if I'm wrong.
Turn off the mask and print the results. Judging from the training output, training should be fine; the problem probably occurs in the test code. There may be a compatibility issue: the test code and the training model's forward code are not compatible.
Hi @neuralchen! I am trying to train at 224, epoch 88500. Hope this helps solve the problem.
With mask:

Without mask:

I also have the same problem. The model trains successfully and the training sample data looks normal, but an error occurs while testing.
I think the result is too white; is this normal?
I did a little experiment with the code. If, in videoswap.py at line 89, you change swap_result = swap_model(None, frame_align_crop_tenor, id_vetor, None, True)[0] to swap_result = swap_model(None, spNorm(frame_align_crop_tenor), id_vetor, None, True)[0],
the result will be slightly better, but the face will be very contrasting:

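For context on what that change does: an spNorm-style wrapper applies a fixed per-channel normalization to the cropped face before it enters the generator. The sketch below uses ImageNet-style statistics as an assumption; the exact values SimSwap's SpecificNorm uses should be checked in the repo, and sp_norm here is an illustrative stand-in, not the project's function.

```python
import numpy as np

# Assumed ImageNet-style per-channel statistics (an assumption, not
# necessarily SimSwap's exact values).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def sp_norm(img_chw: np.ndarray) -> np.ndarray:
    """Normalize a (3, H, W) float array with values in [0, 1],
    channel-wise: (x - mean) / std."""
    return (img_chw - MEAN[:, None, None]) / STD[:, None, None]

crop = np.random.rand(3, 224, 224).astype(np.float32)
normed = sp_norm(crop)
print(normed.shape)  # (3, 224, 224)
```

If the generator was trained on normalized inputs but tested on raw [0, 1] crops (or vice versa), the mismatch would plausibly show up as exactly this kind of contrast and color shift.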
But this affects the other public models, people and 512:
Perhaps the --name arg from base_options should be parsed (I didn't manage to do it), and then add input_norm = spNorm(frame_align_crop_tenor) if name not in ('people', '512') else frame_align_crop_tenor, followed by swap_result = swap_model(None, input_norm, id_vetor, None, True)[0]. (A condition written as name != 'people' or name != '512' is always true, so a not-in check is needed.)
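The condition pitfall mentioned above is worth spelling out, since it silently breaks the intended branch. Any string differs from at least one of two distinct values, so chaining != with or is always true:

```python
# `name != 'people' or name != '512'` is True for EVERY name:
# even name == 'people' differs from '512', so the `or` succeeds.
for name in ("people", "512", "my_custom_512"):
    always_true = (name != "people" or name != "512")
    correct = name not in ("people", "512")
    print(name, always_true, correct)
# people        True False
# 512           True False
# my_custom_512 True True
```

So with the buggy condition, spNorm would be applied to the official people and 512 checkpoints too, which is presumably not the intent.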
Perhaps some layer of the model does not load, or does not load correctly. If we compare with the screenshot of the white face, then in addition to the high contrast there are differences in the eyes, teeth, etc.
Send your ckpt to my email; I will check what has happened.
I sent the link to the Gmail address in your profile.
Hey guys,
Sorry for the late reply; I was busy with some things. Here is the checkpoint I got the results from. I have not debugged it yet; I will look into it and get back to you soon. Also, I did set --Gdeep True.
I tried to test the checkpoints. The effect that appeared in the video arose because, in the test code, the crop_size option was tied to the beta 512 checkpoint, so --crop_size 512 tried to load beta 512 from epoch 550000. I solved it this way: #246. But it didn't solve every problem, and now, instead of the multi-colored disco face from the video, we get just a boring single-color one :)
Pretrained network G has fewer layers; The following are not initialized:
['down0', 'first_layer', 'last_layer', 'up0']
With mask
Without mask

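The "Pretrained network G has fewer layers" warning above means the checkpoint was saved by one network definition while the test code built another, so layers such as down0 and first_layer find no counterpart and stay randomly initialized. Pix2pixHD-style loaders copy only the parameters whose names and shapes match and skip the rest. The sketch below illustrates that filtering with plain dicts and made-up layer names; it is not SimSwap's actual loading code.

```python
def load_matching(model_state, ckpt_state):
    """Copy into model_state only checkpoint entries whose key and shape
    match; return the model keys that were left uninitialized."""
    loaded, skipped = {}, []
    for key, value in model_state.items():
        same_shape = getattr(ckpt_state.get(key), "shape", None) == getattr(value, "shape", None)
        if key in ckpt_state and same_shape:
            loaded[key] = ckpt_state[key]
        else:
            skipped.append(key)
    model_state.update(loaded)
    return skipped

# Hypothetical layer names: the model definition has layers the
# checkpoint (saved by a different network definition) lacks.
model = {"first_layer.weight": [0], "down0.weight": [0], "body.weight": [0]}
ckpt = {"body.weight": [1]}
missing = load_matching(model, ckpt)
print(sorted(missing))  # ['down0.weight', 'first_layer.weight'] -> "not initialized"
```

This matches the earlier observation that training looks fine (the training process uses the definition it saved) while testing produces garbage: the randomly initialized layers only exist on the test side.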

Finally, someone has the same problem as me: https://github.com/neuralchen/SimSwap/issues/251
The 512-version --netG should be loaded in the same way as in fs_network_fix.py instead of fix_network_512.py.
After doing this, the error
Pretrained network G has fewer layers; The following are not initialized:
['down0', 'first_layer', 'last_layer', 'up0']
was gone...
But the face-swap result was still as bad as above...
I tried changing the way the model is initialized, because at training time the generated face-swap results are normal. But it is still bad...
I don't understand it.
Hey bud, was wondering if you had trained this model any further. Thanks for posting the other one, greatly appreciated.