SimSwap

Request for guidance regarding training

Open mittalgovind opened this issue 3 years ago • 19 comments

Hello,

I trained a 512 model with VGGFace2 for 390k steps and I got the following video output multi_specific_1080p.mp4. I have also attached one of the samples obtained at step 390k. Do you know where I might be messing up? step_390000

I used the SimSwap Colab .ipynb.

mittalgovind avatar May 03 '22 19:05 mittalgovind

Hello! Maybe there is a missing fix in the code? Can you share your checkpoints?

netrunner-exe avatar May 03 '22 21:05 netrunner-exe

Did you set --Gdeep True? Even though 400k should be the minimum, you should have seen semi-normal results at 390k. Make sure your weights have updated properly and that you are pointing to them when initiating testing. I believe they will get better with more iterations, but the current output does not make sense.

Fibonacci134 avatar May 04 '22 01:05 Fibonacci134

I think this is still a problem with the code - it seems that some layers do not load properly. Even after 80,000 iterations I already had a bad result; the faces were swapped, albeit in poor quality. But unfortunately, without a checkpoint to test, it is impossible to run it against the code and say for sure, or to help...

netrunner-exe avatar May 04 '22 05:05 netrunner-exe

I see. The training code seems very straightforward; I lack knowledge in this particular area, but the generator functions may not be set up properly. I really need to learn this next. There's just so much to learn 😭.

Fibonacci134 avatar May 04 '22 10:05 Fibonacci134

Omg man, can you share how you train VGGFace2 at 512? The dataset is 100 GB; my Colab only has 70 GB free.

papipulato avatar May 04 '22 12:05 papipulato

I do not think that he will share his checkpoints or dataset. In my time watching this repository, not many people have contributed to the SimSwap community. Of course, really cool guys like @ftaker887, @instant-high, @woctezuma and others helped solve problems, wrote GUIs, and made suggestions for improving the code (sorry if I forgot to mention someone). But unfortunately, most users are used to receiving help or resources and prefer not to contribute anything in return. This is my opinion; correct me if I'm wrong.

netrunner-exe avatar May 04 '22 13:05 netrunner-exe

Turn off the mask and print the results. Judging from the training output, the training itself should be fine. The problem probably occurs in the test code: there may be a compatibility issue, where the test code and the training model's forward code are not compatible.

neuralchen avatar May 04 '22 15:05 neuralchen

Hi, @neuralchen! I am training 224, currently at epoch 88500. Hope this helps solve the problem.

With mask: frame_0000000(3)

Without mask: frame_0000000(4)

netrunner-exe avatar May 04 '22 18:05 netrunner-exe

I also have the same problem. The model can be trained successfully and the training sample data looks normal, but there is an error while testing.

doctorcui avatar May 05 '22 06:05 doctorcui

I think the result is too white; is this normal?

doctorcui avatar May 05 '22 06:05 doctorcui

I did a little experiment with the code. If, in videoswap.py at line 89, you change swap_result = swap_model(None, frame_align_crop_tenor, id_vetor, None, True)[0] to swap_result = swap_model(None, spNorm(frame_align_crop_tenor), id_vetor, None, True)[0], the result is slightly better, but the face is very contrasty: frame_0000000(5)

However, this affects the other public models (people and 512): frame_0000000(6) Perhaps something like this should be adopted: parse the --name arg from base_options (I didn't manage to do it) and add input_norm = spNorm(frame_align_crop_tenor) if name != 'people' or name != '512' else frame_align_crop_tenor and swap_result = swap_model(None, input_norm, id_vetor, None, True)[0] (see the sketch below).
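For reference, here is a minimal sketch of what that conditional could look like around line 89 of videoswap.py. It is only an illustration based on the snippet above: it assumes the parsed --name value is available in that scope as opt.name, and it replaces the `or` in the suggested condition (which would always be true) with a `not in` check.

```python
# Sketch only: assumes opt.name has been passed into this scope from base_options.
# Apply SpecificNorm only to self-trained checkpoints; keep the original behaviour
# for the public 'people' and '512' models, which expect the un-normalized crop.
if opt.name not in ('people', '512'):
    input_tensor = spNorm(frame_align_crop_tenor)
else:
    input_tensor = frame_align_crop_tenor

swap_result = swap_model(None, input_tensor, id_vetor, None, True)[0]
```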

netrunner-exe avatar May 05 '22 08:05 netrunner-exe

Perhaps some layer of the model does not load, or does not load correctly. If we compare it with the screenshot of the white face, then in addition to the high contrast there is a difference in the eyes, teeth, etc.

netrunner-exe avatar May 05 '22 09:05 netrunner-exe

Send your ckpt to my email; I will check what has happened.

neuralchen avatar May 05 '22 12:05 neuralchen

I sent the link to the Gmail address in your profile.

netrunner-exe avatar May 05 '22 14:05 netrunner-exe

Hey guys,

Sorry for the late reply. I was busy with some things. Here is the checkpoint I got the results from. I have not debugged it yet; I will look into it and get back to you soon. Also, I did set --Gdeep True.

mittalgovind avatar May 05 '22 19:05 mittalgovind

I tried to test the checkpoints. The effect seen in the video was due to the fact that in the test code the crop_size option is tied to the beta 512 checkpoint, so --crop_size 512 tries to load the beta 512 model from epoch 550000. I solved it this way: #246. But it didn't solve the whole problem, and now instead of a multi-colored disco face like in the video we just get a boring single-color one :)

Pretrained network G has fewer layers; The following are not initialized:
['down0', 'first_layer', 'last_layer', 'up0']

With mask: frame_0000000-2 Without mask: frame_0000000-3
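If it helps anyone debug this, here is a small, generic PyTorch sketch (not code from the repo) for comparing a checkpoint's keys against the generator's state_dict, to see exactly which layers would be left uninitialized. netG and the checkpoint path are placeholders for your own generator instance and file.

```python
import torch

# Placeholders: point these at your own checkpoint file and generator instance.
ckpt_state = torch.load('checkpoints/simswap512_test/latest_net_G.pth', map_location='cpu')
model_state = netG.state_dict()  # netG = the Generator instantiated by the test code

missing_in_ckpt = sorted(set(model_state) - set(ckpt_state))
unused_in_ckpt = sorted(set(ckpt_state) - set(model_state))

# Keys under prefixes such as 'first_layer', 'down0', 'up0' or 'last_layer' appearing in
# the first list keep their random initialization after loading, which would explain the
# uniform / discolored faces at test time.
print('model layers missing from checkpoint:', missing_in_ckpt)
print('checkpoint layers unused by the model:', unused_in_ckpt)
```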

netrunner-exe avatar May 05 '22 21:05 netrunner-exe

Finally, someone has the same problem as me: https://github.com/neuralchen/SimSwap/issues/251

doctorcui avatar May 06 '22 01:05 doctorcui

For the 512 version, --netG should be loaded in the same way as in fs_network_fix.py instead of fix_network_512.py.
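A hedged sketch of what that change might look like where netG is built for testing. The module and class names below are taken from this comment and the public repo layout and may not match your local copy exactly; the checkpoint path and the deep=True argument (matching --Gdeep True from training) are placeholders.

```python
import torch
# Assumption: reuse the same Generator class that the training code used (the
# fs_network_fix file mentioned above) instead of the beta-512-specific network file.
from models.fs_networks_fix import Generator_Adain_Upsample

netG = Generator_Adain_Upsample(input_nc=3, output_nc=3, latent_size=512,
                                n_blocks=9, deep=True)  # deep=True because training used --Gdeep True
netG.load_state_dict(torch.load('checkpoints/simswap512_test/latest_net_G.pth',
                                map_location='cpu'))
netG.eval()
```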

doctorcui avatar May 06 '22 01:05 doctorcui

I tried doing this (loading the 512 --netG the same way as in fs_network_fix.py), and the error

Pretrained network G has fewer layers; The following are not initialized:
['down0', 'first_layer', 'last_layer', 'up0']

was gone, but the face-swap result was still as bad as shown above.
I also tried changing the way the model is initialized, because at training time the generated face-swap samples look normal, but the test result is still bad.
I don't understand it.

boreas-l avatar May 07 '22 06:05 boreas-l

Hey bud, I was wondering if you had trained this model any further. Thanks for posting the other one, greatly appreciated.

Fibonacci134 avatar Oct 13 '22 15:10 Fibonacci134