Fa-Ting Hong

Results 100 comments of Fa-Ting Hong

The trained model did not yield better results, so I haven't released it, and I am too busy to investigate right now. You can train the model...

This work uses a single set of keypoints to represent both head pose and expression, so it cannot disentangle the two. If you are interested, you can refer to this paper:...

Hello, stitching will definitely cause misalignment. Given your needs: 1) you could look into some face-swapping projects, which avoid re-stitching entirely; 2) you could combine face-vid2vid with DaGAN to disentangle expression and head pose.

Sorry, my mistake. If you are using the depth model depth_face_model_Voxceleb2_10w, you should change this line https://github.com/harlanhong/CVPR2022-DaGAN/blob/c594a2eb43d9f74fac4f783de26aa7ee3e5cbe35/demo.py#L164 to use 50 layers, i.e.: `depth_encoder = depth.ResnetEncoder(50, False)`
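The idea can be sketched as follows. This is a minimal, hedged illustration: the `ResnetEncoder` stub below stands in for `depth.ResnetEncoder` from the DaGAN repo (a monodepth2-style constructor whose first argument selects the ResNet depth), and the fallback of 18 layers for other checkpoints is an assumption, not confirmed by the comment.

```python
# Stand-in for depth.ResnetEncoder from the DaGAN repo (monodepth2-style):
# the first argument selects the ResNet backbone depth, the second whether
# to load ImageNet-pretrained weights.
class ResnetEncoder:
    def __init__(self, num_layers, pretrained):
        self.num_layers = num_layers
        self.pretrained = pretrained

def build_depth_encoder(checkpoint_name):
    # The depth_face_model_Voxceleb2_10w checkpoint was trained with a
    # 50-layer backbone, so the encoder depth must match it. Assumed here:
    # other checkpoints use the repo default of 18 layers.
    num_layers = 50 if '10w' in checkpoint_name else 18
    return ResnetEncoder(num_layers, False)

enc = build_depth_encoder('depth_face_model_Voxceleb2_10w')
print(enc.num_layers)  # → 50
```

The key point is simply that the encoder architecture must match the checkpoint being loaded, otherwise the state dict keys will not line up.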

My mistake. Let me make it clearer. There are two types of depth checkpoints on OneDrive: ![image](https://user-images.githubusercontent.com/19970321/179522269-775d6954-022a-47e0-a228-44f653ecfde2.png) If you use the first one, you should rewrite the code...

Please use a command like this: `CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-adv-256.yaml --driving_video source/example.mp4 --source_image source/example.png --checkpoint download/SPADE_DaGAN_vox_adv_256.pth.tar --kp_num 15 --generator SPADEDepthAwareGenerator --result_video results/example_out.mp4 --relative --adapt_scale --find_best_frame`

If you only need the weights of the SPADEGenerator, you can filter out the other, irrelevant parameters, like this: `ckp_decoder = OrderedDict((k, v) for k, v in checkpoint['generator'].items() if 'decoder' in k)`...
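A self-contained sketch of that filtering step is below. The checkpoint dictionary here is a hand-built stand-in for illustration; in practice it would come from `torch.load(path)`, and the real parameter names depend on the DaGAN model definition.

```python
from collections import OrderedDict

# Stand-in for a loaded checkpoint; in practice: checkpoint = torch.load(path)
checkpoint = {
    'generator': OrderedDict([
        ('decoder.conv1.weight', 'w1'),
        ('decoder.conv1.bias', 'b1'),
        ('encoder.conv1.weight', 'w2'),  # irrelevant to the decoder
    ])
}

# Keep only the decoder parameters, dropping everything else.
ckp_decoder = OrderedDict(
    (k, v) for k, v in checkpoint['generator'].items() if 'decoder' in k
)

print(sorted(ckp_decoder))  # → ['decoder.conv1.bias', 'decoder.conv1.weight']
```

The filtered dict can then be passed to the submodule's `load_state_dict` (after stripping the `decoder.` prefix from the keys if the submodule expects unprefixed names).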

Please download the torch-1.9.0+cu111-cp37-cp37m-linux_x86_64.whl and install it manually.

That's awesome, thanks! I will integrate DaGAN into Hugging Face Spaces in the near future.

Hi @AK391, Hugging Face is a great platform for presenting our project. However, I ran into a problem when deploying my code there. It seems quite...