One-Shot-Face-Swapping-on-Megapixels
How can I swap faces between two images?
I have two images and I want to swap the face from one image onto the other. Any suggestion would be great, thanks.
You need to get face segmentations as the first step; please refer to this (https://github.com/zllrunning/face-parsing.PyTorch) for details. Then you can swap the two faces by giving the paths to the inference code, or you can read the images with cv2.imread() directly.
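Once face-parsing.PyTorch has produced a parsing map, a quick sanity check that it is in the format the inference code expects might look like this (`is_valid_parsing_map` is a hypothetical helper, not part of either repo; the 0–17 range follows the discussion below):

```python
import numpy as np


def is_valid_parsing_map(mask, n_classes=18):
    """Check that a segmentation is an HxW map of semantic indices
    (0..17), not a 3-channel RGB visualisation."""
    mask = np.asarray(mask)
    return mask.ndim == 2 and mask.min() >= 0 and mask.max() < n_classes
```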
Hi, thank you so much for replying. I tried the way you said, but there is a bug in the inference.py file: the op module is not found. I tried to downgrade PyTorch, but version 1.3, which you used, is not available. Any help would be appreciated. Thanks.
The op module is from StyleGAN2. Please refer to stylegan2-pytorch for details. (And please take care of the CUDA version; it is crucial for compiling the op modules.)
Thanks, I've managed to solve it, but I'm facing this new error. What exactly are the source and target IDs? I'm trying to replace the whole face here, not a particular face attribute.
I assume you gave the image names to the args. However, this inference.py is coded for CelebA-HQ, where srcID and tgtID are the image indexes. So in your case, you can read the images and their segmentations with cv2.imread() directly into src_face, tgt_face, and tgt_mask.
OK, thanks. Will this face swapping work on images like profile pictures? In CelebA-HQ the images mainly contain the faces of celebrities, and I want to swap faces in an image like this:
For this kind of image, you need to detect and align the face according to the CelebA-HQ paper (or simply with RetinaFace). Then you can swap them if everything goes well.
Do I have to find the segmentation of the target image or of the source image here? I'm getting the following result:
Source Img:
Source Img Segmentation:
Target Img:
Output Img:
The target segmentation.
Can you suggest some repos that do this? That would be a great help. Thanks.
I used the target segmentation, but I get the same result again.
You may have made a mistake with the target segmentation: it should contain values from 0 to 17 (the semantic indices), not RGB values. Just like in https://github.com/zllrunning/face-parsing.PyTorch/blob/master/evaluate.py line 39, save vis_parsing_anno as the segmentation image.
Yeah, I needed to use the PNG segmentation. I'm getting this now; it doesn't seem realistic:
Well, this is the result. You can try the injection model to check which is better.
Okay, any suggestion on this?
A simple practice is to use RetinaFace to detect and align the face.
RetinaFace: https://github.com/deepinsight/insightface/tree/master/detection/retinaface
Align function: https://github.com/deepinsight/insightface/blob/ce3600a74209808017deaf73c036759b96a44ccb/recognition/arcface_mxnet/common/build_eval_pack.py, line 72, get_norm_crop()
Thanks a lot for your time. Can you tell me whether this would be possible using any virtual try-on model? I tried a few of them but haven't gotten any proper result. Can you suggest a model that may help, if any?
Sorry, I didn't use any try-on models. Face alignment is one step of face recognition, so I think there are try-on models for face recognition, but no try-on models just for alignment.
Thanks, I'll check them out.
Hello, may I ask how to get a mask that can be used for a human face? I would be grateful if you could reply.
You may have made a mistake with the target segmentation: it should contain values from 0 to 17 (the semantic indices), not RGB values. Just like in https://github.com/zllrunning/face-parsing.PyTorch/blob/master/evaluate.py line 39, save vis_parsing_anno as the segmentation image.
Have you tried this?