SLADD
Question about training
Has anybody reproduced the training process of the model?
When I train the model using the FF-FS dataset, the `__getitem__` function of FFdata.py first resizes the input image to 256*256, and then dlib.get_frontal_face_detector() can't find a face rectangle in the image...
Could you please tell me how to get the mask files?
You can get the mask files from the videos in the `masks` directory of each manipulation method.
Could anybody tell me how to generate the mask files? And what does the JSON file look like?
Read this: https://github.com/ondyari/FaceForensics/tree/master/dataset

```
python .\faceforensics_download_v4.py "D:/output_path/" -d all -c c40 -t masks --server EU2
```

`-c` selects the quality {raw, c23, c40}; `-t masks` selects the download type {videos, **masks**, models}.

It will look like this: FaceForensics++\manipulated_sequences\Deepfakes\masks\videos\xxx_yyy.mp4
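Once downloaded, you can dump per-frame mask images from those videos with something like this (a minimal OpenCV sketch; the paths are just examples, substitute your own video pair id for xxx_yyy):

```python
import os
import cv2

# example paths, not prescribed by the repo
src = "FaceForensics++/manipulated_sequences/Deepfakes/masks/videos/xxx_yyy.mp4"
dst = "FaceForensics++/manipulated_sequences/Deepfakes/masks/frames/xxx_yyy"
os.makedirs(dst, exist_ok=True)

cap = cv2.VideoCapture(src)
idx = 0
while True:
    ok, frame = cap.read()  # BGR frame; the mask videos are white-on-black
    if not ok:
        break
    cv2.imwrite(os.path.join(dst, f"{idx:04d}.png"), frame)
    idx += 1
cap.release()
```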
"The ground-truth forgery mask Mgt is created depending on categories of the input images"
if the input image is:
- adversarial forgery : Mgt is the resized deformed final mask (i.e. Mgt = Md);
- original forgery from the training dataset: As most datasets provide the ground truth forgery region, we can directly use them as Mgt
- original pristine from the training dataset: Mgt is an all-zero matrix (i.e. Mgt = 0), indicating there is no forgery region in the input
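In code, the rule amounts to something like this (a rough sketch; the function and argument names are made up for illustration, not the repo's actual API):

```python
import numpy as np

def ground_truth_mask(category, size=(256, 256),
                      deformed_mask=None, dataset_mask=None):
    if category == "adversarial_forgery":
        # M_gt = M_d: the resized deformed final mask from synthesis
        return np.asarray(deformed_mask, dtype=np.float32)
    if category == "original_forgery":
        # use the dataset-provided forgery region directly
        return np.asarray(dataset_mask, dtype=np.float32)
    # original pristine: M_gt = 0, no forgery region
    return np.zeros(size, dtype=np.float32)
```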
Thanks!!! But I have not seen the code for generating adversarial forgeries (maybe because of my carelessness). Is the generation of adversarial forgeries included in the released code (this repo)?
1. Has anyone met this warning during training?
   > Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters.
2. Also, training is very slow when I run the code. Is this normal?
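For reference, if the model really never has unused parameters, the flag can be turned off in the DDP constructor. A generic PyTorch sketch, not this repo's training script:

```python
import os
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# placeholder single-process setup; a real run gets these from the launcher
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

net = nn.Linear(10, 2)  # stand-in for the actual model
# find_unused_parameters=False skips the extra autograd-graph traversal
net = DDP(net, find_unused_parameters=False)
```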
> When I train the model using the FF-FS dataset, the `__getitem__` function of FFdata.py first resizes the input image to 256*256, and then dlib.get_frontal_face_detector() can't find a face rectangle in the image...
It seems I am facing the same issue. Have you solved it? Ignoring the resize step seems inappropriate. I would appreciate it if you could help me out.
@ProgrammingTD I think the code presumes all images are already face-cropped, so the 256*256 resize just fits the model's input size.
> I think the code presumes all images are already face-cropped, so the 256*256 resize just fits the model's input size.
In README.md, the data structure doesn't mention a face-cropping step... I tried resizing the input images to 512*512, and then dlib.get_frontal_face_detector() could work, but 512*512 doesn't match the pretrained XceptionNet. Have you used the face-crop methods on FF++ to reproduce this project? Thanks for your comment!
@phinoo @ProgrammingTD I am facing the same problem (Face detector failed). Have you managed to find a solution?
According to Section 4.1 of the paper, the authors say "we resize the aligned faces to 256 × 256 for all the samples in training and test datasets", so I think @forkbabu is right: we should crop the faces in the original videos and masks first; then the detector works as intended. But I have only tried this on a small dataset (about 50 videos).
@ProgrammingTD
How did you align the masks? I used dlib to extract and align the faces, but I could not do the same for the masks.
For each video in FF++, I got the face location parameters from the original video and used the same parameters to crop the corresponding fake and mask videos.
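Roughly like this (a minimal sketch with made-up helper names, not my exact script):

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()

def face_box(frame, pad=0.2):
    """Detect one face on the ORIGINAL frame and return a padded crop box."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rects = detector(gray, 1)
    if not rects:
        return None
    r = rects[0]
    w, h = r.right() - r.left(), r.bottom() - r.top()
    x0 = max(0, int(r.left() - pad * w))
    y0 = max(0, int(r.top() - pad * h))
    x1 = min(frame.shape[1], int(r.right() + pad * w))
    y1 = min(frame.shape[0], int(r.bottom() + pad * h))
    return x0, y0, x1, y1

def crop_triplet(orig_frame, fake_frame, mask_frame, size=256):
    """Crop fake and mask with the box found on the original, then resize."""
    box = face_box(orig_frame)
    if box is None:
        return None
    x0, y0, x1, y1 = box
    crops = [f[y0:y1, x0:x1] for f in (orig_frame, fake_frame, mask_frame)]
    return [cv2.resize(c, (size, size)) for c in crops]
```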
Can you share your code? @ProgrammingTD
> Could anybody tell me how to generate the mask files? And what does the JSON file look like?
Did you solve it? I have the same question as you. Looking forward to your reply.
> Can you share your code? @ProgrammingTD
In a closed issue, the author offers data processing code.
Which code is used to generate the masks?
> Could you please tell me how to get the mask files?
Did you solve it? I have the same question as you. Looking forward to your reply.
> @ProgrammingTD How did you align the masks? I used dlib to extract and align the faces, but I could not do the same for the masks.

> For each video in FF++, I got the face location parameters from the original video and used the same parameters to crop the corresponding fake and mask videos.
Can you provide the code? Thank you
In closed issue #10, the author offers data processing code.
How is the 'image extract processing' done? Could anyone share the code?
Have you solved it? Can you share the code?