pretrained model

GANG370 opened this issue 2 years ago • 17 comments

Very interested in your work! How can I train on my own dataset, or could you provide some pretrained models on FF++ or WildDeepfake? Thanks a lot!

GANG370 avatar Jul 18 '22 12:07 GANG370

I get No such file or directory: 'path/to/config.yaml' when training. Am I missing something?

GANG370 avatar Jul 18 '22 13:07 GANG370

path/to/config.yaml should be set to a specific file location, e.g., config/Recce.yml.
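
For example, a quick sanity check that the path resolves before launching training (a minimal sketch, assuming PyYAML is installed; point cfg_path at wherever your config actually lives):

```python
# Verify the config path exists and parses before training starts
# (illustrative only; adjust cfg_path to your setup).
import os
import yaml

cfg_path = "config/Recce.yml"
assert os.path.isfile(cfg_path), f"Config not found: {cfg_path}"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)
print(sorted(cfg))  # top-level keys of the parsed config
```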

XJay18 avatar Jul 22 '22 11:07 XJay18

I retrained the model on FF++ (c23) and you can access the model parameters via this link. (Password: gn4Tzil#)

XJay18 avatar Jul 22 '22 11:07 XJay18

Thanks for your reply. As I understand the paper, are only real images needed during training? Is my understanding wrong? And if I want to train on my own dataset, what should I do?

GANG370 avatar Jul 26 '22 00:07 GANG370

Hi, the inputs to the network contain both real and fake images. The main idea is to compute the reconstruction loss for real images only, aiming to learn the common representations of real samples. The network requires fake samples to learn discriminative features.
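
As a minimal sketch of that idea (assuming labels use 0 = real and 1 = fake; the function and tensor names are illustrative, not the repo's exact code):

```python
import torch
import torch.nn.functional as F

def training_loss(images, recon, logits, labels):
    """Classification loss on all samples + reconstruction loss on real only.

    images, recon: (N, C, H, W); logits: (N,); labels: (N,) with 0 = real.
    """
    cls_loss = F.binary_cross_entropy_with_logits(logits, labels.float())
    real_mask = labels == 0
    if real_mask.any():
        # Reconstruction is supervised only by real images.
        recon_loss = F.mse_loss(recon[real_mask], images[real_mask])
    else:
        recon_loss = images.new_zeros(())  # no real samples in this batch
    return cls_loss + recon_loss

# Example with dummy tensors:
x = torch.rand(4, 3, 64, 64)
loss = training_loss(x, torch.rand_like(x), torch.randn(4), torch.tensor([0, 0, 1, 1]))
```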

For training with your own dataset, you should define a custom dataloader that returns RGB images and binary labels. You may refer to the provided dataloaders under the dataset/ directory and modify the code.
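
A minimal sketch of such a dataset class (the class and field names are illustrative, not the repo's actual interface):

```python
from PIL import Image
import torch
from torch.utils.data import Dataset

class MyDeepfakeDataset(Dataset):
    """Yields (RGB image tensor, binary label), with 0 = real and 1 = fake."""

    def __init__(self, samples, transform=None):
        self.samples = samples  # list of (image_path, label) pairs
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = Image.open(path).convert("RGB")  # force 3-channel RGB
        if self.transform is not None:
            image = self.transform(image)
        return image, torch.tensor(label, dtype=torch.long)
```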

XJay18 avatar Jul 27 '22 15:07 XJay18

Hi, the shared model parameter link is invalid. Could you share another one? Thank you very much! :)

Blosslzy avatar Aug 15 '22 11:08 Blosslzy

Hi, the previous sharing link expired. You can access the re-trained FF++ weights via this link. (Password: 7v+MRf8L)

XJay18 avatar Aug 21 '22 15:08 XJay18

Thank you for your reply! I tested the re-trained weights you provided with my own test.py, but only got about 86% AUC on FF++ (c40). I randomly sample one frame from each video of the test set and then compute the frame-level result. I wonder whether this gap is caused by the difference between frame-level and video-level evaluation. Can you give me some advice?
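
For reference, a simplified sketch of my evaluation protocol (score_fn stands in for my model's forward pass; names are illustrative):

```python
import random
from sklearn.metrics import roc_auc_score

def frame_level_auc(videos, score_fn):
    """videos: list of (frame_paths, label); score_fn: path -> fake probability.

    Scores one random frame per video and computes frame-level AUC.
    """
    labels, scores = [], []
    for frame_paths, label in videos:
        frame = random.choice(frame_paths)  # a single random frame per video
        labels.append(label)
        scores.append(score_fn(frame))
    return roc_auc_score(labels, scores)
```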

Blosslzy avatar Aug 27 '22 07:08 Blosslzy

Hi, I think sampling only one frame from each video for testing may result in large variance. On average, we use about 50 frames per sequence for testing, and performance is reported at the frame level. In addition, please ensure the conservative crop (enlarged by 1.3 around the central face region) is used for cropping facial images. If you still have trouble processing the data, please send me an email or leave your email here, and I will share our preprocessed FF++ dataset with you.
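
A rough sketch of both points, i.e., evenly sampling about 50 frames per sequence and the 1.3x conservative crop (the face detector and variable names here are placeholders, not our exact preprocessing code):

```python
import numpy as np
from PIL import Image

def sample_frame_indices(num_frames, num_samples=50):
    # ~50 evenly spaced frames per sequence, rather than one random frame.
    return np.linspace(0, num_frames - 1, num=min(num_samples, num_frames), dtype=int)

def conservative_crop(image, box, scale=1.3):
    """Crop the face region enlarged by `scale` around the box center.

    image: PIL image; box: (x1, y1, x2, y2) from any face detector.
    """
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * scale, (y2 - y1) * scale
    left = max(int(cx - w / 2), 0)
    top = max(int(cy - h / 2), 0)
    right = min(int(cx + w / 2), image.width)
    bottom = min(int(cy + h / 2), image.height)
    return image.crop((left, top, right, bottom))

# usage: crop = conservative_crop(Image.open(frame_path), detector_box)
```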

XJay18 avatar Sep 03 '22 07:09 XJay18

Hi, I have the same problem when testing with the provided checkpoints.

When I used your provided pretrained FF++ weights (c40 version) to test the FF++ test data, I only got an AUC of 86.75 (see attached image). In fact, this result is similar to the performance of my own retrained model.

I didn't change the code except for the dataloader, where I used my own pickle file. I also used the face-crop files you provided in #1 to preprocess the video frames. So I don't know what causes this problem; maybe it comes from the FF++ dataset preprocessing.

Hence, would you share your data-processing script or your preprocessed FF++ dataset for reference? My email is [email protected]. Thank you!

[attached screenshot: AUC test results]

MZMMSEC avatar Sep 15 '22 14:09 MZMMSEC

Hi, this link seems to be broken. Could you update it? Thanks!

zhangchaosd avatar Sep 16 '22 02:09 zhangchaosd

Thank you very much!! My email is [email protected].

Blosslzy avatar Sep 16 '22 07:09 Blosslzy

Hello, I have tried using the provided code to test the generalization performance on FF++ (c40), but the results show a sharp performance drop (generally around 0.55). Considering that the only difference is the dataset, I hope you can provide a copy of the FF++ dataset. Thank you very much! My email is [email protected].

BeauDing avatar Sep 16 '22 07:09 BeauDing

Hi, I'm really interested in your work. Could you share your dataset with me? My email is [email protected].

SongHaixu avatar Sep 19 '22 14:09 SongHaixu

Hi, thanks for your work. Could you share your dataset with me? My email is [email protected].

Simplesss avatar Sep 26 '22 12:09 Simplesss

Hi, I'm very interested in your work, but I'm still having trouble processing the data and cannot reproduce the results. Would you share your data-processing script or your preprocessed FF++ dataset with me? My email is [email protected]. Thank you!

Ruhangs avatar Dec 06 '22 06:12 Ruhangs

Hi, thanks for your work. It's also hard for me to reproduce the results. Could you share your dataset with me? My email is [email protected].

rainfalj avatar Dec 13 '22 03:12 rainfalj

Hello,

Thank you for your excellent work. Could you also share your preprocessed data with me? My email is [email protected].

SaharHusseini avatar Dec 30 '22 14:12 SaharHusseini

Hi, I'm very interested in your work, but I'm still having trouble processing the data and cannot reproduce the results. Would you share your data-processing script or your preprocessed FF++ dataset with me? My email is [email protected]. Thank you!

WYing333 avatar Jan 31 '23 07:01 WYing333

Thank you for your excellent work; I would appreciate it if you could provide the preprocessed dataset. My email is [email protected].

zhongjian1999 avatar Feb 15 '23 05:02 zhongjian1999

Hello, thank you for the work. Could you also send me the preprocessed dataset and the pretrained model? My email address is [email protected]. Thank you!

xuyingzhongguo avatar Feb 24 '23 14:02 xuyingzhongguo

Thanks for your great work! It's also hard for me to reproduce the results. Could you share your dataset with me? My email is [email protected].

xarryon avatar Mar 19 '23 11:03 xarryon

Hello. Thank you for your fantastic work. Could you also share your preprocessed data with me? My email is [email protected]. Thanks a lot!

whisper0411 avatar Jul 05 '23 07:07 whisper0411

Hi, thank you for your code. Could you share your preprocessed FF++ data with me? My email is [email protected].

QuanLNTU avatar Aug 24 '23 09:08 QuanLNTU

Hi, thank you for your code. Can you share the dataset with me? I really appreciate it. My email address is [email protected].

VoyageWang avatar Dec 01 '23 06:12 VoyageWang