MaskRelation

It performs very well on the training set, but very poorly on the validation set

YXLH7 opened this issue 2 years ago • 7 comments

Has anyone successfully run this paper's code? I ran the PyTorch version, and it performed very well on the training set but very poorly on the validation set. Can anyone help me?

YXLH7 avatar Mar 31 '23 08:03 YXLH7

Hello, could you share the data preprocessing code and the structure of the data storage? Thank you very much.

wym-dlone avatar Apr 22 '23 11:04 wym-dlone

Has anyone successfully run this paper's code? I ran the PyTorch version, and it performed very well on the training set but very poorly on the validation set. Can anyone help me?

Could you describe the problem in detail? The model is possibly overfitted. Is the model trained on the whole training dataset?

zimyang avatar Apr 24 '23 08:04 zimyang

Hello, could you share the data preprocessing code and the structure of the data storage? Thank you very much.

The data preprocessing code is detect_from_video.py. The structure of the data storage:

data.json
--video name 1
----list (data type: List. A list of filenames of frames)
----label (data type: int. True is 0. False is 1)
--video name 2
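For what it's worth, here is a minimal sketch of a dataset class that consumes a data.json with that layout. The class name, the image_root argument, and the transform are placeholders of mine, not names from the repository:

```python
import json
import os

from PIL import Image
from torch.utils.data import Dataset


class FrameListDataset(Dataset):
    """Reads a data.json that maps each video name to a frame list and a label.

    Assumed layout (may not match the repository exactly):
        { "video name 1": {"list": ["video1/0001.jpg", ...], "label": 0}, ... }
    """

    def __init__(self, json_path, image_root, transform=None):
        with open(json_path, "r") as f:
            self.videos = json.load(f)
        self.video_names = list(self.videos.keys())
        self.image_root = image_root
        self.transform = transform

    def __len__(self):
        return len(self.video_names)

    def __getitem__(self, idx):
        name = self.video_names[idx]
        entry = self.videos[name]
        # Label convention from this thread: real (True) is 0, fake (False) is 1.
        label = entry["label"]
        frames = []
        for rel_path in entry["list"]:
            img = Image.open(os.path.join(self.image_root, rel_path)).convert("RGB")
            if self.transform is not None:
                img = self.transform(img)
            frames.append(img)
        return frames, label
```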

zimyang avatar Apr 24 '23 08:04 zimyang

@zimyang Thanks for the response, but detect_from_video.py doesn't create data.json... Can you please provide the modified preprocessing code tailored to your repo? Thanks

devloverumar avatar Feb 01 '24 18:02 devloverumar

@YXLH7 Have you already run this code? Could you give me a brief description of the training process?

liruotong123 avatar Mar 06 '24 07:03 liruotong123

@devloverumar Hello, do you have the relevant JSON file?

AdaxKK avatar Mar 18 '24 03:03 AdaxKK

How to create data.json:

1. Extract facial images. Refer to extract_images.py and detect_from_video.py.
(1) Read the videos of the dataset.
(2) Extract the facial regions from the video frames with a face detector such as Dlib.
(3) Save the facial regions as image files.

2. Create data.json.
(1) Read the paths of the images.
(2) Get the ground truth of the videos from the database.

The format of data.json:

[
  { key: "ffpp001", value: {'list': ["ffpp001/ffpp001_001.jpg", "ffpp001/ffpp001_002.jpg", "ffpp001/ffpp001_003.jpg"], 'label': 0} },
  { key: "ffpp002", value: {'list': ["ffpp002/ffpp002_001.jpg", "ffpp002/ffpp002_002.jpg", "ffpp002/ffpp002_003.jpg"], 'label': 1} },
  ...
]
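A rough sketch of both steps, assuming Dlib and OpenCV; the directory layout, frame sampling interval, and helper names (extract_faces, build_data_json) are my own assumptions and may differ from extract_images.py / detect_from_video.py. It also writes a dict keyed by video name, matching the structure described earlier; adapt it if your loader expects the list-of-{key, value} form shown above.

```python
import json
import os

import cv2
import dlib

detector = dlib.get_frontal_face_detector()  # Dlib face detector, as suggested above


def extract_faces(video_path, out_dir, every_n_frames=10):
    """Step 1 (sketch): save cropped face regions from a video as image files."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            faces = detector(rgb, 1)
            if faces:
                f = faces[0]  # keep the first detection for simplicity
                crop = frame[max(f.top(), 0):f.bottom(), max(f.left(), 0):f.right()]
                name = f"{os.path.basename(out_dir)}_{idx:04d}.jpg"
                cv2.imwrite(os.path.join(out_dir, name), crop)
                saved.append(os.path.join(os.path.basename(out_dir), name))
        idx += 1
    cap.release()
    return saved


def build_data_json(video_label_pairs, image_root, json_path):
    """Step 2 (sketch): collect frame paths and labels into data.json."""
    entries = {}
    for video_path, label in video_label_pairs:  # label: 0 = real, 1 = fake
        video_name = os.path.splitext(os.path.basename(video_path))[0]
        frame_list = extract_faces(video_path, os.path.join(image_root, video_name))
        entries[video_name] = {"list": frame_list, "label": label}
    with open(json_path, "w") as f:
        json.dump(entries, f, indent=2)
```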

By the way, my email has expired because I graduated. Feel free to contact me via yangziming20[at]mails[dot]ucas.ac.cn. Thanks for your support!

zimyang avatar Mar 27 '24 13:03 zimyang