EAGRNet

How to set up the dataset file structure?

Open rrryan2016 opened this issue 4 years ago • 9 comments

Hey, thanks for your great work and kind sharing.

I am just a beginner in face parsing, and I intend to get started with your work. :P

I'd like to first try it on Helen, and I downloaded the dataset from the link (https://www.sifeiliu.net/face-parsing) you provided.

The original folder structure is as below:

├── exemplars.txt
├── images
├── labels
├── points
├── README.txt
├── testing.txt
└── tuning.txt

But how can I get the same structure as described in the README?

dataset/
├── images/
├── labels/
├── edges/
├── train_list.txt
└── test_list.txt

Does train_list.txt include all image paths in images/?

Does test_list.txt include all image-to-label matches, given that each image may have multiple label PNG files?

Looking forward to your reply, or to any tutorial link for Helen.

rrryan2016 avatar Dec 22 '20 15:12 rrryan2016

Hello, sorry to disturb you again.

Since I could not handle Helen as described above, I tried the code on LaPa instead. But I ran into a problem in compute_mean_ioU(): I found no os.path.join(datadir, 'label_names.txt') or os.path.join(datadir, 'project', im_name + '.npy') in Helen, LaPa, or CelebAMask-HQ.

Could you please tell me what these files are, and how to get them if possible?

Thanks in advance.

rrryan2016 avatar Jan 03 '21 02:01 rrryan2016

Actually, we use an existing projection matrix for alignment, but due to confidentiality we cannot provide the original data. You can refer to OpenCV or other related libraries to perform the alignment if necessary.

tegusi avatar Jan 04 '21 06:01 tegusi
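For readers who want a concrete starting point: below is a minimal pure-NumPy sketch of applying a 2×3 affine (projection) matrix via inverse mapping, the operation tegusi describes. The matrix values in the example are hypothetical, since the authors' actual projection matrices are not released; in practice you would likely use cv2.warpAffine from OpenCV instead.

```python
import numpy as np

def warp_affine_nn(img, M, out_shape):
    """Nearest-neighbour affine warp via inverse mapping.
    M is a 2x3 matrix mapping source -> destination pixel coords."""
    H, W = out_shape
    # Extend M to 3x3 and invert, so each output pixel pulls from the source.
    A = np.vstack([M, [0.0, 0.0, 1.0]])
    Ainv = np.linalg.inv(A)
    out = np.zeros((H, W) + img.shape[2:], dtype=img.dtype)
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    sx, sy, _ = Ainv @ coords
    sx = np.round(sx).astype(int)
    sy = np.round(sy).astype(int)
    # Only copy pixels whose source location falls inside the image.
    valid = (0 <= sx) & (sx < img.shape[1]) & (0 <= sy) & (sy < img.shape[0])
    out.reshape(H * W, *img.shape[2:])[valid] = img[sy[valid], sx[valid]]
    return out

# Sanity check: the identity matrix leaves the image unchanged.
img = np.arange(9, dtype=np.uint8).reshape(3, 3)
M_identity = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]])
assert np.array_equal(warp_affine_nn(img, M_identity, (3, 3)), img)
```

OpenCV's cv2.warpAffine(img, M, (W, H)) performs the same transform with interpolation options, which is what you would normally use on real face images.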

Hi, have you successfully run this project? Could you give some guidance on the dataset? Thank you.

RachelWang122 avatar Mar 21 '21 03:03 RachelWang122

Sorry that I didn't provide the preprocessing code in advance. The parsing result is a segmentation map; you only need to take the facial pixels of each component and aggregate them into a single parsing map.

tegusi avatar Mar 21 '21 04:03 tegusi
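tegusi's aggregation step can be sketched as follows: Helen ships one grayscale mask PNG per facial component, and a per-pixel argmax over the stacked masks yields a single index map. This is an editor's illustration with toy arrays, not the authors' released code; the function name and the 2×2 masks are made up for demonstration.

```python
import numpy as np

def aggregate_parsing_map(component_masks):
    """Combine per-component grayscale masks into one index map.

    Each mask gives a per-pixel score for one facial component; the
    aggregated parsing map assigns each pixel the index of the mask
    with the highest score there.
    """
    stacked = np.stack(component_masks, axis=0)  # shape (C, H, W)
    return np.argmax(stacked, axis=0).astype(np.uint8)

# Toy 2x2 masks standing in for Helen's background/skin/lips PNGs.
bg   = np.array([[255, 0], [0, 0]], dtype=np.uint8)
skin = np.array([[0, 255], [255, 0]], dtype=np.uint8)
lips = np.array([[0, 0], [0, 255]], dtype=np.uint8)
parsing = aggregate_parsing_map([bg, skin, lips])
# parsing is [[0, 1], [1, 2]]: label index per pixel
```

On the real dataset you would load each component PNG (e.g. with PIL) in a fixed label order before stacking, so that the argmax indices match the class IDs the training code expects.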

> Sorry that I didn't provide the preprocessing code in advance. The parsing result is a segmentation map; you only need to take the facial pixels of each component and aggregate them into a single parsing map.

Thanks for your reply. Is this the content of the labels folder? Can we get it from the data download link you provided?

RachelWang122 avatar Mar 21 '21 08:03 RachelWang122

> Sorry that I didn't provide the preprocessing code in advance. The parsing result is a segmentation map; you only need to take the facial pixels of each component and aggregate them into a single parsing map.

> Thanks for your reply. Is this the content of the labels folder? Can we get it from the data download link you provided?

You can prepare the label maps as described, based on the original Helen dataset.

tegusi avatar Mar 22 '21 10:03 tegusi
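To connect this back to the original question about train_list.txt and test_list.txt: a small script can derive them from Helen's own split files. The sketch below is an editor's guess, not official preprocessing code; it assumes each line of exemplars.txt / testing.txt looks like "2330 , 100032540_1" (an index, a comma, an image name), and the "image_path label_path" output format is hypothetical, so check both against your local copy and the repo's dataset loader before relying on it.

```python
from pathlib import Path

def helen_names(list_file):
    """Extract image names from a Helen split file, assuming
    lines of the form '<index> , <image_name>'."""
    names = []
    for line in Path(list_file).read_text().splitlines():
        if "," in line:
            names.append(line.split(",", 1)[1].strip())
    return names

def write_pair_list(names, out_file):
    """Write one 'image_path label_path' pair per line.
    The exact format and extensions expected by the training code
    are assumptions here -- verify against the repo's data loader."""
    with open(out_file, "w") as f:
        for n in names:
            f.write(f"images/{n}.jpg labels/{n}.png\n")

# Example: build a hypothetical train list from exemplars.txt.
# write_pair_list(helen_names("exemplars.txt"), "train_list.txt")
```

Under these assumptions, exemplars.txt would feed train_list.txt and testing.txt would feed test_list.txt, with tuning.txt available as a validation split.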

> Hello, sorry to disturb you again.
>
> Since I could not handle Helen as described above, I tried the code on LaPa instead. But I ran into a problem in compute_mean_ioU(): I found no os.path.join(datadir, 'label_names.txt') or os.path.join(datadir, 'project', im_name + '.npy') in Helen, LaPa, or CelebAMask-HQ.
>
> Could you please tell me what these files are, and how to get them if possible?
>
> Thanks in advance.

Did you manage to run it successfully? Nice going, man.

dreamlychina avatar Nov 29 '21 07:11 dreamlychina

Hi @rrryan2016, did you solve this problem? Could you please provide some advice?

Thanks a lot!

wxqlab avatar Nov 26 '22 06:11 wxqlab