EAGRNet
How to get started with the dataset file structure?
Hey, thanks for your great work and kind sharing.
I am just a beginner in face parsing, and I intend to get started with your work. :P
I'd like to first try it on Helen, which I downloaded from the link (https://www.sifeiliu.net/face-parsing) you provided.
The original folder structure is as below,
├── exemplars.txt
├── images
├── labels
├── points
├── README.txt
├── testing.txt
└── tuning.txt
But how can I get the structure described in the README?
dataset/
├── images/
├── labels/
├── edges/
├── train_list.txt
└── test_list.txt
Does train_list.txt include the paths of all images in images/?
Does test_list.txt include image-label pairs, given that each image may have multiple label PNG files?
Looking forward to your reply or any tutorial link for Helen.
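For what it's worth, the two list files can probably be generated from Helen's own split files (exemplars.txt for training, testing.txt for testing). A minimal sketch follows; it assumes each output line pairs an image path with a label path separated by a space, and that the split files list entries in the original release's `index , name` form — both assumptions, not confirmed by the authors:

```python
import os

def read_helen_split(split_file):
    """Read image names from a Helen split file.

    Assumes lines look like '0 , 100032540_1' (index, comma, name),
    as in the original Helen release; plain one-name-per-line files
    also work since split(',')[-1] is a no-op for them.
    """
    names = []
    with open(split_file) as f:
        for line in f:
            line = line.strip()
            if line:
                names.append(line.split(",")[-1].strip())
    return names

def make_list_file(names, out_path, image_dir="images", label_dir="labels"):
    """Write one 'image_path label_path' pair per line (assumed format)."""
    with open(out_path, "w") as f:
        for name in names:
            img = os.path.join(image_dir, name + ".jpg")
            lbl = os.path.join(label_dir, name + ".png")
            f.write(f"{img} {lbl}\n")
```

Usage would be something like `make_list_file(read_helen_split("exemplars.txt"), "train_list.txt")`, with testing.txt feeding test_list.txt the same way.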
Hello, sorry for the disturbance again.
Since I had no way to deal with Helen as above, I further tried the code on LaPa. But I came across a problem in compute_mean_ioU(): I could not find os.path.join(datadir, 'label_names.txt') or os.path.join(datadir, 'project', im_name + '.npy') in Helen, LaPa, or CelebAMask-HQ.
Could you please tell me what are these files, and how to get them if possible?
Thanks in advance.
Actually we use an existing projection matrix to perform alignment, but due to confidentiality we cannot provide the original data. You can refer to OpenCV or other related libraries to perform alignment if necessary.
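Since the original projection matrices are not released, one common substitute is to estimate a similarity transform from detected facial landmarks to a fixed reference layout and warp with it (e.g. via cv2.warpAffine). A minimal NumPy-only sketch of the least-squares estimation step, offered as an illustration rather than the authors' actual alignment:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src landmarks onto dst landmarks; both are (N, 2) arrays.

    Solves x' = a*x - b*y + tx, y' = b*x + a*y + ty for [a, b, tx, ty]
    and returns the 2x3 matrix usable with cv2.warpAffine.
    """
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = src[:, 0]   # a * x
    A[0::2, 1] = -src[:, 1]  # -b * y
    A[0::2, 2] = 1.0         # tx
    A[1::2, 0] = src[:, 1]   # a * y
    A[1::2, 1] = src[:, 0]   # b * x
    A[1::2, 3] = 1.0         # ty
    a, b, tx, ty = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)[0]
    return np.array([[a, -b, tx], [b, a, ty]])
```

With OpenCV installed, `cv2.warpAffine(image, M, (width, height))` applies the resulting matrix; `cv2.estimateAffinePartial2D` is a ready-made alternative to the hand-rolled solver above.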
Hi, have you successfully run this project? Can you give some guidance on the dataset? Thank you.
Sorry that I didn't provide the preprocessing code in advance. The parsing result is a segmentation map; you only need to compute the facial pixels of each component and aggregate them into a single parsing map.
Thanks for your reply. Is this the content of the labels folder? Can we get it from the data download link you provided?
You can prepare the label maps as described, based on the original Helen dataset.
Did you manage to run it successfully?? Dude.
Hi, @rrryan2016, have you solved this problem? Could you please provide some advice?
Thanks a lot!