dface
For gen_landmark_48.py, where does trainImageList.txt come from?
I couldn't figure out how to get gen_landmark_48.py to work with CelebA, so I used the trainImageList.txt from here: https://github.com/AITTSMD/MTCNN-Tensorflow/blob/master/prepare_data/trainImageList.txt
I also used the dataset that goes with it. When I then call gen_landmark_48.py, it only generates 18 landmark images, which seems far fewer than it should.
Where is the trainImageList.txt that you used to generate your models?
When using the trainImageList.txt from the MTCNN-Tensorflow repo, I had to uncomment the following line to get gen_landmark_48.py to work: https://github.com/kuaikuaikim/DFace/blob/master/dface/prepare_data/gen_landmark_48.py#L52
After that, I was able to train ONet.
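For reference, here is a minimal sketch of how a line of that trainImageList.txt could be parsed. It assumes the MTCNN-Tensorflow format: image path, a bounding box stored as x1 x2 y1 y2, then five (x, y) landmark pairs. The helper name and the sample coordinates are illustrative, not taken from the actual file:

```python
# Hedged sketch: parse one line of trainImageList.txt, assuming the
# MTCNN-Tensorflow layout (path, bbox as x1 x2 y1 y2, 10 landmark values).
def parse_landmark_line(line):
    parts = line.strip().split()
    path = parts[0]
    # bbox order assumed to be (x1, x2, y1, y2), not (x1, y1, x2, y2)
    bbox = tuple(int(float(v)) for v in parts[1:5])
    coords = [float(v) for v in parts[5:15]]
    # pair up the flat coordinate list into five (x, y) landmarks
    landmarks = list(zip(coords[0::2], coords[1::2]))
    return path, bbox, landmarks

# Illustrative line; the numbers are made up for the example.
path, bbox, lms = parse_landmark_line(
    "lfw_5590/Aaron_Eckhart_0001.jpg 84 161 92 169 "
    "106.2 107.7 146.7 112.2 125.2 142.7 105.2 157.7 139.7 161.7"
)
```

If gen_landmark_48.py only emits 18 images, checking how many lines of the list actually parse (and whose image paths exist on disk) is a quick sanity check.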
How did you get past the errors in Issue #10?
I applied the following diff based on the comments from that thread:
diff --git a/dface/core/detect.py b/dface/core/detect.py
index a780158..889a1fe 100644
--- a/dface/core/detect.py
+++ b/dface/core/detect.py
@@ -387,6 +387,8 @@ face candidates:%d, current batch_size:%d"%(num_boxes, batch_size)
# cropped_ims_tensors = np.zeros((num_boxes, 3, 24, 24), dtype=np.float32)
cropped_ims_tensors = []
for i in range(num_boxes):
+ if tmph[i] <= 0 or tmpw[i] <= 0:
+ continue
tmp = np.zeros((tmph[i], tmpw[i], 3), dtype=np.uint8)
tmp[dy[i]:edy[i]+1, dx[i]:edx[i]+1, :] = im[y[i]:ey[i]+1, x[i]:ex[i]+1, :]
crop_im = cv2.resize(tmp, (24, 24))
Not sure if this is correct though.
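The guard in the diff just skips candidate boxes whose padded crop would have a non-positive height or width (which makes `np.zeros` and `cv2.resize` blow up). A standalone sketch of the same check as an index filter, with a hypothetical helper name and assuming `tmph`/`tmpw` are the per-box crop sizes:

```python
import numpy as np

def valid_crop_indices(tmph, tmpw):
    """Return indices of candidate boxes whose crop has a positive
    height and width; mirrors the `continue` guard in the diff."""
    tmph = np.asarray(tmph)
    tmpw = np.asarray(tmpw)
    return np.flatnonzero((tmph > 0) & (tmpw > 0))

# Boxes 1 and 3 have a non-positive dimension and would be skipped.
keep = valid_crop_indices([24, 0, 30, -2], [24, 24, 30, 16])
```

Note that skipping boxes this way leaves fewer crops than `num_boxes`, so anything downstream that assumes a one-to-one mapping between boxes and crops would also need adjusting.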
Sorry, I also used the landmark data from MTCNN-Tensorflow. However, I am not sure where the data for generating the ONet training data comes from: is it from the WIDER training data, or from the data MTCNN-Tensorflow used?