lymanblue
Hi~ Our expected input format is RGB, but even when we set is_bgr = False, the converted model still requires BGRA format when we import it into Xcode.
Hi~ The BGRA format is shown in the comment of the Core ML model. The comment reads: "/// face_img as color (kCVPixelFormatType_32BGRA) image buffer, 112 pixels wide by 112 pixels...
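For reference, a minimal conversion sketch of how the RGB ordering is usually requested. This assumes the onnx-coreml converter is the tool in use (the project's actual converter may differ), and the ONNX file name is hypothetical; only the 'face_img' input name is taken from the model comment above.

```python
# Hedged sketch: convert an exported ONNX model to Core ML with an RGB image input.
# Assumptions: onnx-coreml is the converter; 'mobilefacenet.onnx' is a placeholder path.
from onnx_coreml import convert

mlmodel = convert(
    model='mobilefacenet.onnx',            # hypothetical exported ONNX file
    image_input_names=['face_img'],        # treat 'face_img' as an image buffer input
    preprocessing_args={'is_bgr': False},  # request RGB channel ordering
)
mlmodel.save('mobilefacenet.mlmodel')
```

Note that even with is_bgr = False, Xcode's generated interface may still describe the input as a kCVPixelFormatType_32BGRA pixel buffer; the flag controls channel interpretation rather than the CVPixelBuffer container format.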
We tried, but it only cleaned 24 files and the improvement is small. The number of overlapping files is so small that we want to know whether this overlap count is correct or...
Our result is based on refining the features of MegaFace and FaceScrub at the same time. We trained MobileFaceNet on the CASIA dataset (i.e., the small protocol). The MF Acc. is around...
The result is already based on the suggested noise list. According to your comment, I find that our original interpretation of MF Acc. may be wrong. Is MF Acc. the printed...
Thanks. And there is no file like ms1m.py in the dataset directory. Would you provide the file in the future? Or is the data loader for ms1m the same as the other...
Therefore, we have to preprocess (e.g., face alignment) the cleaned MS1M from DeepGlint ourselves with the aid of the msra_lmk file. On the other hand, if we use the MS1M-V2...
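A minimal sketch of that preprocessing step, assuming msra_lmk stores one image path followed by five (x, y) landmark points per line (the file format is an assumption, not confirmed here); the 112x112 five-point reference template is the standard ArcFace alignment target, not code taken from this repo.

```python
# Hedged alignment sketch. Assumptions: msra_lmk lines look like
# "<image_path> x1 y1 ... x5 y5", and crops are warped to the usual 112x112 template.
import cv2
import numpy as np
from skimage import transform as trans

# Standard 5-point reference template for 112x112 ArcFace-style crops.
REF_LANDMARKS = np.array([
    [38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
    [41.5493, 92.3655], [70.7299, 92.2041]], dtype=np.float32)

def align_face(img, landmarks):
    """Warp img so its 5 landmarks match the 112x112 reference template."""
    tform = trans.SimilarityTransform()
    tform.estimate(np.asarray(landmarks, dtype=np.float32), REF_LANDMARKS)
    return cv2.warpAffine(img, tform.params[0:2, :], (112, 112))

with open('msra_lmk') as f:  # assumed landmark file name and format
    for line in f:
        parts = line.split()
        path = parts[0]
        pts = np.array(list(map(float, parts[1:11])), dtype=np.float32).reshape(5, 2)
        aligned = align_face(cv2.imread(path), pts)
        # write the aligned 112x112 crop out to the training folder here
```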
Is the MS1M-IBUG from InsightFace the cropped and aligned result of the cleaned-MS1M?
For training the model directly from MS1M-V2 from InsightFace (e.g., with LFW for validation), do you mean the following steps?
- set --train_root to the faces_emore/train.rec in train.py
- set --train_file_list...
Thank you~! Could we use the prepare_data.py from https://github.com/TreB1eN/InsightFace_Pytorch to convert the MXNet format to the specified format? The data format looks similar (identical would be better).
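For context, a minimal sketch of what that conversion does, assuming the faces_emore train.idx/train.rec pair and an output layout of one folder per identity; the paths and folder layout are assumptions, and the real prepare_data.py may differ in details.

```python
# Hedged sketch: unpack the MXNet RecordIO pair (train.idx / train.rec) from
# faces_emore into per-identity image folders. Paths below are placeholders.
import os
import cv2
import mxnet as mx

rec_dir = 'faces_emore'        # assumed location of train.idx / train.rec
out_dir = 'faces_emore/imgs'   # assumed output layout: imgs/<label>/<idx>.jpg

imgrec = mx.recordio.MXIndexedRecordIO(
    os.path.join(rec_dir, 'train.idx'),
    os.path.join(rec_dir, 'train.rec'), 'r')

# Record 0 is a meta record; header.label[0] gives the last image index.
s = imgrec.read_idx(0)
header, _ = mx.recordio.unpack(s)
max_idx = int(header.label[0])

for idx in range(1, max_idx):
    s = imgrec.read_idx(idx)
    header, img = mx.recordio.unpack_img(s)  # img is a decoded BGR array
    label = header.label
    label = int(label[0]) if hasattr(label, '__len__') else int(label)
    label_dir = os.path.join(out_dir, str(label))
    os.makedirs(label_dir, exist_ok=True)
    cv2.imwrite(os.path.join(label_dir, f'{idx}.jpg'), img)
```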