
A PyTorch Toolbox for Face Recognition

Results: 120 FaceX-Zoo issues

Thank you for this project. I get some errors when training with the Swin Transformer. I have all the requirements installed and am using a GTX 1070 Ti (8 GB), but I cannot train. You can see it below....

I ran into a problem: INFO 2021-06-04 19:48:13 train.py: 101] Epoch 0, iter 0, lr 0.100000, loss 9.736826 INFO 2021-06-04 19:50:31 train.py: 101] Epoch 0, iter 100, lr 0.100000, loss 19.080706 INFO 2021-06-04 19:52:51...
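A loss that roughly doubles within the first 100 iterations often points to a learning rate that is too high at the start; transformer backbones in particular usually need a warmup phase. A minimal sketch of a linear-warmup schedule (the function name and hyperparameters are illustrative, not FaceX-Zoo's actual scheduler):

```python
def warmup_lr(iteration, base_lr=0.1, warmup_iters=1000):
    """Ramp the learning rate linearly from ~0 to base_lr over
    warmup_iters iterations, then hold it (decay steps omitted)."""
    if iteration < warmup_iters:
        return base_lr * (iteration + 1) / warmup_iters
    return base_lr

# The first iterations take far smaller steps than the target lr of 0.1.
print(warmup_lr(0))     # ~0.0001
print(warmup_lr(499))   # ~0.05
print(warmup_lr(2000))  # 0.1
```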

Following the documented steps, I generated 70,000 data pairs and added them to OULU Protocol 1 for training, but the DUM training converges poorly and I do not know why. ![image](https://user-images.githubusercontent.com/32166605/147430755-1f2d1bcd-6d9f-4cca-94ab-3eaa5f5a9aca.png)

In the project I cannot find the code for cropping the aligned image. How should I do that? Thank you.

About the last sentence of the second paragraph of the Introduction: "However, they still suffer from the ambiguity problem revealed in data that cannot be directly solved from the single instance perspective." Could you explain what is specifically meant by "cannot be solved from the single-instance perspective" here, and how it shows up in SCN, for example? I am a beginner with limited understanding, so any guidance would be much appreciated.

Original image: ![image](https://user-images.githubusercontent.com/41863500/119775840-bef6a700-bef6-11eb-9a29-535090a3a673.png)
1. Add mask with "no speed up": ![image](https://user-images.githubusercontent.com/41863500/119775957-e51c4700-bef6-11eb-968a-9c5709872f5a.png)
2. Add mask with "speed up": ![image](https://user-images.githubusercontent.com/41863500/119775813-b30ae500-bef6-11eb-884b-e9b618d29e55.png)

How can I use the existing SDK to detect faces in multiple images at once, i.e. with batch_size > 1?
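I have not verified whether the SDK's detector accepts batched input directly, but one generic approach is to group the images into fixed-size batches yourself and call the detector once per batch. A sketch of just the batching logic, with a stand-in `detect_batch` function (hypothetical, not part of FaceX-Zoo):

```python
def chunked(items, batch_size):
    """Yield successive batches of at most batch_size items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def detect_batch(batch):
    # Stand-in for a real detector call; a real implementation would
    # stack the preprocessed images into one (N, C, H, W) tensor and
    # run the model forward pass once per batch.
    return [f"faces_in_{path}" for path in batch]

image_paths = [f"img_{i}.jpg" for i in range(10)]
results = []
for batch in chunked(image_paths, batch_size=4):
    results.extend(detect_batch(batch))

assert len(results) == len(image_paths)
```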

My task is face recognition on a real dataset from a surveillance camera. My training set has 3000 IDs, but when I set `--resume` and `--pretrain_model './model_pretrain/Attention92/Epoch_17.pt'`, I...
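A common cause of resume errors like this is that the pretrained checkpoint's classification head was trained on a different number of identities, so its weight shapes do not match a 3000-ID head. A frequently used workaround is to load only the parameters whose shapes match and re-initialize the rest. A sketch of that key filtering, using plain dicts to stand in for PyTorch state_dicts (keys and shapes are illustrative):

```python
def filter_matching(pretrained, model):
    """Keep only pretrained entries whose key exists in the target model
    with an identical shape; a mismatched classifier head gets dropped
    and stays randomly initialized."""
    return {k: v for k, v in pretrained.items()
            if k in model and v["shape"] == model[k]["shape"]}

# Illustrative shapes: the backbone matches, the classifier head does not.
pretrained = {"backbone.conv1": {"shape": (64, 3, 3, 3)},
              "head.weight":    {"shape": (85742, 512)}}  # original id count
model      = {"backbone.conv1": {"shape": (64, 3, 3, 3)},
              "head.weight":    {"shape": (3000, 512)}}   # your 3000 ids

loaded = filter_matching(pretrained, model)
assert "backbone.conv1" in loaded and "head.weight" not in loaded
```

With real PyTorch state_dicts the same idea applies, followed by `model.load_state_dict(filtered, strict=False)`.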

Hi, some modules are imported from `timm` in [`backbone/Swin_Transformer.py`](https://github.com/JDAI-CV/FaceX-Zoo/blob/2f97a0ef2dafcd772e244f186e44b8d684fcdddc/backbone/Swin_Transformer.py#L11), but I could not find where `timm` is defined.
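For what it's worth, `timm` is most likely not defined inside the repository at all: it appears to be the third-party PyPI package `timm` (PyTorch Image Models), which would need to be installed separately, e.g.:

```shell
pip install timm
```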

Hi, do `SwinTransformer`-based models take `112 x 112` images as their input (like all the other models), or do they take `224 x 224` images? PS. While in [Step1:...
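For context, the reason the original Swin Transformer is usually defined for 224 x 224 input is simple arithmetic: with a patch size of 4, a window size of 7 and three 2x downsampling stages, every stage's feature map should stay an integer multiple of the window size. A quick check under those standard Swin-T hyperparameters (FaceX-Zoo may adapt them for 112 x 112 faces, which I have not verified):

```python
def stage_sizes(input_size, patch=4, num_downsamples=3):
    """Feature-map side length after patch embedding and each 2x merge."""
    sizes = [input_size / patch]
    for _ in range(num_downsamples):
        sizes.append(sizes[-1] / 2)
    return sizes

# 224: [56.0, 28.0, 14.0, 7.0] -- all integers, all multiples of window 7.
# 112: [28.0, 14.0, 7.0, 3.5]  -- the last stage becomes fractional.
print(stage_sizes(224))
print(stage_sizes(112))
```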