# Face occlusion classification
```shell
git clone https://github.com/LamKser/face-occlusion-classification.git
cd face-occlusion-classification
```
## :computer: Hardware & Environment
- All training and testing were done on Google Colab with a Tesla T4 GPU
```shell
conda env create -f environment.yml
conda activate face-occlusion
```
## :books: Dataset
- Crawled 9,749 images from the internet, cropped the faces using FaceMaskDetection, and divided them into 2 classes:
  - `0` - Non-occluded face
  - `1` - Occluded face
Figure 1: Non-occluded face example
Figure 2: Occluded face example
- Then split the dataset into 3 sets (7 - 2 - 1):
  - Train set: 6,826 images
  - Val set: 1,945 images
  - Test set: 978 images
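The 7 - 2 - 1 split can be sketched with a small stdlib helper (`split_dataset` is a hypothetical name, not part of this repo). Note that the published counts (6,826 / 1,945 / 978) come from the authors' own split, so a fresh shuffle reproduces the ratios but not the exact counts:

```python
import random

def split_dataset(paths, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle file paths and split them into train/val/test by the given ratios."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle for reproducibility
    n = len(paths)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return paths[:n_train], paths[n_train:n_train + n_val], paths[n_train + n_val:]

train, val, test = split_dataset([f"face_{i}.jpg" for i in range(9749)])
print(len(train), len(val), len(test))  # 6824 1949 976
```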
- Data structure:

  ```
  face_occlusion
  ├───Train
  │   ├───1
  │   │   ├─face_0.jpg
  │   │   ├─face_1.jpg
  │   │   └─...
  │   └───0
  ├───Val
  │   ├───1
  │   └───0
  └───Test
      ├───1
      └───0
  ```

- :link: Data link: face occlusion dataset
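For illustration, the layout above can be generated and inspected with `pathlib` (the paths here are placeholders; the real dataset comes from the data link):

```python
from pathlib import Path
import tempfile

# Build the expected face_occlusion skeleton in a temp dir (illustrative only).
root = Path(tempfile.mkdtemp()) / "face_occlusion"
for split in ("Train", "Val", "Test"):
    for label in ("0", "1"):
        (root / split / label).mkdir(parents=True)

# Each split folder exposes the two class labels as subdirectory names,
# which is the convention folder-based dataset loaders expect.
for split in ("Train", "Val", "Test"):
    classes = sorted(p.name for p in (root / split).iterdir())
    print(split, classes)
```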
## :triangular_ruler: Config
- To use another model or change hyperparameters, edit `train.yml` and `test.yml` in the `configs` folder
- Available models: `densenet169`, `resnet18`, `resnet50`
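A minimal sketch of reading such a config with PyYAML and validating the model name — the keys shown (`model`, `epochs`, `lr`) are illustrative guesses, not the repo's actual schema, which is defined in `configs/train.yml`:

```python
import yaml  # PyYAML

AVAILABLE_MODELS = {"densenet169", "resnet18", "resnet50"}

# Hypothetical train.yml fragment -- the real keys live in configs/train.yml.
config_text = """
model: resnet18
epochs: 30
lr: 0.001
"""

config = yaml.safe_load(config_text)
assert config["model"] in AVAILABLE_MODELS, f"unknown model: {config['model']}"
print(config["model"], config["epochs"])  # resnet18 30
```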
## :building_construction: Train model
- Train:

  ```shell
  python train.py --opt configs/train.yml
  ```

- Show the training and validation progress:

  ```shell
  tensorboard --logdir logger
  ```

- If using `wandb` to log the training process, configure:

  ```yaml
  wandb:
    project: <Type your project>
    name: <Type experiment name>
  ```
## :chart_with_upwards_trend: Test model
- Test the model:

  ```shell
  python test_model.py --opt configs/test.yml
  ```

- Test a single image:

  ```shell
  python test_single_image.py --model <model_name> --weight <weight_path> --image <image_path>
  ```
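Conceptually, single-image classification turns the model's two raw logits into a label and a confidence. A stdlib sketch with example values (the logits below are made up for illustration):

```python
import math

LABELS = {0: "non-occluded face", 1: "occluded face"}

def softmax(logits):
    """Convert raw model outputs to class probabilities (numerically stable)."""
    m = max(logits)                              # subtract max to avoid overflow
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Example logits a classifier might return for one image (illustrative values).
logits = [-1.3, 2.1]
probs = softmax(logits)
pred = max(range(len(probs)), key=probs.__getitem__)
print(LABELS[pred], round(probs[pred], 3))  # occluded face 0.968
```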
## ONNX model
- Convert PyTorch model to ONNX:

  ```shell
  python onnx/convert_2_onnx.py --model <model name> \
      --weight <weight and checkpoint file> \
      --save <path/to/save/onnx/*.onnx> \
      --opset_version <version>  # optional
  ```

- Run the ONNX model:

  ```shell
  python onnx/run_onnx.py --onnx <onnx file> --img <your image>
  ```
## :bar_chart: Results (Train/Val/Test)
- All the trained models: trained model
- The pretrained models were trained for 30 epochs
### Last model
| Model | Params (M) | Infer (ms) | Accuracy | Precision | Recall | F1 | Weights |
|---|---|---|---|---|---|---|---|
| VGG16 | 134.2 | 7.76 | 0.9805 | 0.981 | 0.9789 | 0.9799 | link |
| VGG19 | 139.5 | 9.36 | 0.9836 | 0.9831 | 0.9831 | 0.9831 | link |
| VGG16-BN | 134.2 | 8.3 | 0.9734 | 0.9746 | 0.9705 | 0.9725 | link |
| VGG19-BN | 139.5 | 10.01 | 0.9713 | 0.9765 | 0.9642 | 0.9703 | link |
| DenseNet169 | 12.4 | 25.46 | 0.9795 | 0.9729 | 0.9852 | 0.979 | link |
| DenseNet201 | 18 | 31.06 | 0.9744 | 0.9787 | 0.9684 | 0.9735 | link |
| ResNet18 | 11.1 | 3.69 | 0.9703 | 0.9665 | 0.9726 | 0.9695 | link |
| ResNet50 | 23.5 | 7.15 | 0.9754 | 0.9787 | 0.9705 | 0.9746 | link |
| ResNet152 | 58.1 | 19.31 | 0.9805 | 0.983 | 0.9768 | 0.9799 | link |
| ConvNeXt-Base | 87.5 | 13.26 | 0.9867 | 0.9894 | 0.9831 | 0.9862 | link |
| ConvNeXt-Small | 49.4 | 11.54 | 0.9887 | 0.9853 | 0.9915 | 0.9884 | link |
| ConvNeXt-Tiny | 27.8 | 7.24 | 0.9867 | 0.9832 | 0.9894 | 0.9863 | link |
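As a sanity check on the table, each row's F1 is the harmonic mean of its precision and recall; for the VGG16 row this recovers the listed 0.9799:

```python
# F1 = 2PR / (P + R): harmonic mean of precision and recall.
precision, recall = 0.9810, 0.9789  # VGG16 row of the results table
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9799
```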