L2CS-Net
                        The official PyTorch implementation of L2CS-Net for gaze estimation and tracking
Installation
- Set up a virtual environment:
 python3 -m venv venv
 source venv/bin/activate
- Install required packages:
 pip install -r requirements.txt
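The train, test, and demo commands below select a CUDA device with `--gpu 0`. A quick sanity check that PyTorch can see a GPU:

```python
# Verify the PyTorch install and CUDA visibility before running the
# GPU-dependent commands below.
import torch

print(torch.__version__)
print(torch.cuda.is_available())  # should print True for --gpu 0 to work
print(torch.cuda.device_count())  # number of visible CUDA devices
```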
Demo
- Install the face detector:
pip install git+https://github.com/elliottzheng/face-detection.git@master
- Download the pre-trained models from here and store them in models/.
- Run:
 python demo.py \
 --snapshot models/L2CSNet_gaze360.pkl \
 --gpu 0 \
 --cam 0
This runs the demo with the L2CSNet_gaze360.pkl pretrained model on camera 0.
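If you want to call the network from Python rather than through demo.py, the following is a minimal sketch of how the binned outputs are decoded into continuous angles with a softmax expectation, which is the core idea of L2CS-Net. It assumes the `L2CS` class from this repo's model.py, the Gaze360 configuration of 90 bins of 4° covering [-180°, 180°], and a 448×448 face crop preprocessed as in demo.py; verify the output order and constants against demo.py.

```python
# Minimal inference sketch. Assumptions (verify against demo.py):
# `L2CS` from this repo's model.py, ResNet-50 backbone, Gaze360
# binning (90 bins of 4 degrees over [-180, 180]), 448x448 input.
import torch
import torch.nn.functional as F
import torchvision

from model import L2CS  # provided by this repository

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

model = L2CS(torchvision.models.resnet.Bottleneck, [3, 4, 6, 3], 90)
model.load_state_dict(torch.load('models/L2CSNet_gaze360.pkl',
                                 map_location=device))
model.to(device).eval()

# Placeholder for a normalized 448x448 RGB face crop (see demo.py for
# the actual face detection + preprocessing pipeline).
face = torch.randn(1, 3, 448, 448, device=device)

with torch.no_grad():
    # Output order (yaw, pitch) is assumed here; check demo.py.
    yaw_logits, pitch_logits = model(face)

# Soft-argmax: expectation over bin indices, mapped back to degrees.
idx = torch.arange(90, dtype=torch.float32, device=device)
yaw = (F.softmax(yaw_logits, dim=1) * idx).sum(dim=1) * 4 - 180
pitch = (F.softmax(pitch_logits, dim=1) * idx).sum(dim=1) * 4 - 180
print(f"yaw {yaw.item():.1f} deg, pitch {pitch.item():.1f} deg")
```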
MPIIGaze
We provide code for training and testing on the MPIIGaze dataset with leave-one-person-out evaluation.
Prepare datasets
- Download the MPIIFaceGaze dataset from here.
- Apply data preprocessing from here.
- Store the dataset in datasets/MPIIFaceGaze.
Train
 python train.py \
 --dataset mpiigaze \
 --snapshot output/snapshots \
 --gpu 0 \
 --num_epochs 50 \
 --batch_size 16 \
 --lr 0.00001 \
 --alpha 1
This performs leave-one-person-out training automatically and stores the models in output/snapshots.
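Here, `--alpha` weights the regression term in L2CS-Net's combined loss: each angle (pitch and yaw) is trained with a cross-entropy loss over the bins plus an alpha-weighted MSE loss on the continuous angle recovered by softmax expectation. A sketch of the per-angle loss, assuming the MPIIGaze binning of 28 bins of 3° over [-42°, 42°] (verify the constants against train.py):

```python
# Sketch of L2CS-Net's combined loss for one angle (pitch or yaw).
# Assumed binning: 28 bins of 3 degrees spanning [-42, 42] (MPIIGaze).
import torch
import torch.nn.functional as F

def l2cs_angle_loss(logits, bin_labels, cont_labels, alpha=1.0):
    """logits: (B, 28) bin scores; bin_labels: (B,) long bin indices;
    cont_labels: (B,) continuous gaze angles in degrees."""
    # Classification term over the discretized angle.
    cls_loss = F.cross_entropy(logits, bin_labels)

    # Recover a continuous prediction via expectation over the bins.
    idx = torch.arange(28, dtype=torch.float32, device=logits.device)
    pred = (F.softmax(logits, dim=1) * idx).sum(dim=1) * 3 - 42

    # Regression term, weighted by --alpha.
    reg_loss = F.mse_loss(pred, cont_labels)
    return cls_loss + alpha * reg_loss
```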
Test
 python test.py \
 --dataset mpiigaze \
 --snapshot output/snapshots/snapshot_folder \
 --evalpath evaluation/L2CS-mpiigaze \
 --gpu 0
This performs leave-one-person-out testing automatically and stores the results in evaluation/L2CS-mpiigaze.
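The reported numbers are mean angular errors in degrees between predicted and ground-truth 3D gaze directions. A sketch of that metric, using one common pitch/yaw-to-vector convention (see this repo's utils for the exact convention used when scoring):

```python
# Sketch of the 3D angular error metric used to score gaze estimates.
# The pitch/yaw-to-vector sign convention below is one common choice
# and is an assumption; check utils.py for the repo's convention.
import numpy as np

def gaze_to_vector(pitch, yaw):
    """pitch, yaw in radians -> unit 3D gaze vector."""
    return np.array([
        -np.cos(pitch) * np.sin(yaw),
        -np.sin(pitch),
        -np.cos(pitch) * np.cos(yaw),
    ])

def angular_error(pred, gt):
    """Angle in degrees between predicted and ground-truth gaze."""
    p, g = gaze_to_vector(*pred), gaze_to_vector(*gt)
    cos_sim = np.clip(np.dot(p, g), -1.0, 1.0)
    return np.degrees(np.arccos(cos_sim))

# Example: a prediction a few degrees off the ground truth.
print(angular_error((np.radians(5), np.radians(10)),
                    (np.radians(4), np.radians(12))))
```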
To get the average leave-one-person-out accuracy, use:
 python leave_one_out_eval.py \
 --evalpath evaluation/L2CS-mpiigaze \
 --respath evaluation/L2CS-mpiigaze
This takes the evaluation path and writes the leave-one-out gaze accuracy to evaluation/L2CS-mpiigaze.
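Conceptually, the final number is the mean of the per-subject (per-fold) mean angular errors over all 15 MPIIFaceGaze subjects, as in this toy illustration (the values and the file layout under --evalpath are hypothetical):

```python
# Toy illustration of the leave-one-person-out aggregation: average the
# per-subject mean angular errors (hypothetical values, in degrees).
fold_errors = [4.2, 3.9, 4.6]  # one entry per held-out subject
print(f"leave-one-out mean angular error: "
      f"{sum(fold_errors) / len(fold_errors):.2f} deg")
```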
Gaze360
We provide code for training and testing on the Gaze360 dataset with train-val-test evaluation.
Prepare datasets
- Download the Gaze360 dataset from here.
- Apply data preprocessing from here.
- Store the dataset in datasets/Gaze360.
Train
 python train.py \
 --dataset gaze360 \
 --snapshot output/snapshots \
 --gpu 0 \
 --num_epochs 50 \
 --batch_size 16 \
 --lr 0.00001 \
 --alpha 1
This performs training and stores the models in output/snapshots.
Test
 python test.py \
 --dataset gaze360 \
 --snapshot output/snapshots/snapshot_folder \
 --evalpath evaluation/L2CS-gaze360 \
 --gpu 0
This performs testing on snapshot_folder and stores the results in evaluation/L2CS-gaze360.