Peppa_Pig_Face_Landmark
A simple face detection and alignment method that is easy to use and stable.
Peppa_Pig_Face_Engine
Switching all of these models to PyTorch: yes, I am doing that.
Introduction
It is a simple demo including face detection and face alignment, with some optimizations made to keep the results smooth.
CAUTION: this is the TensorFlow 2.0 branch. If you want to work with TF1, please switch to the tf1 branch; it still works.
Purpose: I want to build a face analyzer that includes face detection and face alignment. Most of the open-source face keypoint code is neither stable nor smooth, including some from research papers, and commercial SDKs are quite expensive. So, here is Peppa_Pig_Face_Engine.
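The smoothing itself is not spelled out here, so as a rough illustration of the kind of temporal filtering that keeps landmarks from jittering between frames, an exponential moving average over successive landmark sets might look like the sketch below. The class and parameter names are hypothetical and not part of this repo.

```python
import numpy as np

class LandmarkSmoother:
    """Hypothetical exponential-moving-average smoother for per-frame landmarks."""

    def __init__(self, alpha=0.4):
        # alpha close to 1.0 -> follow new detections quickly (less smooth)
        # alpha close to 0.0 -> heavy smoothing (more lag)
        self.alpha = alpha
        self.prev = None

    def __call__(self, landmarks):
        landmarks = np.asarray(landmarks, dtype=np.float32)  # expected shape (N, 2)
        if self.prev is None or self.prev.shape != landmarks.shape:
            # First frame, or the face/landmark count changed: reset the state.
            self.prev = landmarks
        else:
            self.prev = self.alpha * landmarks + (1.0 - self.alpha) * self.prev
        return self.prev
```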
I think it is pretty cool, see the demo:
click the gif to see the video:
and with a face mask:
Requirements

- tensorflow2.0 (for tensorflow1, switch to the tf1 branch)
- opencv
- python 3.6
- easydict
- flask
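A quick way to confirm the environment matches the list above is to import each dependency and print its version. This is just a sanity-check sketch; the import names follow the usual pip packages, which is an assumption.

```python
# Minimal environment check for the requirements listed above.
import sys
import tensorflow as tf
import cv2
import easydict  # imported only to confirm it is installed
import flask

print("python     :", sys.version.split()[0])  # expect 3.6.x
print("tensorflow :", tf.__version__)          # expect 2.x
print("opencv     :", cv2.__version__)
print("flask      :", flask.__version__)
```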
Update

2020.4.28: A new detector based on CenterNet has been trained; here is the link. However, I don't know how to integrate it into this project yet, because it is a TF1 project. I am thinking about rewriting the project to make it more useful.

2020.2.4: added an HTTP server.
Usage

- Download the pretrained models and put them into ./model
- detector: Lightnet_0.5, including a tflite model (time cost: mac [email protected], tf2.0 15ms+, tflite 8ms+-, input shape 320x320, model size 560K)
  - baidu disk (password yqst)
  - google drive
- keypoints: shufflenetv2_0.75, including a tflite model (time cost: mac [email protected], tf2.0 5ms+, tflite 3.7ms+-, model size 2.5M)
  - baidu disk (code fcdc)
  - google drive
The directory structure should be:

```
./model
├── detector
│   ├── saved_model.pb
│   └── variables
│       ├── variables.data-00000-of-00002
│       ├── variables.data-00001-of-00002
│       └── variables.index
├── keypoints
│   ├── saved_model.pb
│   └── variables
│       ├── variables.data-00000-of-00002
│       ├── variables.data-00001-of-00002
│       └── variables.index
```
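Once the models are unpacked into this layout, both should load as TF2 SavedModels. A minimal sanity check, assuming only the paths shown in the tree above (this is a sketch, not code from the repo):

```python
import tensorflow as tf

# Load the two SavedModels from the directory layout shown above.
detector = tf.saved_model.load("./model/detector")
keypoints = tf.saved_model.load("./model/keypoints")

# Inspect the available serving signatures to see input/output tensors.
print(detector.signatures)
print(keypoints.signatures)
```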
- Run `python demo.py --cam_id 0` to use a camera,
  or `python demo.py --video test.mp4` to detect on a video,
  or `python demo.py --img_dir ./test` to detect on a directory of images (no tracking),
  or `python demo.py --video test.mp4 --mask True` if you want a face mask.
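If you would rather call the engine from your own script instead of demo.py, a per-frame loop looks roughly like the sketch below. The `FaceAna` import path and the values returned by `run()` are assumptions about this repo's API; check demo.py for the exact interface.

```python
import cv2
from lib.core.api.facer import FaceAna  # assumed import path; see demo.py for the real one

facer = FaceAna()
cap = cv2.VideoCapture(0)  # same camera as --cam_id 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Assumed signature: per-face boxes, landmarks, and states for the frame.
    boxes, landmarks, states = facer.run(frame)
    for face in landmarks:
        # Assumes each face is an (N, 2) array of pixel coordinates.
        for x, y in face.astype(int):
            cv2.circle(frame, (x, y), 1, (0, 255, 0), -1)
    cv2.imshow("peppa", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```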
Start an HTTP server

- Run `python demo.py --web 1`
- Test by running `python web_demo_test.py`

The result is JSON, in the format:
[{ "bbox": [x1, y1, x2, y2], "landmark": [[x1, y1], [x2, y2],[x3,y3],[...]]}]
Train
The project is based on these other repos of mine, and both TensorFlow 1 and TensorFlow 2 are supported. If you want to train with your own data, or want to know the details of the models, click through to them:
- dsfd_light_model tf2 branch
- centernet
- face_landmark
TODO
- [x] Transfer to TensorFlow 2.0
- [x] Small models, including tflite
- [x] Add an HTTP server demo
- [ ] Add some GAN models to make it fun (in progress...)
- [ ] 3D face algorithm
- [ ] A mobile device version; it is on the way, and I am learning a lot about mobile devices.
- [ ] Switch to PyTorch. HAHA... don't worry, it won't happen.