Jianzhu Guo
If the UV texture corresponds to the BFM model, you can render it back onto the image by sampling. This repo does not seem to provide such a function; you can refer...
You can try the non-onnx version; go into the `else` branch above.
FaceBoxes supports GPU mode, but you may need to modify it slightly. Alternatively, you can refer to the official repo: https://github.com/zisianw/FaceBoxes.PyTorch
Do you mean the augmentation? Based on the scaled orthographic projection, I think it is not difficult to deduce the relation between the transformed image and the transformed params (scale, Euler angles...
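To illustrate, here is a minimal sketch of that relation for a uniform image rescale, assuming a scaled orthographic projection of the form `v2d = f * P @ (R @ v3d) + t2d`. The names `f`, `R`, `t2d`, and `project` are illustrative, not the repo's actual variables:

```python
import numpy as np

# Scaled orthographic projection: v2d = f * P @ (R @ v3d) + t2d,
# where P = [[1,0,0],[0,1,0]] drops the z axis.
def project(v3d, f, R, t2d):
    P = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    return f * (P @ (R @ v3d)) + t2d

v3d = np.array([0.3, -0.2, 0.9])
f, t2d = 2.0, np.array([10.0, 5.0])
theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# If the image is uniformly scaled by k, the same 2D points are obtained
# by scaling both f and t2d by k, leaving the rotation R untouched.
k = 1.5
scaled = project(v3d, k * f, R, k * t2d)
assert np.allclose(scaled, k * project(v3d, f, R, t2d))
```

In-plane rotations and translations of the image work out similarly: they compose a 2D rotation into `R` and an offset into `t2d`.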
The rasterization is a little slow; you can use OpenGL or another library to accelerate it, or downsample the BFM vertices (30k+).
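As a starting point for downsampling, here is a naive sketch (not the repo's method): keep every `stride`-th vertex and drop triangles that reference removed ones. Proper mesh simplification (e.g. quadric decimation) gives much better results:

```python
import numpy as np

def downsample_mesh(vertices, triangles, stride=4):
    """Naively subsample a mesh: keep every `stride`-th vertex and
    remap the triangles that still have all three vertices."""
    n = len(vertices)
    keep = np.arange(0, n, stride)           # indices of kept vertices
    remap = -np.ones(n, dtype=int)
    remap[keep] = np.arange(len(keep))       # old index -> new index
    ok = (remap[triangles] >= 0).all(axis=1) # triangles fully preserved
    return vertices[keep], remap[triangles[ok]]

verts = np.random.rand(100, 3)
tris = np.random.randint(0, 100, size=(200, 3))
v2, t2 = downsample_mesh(verts, tris, stride=2)
assert len(v2) == 50 and (t2 < 50).all()
```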
The mouth (especially with large expressions) and the eyes are not handled very well by this repo. One possible solution is to refine them with another network.
With `dense_flag = False`, the output 68 landmarks ought to be consistent with Dlib's.
I think it is easy to run on batches with small modifications, if you are familiar with PyTorch.
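A hypothetical sketch of what batched inference looks like; the stand-in `model`, the 120x120 crop size, and the 62-dim parameter output mirror 3DDFA-style repos but are assumptions, not this repo's exact API:

```python
import torch

# Stand-in for the real parameter-regression network (assumption).
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 120 * 120, 62),
)
model.eval()

# Stack pre-cropped face images into one batch instead of looping.
crops = torch.rand(8, 3, 120, 120)
with torch.no_grad():
    params = model(crops)        # (8, 62): one param vector per face
assert params.shape == (8, 62)
```

The main change to a single-image pipeline is stacking the crops into one tensor and keeping the post-processing vectorized.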
I am a little confused by your question. 1. Each landmark has (x, y, z) coordinates, obtained by weak perspective projection. 2. Just apply ICP?
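On point 2, the core step of ICP with known correspondences is a rigid Kabsch/Procrustes alignment; a self-contained sketch (not the repo's code):

```python
import numpy as np

def rigid_align(src, dst):
    """Best rigid transform (R, t) mapping src points onto dst points
    with known correspondences (Kabsch solution)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])              # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(0)
src = rng.standard_normal((68, 3))          # e.g. 68 3D landmarks
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(src, dst)
assert np.allclose(src @ R.T + t, dst, atol=1e-8)
```

Full ICP alternates this alignment with re-estimating nearest-neighbor correspondences until convergence.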
It seems that you are not using the Cython version of NMS?
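If the Cython extension fails to build, a pure-NumPy NMS works as a slower drop-in with the same logic; a sketch, assuming boxes are `(x1, y1, x2, y2)` and `thresh` is the IoU suppression threshold:

```python
import numpy as np

def nms(boxes, scores, thresh=0.3):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top box against all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= thresh]    # drop overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
assert nms(boxes, scores) == [0, 2]
```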