head-pose-estimation
Making the 3D model points dynamic based on the input image or video
I would like to know how to change the 3D model points declared in Pose_estimator.py for a different image size, namely 1280 x 1084.
def __init__(self, img_size=(480, 640)):
    self.size = img_size

    # 3D model points.
    self.model_points = np.array([
        (0.0, 0.0, 0.0),          # Nose tip
        (0.0, -330.0, -65.0),     # Chin
        (-225.0, 170.0, -135.0),  # Left eye left corner
        (225.0, 170.0, -135.0),   # Right eye right corner
        (-150.0, -150.0, -125.0), # Left mouth corner
        (150.0, -150.0, -125.0)   # Right mouth corner
    ]) / 4.5
Please suggest how I can handle this and improve the accuracy of the landmark point detection.
The __init__ function accepts a tuple as the image size. May I ask why you want to modify the model points according to the image size?
I would like to obtain results in real time using a webcam that captures at 1280 x 1080. Without changing the model points, the accuracy is lower than in the original output. Please suggest how I can maximize the accuracy.
I would like to increase the accuracy of the eye landmark points, as they do not seem to match the eyes in the captured video. The eye detection in particular needs to be improved.
I'm afraid the detection accuracy is actually determined by the CNN model, not the 3D face model points. If a more accurate result is required, you may want to try other CNN models with better performance, as this model is quite simple and more of a demonstration.
Thanks for your reply. Please guide me through the steps needed to run pose estimation for eye detection at a resolution of 1280 x 1080 and feed that into the pose estimation code, which uses an image size of (480, 640).
The variation of image resolution is already handled by this line in the demo file.
Did you find anything unusual when using that file directly?
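For reference, here is a minimal sketch (not the repo's exact code; the variable names are illustrative) of how the camera intrinsics are commonly derived from the actual frame size, so the estimator adapts to any input resolution such as 1280 x 1080:

import cv2
import numpy as np

# Grab one frame from the webcam to discover the real capture resolution.
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if not ret:
    raise RuntimeError("Could not read a frame from the webcam.")
height, width = frame.shape[:2]

# A common approximation: focal length ~ frame width, optical centre at the
# middle of the image.
focal_length = width
camera_matrix = np.array([[focal_length, 0, width / 2],
                          [0, focal_length, height / 2],
                          [0, 0, 1]], dtype="double")

# The (height, width) tuple would then be passed on, e.g.
# pose_estimator = PoseEstimator(img_size=(height, width))  # class name assumed

With intrinsics built this way, the 3D model points themselves do not need to change when the resolution changes.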
Thank you for the reply, that is clear now. Can you tell me the method you used to calculate the points below?
# 3D model points.
self.model_points = np.array([
    (0.0, 0.0, 0.0),          # Nose tip
    (0.0, -330.0, -65.0),     # Chin
    (-225.0, 170.0, -135.0),  # Left eye left corner
    (225.0, 170.0, -135.0),   # Right eye right corner
    (-150.0, -150.0, -125.0), # Left mouth corner
    (150.0, -150.0, -125.0)   # Right mouth corner
]) / 4.5
I would also like to understand why "/ 4.5" is used; what is the purpose of this 4.5 in the code?
Also, I am unable to train on our own datasets with TensorFlow. Please guide me, step by step, through the procedure you used for training, including the steps you followed to train the eye detection dataset with TensorFlow and how you performed the annotation of the eyes.
I would also like to know how to change the facial-landmark model points in self.model_points shown above.
For the training part, please refer to this repo.
The face model points are used for pose estimation. Check out this function, but also be aware that this repo actually uses another function that involves all 68 points.
The number 4.5 is for scaling. You may refer to the OpenCV documentation for the technical details of PnP, or this post, which gives a good explanation.
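To illustrate where these points end up, here is a rough sketch of PnP-based pose estimation with OpenCV (not the repo's exact implementation; the 2D landmark coordinates below are made up for the example). Dividing the model points by a constant such as 4.5 only rescales the face model, which changes the scale of the recovered translation vector but not the rotation:

import cv2
import numpy as np

# The generic 3D face model, scaled uniformly by 1/4.5.
model_points = np.array([
    (0.0, 0.0, 0.0),          # Nose tip
    (0.0, -330.0, -65.0),     # Chin
    (-225.0, 170.0, -135.0),  # Left eye left corner
    (225.0, 170.0, -135.0),   # Right eye right corner
    (-150.0, -150.0, -125.0), # Left mouth corner
    (150.0, -150.0, -125.0)], dtype="double") / 4.5

# 2D landmarks in pixel coordinates, as returned by a landmark detector
# (example values only).
image_points = np.array([
    (640, 540), (630, 760), (500, 480),
    (780, 480), (560, 650), (720, 650)], dtype="double")

# Intrinsics for a 1280 x 1080 frame, focal length approximated by the width.
camera_matrix = np.array([[1280, 0, 640],
                          [0, 1280, 540],
                          [0, 0, 1]], dtype="double")
dist_coeffs = np.zeros((4, 1))  # Assume no lens distortion.

success, rotation_vec, translation_vec = cv2.solvePnP(
    model_points, image_points, camera_matrix, dist_coeffs)
print(rotation_vec.ravel(), translation_vec.ravel())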
I am unable to match the corner points of the detected eyes. Can you help me improve the eye detection using the facial landmark points?
For better results you might want to read this great paper: Face Alignment In-the-Wild: A Survey.
Hope that helps.