Deep3DFaceRecon_pytorch
How to get the mesh image at the original image size?
All results are at image size 224. If I want the mesh at the original image size, how can I invert the transform matrix?
@sicxu @YuDeng , I also want to ask this question. I noticed that the input image is cropped to 224x224 based on the 5 facial landmark locations. If I want to paste the reconstructed image back onto the original image, how should I do it?
@Feywell , have you solved this problem?
I save the transform matrix during preprocessing, then warp the rendered image with the inverted matrix when pasting it back onto the original image.
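A minimal sketch of that idea, assuming the crop can be expressed as a 2x3 affine matrix saved during preprocessing (`crop_mat` is a hypothetical name; the repo itself stores `trans_params` rather than an explicit matrix) and that OpenCV is available:

```python
import cv2
import numpy as np

def paste_back(render_224, original_img, crop_mat):
    """Warp a 224x224 render back into the original image frame.

    crop_mat is a hypothetical 2x3 affine matrix mapping the original
    image into the 224x224 crop; it is NOT a variable from the repo.
    """
    h, w = original_img.shape[:2]
    inv_mat = cv2.invertAffineTransform(crop_mat)         # undo the crop transform
    warped = cv2.warpAffine(render_224, inv_mat, (w, h))  # render in original coords
    mask = cv2.warpAffine(np.ones_like(render_224), inv_mat, (w, h))
    return np.where(mask > 0, warped, original_img)       # composite over the original
```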
@Feywell , sorry for the late reply. Could you share your code for mapping the reconstructed image back to the original one? I also save the `trans_params` from the `align_img` function and try to recover the reconstructed image at the original size, but the results don't match very well.
Here is my take on it. In the function `resize_n_crop_img` in `preprocess.py`, `orig_left, orig_up, orig_crop_size = (left, up, target_size) / s` will give you the upper-left corner of the crop in the original image, `(orig_left, orig_up)`, and the size of the square crop, `orig_crop_size`.
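A minimal sketch of that mapping, reusing the `s`, `left`, `up`, and `target_size` names from `resize_n_crop_img` (boundary clamping is omitted for brevity):

```python
import cv2

def paste_crop(original_img, render_224, left, up, s, target_size=224.0):
    """Paste a 224x224 render into the original image at the crop location."""
    # map the crop window back into original-image coordinates
    orig_left = int(round(left / s))
    orig_up = int(round(up / s))
    orig_crop_size = int(round(target_size / s))
    out = original_img.copy()
    # upscale the render to the original crop size and paste it in place
    resized = cv2.resize(render_224, (orig_crop_size, orig_crop_size))
    out[orig_up:orig_up + orig_crop_size,
        orig_left:orig_left + orig_crop_size] = resized
    return out
```

Note that `left / s`, `up / s`, and `target_size / s` are generally not integers, so the rounding here is one likely source of the misalignment and jitter mentioned below.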
I find this approach also produces slight jittering when processing video.
In the end, I save `s`, `left`, and `up` first. Then I resize the original image by `s` and paste the render back at `(left, up)` in the resized original image, which works better and seamlessly.
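A minimal sketch of that variant under the same assumptions. Because the paste happens in the `s`-scaled image, `left` and `up` are used directly rather than being divided by `s` and re-rounded, which is presumably why it behaves more smoothly on video:

```python
import cv2

def paste_in_resized(original_img, render_224, left, up, s):
    """Resize the original by s, then paste the render at (left, up)."""
    h, w = original_img.shape[:2]
    resized = cv2.resize(original_img, (int(w * s), int(h * s)))  # scale original by s
    u, l = int(round(up)), int(round(left))
    resized[u:u + 224, l:l + 224] = render_224                    # paste the 224x224 render
    return resized  # optionally resize back to (w, h) afterwards
```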