pytorch-pose
Hi, there are some problems with transform.py
```python
inp = crop(img_trans, center, scale, [256, 256])
trans_pts3d = transform_preds(torch.tensor(pts3d), center, scale, [256, 256])
```
I want to crop all data to [256, 256], and the ground-truth landmarks are transformed with your transform_preds function. But in the end the landmarks do not match the cropped image. Can you help me correct it? THANK YOU!
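Not an author, but in case it helps, here is a minimal sketch of what I think is happening, assuming the transform() and transform_preds() helpers in pose/utils/transforms.py: transform_preds() applies the inverse mapping (cropped/heatmap space back to the original image), so applying it to ground-truth points that already live in the original image puts them in the wrong frame. For the ground truth you would use the forward transform, i.e. the same mapping crop() uses:

```python
import numpy as np
import torch

# Assumed module path; adjust to wherever crop() and transform() live in your copy.
from pose.utils.transforms import crop, transform

# img_trans, center, scale, pts3d as in the snippet above.

# Crop the image to 256x256 around (center, scale).
inp = crop(img_trans, center, scale, [256, 256])

# Map each ground-truth landmark into the cropped 256x256 frame with the
# FORWARD transform (invert=0). transform_preds() uses invert=1, i.e. it maps
# cropped-space coordinates back to the original image, which is why the
# points no longer line up with the crop.
pts = np.asarray(pts3d, dtype=np.float32)
for i in range(pts.shape[0]):
    pts[i, 0:2] = transform(pts[i, 0:2], center, scale, [256, 256], invert=0)

trans_pts3d = torch.from_numpy(pts)
```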
@wqz960 Hi, I also want to train this network on my own dataset. My input picture size is 720x1280, but I don't know how to set the scale and center. Can you give me some advice?
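Not an author either, but here is a rough sketch of how I set these, assuming the MPII-style convention this repo seems to follow (center is the person center in (x, y) pixels, scale is the person height relative to a 200-pixel reference). The box-based variant below is hypothetical and depends on what annotations you have:

```python
import numpy as np

h, w = 720, 1280  # your input size

# If you have a person bounding box (x1, y1, x2, y2), prefer that:
# center = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
# scale = (y2 - y1) / 200.0  # hypothetical box-based estimate

# Fallback when no box is available: center the crop on the whole image.
center = np.array([w / 2.0, h / 2.0])
scale = max(h, w) / 200.0
```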