Human-Path-Prediction

ETH original dataset

Open 12num opened this issue 2 years ago • 4 comments

Thank you very much for your work. I have a request that I hope you can agree to: could you share with me the raw data that your ETH processing script takes as input? Thank you very much!

12num · Jul 20 '22 07:07

Hello, I have the same question. I can't reproduce the same *.pickle data from the original data (provided by sgan) using the image2world function. Have you made any progress?

mulplue · May 08 '23 02:05

Hello, I'm sorry to tell you that I haven't made any progress on this issue, so I turned to exploring other models instead. If you have any further questions, please contact me by email.

12num · May 08 '23 03:05

Hello, I now know how to do the world-pixel transformation using the homography matrix, but I still don't know how Y-net filters the data, so I turned to exploring other models, too. Here's the world2image transformation (from the official ETH guidance); I hope it can help anyone concerned about this issue:

import numpy as np

def world2image(traj_w, H_inv):
    # Convert points from Euclidean to homogeneous coordinates: (x, y) -> (x, y, 1)
    traj_homog = np.hstack((traj_w, np.ones((traj_w.shape[0], 1)))).T
    # Map world coordinates into the camera frame via the inverse homography
    traj_cam = np.matmul(H_inv, traj_homog)
    # Normalize by the homogeneous coordinate to get pixel coordinates
    traj_uvz = np.transpose(traj_cam / traj_cam[2])
    return traj_uvz[:, :2]
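
For the reverse direction, here is a minimal image2world sketch under the same conventions. I'm assuming H is the pixel-to-world homography (the matrix whose inverse is H_inv above, as shipped in the ETH H.txt files); this exact implementation is my own guess at the function mentioned earlier, not the authors' code:

import numpy as np

def image2world(traj_px, H):
    # Convert pixel points to homogeneous coordinates: (u, v) -> (u, v, 1)
    traj_homog = np.hstack((traj_px, np.ones((traj_px.shape[0], 1)))).T
    # Map pixel coordinates to world coordinates via the homography
    traj_world_h = np.matmul(H, traj_homog)
    # Normalize by the third (homogeneous) coordinate
    traj_world = np.transpose(traj_world_h / traj_world_h[2])
    return traj_world[:, :2]

As a sanity check, world2image(image2world(p, H), np.linalg.inv(H)) should recover p up to floating-point error.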

mulplue · May 18 '23 07:05

I would like to ask whether the UCY dataset also provides a homography matrix for converting between world and pixel coordinates. Similar works such as Y-net and NSP-SFM all use map information in pixel space, yet the final metrics (ADE/FDE) are reported in world coordinates. How do they perform this conversion on UCY? I also noticed that the original UCY annotations seem to be in pixel coordinates, while most existing work uses world coordinates. How is this converted? Thank you very much.

Chenzhou727 · Nov 11 '23 09:11