Social-STGCNN
I am very curious about the elements of your proposed adjacency matrix.
Looking at your code, why do you use the relative positions to compute the norm between two nodes of the graph at a given frame, as shown below (the relevant part is the anorm call)?
```python
import numpy as np
import networkx as nx
import torch


def seq_to_graph(seq_, seq_rel, norm_lap_matr=True):
    # seq_ / seq_rel: (num_peds, 2, seq_len) absolute / relative trajectories.
    seq_ = seq_.squeeze()
    seq_rel = seq_rel.squeeze()
    seq_len = seq_.shape[2]
    max_nodes = seq_.shape[0]

    V = np.zeros((seq_len, max_nodes, 2))          # node features, one row per pedestrian
    A = np.zeros((seq_len, max_nodes, max_nodes))  # one adjacency matrix per frame
    for s in range(seq_len):
        step_ = seq_[:, :, s]
        step_rel = seq_rel[:, :, s]
        for h in range(len(step_)):
            V[s, h, :] = step_rel[h]               # node feature = relative position
            A[s, h, h] = 1                         # self-loop
            for k in range(h + 1, len(step_)):
                # anorm (defined in the repo's utils.py) returns the inverse
                # L2 norm between the two relative positions (0 if they coincide).
                l2_norm = anorm(step_rel[h], step_rel[k])
                A[s, h, k] = l2_norm
                A[s, k, h] = l2_norm
        if norm_lap_matr:
            # Note: from_numpy_matrix was renamed to from_numpy_array in networkx >= 3.0.
            G = nx.from_numpy_matrix(A[s, :, :])
            A[s, :, :] = nx.normalized_laplacian_matrix(G).toarray()

    return torch.from_numpy(V).type(torch.float), \
           torch.from_numpy(A).type(torch.float)
```
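For reference, here is a minimal sketch of how this function can be called on toy data. The input tensors are made up, and the anorm below is a stand-in reproducing the role of the repo's utils.py helper (the inverse-L2-norm kernel); treat it as illustrative rather than authoritative:

```python
import math
import numpy as np


def anorm(p1, p2):
    # Stand-in for the repo's utils.py helper: inverse L2 norm,
    # returning 0 when the two points coincide.
    norm = math.sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2)
    return 0 if norm == 0 else 1 / norm


# Made-up toy scene: 3 pedestrians observed for 8 frames,
# laid out as (num_peds, 2, seq_len) as seq_to_graph expects.
num_peds, seq_len = 3, 8
seq_abs = np.cumsum(np.random.randn(num_peds, 2, seq_len), axis=2)
seq_rel = np.zeros_like(seq_abs)
seq_rel[:, :, 1:] = seq_abs[:, :, 1:] - seq_abs[:, :, :-1]  # per-frame displacements

V, A = seq_to_graph(seq_abs, seq_rel, norm_lap_matr=True)
print(V.shape)  # torch.Size([8, 3, 2]) -- node features per frame
print(A.shape)  # torch.Size([8, 3, 3]) -- one matrix per frame
```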
Hi Wang, thanks for reaching out. Relative position is a form of normalization for the data. Other methods are the relative change w.r.t. the initial point, or the middle or last observed point. Also, this amounts to using the relative speed instead of the position, which tends to be a better signal. Hope this helps.
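To make these alternatives concrete, here is an illustrative sketch of the normalization variants mentioned above, applied to a single trajectory (the function and mode names are made up, not from the repo):

```python
import numpy as np


def normalize_traj(traj, mode="step"):
    """traj: (seq_len, 2) absolute positions; returns a normalized copy.

    Illustrative variants of the normalizations discussed above:
      "step"   - displacement from the previous frame (what seq_rel stores)
      "first"  - offset from the initial point
      "middle" - offset from the middle observed point
      "last"   - offset from the last observed point
    """
    if mode == "step":
        out = np.zeros_like(traj)
        out[1:] = traj[1:] - traj[:-1]
        return out
    if mode == "first":
        return traj - traj[0]
    if mode == "middle":
        return traj - traj[len(traj) // 2]
    if mode == "last":
        return traj - traj[-1]
    raise ValueError(f"unknown mode: {mode}")
```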
Thank you for your answer.
From your data-processing code, step_rel[h] is the vector pointing from the previous absolute position to the current absolute position of a given pedestrian. If this is a form of using relative speed, why not compute it with a finite difference, given that the frame interval is 0.4 s for the ETH/UCY datasets?
I look forward to your explanation.
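For context: since the sampling interval is constant, a first-order finite difference is just the stored displacement divided by dt, so the two representations differ only by a constant factor. A minimal sketch (names are hypothetical):

```python
import numpy as np

DT = 0.4  # seconds between annotated frames in ETH/UCY


def fd_velocity(traj):
    # traj: (seq_len, 2) absolute positions.
    # First-order finite difference: v_t = (p_t - p_{t-1}) / DT.
    vel = np.zeros_like(traj)
    vel[1:] = (traj[1:] - traj[:-1]) / DT
    return vel
# Because DT is constant, this equals the step_rel displacements
# scaled by 1/DT, i.e. a constant factor of 2.5.
```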
There is no issue in using finite differences (aside from, perhaps, computational cost). Note that step_rel amplifies noise too. Changing the "normalization" method could probably improve the results; by what percentage relative to the current results, and at what extra computational cost, is something to look into. I encourage you to look at our latest work, Social-Implicit, which also uses other datasets.
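To illustrate the noise point: differencing two noisy measurements sums their noise variances, and dividing by a small dt amplifies the result further. A quick synthetic check:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = 0.05 * rng.standard_normal((10000, 2))  # i.i.d. position noise per frame
print(noise.var())        # ~0.0025 (sigma^2)
diff = noise[1:] - noise[:-1]                   # noise in per-step displacements
print(diff.var())         # ~0.0050 (2 * sigma^2)
print((diff / 0.4).var()) # ~0.031, amplified further by 1/dt^2
```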