
Computing Time Interval Delta

Open 2M-kotb opened this issue 6 years ago • 4 comments

Based on my understanding of the paper, the time interval at time step t should be computed from the mask value at the previous time step t-1 [eq. (2) in the paper].

It looks like you compute the time interval at time step t from the mask value at time step t:

for idx in range(missing_index[0].shape[0]):
    i = missing_index[0][idx]  # for 1st dim
    j = missing_index[1][idx]  # for 2nd dim
    k = missing_index[2][idx]  # for 3rd dim
    if j != 0:
        Delta[i, j, k] = Delta[i, j, k] + Delta[i, j-1, k]

But I think it is supposed to be as follows:

for idx in range(missing_index[0].shape[0]):
    i = missing_index[0][idx]  # for 1st dim
    j = missing_index[1][idx]  # for 2nd dim
    k = missing_index[2][idx]  # for 3rd dim
    if j != 0 and j != 9:
        Delta[i, j+1, k] = Delta[i, j+1, k] + Delta[i, j, k]
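For reference, the rule in eq. (2) of the GRU-D paper can be sketched on a toy one-feature series (a minimal sketch with a made-up mask and unit time gaps, not the repository's code): the delta at step t accumulates the previous delta when the value at t-1 was missing, and resets when it was observed.

```python
import numpy as np

# Toy example: one feature, regular sampling, so every time gap is 1.
# mask[t] = 1 means x[t] is observed, 0 means missing.
mask = np.array([1, 0, 0, 1, 1])
T = len(mask)

delta = np.zeros(T)
for t in range(1, T):
    gap = 1.0                     # time between steps t-1 and t
    if mask[t - 1] == 0:          # previous value missing: accumulate
        delta[t] = delta[t - 1] + gap
    else:                         # previous value observed: reset
        delta[t] = gap

# delta -> [0, 1, 2, 3, 1]
```

Note that the condition reads mask at t-1, not at t, which is the point of the issue above.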

2M-kotb avatar Feb 06 '19 12:02 2M-kotb

@zhiyongc Another thing: when computing X_last_obsv we need to make a copy of the speed_sequences array with np.copy(), because plain assignment only creates a reference to the same array.

(line 71 in main): X_last_obsv = speed_sequences should be: X_last_obsv = np.copy(speed_sequences)
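The aliasing problem can be shown in a couple of lines (a standalone sketch with a made-up array, not the repository's data):

```python
import numpy as np

speed_sequences = np.array([[1.0, 2.0, 3.0]])
alias = speed_sequences            # plain assignment: same underlying array
copy = np.copy(speed_sequences)    # independent buffer

speed_sequences[0, 0] = -1.0
# alias[0, 0] is now -1.0, but copy[0, 0] is still 1.0
```

Any in-place edit to speed_sequences (e.g. zeroing out missing entries) would silently corrupt X_last_obsv if it is only an alias.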

Many thanks for this great code, really helpful.

2M-kotb avatar Feb 07 '19 10:02 2M-kotb

@zhiyongc When you generate Mask, Delta, and Last_observed_X in the code, you shuffle them twice, while speed_sequences and speed_label are shuffled only once: the first time when they are created, and a second time when their dimensions are expanded, as shown below.

if masking:
    X_last_obsv = X_last_obsv[index]  # ----> 1st time
    Mask = Mask[index]  # ----> 1st time
    Delta = Delta[index]  # ----> 1st time
    speed_sequences = np.expand_dims(speed_sequences, axis=1)
    X_last_obsv = np.expand_dims(X_last_obsv[index], axis=1)  # ----> 2nd time
    Mask = np.expand_dims(Mask[index], axis=1)  # ----> 2nd time
    Delta = np.expand_dims(Delta[index], axis=1)  # ----> 2nd time
    dataset_agger = np.concatenate((speed_sequences, X_last_obsv, Mask, Delta), axis=1)

So I think they should also be shuffled only once:

if masking:
    X_last_obsv = X_last_obsv[index]
    Mask = Mask[index]
    Delta = Delta[index]
    speed_sequences = np.expand_dims(speed_sequences, axis=1)
    X_last_obsv = np.expand_dims(X_last_obsv, axis=1)
    Mask = np.expand_dims(Mask, axis=1)
    Delta = np.expand_dims(Delta, axis=1)
    dataset_agger = np.concatenate((speed_sequences, X_last_obsv, Mask, Delta), axis=1)
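Why the double shuffle matters: applying the same permutation twice produces a different ordering than applying it once, so the masked arrays end up misaligned with speed_sequences. A small deterministic sketch (with a made-up permutation, not the repository's index array):

```python
import numpy as np

n = 5
data = np.arange(n)
mask = np.arange(n)                  # aligned with data, row for row
index = np.array([2, 0, 4, 1, 3])    # a fixed shuffle for illustration

data_once = data[index]              # shuffled once
mask_twice = mask[index][index]      # shuffled twice, as in the first snippet

# data_once  -> [2, 0, 4, 1, 3]
# mask_twice -> [4, 2, 3, 0, 1]
# The row alignment between the two arrays is destroyed.
```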


2M-kotb avatar Feb 07 '19 16:02 2M-kotb

Hi @DeepWolf90, I agree with you on the issues you posted. All three issues are fixed in the new version of the code. In my tests, the prediction accuracy improved with the updated code. Thanks again for your kind help and the comprehensive descriptions of the issues!

zhiyongc avatar Feb 10 '19 09:02 zhiyongc

@zhiyongc Sorry if the following question sounds naive, but I'm new to PyTorch.

In the forward function, line 144: outputs = None

Based on my understanding, you don't apply any prediction layer; you just use the hidden state to compute the loss.

Why don't you use a fully connected layer with linear activation as a prediction layer?
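For context, such a prediction head is typically a single nn.Linear on top of the last hidden state. A minimal sketch (the sizes hidden_size, output_size, and batch are made-up assumptions, not values from the repository):

```python
import torch
import torch.nn as nn

hidden_size, output_size, batch = 64, 1, 8

fc = nn.Linear(hidden_size, output_size)   # linear prediction head
hidden = torch.randn(batch, hidden_size)   # e.g. the last GRU-D hidden state
prediction = fc(hidden)                    # shape: (batch, output_size)
```

With this head, the loss would be computed between `prediction` and the target instead of between the raw hidden state and the target.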

2M-kotb avatar Jul 02 '19 19:07 2M-kotb