Deep-Learning-for-Tracking-and-Detection

Question of tracking the untrackable

Open litingfeng opened this issue 6 years ago • 24 comments

Hi,

I just read the paper Tracking The Untrackable: Learning to Track Multiple Cues with Long-Term Dependencies and I have a question which I hope you can give me some hints:

What is the dimensionality of the similarity score (a vector or a single number)? Say target t_i is matched to detection d_j: is the score the feature vector itself, or the output of some further processing of that feature?

I am looking forward to your answer. Thank you very much.

litingfeng avatar Oct 17 '17 07:10 litingfeng

The similarity score is a single number that is produced by the target RNN using the feature vector as input.

abhineet123 avatar Oct 17 '17 13:10 abhineet123

@abhineet123 Did you mean that the feature vector is the input of the target RNN (O) and the score is its output? But in Figure 2, the feature vector is shown as the output of the FC layer following RNN (O).

litingfeng avatar Oct 17 '17 13:10 litingfeng

Yes, that is correct. The target RNN applies a softmax classifier (trained with a cross-entropy loss) to the feature vector to produce the similarity score.

This is mentioned in the last line of 3.5 (ii) (first para of column 2 on page 5):

"Our target RNN is also trained to perform the task of data association – outputs the score of whether a detection (d) corresponds to a target (t) from using a Softmax classifier and cross-entropy loss."

It seems that the target RNN produces both the feature vector and the similarity score.
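
To make this concrete, here is a rough PyTorch sketch of what such a score head could look like: an RNN over the fused features followed by an FC layer and a 2-way softmax whose "match" probability serves as the similarity score. The hidden size, the 2-class head and all names here are my own assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class TargetRNN(nn.Module):
    """Sketch of the target RNN head: an RNN over the fused
    appearance/motion/interaction features, followed by an FC layer
    and a 2-way softmax (match / no-match). Layer sizes and the
    2-class head are assumptions, not taken from the paper."""

    def __init__(self, feat_dim, hidden_dim=128):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 2)        # logits for (no-match, match)

    def forward(self, fused_feats):
        # fused_feats: (batch, seq_len, feat_dim)
        out, _ = self.rnn(fused_feats)
        logits = self.fc(out[:, -1])              # last time step
        return torch.softmax(logits, dim=-1)[:, 1]  # scalar score per pair

# Training would apply nn.CrossEntropyLoss to the logits with 0/1 labels.
```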

abhineet123 avatar Oct 17 '17 13:10 abhineet123

@abhineet123 I think I can understand it now. Thank you very much for your elaborate explanation.

litingfeng avatar Oct 17 '17 13:10 litingfeng

Hi,

I have another question, about Figure 3. According to the second paragraph on page 4 (Section 3.2), the target's appearance features are 500-D vectors, while the detection's feature is an H-D vector, with H = 128 according to the Implementation Details (2nd paragraph, page 6). However, both of them are outputs of the same CNN, so why are their dimensions different?

litingfeng avatar Oct 18 '17 03:10 litingfeng

The paper does not clearly mention how the same CNN can output both 500-D and 128-D feature vectors, but Fig. 3 does show what looks like an extra layer on top of the CNNs corresponding to the 500-D outputs, which might indicate another layer that performs this conversion. This seems to be confirmed by the last paragraph of Sec. 3.2, which mentions that they used a pre-trained VGGNet as the appearance feature extractor after replacing its last FC layer with one of their own that produces 500-D vectors.

The reason behind this difference seems simple enough. The 500-D vectors correspond to the appearance history of the target and are all passed through the LSTM, which fuses the information from all of them into a single H-dimensional vector representing the overall target appearance.

This vector is directly comparable to the H-D feature vector of the candidate detection, which is probably produced by the CNN without the extra FC layer they added. The two vectors are then concatenated to form the 2H-D vector that is finally processed by the Siamese classification network, whose output is also a 500-D vector. Sec. 4.3 mentions that the Siamese classification network is constructed using the same CNN as the one used for appearance feature extraction. This suggests that the FC layer of the Siamese network that produces this final 500-D output is similar to the FC layer that converts the H-D CNN output into the 500-D appearance vectors. Since it is a Siamese network, however, it contains a pair of these feature-extractor CNNs, and its FC layer has been trained to distinguish between two of these H-D vectors rather than simply mapping one of them into a 500-D vector.
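
Here is a rough PyTorch sketch of the appearance branch as I am reading it: the backbone produces H-D features, an extra FC layer maps the target-history features to 500-D, an LSTM fuses the history into a single H-D vector, and this is concatenated with the detection's H-D feature into a 2H-D vector. Apart from H = 128 and the 500-D size, everything here (the VGG16 choice, the projection layer, the layer names) is my own guess, not confirmed by the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

H = 128        # raw CNN feature size (from Implementation Details)
APP_DIM = 500  # dimension of the appearance features fed to the LSTM

class Backbone(nn.Module):
    """VGG feature extractor with its FC head replaced by a projection to H dims.
    The paper uses a pre-trained VGGNet; weights are omitted in this sketch."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=None)
        self.features = vgg.features
        self.flatten = nn.Flatten()
        self.proj = nn.Linear(512 * 7 * 7, H)      # H-D "raw" CNN feature

    def forward(self, x):                          # x: (B, 3, 224, 224)
        return self.proj(self.flatten(self.features(x)))

class AppearanceModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = Backbone()
        self.to_500 = nn.Linear(H, APP_DIM)        # the extra FC layer
        self.lstm = nn.LSTM(APP_DIM, H, batch_first=True)

    def forward(self, target_history, detection):
        # target_history: (B, T, 3, 224, 224); detection: (B, 3, 224, 224)
        B, T = target_history.shape[:2]
        hist = self.backbone(target_history.flatten(0, 1))   # (B*T, H)
        hist = self.to_500(hist).view(B, T, APP_DIM)         # (B, T, 500)
        _, (h, _) = self.lstm(hist)                          # (1, B, H)
        det = self.backbone(detection)                       # (B, H)
        return torch.cat([h[-1], det], dim=-1)               # (B, 2H)
```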

abhineet123 avatar Oct 18 '17 04:10 abhineet123

In summary, you mean that the 500-D target appearance vector is the output of the feature-extractor CNN with a 500-unit FC layer appended after the 128-unit FC layer, and that the detection's feature is the output of an incomplete version of the CNN without the 500-unit FC layer. I'm not sure whether I have understood your point correctly. In addition, I don't quite understand the last sentence and the architecture of the Siamese network. Does it look like the attached figure?

litingfeng avatar Oct 18 '17 09:10 litingfeng

Yes, that is what I mean. It is impossible to say exactly how the Siamese network is designed until they release their code but, yes, this is roughly what I had in mind.

abhineet123 avatar Oct 18 '17 14:10 abhineet123

@abhineet123 I am really grateful for your reply. Later I will ask the author for more details.

litingfeng avatar Oct 18 '17 14:10 litingfeng

Glad to be of assistance and please let me know what the authors have to say about this.

abhineet123 avatar Oct 18 '17 14:10 abhineet123

Hi,

I asked the author but haven't received a response yet. Here I have another question: do you know how the LSTM in the appearance model is trained? Is the LSTM included in the Siamese CNN? I'm still confused about the training procedure. Thank you very much.

litingfeng avatar Oct 28 '17 15:10 litingfeng

No I am also waiting for the authors to release their code to get the details of the training procedure.

abhineet123 avatar Oct 28 '17 17:10 abhineet123

Hi. In Sec. 3.3 (Motion), the authors write that velocities are extracted by their motion feature extractor. Does anyone have a clue about what algorithm it could be?

swamika001 avatar Feb 12 '18 18:02 swamika001

Probably some kind of optical flow algorithm, like cvCalcOpticalFlowPyrLK; this is what they used in an earlier version of this paper.
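
If it helps, here is a minimal sketch of how per-target velocities could be estimated that way, using OpenCV's Python API (cv2.calcOpticalFlowPyrLK, the modern equivalent of that routine). The paper does not specify the algorithm, so the point selection, parameters and mean-displacement estimate below are all my own assumptions.

```python
import cv2
import numpy as np

def estimate_velocity(prev_gray, next_gray, bbox):
    """Rough per-frame velocity (vx, vy) for one target via pyramidal
    Lucas-Kanade optical flow. bbox = (x, y, w, h) in the previous frame."""
    x, y, w, h = bbox
    # track a few corner points inside the target box
    pts = cv2.goodFeaturesToTrack(prev_gray[y:y + h, x:x + w],
                                  maxCorners=20, qualityLevel=0.01,
                                  minDistance=3)
    if pts is None:
        return np.zeros(2)
    pts = pts.reshape(-1, 1, 2) + np.float32([x, y])   # back to image coords
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                  pts, None)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2)
    # mean displacement of the successfully tracked points
    return (new_pts[good] - pts[good]).reshape(-1, 2).mean(axis=0)
```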

abhineet123 avatar Feb 12 '18 19:02 abhineet123

Hi,

In Sec. 3.5 (Target), any idea what the input sequence to the target RNN is? The authors mention that the outputs of the appearance, motion and interaction models are concatenated and passed to the target RNN, but then how does that result in a sequence?

nidhinkrishnanv avatar Apr 01 '18 09:04 nidhinkrishnanv

Has the code been released yet, or is any implementation available?

tonmoyborah avatar Apr 11 '18 07:04 tonmoyborah

Not as far as I know.

abhineet123 avatar Apr 11 '18 12:04 abhineet123

Hi, does the paper Tracking The Untrackable: Learning to Track Multiple Cues with Long-Term Dependencies have a code implementation available?

behappyZheng avatar Oct 10 '18 05:10 behappyZheng

Not that I am aware of.

abhineet123 avatar Oct 10 '18 13:10 abhineet123

Do you have any intuition about the input image size fed into the CNN? VGG16 takes a 224*224 input, and its first FC layer expects a 25088-dimensional (7*7*512) feature map. But a person's bounding box (height and width) would not be square, so how did they crop the image? If the input image size is different, then the first FC layer's input size will also be different.

icesohelrana avatar Nov 18 '18 01:11 icesohelrana

In their earlier paper, they extract the patch and then resize it to a fixed size (224*224 in your case) without preserving the aspect ratio. Though the patch becomes distorted to human eyes, it probably doesn't make any difference to the CNN as long as the test patches are distorted in the same way as the training ones.
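
Something like the following sketch, where the box is clipped to the image and then resized to a fixed square regardless of aspect ratio. The 224x224 size and the clipping behaviour are my own assumptions for illustration.

```python
import cv2

def extract_patch(frame, bbox, size=(224, 224)):
    """Crop a detection box from the frame and resize it to a fixed square
    input, ignoring the aspect ratio (so people get stretched)."""
    x, y, w, h = [int(v) for v in bbox]
    img_h, img_w = frame.shape[:2]
    # clip the box to the image bounds before cropping
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + w, img_w), min(y + h, img_h)
    patch = frame[y0:y1, x0:x1]
    # distortion is fine as long as it is applied consistently at train and test time
    return cv2.resize(patch, size)
```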

abhineet123 avatar Nov 18 '18 01:11 abhineet123

Hi, does the paper Tracking The Untrackable: Learning to Track Multiple Cues with Long-Term Dependencies have a code implementation available?

tianzhihen avatar Feb 19 '20 06:02 tianzhihen

Not that I'm aware of.

abhineet123 avatar Feb 19 '20 13:02 abhineet123

Has there been any recent tracking method that uses self-attention (such as Transformer or BERT)?

tianzhihen avatar Feb 21 '20 04:02 tianzhihen