
Questions about implementation details

Open Co4AI opened this issue 4 years ago • 3 comments

As introduced in the paper, TransTrack takes composite features (from different frames) as input to the transformer during inference. But I find that TransTrack takes features from the same frame during training (it only supports static-image training). Moreover, when training the current-frame decoder, it doesn't take the pre-frame feature (it combines the same features). Am I right?
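For concreteness, here is a minimal sketch of what I mean by composite features at inference time. The modules and shapes below are hypothetical stand-ins (not the repo's actual code), just to make the data flow explicit:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins, only to make the data flow concrete.
backbone = nn.Conv2d(3, 8, 3, padding=1)   # frame -> feature map
transformer = nn.Identity()                # placeholder for the real model

video_frames = [torch.randn(1, 3, 64, 64) for _ in range(3)]  # dummy clip

pre_feat = None
for frame in video_frames:
    cur_feat = backbone(frame)             # current-frame features
    if pre_feat is None:                   # first frame has no history
        pre_feat = cur_feat
    # Composite input: features from two *different* frames at test time.
    composite = torch.cat([cur_feat, pre_feat], dim=1)
    outputs = transformer(composite)
    pre_feat = cur_feat                    # buffer features for the next frame
```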

Co4AI avatar Jan 06 '21 03:01 Co4AI

It seems that self.decoder_track is trained under the torch.randn(1).item() > 0.0 condition, and self.decoder is trained otherwise. There are two questions:

  1. When we train self.decoder, the two parts of the composite feature map are the same, which differs from the testing stage.
  2. When we train self.decoder_track, we match the outputs of the modified images against the unchanged annotations to get the indices.

I find this hard to understand. Am I missing something important?
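A rough sketch of the branching described above (self.decoder and self.decoder_track are the repo's names; everything else here is assumed for illustration):

```python
import torch

def training_step(model, cur_feat, pre_feat, targets):
    # torch.randn(1) samples a standard normal, so ".item() > 0.0"
    # holds about half the time: a 50/50 choice between the branches.
    if torch.randn(1).item() > 0.0:
        # Track branch: decoder_track sees the (augmented) pre-frame
        # features, but (per question 2) its outputs are matched
        # against the unchanged annotations.
        out = model.decoder_track(cur_feat, pre_feat)
    else:
        # Detection branch: (per question 1) both halves of the
        # composite feature come from the same frame, unlike testing.
        out = model.decoder(cur_feat, cur_feat)
    return out  # matching/loss against `targets` would follow here
```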

Co4AI avatar Jan 06 '21 04:01 Co4AI


Hi~

  1. We tried training self.decoder with two features from different images; the result is similar (even a little worse).
  2. We are now verifying whether changing the annotations accordingly helps (see the sketch below). Once we have the result, we will update it here.
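For concreteness, "changing the annotations accordingly" would look roughly like this, assuming the pre-frame is simulated by translating the current image (the helper, the translation-only augmentation, and the xyxy box format are all assumptions):

```python
def shift_boxes(boxes, dx, dy):
    """Hypothetical helper: apply the same translation that augmented
    the image to its ground-truth boxes (xyxy format assumed)."""
    shifted = boxes.clone()
    shifted[:, [0, 2]] += dx  # shift x-coordinates
    shifted[:, [1, 3]] += dy  # shift y-coordinates
    return shifted
```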

PeizeSun avatar Jan 06 '21 16:01 PeizeSun


Thank you for your reply~

  1. About training self.decoder with features from different images: I got the same result (a little worse). But I wonder why the different training strategy can bring improvement.
  2. Looking forward to your update.

Co4AI avatar Jan 07 '21 05:01 Co4AI