applezy8866

3 issues by applezy8866

Hello. When wrapping the model with auto_fp16() and setting self.fp16_enabled = False during training, the inference results are different. I don't know why, because it seems auto_fp16() shouldn't take effect when...
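For context, mmcv-style auto_fp16 decorators are gated by the module's fp16_enabled flag: when the flag is False the wrapper is a no-op and inputs stay in full precision. The sketch below is a hypothetical minimal reimplementation of that gating (not mmcv's actual code), using NumPy dtypes to stand in for tensor precision:

```python
import functools

import numpy as np


def auto_fp16_like(forward):
    """Hypothetical sketch of a flag-gated mixed-precision decorator.

    Mimics the behavior attributed to mmcv's auto_fp16: inputs are
    cast to half precision only when self.fp16_enabled is True;
    otherwise the call passes through unchanged.
    """
    @functools.wraps(forward)
    def wrapper(self, x):
        if getattr(self, "fp16_enabled", False):
            x = x.astype(np.float16)  # cast inputs only when enabled
        return forward(self, x)
    return wrapper


class TinyModel:
    def __init__(self):
        self.fp16_enabled = False  # the flag the decorator checks

    @auto_fp16_like
    def forward(self, x):
        return x * 2


m = TinyModel()
print(m.forward(np.ones(2, dtype=np.float32)).dtype)  # float32: decorator is a no-op
m.fp16_enabled = True
print(m.forward(np.ones(2, dtype=np.float32)).dtype)  # float16: inputs were cast
```

Under this reading, with fp16_enabled = False the decorated forward should behave identically to the undecorated one, so a train/inference mismatch would have to come from somewhere else (e.g. the flag being flipped at inference time).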

I found that when doing temporal alignment in TemporalSelfAttention, you rotate the previous BEV feature and translate the reference points. Why not rotate and translate the previous BEV feature, or rotate...
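To make the translation half of this concrete, here is a small sketch of compensating ego-motion on reference points in BEV space. The function name, the sign convention, and the rotation direction are all assumptions for illustration, not BEVFormer's actual code: it maps current-frame 2D reference points back into the previous frame by undoing an ego translation delta_xy and yaw change delta_yaw.

```python
import numpy as np


def align_reference_points(ref_pts, delta_xy, delta_yaw):
    """Hypothetical temporal-alignment sketch in BEV coordinates.

    ref_pts: (N, 2) reference points in the current ego frame.
    delta_xy: (2,) ego translation between the two frames.
    delta_yaw: ego yaw change in radians.
    Returns the points expressed in the previous ego frame,
    under one assumed convention: translate first, then rotate.
    """
    c, s = np.cos(delta_yaw), np.sin(delta_yaw)
    rot = np.array([[c, -s],
                    [s, c]])
    return (ref_pts - delta_xy) @ rot.T


pts = np.array([[2.0, 0.0]])
aligned = align_reference_points(pts, delta_xy=np.array([1.0, 0.0]),
                                 delta_yaw=np.pi / 2)
print(aligned)  # approximately [[0., 1.]] under this convention
```

The question in the issue is why the rotation is applied to the dense previous BEV feature (e.g. via resampling) while only the translation is folded into the reference points, rather than applying the full rigid transform on one side.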

Dear authors, I found that in dataset.py there is "if not self.test_mode: self.num_frames_per_sample += 1", and in tracker.py there is "num_frame = img.size(1) - 1". So what is the purpose of the additional frame?
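One plausible reading of those two lines (an assumption on my part, not confirmed by the repository) is that training samples carry one extra leading frame so the tracker has history available, while only the remaining num_frame frames are actually processed. A tiny sketch of that indexing, with a hypothetical split_sample helper:

```python
def split_sample(frames):
    """Hypothetical sketch: treat the first loaded frame as history.

    Mirrors "num_frame = img.size(1) - 1" from tracker.py under the
    assumed semantics that the dataset loads T frames but only the
    last T - 1 are the frames the tracker runs on.
    """
    num_frame = len(frames) - 1      # drop one frame from the count
    history = frames[0]              # extra frame kept as context
    processed = frames[1:]           # frames the tracker iterates over
    return history, processed, num_frame


frames = [10, 11, 12, 13, 14]        # T = 5 frames loaded in training
history, processed, num_frame = split_sample(frames)
print(num_frame, history, processed)  # 4 10 [11, 12, 13, 14]
```

Whether the extra frame is used as warm-up context, for supervision targets, or for something else entirely is exactly what the issue is asking the authors to clarify.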