PersFormer_3DLane
Where is the lane visibility output tensor from the paper?
Thank you for sharing your work, but I find that the GitHub code is not consistent with the paper.
such as, "Unifying anchor design in 2D and 3D. We first put curated anchors (red) in the BEV space (left), then project them to the front view (right). Offset xi k and ui k (dashed line) are predicted to match ground truth (yellow and green) to anchors. The correspondence is thus built, and features are optimized together“
The code predicts lane length using the old LaneATT method, which predicts the start ratio and length, but the paper uses per-row lane visibility?
@zimurui Hi, could you elaborate more? I did not fully understand your question. LaneATT predicts the length of lanes directly, while our 2D branch predicts a visibility value for each row to determine the final length.
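To make the contrast concrete, here is a minimal sketch (not the repository code; the tensor sizes and variable names are assumptions for illustration) of the two anchor output layouts: a LaneATT-style head that regresses a start position and length per anchor, versus a per-row visibility head whose lane extent comes from the rows marked visible.

```python
import torch

# Minimal sketch (not the repository code): the two anchor output layouts
# discussed above. n_anchors and n_offsets are illustrative values.
n_anchors, n_offsets = 4, 72

# LaneATT-style head: each anchor regresses [start_y, length, x_1 ... x_n],
# so the lane extent comes from the predicted start and length.
laneatt_out = torch.randn(n_anchors, 2 + n_offsets)
start_y, length = laneatt_out[:, 0], laneatt_out[:, 1]
laneatt_x = laneatt_out[:, 2:]                    # lateral offset per row

# Per-row-visibility head: each anchor regresses [x_1 ... x_n, v_1 ... v_n],
# so the lane extent is simply the set of rows marked visible.
vis_out = torch.randn(n_anchors, 2 * n_offsets)
vis_x = vis_out[:, :n_offsets]                    # lateral offset per row
vis_prob = torch.sigmoid(vis_out[:, n_offsets:])  # visibility per row
lane_extent = vis_prob > 0.5                      # rows kept for each lane
```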
Thank you for your reply. I checked again. PersFormer regresses visibility by adding another 72 outputs to the offset prediction task:
" self.reg_layer = nn.Linear(2 * self.anchor_feat_channels * self.fmap_h, 2 * self.n_offsets) .... reg = self.reg_layer(batch_anchor_features)"
That is different from the LaneATT method. Does this perform better than LaneATT?
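For context, a self-contained sketch of the shape logic in the snippet above (the class name and hyperparameter values here are hypothetical; only the layer dimensions mirror the quoted code):

```python
import torch
import torch.nn as nn

# Hypothetical module mirroring the quoted layer dimensions; the channel
# and feature-map height values are assumptions for illustration only.
class AnchorRegHead(nn.Module):
    def __init__(self, anchor_feat_channels=64, fmap_h=72, n_offsets=72):
        super().__init__()
        self.n_offsets = n_offsets
        # 2 * n_offsets outputs: n_offsets x-offsets + n_offsets visibility logits
        self.reg_layer = nn.Linear(2 * anchor_feat_channels * fmap_h, 2 * n_offsets)

    def forward(self, batch_anchor_features):
        reg = self.reg_layer(batch_anchor_features)
        x_offsets = reg[:, :self.n_offsets]       # per-row lateral offsets
        vis_logits = reg[:, self.n_offsets:]      # per-row visibility logits
        return x_offsets, torch.sigmoid(vis_logits)

# Usage: pooled anchor features flattened to (num_anchors, 2 * C * fmap_h)
feats = torch.randn(8, 2 * 64 * 72)
x_off, vis = AnchorRegHead()(feats)               # both shaped (8, 72)
```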
One more question: could you please provide some trained weights?
@zimurui Yes, it is different from the original LaneATT, and we did not try the original formulation. We designed it this way to keep the 2D and 3D branches consistent from the very beginning. Intuitively, the critical problem is that LaneATT uses too many anchors to keep them in both 2D and 3D.
One more question: could you please provide some trained weights?
Sorry about that. We cannot provide pretrained weights due to the terms of the Waymo license.