cross_view_transformers
Difference between the paper's figure and the visualized predicted results
Hi, I ran the demo in cross_view_transformers/scripts/example.ipynb, but I found a difference between the paper's figure and the demo's output. The figures are as follows:
In my opinion, to produce the paper's figure the model would need three labels rather than simple binary segmentation. I look forward to your answer, thank you.
I think most papers right now train separate models for dynamic objects and static road layout; that's why you only see a single model here. Just my personal perspective.
Derrick is correct - all of the models in this work were trained for single classes for closer comparison to prior works like Lift-Splat and FIERY
Thank you for your answers, but I would like to ask how to reproduce the result figure shown by the author.
@yangyangsu29 Did you solve the problem now? Thanks.
Training each class (vehicle and drivable area) separately does indeed reproduce the paper's result; for visualization, the predicted maps are overlaid together as follows:
@yangyangsu29
```bash
python3 scripts/train.py \
  +experiment=cvt_nuscenes_vehicle data.dataset_dir=/media/datasets/nuscenes \
  data.labels_dir=/media/datasets/cvt_labels_nuscenes
```
Thanks for your reply; please allow me to refine the question further.
You mean that when the above command is executed, only the vehicle model is trained, and that I need to change `+experiment=cvt_nuscenes_vehicle` to `+experiment=cvt_nuscenes_road` to train the drivable area. However, the visualization is not saved when this command runs, so I edited the `_log_image` function in `cross_view_transformer/callbacks/visualization_callback.py` to store the visualization feature map with OpenCV. I would like to ask how to add the two feature maps together, and whether the author has implemented this anywhere in the repo. In other words, I wish you could be a little more detailed.
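For reference, the kind of edit I mean is just writing the rendered image to disk inside `_log_image`, roughly like the sketch below; `image` and `step` are placeholder names, not the repo's exact variables:

```python
import cv2

# Inside _log_image: dump the rendered visualization to disk.
# `image` is assumed to be an RGB uint8 array and `step` a running counter.
cv2.imwrite(f'viz_{step:06d}.png', cv2.cvtColor(image, cv2.COLOR_RGB2BGR))
```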
Yes, change `+experiment=cvt_nuscenes_vehicle` to `+experiment=cvt_nuscenes_road` to train the drivable area, then run `./scripts/example.ipynb` (modifying `ckpt_path` and so on) to run inference on the val dataset. For the visualization, you only need a little extra code to show both predictions in one figure, as shown above.
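For reference, the drivable-area run would then look like this (same dataset paths as the vehicle command above):

```bash
python3 scripts/train.py \
  +experiment=cvt_nuscenes_road data.dataset_dir=/media/datasets/nuscenes \
  data.labels_dir=/media/datasets/cvt_labels_nuscenes
```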
@yangyangsu29 I have now generated the result figures for the road and vehicle models, as you showed earlier (https://user-images.githubusercontent.com/49515300/174055573-75356d4f-6838-456a-87de-2747d24ca09f.png). However, I don't know how to combine them. In ./scripts/example.ipynb there is only one checkpoint path, so I tried loading two checkpoints at the same time and then generating the visualization (as shown in the code below), but the result is not very good. Can you explain how the visualizations are overlaid together? Could you send the modified file to my email? My email is [email protected]. Thank you for your patience!
```python
with torch.no_grad():
    for batch in loader:
        batch = {k: v.to(device) if isinstance(v, torch.Tensor) else v for k, v in batch.items()}

        pred = network(batch)
        pred_v = network_v(batch)

        visualization = np.vstack(viz(batch=batch, pred=pred))
        visualization_v = np.vstack(viz(batch=batch, pred=pred_v))
        visualization = visualization + visualization_v

        images.append(visualization)
```
@yangyangsu29 Can you be more specific about "for the visualization, you only need a little extra code to show both predictions in one figure"? Thanks.
@gongyan1 Have you found a way to combine the two visualization results? Please give me some advice, thanks!
Sorry for the delay - try this notebook to visualize merged predictions from two models.
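For anyone who cannot open the notebook, below is a minimal sketch of one way to merge the two single-class outputs into a single figure. This is not the notebook's exact code: `network_road`, `network_vehicle`, the `'bev'` output key, and the 0.4 threshold are all assumptions.

```python
import numpy as np
import torch

with torch.no_grad():
    # Run both single-class models on the same batch; each is assumed to
    # return a dict whose 'bev' entry holds (B, 1, H, W) logits.
    road = (network_road(batch)['bev'].sigmoid() > 0.4)[0, 0].cpu().numpy()
    vehicle = (network_vehicle(batch)['bev'].sigmoid() > 0.4)[0, 0].cpu().numpy()

# Paint both masks onto a single white canvas; vehicles are drawn last so
# they stay visible wherever the two masks overlap.
canvas = np.full((*road.shape, 3), 255, dtype=np.uint8)
canvas[road] = (180, 180, 180)   # drivable area in grey
canvas[vehicle] = (50, 50, 255)  # vehicles (BGR order if saved with OpenCV)
```

Painting masks onto one canvas also sidesteps the issue with adding two rendered uint8 images directly, as in the earlier snippet: uint8 addition wraps around and distorts the colors.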