second.pytorch
How's the performance on 16-beam lidar data?
Are there any demonstration videos or GIFs showing detection results on 16-beam data?
@muzi2045 Seems a little slow..
That was a recording problem; inference time is between 30 ms and 50 ms.
Then why is there this blocky effect?
@muzi2045 How do you prepare the 16-beam lidar data? Do you convert it to a KITTI-like format?
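On the KITTI-like format question: a minimal sketch of the KITTI velodyne `.bin` layout that SECOND reads, where each point is four float32 values (x, y, z, intensity). The helper names and the example points are my own, not from the repo:

```python
# Sketch: packing a 16-beam scan into the KITTI velodyne .bin layout
# (N x 4 float32: x, y, z, intensity). In practice "points" would come
# from your lidar driver; the array below is dummy data.
import numpy as np

def save_kitti_bin(points, path):
    """Write an (N, 4) array of [x, y, z, intensity] as flat float32."""
    pts = np.asarray(points, dtype=np.float32).reshape(-1, 4)
    pts.tofile(path)

def load_kitti_bin(path):
    """Read a KITTI-style .bin back into an (N, 4) float32 array."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

# Example round trip with dummy points.
points = np.array([[1.0, 2.0, -0.5, 0.3],
                   [4.2, -1.1, 0.0, 0.8]], dtype=np.float32)
save_kitti_bin(points, "000000.bin")
restored = load_kitti_bin("000000.bin")
assert np.allclose(points, restored)
```

Intensity scaling conventions differ between sensors (KITTI uses 0-1), so the reflectivity channel may need normalizing before training or inference.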
Do you still use the pretrained model for the 16-beam lidar? @muzi2045
No, that's not the pretrained model released by the author.
Thanks!!
Did you pretrain the model on KITTI or on nuScenes? @muzi2045
Both datasets were trained on; nuScenes performs better.
thank you very much!
@muzi2045 Hi Muzi, for pretraining your 16-beam model with KITTI and nuScenes, did you use the original 64/32-beam data, or downsample it to 16 beams?
Thank you in advance!
Trained with 64/32-beam lidar data, inference with 16-beam lidar data; no need to downsample. @turboxin
@muzi2045 thank you very much!
@muzi2045 Hello! You mentioned that inference time is between 30 ms and 50 ms; may I ask which GPU you are using? Could you also provide some quantitative performance data for your results on 16-beam lidar? Thanks a lot!
With a 1050 Ti, inference time is between 40 ms and 60 ms (without TensorRT speedup); with a 1080 Ti, between 15 ms and 30 ms (without TensorRT).
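For anyone reproducing these numbers: CUDA kernels launch asynchronously, so a naive timer only measures the launch, not the forward pass. A sketch of how per-frame inference time could be measured in PyTorch (`model` and the tiny stand-in network are placeholders, not SECOND itself):

```python
# Sketch: measuring per-frame inference time in PyTorch.
# torch.cuda.synchronize() forces pending GPU work to finish before
# reading the clock; on CPU it is a no-op guarded by is_available().
import time
import torch

def time_inference(model, example_input, warmup=5, iters=20):
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):       # warm-up runs (allocator, cudnn autotune)
            model(example_input)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(example_input)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1000.0  # ms per frame

# Tiny stand-in model just to show the call.
net = torch.nn.Linear(64, 8)
ms = time_inference(net, torch.randn(1, 64))
print(f"{ms:.3f} ms per forward pass")
```

Averaging over many frames after a warm-up, as above, gives steadier numbers than timing a single pass.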
How can I run the demo on my own dataset (32-beam data)? Could you help me? Thank you in advance!
@muzi2045 Hi, could you share your pretrained model? I trained on the KITTI dataset and ran inference on a Velodyne 16-beam lidar, but the results don't seem good. Much appreciated.
@muzi2045 Hi, could you show an example of how you converted the model to TensorRT?
There are some problems in the pytorch -> onnx -> tensorRT path; I couldn't successfully convert this model to TensorRT to reduce inference time. But you can refer to this repo, where the author seems to have converted it successfully: nutonomy_pointpillars. Good luck! @dhellfeld
@muzi2045 Hi, thanks for sharing. I am new to this area; could you give some advice on how to use SECOND for inference and visualization like the GIF you showed?