
Different result of rangenet_lib

TT22TY opened this issue 5 years ago • 37 comments

Hello, I have tested rangenet_lib, and the result is shown below, which is different from the demo on the website (https://github.com/PRBonn/rangenet_lib). I used the pre-trained model provided on the website, and I wonder why the result is different and wrong. Could you please give some suggestions to help find the reason? Besides, I also opened an issue under SuMa++ (https://github.com/PRBonn/semantic_suma/issues/6#issue-525720509).

image

Thank you very much.

TT22TY avatar Nov 27 '19 01:11 TT22TY

Hi @TT22TY,

we are currently investigating the issue. So far, we only know that it might occur on Nvidia RTX graphics cards.

Do you also have an RTX model?

(sorry for the late reply.)

jbehley avatar Dec 03 '19 06:12 jbehley

Thanks for your reply. :) @jbehley I use a GeForce RTX 2080, Cuda 10.0 (V10.0.130), Cudnn 7.5.0, Driver Version 410.48. I am not sure what "an RTX model" means. Thank you very much.

TT22TY avatar Dec 04 '19 05:12 TT22TY

Okay, this really seems to be an issue with the GeForce RTX line. I experienced similar problems with my RTX 2070, and therefore we have to investigate the reason.

The non-TensorRT part works under PyTorch with an RTX 2080 Ti, since this is the card we used for our experiments.

jbehley avatar Dec 04 '19 07:12 jbehley

It seems to be a problem with the TensorRT version on RTX graphics cards. Everything works as expected with TensorRT 5.1:

image

Hope that resolves your issue. We also changed the README.md to account for this requirement.

jbehley avatar Dec 05 '19 07:12 jbehley

Thank you very much! @jbehley

In fact, I use TensorRT 5.1.5.0, so I wonder which version you installed. Besides, since the TensorRT version is tied to Cuda and Cudnn, could you please tell me the exact Cuda and Cudnn versions you use? Mine are Cuda 10.0 (V10.0.130), Cudnn 7.5.0, Driver Version 410.48, TensorRT 5.1.5.0.

Thank you!

TT22TY avatar Dec 05 '19 09:12 TT22TY

Hi @TT22TY ,

I've tested on three different setups.

  1. Quadro P4000, Cuda 10.0.130, Cudnn 7.6.0, Driver 418.56, TensorRT 5.1.5.0
  2. GeForce GTX 1080 Ti, Cuda 9.0.176, Cudnn 7.3.1, Driver 390.116, TensorRT 5.1.5.0
  3. GeForce GTX 1050 Ti, Cuda 10.0.130, Cudnn 7.5.0, Driver 410.79, TensorRT 5.1.5.0

I hope it helps.

Chen-Xieyuanli avatar Dec 05 '19 20:12 Chen-Xieyuanli

Hi @TT22TY,

and I run it under Ubuntu 18.04 using the following combination:

  1. GeForce RTX 2070, Cuda V10.1.243, Cudnn 7.6.3, Driver 418.87.01, TensorRT 5.1.5.0-ga

For TensorRT I downloaded: nv-tensorrt-repo-ubuntu1804-cuda10.1-trt5.1.5.0-ga-20190427_1-1_amd64.deb

jbehley avatar Dec 05 '19 22:12 jbehley

@jbehley @Chen-Xieyuanli, thank you very much. I am not sure which part leads to the wrong result; mine is Ubuntu 16.04, GeForce RTX 2080, Cuda V10.0.130, Cudnn 7.5.0, Driver Version 410.48, TensorRT 5.1.5.0.

For TensorRT I downloaded: TensorRT-5.1.5.0.Ubuntu-16.04.5.x86_64-gnu.cuda-10.0.cudnn7.5.tar.gz

Do you have any suggestions on which one I should update? I will try it again.

Thank you very much! :)

TT22TY avatar Dec 06 '19 01:12 TT22TY

Hi @TT22TY, since you have an RTX graphics card, I would suggest following @jbehley's setup.

Please make sure that you build the system against the expected setup, because different versions of Cuda, Cudnn, or TensorRT can coexist on the same machine. When compiling the system, it might still link against the wrong versions.
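
For example, a quick way to sanity-check which runtime libraries the dynamic loader actually resolves could look roughly like this (a minimal sketch using ctypes; the version-query functions are the standard Cuda/Cudnn/TensorRT ones, and you may need to adjust library names or paths to your install):

import ctypes

# Ask each runtime which version is actually loaded, to catch a mismatch
# between the toolkit you think you built against and the libraries found
# on LD_LIBRARY_PATH.
cudart = ctypes.CDLL('libcudart.so')
cuda_version = ctypes.c_int()
cudart.cudaRuntimeGetVersion(ctypes.byref(cuda_version))
print('CUDA runtime:', cuda_version.value)        # e.g. 10000 for Cuda 10.0

cudnn = ctypes.CDLL('libcudnn.so')
print('Cudnn:', cudnn.cudnnGetVersion())          # e.g. 7500 for Cudnn 7.5.0

nvinfer = ctypes.CDLL('libnvinfer.so')
print('TensorRT:', nvinfer.getInferLibVersion())  # e.g. 5105 for TensorRT 5.1.5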

Chen-Xieyuanli avatar Dec 06 '19 07:12 Chen-Xieyuanli

@Chen-Xieyuanli Thank you very much, I will try. :)

TT22TY avatar Dec 06 '19 09:12 TT22TY

In reference to issue #6,

Below is my hardware setup: GeForce GTX 1050, Cuda 10.1 (V10.1.243), Cudnn 7, TensorRT 6.0.1.

I just wanted to know whether this setup is compatible or whether I should switch to another setup. Thanks!

Ewaolx avatar Dec 08 '19 00:12 Ewaolx

So far we have only experienced problems with RTX models. However, I would currently suggest using TensorRT 5.1, since this is the version we have tested the most and have running on many other systems.

If you go for TensorRT 6.0, we cannot guarantee that everything works as expected.

jbehley avatar Dec 09 '19 07:12 jbehley

Thanks for the update.

I tried running with TensorRT 5.1 but am still getting the same result. Also, I tried the .bin file provided in the example folder and had no issues getting the expected result.

It would be great if you could try this pcd file and see whether you get the same results as mine. Thank you!

Ewaolx avatar Dec 21 '19 03:12 Ewaolx

Hi, I also get a similarly bad result on the example scan: bad. My environment:

Ubuntu 5.4.0-6ubuntu1~16.04.12
CUDA Version 10.0.130  cudnn 7.5.1
TITAN RTX 24g  
Nvidia driver  440.44

and TensorRT information:

ii  libnvinfer-dev                                              5.1.5-1+cuda10.0                                      amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                          5.1.5-1+cuda10.0                                      all          TensorRT samples and documentation
ii  libnvinfer5                                                 5.1.5-1+cuda10.0                                      amd64        TensorRT runtime libraries
ii  tensorrt                                                    5.1.5.0-1+cuda10.0                                    amd64        Meta package of TensorRT

LongruiDong avatar Dec 26 '19 08:12 LongruiDong

Hi @jbehley, I just tried on a Xavier with TensorRT 5.1 and it does work correctly, while on a TX2 with TensorRT 6 it is incorrect as before. It seems that the GPU and TensorRT version really matter.

Besides, I am wondering when you will release the ROS interface for lidar-bonnetal~

Thank you very much!

TT22TY avatar Jan 08 '20 11:01 TT22TY

Hi, I also get a similarly bad result on the example scan: bad. My environment:

Ubuntu 5.4.0-6ubuntu1~16.04.12
CUDA Version 10.0.130  cudnn 7.5.1
TITAN RTX 24g  
Nvidia driver  440.44

and TensorRT information:

ii  libnvinfer-dev                                              5.1.5-1+cuda10.0                                      amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                          5.1.5-1+cuda10.0                                      all          TensorRT samples and documentation
ii  libnvinfer5                                                 5.1.5-1+cuda10.0                                      amd64        TensorRT runtime libraries
ii  tensorrt                                                    5.1.5.0-1+cuda10.0                                    amd64        Meta package of TensorRT

Hi, it is still not solved on my Titan RTX GPU, although I tried using

Cuda V10.1.243, Cudnn 7.6.3, Driver 440.44, TensorRT 5.1.5.0-ga

I have also tested with Driver 418.87 (the rest the same as the line above), but I ran into runtime error #15 again...

If I train a new model from scratch on SemanticKITTI in the current environment, will this issue be solved? By doing so, can it achieve the expected semantic segmentation results?

Thanks a lot~

LongruiDong avatar Jan 09 '20 09:01 LongruiDong

You can always run the model without TensorRT (see lidar-bonnetal for the PyTorch models), since this worked reliably on all our systems with different GPUs.

We currently cannot do anything to solve it or give advice, since it seems to be a problem with the RTX cards and specific versions. I don't know whether some optimizations (fp16, etc.) can be turned off, or whether they are what hurts the result.

@TT22TY: Good to hear that it works partially.

You can also try to open an issue with Nvidia and TensorRT (https://github.com/NVIDIA/TensorRT/issues). They might have some suggestions.

jbehley avatar Jan 09 '20 10:01 jbehley

Hi @TT22TY, I'm glad that it works for you now, and we also know now that it's quite sensitive to the GPU and TensorRT versions. Andres @tano297 may later release a better version of the C++ and ROS interface for LiDAR-bonnetal.

Hi @LongruiDong, I'm sorry that this repo doesn't work properly for you. A more complete development version with different model formats may be released later by Andres @tano297.

Since we are now fairly sure that the problems are caused by the GPUs and TensorRT, about which we cannot do anything, I will close this issue.

If there are other problems related to this, please feel free to ask me to reopen this issue.

Chen-Xieyuanli avatar Jan 09 '20 11:01 Chen-Xieyuanli

Hi @TT22TY,

and I run it under Ubuntu 18.04 using the following combination:

1. GeForce RTX 2070, Cuda V10.1.243, Cudnn 7.6.3, Driver 418.87.01, TensorRT 5.1.5.0-ga

For TensorRT I downloaded: nv-tensorrt-repo-ubuntu1804-cuda10.1-trt5.1.5.0-ga-20190427_1-1_amd64.deb

Hi. Are you sure you get the correct result based on this configuration?

We tested this configuration on a fresh machine, following your config exactly. There is output, but it is not the proper segmentation result. image

Here is our system config: image image image image

Is there anything else I am missing?

Thanks for the help.

Claud1234 avatar Apr 28 '20 17:04 Claud1234

Hi @Claud1234, I did not get the correct result on my machine (Ubuntu 16.04, GeForce RTX 2080, Cuda V10.0.130, Cudnn 7.5.0, Driver Version 410.48, TensorRT 5.1.5.0), while I get the correct result on the Xavier.
Since I do not have a machine with an RTX 2070, I have no idea how it performs on that configuration. Sorry about that, and I am also looking forward to a solution to this issue.

TT22TY avatar May 06 '20 10:05 TT22TY

I also get a different result. But after converting the onnx opset version from 7 to 9 and optimizing the model, it works normally. image

After converting and optimizing the model: image. This is my conversion code:

import onnx
from onnx import version_converter, optimizer

model_path = './model.onnx'
original_model = onnx.load(model_path)
# Convert the model to opset 9, then run the default optimization passes
converted_model = version_converter.convert_version(original_model, 9)
optimized_model = optimizer.optimize(converted_model)

# Overwrite the original model file with the optimized one
onnx.save(optimized_model, './model.onnx')

kuzen avatar Jun 10 '20 08:06 kuzen

Hey @kuzen, thank you very much for your feedback. We now have a new solution to the incompatibility problem! :-)

Chen-Xieyuanli avatar Jun 10 '20 09:06 Chen-Xieyuanli

Hi @kuzen, thank you for sharing your solution. Could you please also share your hardware setup, for example the Cuda and TensorRT versions? Thank you very much~

TT22TY avatar Jun 11 '20 09:06 TT22TY

Hi @kuzen, thank you for sharing your solution. Could you please also share your hardware setup, for example the Cuda and TensorRT versions? Thank you very much~

Hi, this is my software setup: Titan RTX, Ubuntu 16.04, Cuda V10.1, Driver 430.50, TensorRT 5.1.5-ga, libcublas 10.1.0.105-1, Cudnn 7.6.4.38-1. image

kuzen avatar Jun 11 '20 10:06 kuzen

I also get a different result. But after converting the onnx opset version from 7 to 9 and optimizing the model, it works normally. image

After converting and optimizing the model: image. This is my conversion code:

import onnx
from onnx import version_converter, optimizer

model_path = './model.onnx'
original_model = onnx.load(model_path)
converted_model = version_converter.convert_version(original_model, 9)
optimized_model = optimizer.optimize(converted_model)

onnx.save(optimized_model, './model.onnx')

I kind of do not understand, because in the original model (file 'model.onnx') the onnx opset version is already 9. I do not know why you say it is 7?

Claud1234 avatar Jun 12 '20 14:06 Claud1234

I kind of do not understand, because in the original model (file 'model.onnx') the onnx opset version is already 9. I do not know why you say it is 7?

Thanks. Conversion is not necessary. Using only the optimizer is enough.
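
For reference, a minimal sketch of that (assuming the same './model.onnx' path and an onnx release that still ships the built-in optimizer):

import onnx
from onnx import optimizer

model = onnx.load('./model.onnx')
# Print the opset(s) the model declares, to check whether conversion is needed at all
print([(imp.domain, imp.version) for imp in model.opset_import])

# Run only the default optimization passes and overwrite the model
onnx.save(optimizer.optimize(model), './model.onnx')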

kuzen avatar Jun 14 '20 03:06 kuzen

@kuzen Thank you very much. But it does not work for me, and I wonder whether it works for you @Claud1234 @LongruiDong

TT22TY avatar Jun 15 '20 05:06 TT22TY

@kuzen Thank you very much. But it does not work for me, and I wonder whether it works for you @Claud1234 @LongruiDong

@kuzen @TT22TY I have tried the approach of converting the opset version and optimizing the ONNX model. But a very weird thing is that I only succeeded in getting the correct result ONE time! After I delete the '.trt' file and retry, the results are always the same and wrong! I did not change any dependencies at all.

@kuzen, would you please retry or explain more about the whole thing? Can you get the correct result every time if you delete the '.trt' file? I have unified the versions of libcublas and CUDA as you recommended, but I am not sure whether this is necessary.

Claud1234 avatar Jun 15 '20 14:06 Claud1234

Ubuntu 18.04, RTX 2060, Cuda 10.1, TensorRT 5.1

With FP16 set to true: the result is wrong. fp16

With FP16 set to false: the result looks normal. See netTensorrt.cpp line 508 (builder->setFp16Mode(false);).

Screenshot from 2020-06-17 00-45-43
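
For reference, the same workaround expressed through the TensorRT 5.x Python API might look roughly like this (a sketch only, not what this repo ships; the actual fix here is the C++ line in netTensorrt.cpp mentioned above):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)

# Parse the ONNX model and build the engine in plain FP32,
# since the FP16 path produced wrong segmentation results on RTX cards.
with open('model.onnx', 'rb') as f:
    parser.parse(f.read())

builder.max_workspace_size = 1 << 30
builder.fp16_mode = False
engine = builder.build_cuda_engine(network)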

balajiravichandiran avatar Jun 16 '20 08:06 balajiravichandiran

@balajiravichandiran Thank you very much for the feedback!

Chen-Xieyuanli avatar Jun 16 '20 10:06 Chen-Xieyuanli