ros_deep_learning
set precision to fp32
Hi,
I'm using segnet to load a custom segmentation network and I was wondering if I can change the precision to FP32?
Currently I am getting this when I run it:

[TRT] desired precision specified for GPU: FASTEST
[TRT] selecting fastest native precision for GPU: FP16
Thanks for the help :)
Oh, and I am getting this error that I haven't noticed before:
[TRT] Registering tensor: output_34 for ONNX tensor: output
[TRT] Resize_261 [Resize] outputs: [output -> (1, 12, 1080, 1920)[FLOAT]],
[TRT] Marking output_34 as output: output
[TRT] Marking onnx::Resize_335_2 as output: onnx::Resize_335
[TRT] Marking input.1_3 as output: input.1
[TRT] Marking input.3_4 as output: input.3
[TRT] Marking output as output: output
[TRT] [network.cpp::markOutputHelper::1811] Error Code 4: Internal Error (Tensor output is already set as network output)
Do you know how this can be fixed?
@cicoita the segmentation models in ros_deep_learning / jetson-inference are expected to use the FCN-ResNet architecture from PyTorch. You might have better luck with the newer Isaac ROS Image Segmentation package: https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_segmentation
And I don't believe I expose the desired precision as a ROS param, but you could hardcode that change or expose it if desired.