
Export model to ONNX

HuangVictorAuto opened this issue on Mar 10 '22 · 1 comment

Hi, NVIDIA AI team, thanks for open-sourcing the sample code for deploying PointPillars on Xavier. I have a question about the export-to-ONNX part: how do you ensure that only the middle part of the network is exported (after voxelization, where each pillar is encoded into 10 features), rather than the whole pipeline, which includes voxelization, pillar feature extraction, scatter to BEV, the backbone, and post-processing?

    torch.onnx.export(
        model,                                                          # model being run
        (dummy_voxel_features, dummy_voxel_num_points, dummy_coords),   # model inputs (tuple for multiple inputs)
        "./pointpillar.onnx",              # where to save the model (can be a file or file-like object)
        export_params=True,                # store the trained parameter weights inside the model file
        opset_version=11,                  # the ONNX opset version to export the model to
        do_constant_folding=True,          # whether to execute constant folding for optimization
        keep_initializers_as_inputs=True,
        input_names=['input', 'voxel_num_points', 'coords'],            # the model's input names
        output_names=['cls_preds', 'box_preds', 'dir_cls_preds'],       # the model's output names
    )
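
A quick way to see what actually ended up in the exported file (a minimal sketch, assuming the export above succeeded and the onnx package is installed) is to load it and print the graph's inputs, outputs, and operator types:

    import onnx

    model = onnx.load("./pointpillar.onnx")
    onnx.checker.check_model(model)

    # Graph boundaries, as named in the export call above
    print([i.name for i in model.graph.input])    # expect: input, voxel_num_points, coords
    print([o.name for o in model.graph.output])   # expect: cls_preds, box_preds, dir_cls_preds

    # Unique ONNX operator types in the graph, which shows how much of the
    # pipeline was actually traced into the file
    print(sorted({node.op_type for node in model.graph.node}))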

thanks!

HuangVictorAuto · Mar 10 '22

They're using onnx-graphsurgeon to drop those operations after exporting the model as a whole.

Relevant script: https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars/blob/main/tool/simplifier_onnx.py
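
For reference, here is a minimal sketch (not the actual logic of simplifier_onnx.py, which does more involved surgery for the TensorRT plugins) of how onnx-graphsurgeon can rewire a graph's inputs/outputs so that everything outside those boundaries is dropped. The tensor names are taken from the export call in the question and would need to match the real graph:

    import onnx
    import onnx_graphsurgeon as gs

    graph = gs.import_onnx(onnx.load("./pointpillar.onnx"))
    tensors = graph.tensors()

    # Re-declare the graph boundaries; any node that no longer contributes to
    # these outputs (e.g. pre-/post-processing ops) becomes dead and is removed
    # by cleanup().
    graph.inputs = [tensors["input"], tensors["voxel_num_points"], tensors["coords"]]
    graph.outputs = [tensors["cls_preds"], tensors["box_preds"], tensors["dir_cls_preds"]]

    graph.cleanup().toposort()
    onnx.save(gs.export_onnx(graph), "./pointpillar_trimmed.onnx")

Treat this only as an illustration of the drop-unused-nodes mechanism; the repository's script is the authoritative version.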

OrcunCanDeniz · May 09 '22