When converting to ONNX using the author's script, the OUTPUTS of the ONNX model show the names Clipfgr_dim, Addr1o_dim_1, or Addr3o_dim_1
When I used the author's script to convert to ONNX, the outputs of the ONNX model showed the names Clipfgr_dim, Addr1o_dim_1, or Addr3o_dim_1. I don't know what causes this, and I would like the exported ONNX model to have fixed inputs and outputs rather than dynamic ones. I modified the script following https://github.com/xlite-dev/RVM-Inference/issues/17, but when I ran the command in the terminal, no model file was saved, which puzzles me a lot. If anyone sees this post, I would sincerely appreciate your suggestions and help. Thank you very much!
Terminal command: python export_onnx.py --model-variant mobilenetv3 --checkpoint E:\Project_RobustVideoMatting\rvm_mobilenetv3.pth --precision float32 --opset 12 --device cuda --output rvm.onnx
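To make it clearer what I mean by fixed shapes, here is a minimal sketch (not the author's export_onnx.py) that pins the dynamic input dimensions of an already-exported rvm.onnx using the onnx Python package. The input name src, the 1080x1920 resolution, and the output file name rvm_fixed.onnx are only illustrative assumptions and would need to be adjusted to whatever the exported model actually reports.

```python
import onnx

model = onnx.load("rvm.onnx")

# Desired fixed sizes, keyed by input name (names and sizes are assumptions;
# check the actual inputs first, e.g. with Netron or onnx.helper.printable_graph).
fixed_shapes = {
    "src": [1, 3, 1080, 1920],
}

for inp in model.graph.input:
    if inp.name not in fixed_shapes:
        continue
    dims = inp.type.tensor_type.shape.dim
    for dim, size in zip(dims, fixed_shapes[inp.name]):
        dim.ClearField("dim_param")  # drop the symbolic dimension name
        dim.dim_value = size         # set a concrete size instead

# Re-run shape inference so the concrete sizes can propagate through the graph;
# this is what should replace symbolic output dims such as Addr1o_dim_1.
model = onnx.shape_inference.infer_shapes(model)

onnx.checker.check_model(model)
onnx.save(model, "rvm_fixed.onnx")
```

This doesn't explain why my modified export_onnx.py fails to save a file, but it at least illustrates the fixed-shape result I am hoping for.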
The ONNX model exported with the author's export_onnx.py:
The comparison of export_onnx.py (original vs. my modified version):