YOLOv5-Multibackbone-Compression
Exporting YOLOv5TPH to ONNX/pb fails.
Thanks a lot for this awesome repo.
It seems that export.py is the same as the one in ultralytics' yolov5 repo, which is suited to the standard YOLOv5 models.
However, when exporting the TPH model to ONNX or pb, it fails. The ONNX error message is very long and contains no useful information, but the pb converter gives a simple message:
TensorFlow saved_model: export failure: name 'C3TR' is not defined
It seems to be a custom layer/module issue.
Any plan to add ONNX/pb export support for YOLOv5TPH? Thank you!
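For context on the `name 'C3TR' is not defined` error: assuming the TensorFlow export path resolves module names from the model YAML by name (as the upstream models/tf.py appears to do), any custom TPH module that is never imported or registered in that file would raise exactly this NameError. Below is a standalone sketch of that failure mode with hypothetical stand-in classes, not the actual export code:

```python
# Standalone sketch (hypothetical, not the real models/tf.py): the exporter walks the
# model YAML and turns each module-name string into a class via a name lookup. Names
# that were never imported into the exporter's namespace raise NameError -- which is
# what "name 'C3TR' is not defined" looks like for a custom TPH module.

class Conv: ...
class C3: ...
# class C3TR: ...   # the custom transformer block is missing from this namespace

yaml_backbone = ["Conv", "C3", "C3TR"]  # simplified stand-in for the parsed model YAML

for name in yaml_backbone:
    try:
        cls = eval(name)  # mirrors the name-based lookup in the export code
        print(f"resolved {name} -> {cls.__name__}")
    except NameError as e:
        print(f"export would fail here: {e}")  # name 'C3TR' is not defined
```

If that is indeed the cause, the TPH-specific modules (C3TR and the other transformer blocks) would need TF-side equivalents registered in the TensorFlow export script before the pb export can work.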
ONNX error message:
Exception raised from data_ptr<long int> at /opt/conda/conda-bld/pytorch_1623448234945/work/build/aten/src/ATen/core/TensorMethods.cpp:5759 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f7a533b8a22 in xxx/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x5b (0x7f7a533b53db in xxx/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #2: long* at::Tensor::data_ptr<long>() const + 0xde (0x7f7a9b7d4dde in xxx/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #3: torch::jit::onnx_constant_fold::runTorchSlice_opset10(torch::jit::Node const*, std::vector<at::Tensor, std::allocator<at::Tensor> >&) + 0x47e (0x7f7aa3b0adfe in xxx/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #4: torch::jit::onnx_constant_fold::runTorchBackendForOnnx(torch::jit::Node const*, std::vector<at::Tensor, std::allocator<at::Tensor> >&, int) + 0x1c5 (0x7f7aa3b0c0f5 in xxx/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0xae9afe (0x7f7aa3b4aafe in xxx/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #6: torch::jit::ONNXShapeTypeInference(torch::jit::Node*, std::map<std::string, c10::IValue, std::less<std::string>, std::allocator<std::pair<std::string const, c10::IValue> > > const&, int) + 0x906 (0x7f7aa3b4f8b6 in xxx/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0xaf1564 (0x7f7aa3b52564 in xxx/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0xa6d780 (0x7f7aa3ace780 in xxx/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #9: <unknown function> + 0x4fdd6e (0x7f7aa355ed6e in xxx/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
Additional info: conversion to CoreML was successful.
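For reference, the trace above fails inside PyTorch's ONNX constant-folding pass (runTorchSlice_opset10), so one quick thing worth trying is exporting with constant folding disabled and a newer opset. A minimal sketch, with placeholder checkpoint path, output path, and input size:

```python
import torch

# Hypothetical loading step -- replace with however the TPH checkpoint is actually loaded.
ckpt = torch.load('yolov5tph.pt', map_location='cpu')
model = ckpt['model'].float().eval()

dummy = torch.zeros(1, 3, 640, 640)  # placeholder input size

torch.onnx.export(
    model, dummy, 'yolov5tph.onnx',
    opset_version=12,            # the failing pass is the opset-10 Slice constant folder
    do_constant_folding=False,   # skip the constant-folding step that raises the error
    input_names=['images'],
    output_names=['output'],
)
```

This is only a workaround sketch; if the NameError on the pb side comes from missing TF module definitions, the ONNX failure may have a separate root cause in the model graph itself.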
Thanks a lot for following my work. As for converting YOLOv5TPH to ONNX, I'll give it a try when my tutor isn't pushing me to work on other projects.