
LiteHRNet pytorch2onnx.py fails

Open astaranowicz opened this issue 3 years ago • 5 comments

This issue was closed (https://github.com/open-mmlab/mmpose/issues/820#issue-954613454), but the conversion still errors out. Were there more code changes required?

astaranowicz avatar Dec 03 '21 00:12 astaranowicz

It works under PyTorch==1.8.0 and onnx==1.8.0. Are you using other versions?

PeiqiWang avatar Dec 03 '21 01:12 PeiqiWang

I'm on PyTorch==1.9.1 and onnx==1.8.0. I can't downgrade to PyTorch==1.8.0, as I have many other dependencies that require 1.9.x as the minimum.

astaranowicz avatar Dec 06 '21 21:12 astaranowicz

I have tried the base LiteHRNet checkpoints, the base configs, and a conda environment with PyTorch==1.8.0 and onnx==1.8.0. The conversion still fails at the same point with the same error message: RuntimeError: Failed to export an ONNX attribute 'onnx::Gather', since it's not constant, please try to make things (e.g., kernel size) static if possible

astaranowicz avatar Dec 10 '21 22:12 astaranowicz
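
For context, here is a minimal sketch (not the actual mmpose code; function names are illustrative) of the kind of dynamic-shape usage that typically produces this error: when a pooling size is taken directly from a tensor's shape at export time, the shape lookup is traced as an onnx::Gather node, and the exporter then refuses to use it where a constant attribute is required. Casting the values to plain Python ints is the usual workaround.

```python
# Illustrative sketch only, assuming the common adaptive-pooling pattern;
# it reproduces the flavour of the "onnx::Gather is not constant" failure.
import torch
import torch.nn.functional as F


def pool_to_reference_dynamic(x, reference):
    # The target size comes straight from a tensor shape; during ONNX export
    # this can be traced as Shape + Gather instead of a constant attribute.
    return F.adaptive_avg_pool2d(x, reference.size()[-2:])


def pool_to_reference_static(x, reference):
    # Casting to plain Python ints keeps the pooling size static for the exporter.
    target_hw = [int(s) for s in reference.size()[-2:]]
    return F.adaptive_avg_pool2d(x, target_hw)
```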

I hit the same error with torch==1.9.1 and onnx==1.10.2.

bobby20180331 avatar Feb 28 '22 10:02 bobby20180331

I don't know if this will help you all, but I am sharing my workaround, as I have successfully exported to ONNX and confirmed that inference runs correctly. Sorry that the write-up is in Japanese; the Docker containers are pushed to Docker Hub, so you can get a working environment immediately: Trial and error for Lite-HRNet's ONNX export + variable batch size settings

The converted model has been committed here; please feel free to use it: 268_Lite-HRNet https://github.com/PINTO0309/PINTO_model_zoo

Also, because Lite-HRNet is a top-down model, the batch size should ideally be left undefined (dynamic). I have therefore submitted a pull request that enables ONNX export with an undefined batch size (a sketch of the idea follows at the end of this thread): [Fix] Changed channel size from -1 to an operable number "channel_shuffle" #1242

PINTO0309 avatar Mar 15 '22 16:03 PINTO0309
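
For reference, a minimal sketch of the idea behind that pull request, assuming the usual ShuffleNet-style channel_shuffle: keeping the channel dimension as a concrete integer in the reshapes (instead of -1) means only the batch dimension has to be inferred, so the exported ONNX graph can keep the batch size dynamic. This is an illustration, not the exact code from the PR.

```python
import torch


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """ShuffleNet-style channel shuffle written so that only the batch
    dimension stays dynamic for ONNX export (sketch of the PR's idea)."""
    batch, channels, height, width = x.size()
    channels_per_group = channels // groups
    # Split channels with explicit sizes; reserve -1 for the batch dimension.
    x = x.view(-1, groups, channels_per_group, height, width)
    x = torch.transpose(x, 1, 2).contiguous()
    # Use the known channel count here rather than -1, so the exporter does
    # not have to derive the channel dimension from a dynamic shape.
    x = x.view(-1, channels, height, width)
    return x
```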