onnxconverter-common
support sizes for Resize op
The Resize op allows the target shape to be specified via the optional `sizes` input. The current implementation does not seem to support this input. Please add this functionality.
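For context, here is a minimal sketch (not taken from the report) of a Resize node that uses the optional `sizes` input, built directly with `onnx.helper`. The tensor names, shapes, and opset version are illustrative assumptions.

```python
import onnx
from onnx import helper, TensorProto

# Resize takes up to four inputs: X, roi, scales, sizes. Only one of
# 'scales' and 'sizes' may be provided; unused optional inputs are passed
# as empty strings.
resize_node = helper.make_node(
    "Resize",
    inputs=["X", "", "", "sizes"],   # skip roi and scales, provide sizes
    outputs=["Y"],
    mode="linear",
)

# Explicit target shape (N, C, H, W) supplied as an int64 initializer.
sizes_init = helper.make_tensor(
    "sizes", TensorProto.INT64, dims=[4], vals=[1, 3, 37, 53]
)

graph = helper.make_graph(
    [resize_node],
    "resize_with_sizes",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 3, 19, 27])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 3, 37, 53])],
    initializer=[sizes_init],
)

model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)
```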
Do we have a model that needs the `sizes` argument to be supported? According to operators.md, only one of 'scales' and 'sizes' can be specified. Currently, when we convert to ONNX Resize, we always specify `scales`, so there is no need for `sizes`. What is the use case here? Thanks.
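For comparison, a sketch of the scales-based form referred to above (illustrative values, not the converter's actual output): the same kind of upsample expressed through the `scales` input, with `sizes` omitted entirely.

```python
from onnx import helper, TensorProto

# Same Resize op, but driven by a scale factor per axis rather than an
# explicit target shape; 'sizes' is simply not listed among the inputs.
resize_with_scales = helper.make_node(
    "Resize",
    inputs=["X", "", "scales"],      # X, roi (skipped), scales; no sizes
    outputs=["Y"],
    mode="linear",
)

# A classical 2x spatial upsample: scale 1 on N and C, 2 on H and W.
scales_init = helper.make_tensor(
    "scales", TensorProto.FLOAT, dims=[4], vals=[1.0, 1.0, 2.0, 2.0]
)
```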
@jiafatom Our use case is a U-Net that uses tf.image.resize
in the decoder path instead of a classical upsampling layer. This allows us to match the tensor shapes for the concat operation without padding or slicing. If it helps, I can provide a minimal working example.
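As an illustration of this pattern, a hypothetical minimal sketch of such a decoder step (not the reporter's actual model, which was not posted; assumes TF 2.x with the built-in Keras functional API, and the layer sizes are made up):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(25, 25, 8))                    # odd spatial dims
skip = tf.keras.layers.Conv2D(8, 3, padding="same")(inputs)   # encoder feature map
down = tf.keras.layers.MaxPool2D(2, padding="same")(skip)     # -> (13, 13, 8)

# Resize back to the skip tensor's exact spatial shape instead of using a
# fixed 2x UpSampling2D, so the concat needs no padding or slicing.
up = tf.image.resize(down, size=skip.shape[1:3])
out = tf.keras.layers.Concatenate()([up, skip])               # (25, 25, 16)

model = tf.keras.Model(inputs, out)
model.summary()
```

Exporting a graph like this is where an explicit target shape, i.e. the Resize `sizes` input, would be the natural fit, since a fixed scale factor of 2 would produce 26 rather than 25 along each spatial axis.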
@liob Yes, please provide a minimal working example for us to debug, thanks.
@liob Hi, if this is still an issue, please open an issue in the onnxruntime repository.