
Add Shape and Gather ops

Open theflyingzamboni opened this issue 2 years ago • 8 comments

Many types of models make use of the Shape and Gather ops, but these are still not supported by ncnn. Tools like onnxoptimizer/onnxsim cannot always optimize these ops away. Please add ncnn support for them.

theflyingzamboni avatar Aug 17 '22 16:08 theflyingzamboni
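For context, a typical attempt to fold these ops with onnxsim looks roughly like the sketch below (the `model.onnx` path is a placeholder, not a file from this thread). Constant folding can only remove Shape/Gather chains whose result is statically known, which is why it fails when the shape is genuinely dynamic:

```python
# Minimal sketch of trying to simplify Shape/Gather away with onnxsim.
# "model.onnx" is a placeholder path.
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")
simplified, ok = simplify(model)  # folds Shape/Gather only when the shapes are static
assert ok, "simplified model failed the ONNX checker"

leftover = [n.op_type for n in simplified.graph.node if n.op_type in ("Shape", "Gather")]
print("Shape/Gather nodes remaining after simplification:", len(leftover))
```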

If you train your model in PyTorch, please use pnnx to convert it to ncnn. Shape is a glue op; it is not friendly for developers.

wzyforgit avatar Aug 18 '22 00:08 wzyforgit
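For reference, the pnnx route suggested here is roughly the following (a sketch; the model and input shape are placeholders). pnnx consumes a TorchScript file and emits ncnn param/bin files for a fixed input shape:

```python
# Sketch of the pnnx workflow: export TorchScript, then run the pnnx CLI on it.
# The model and input shape below are placeholders.
import torch
import torchvision.models as models

net = models.resnet18(weights=None).eval()
example = torch.rand(1, 3, 224, 224)

traced = torch.jit.trace(net, example)
traced.save("resnet18.pt")

# Then convert on the command line:
#   pnnx resnet18.pt inputshape=[1,3,224,224]
# which writes resnet18.ncnn.param / resnet18.ncnn.bin among other files.
```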

ncnn DOES need something like Shape tho, otherwise dynamic-shape inputs won't work.

Here's a case I just encountered: trace this: https://huggingface.co/Larvik/temp1/resolve/main/autoencoder_pnnx.pt with inputshape=[1,4,64,64]f32. It produces a valid ncnn param, but with a fixed 512x512 output: autoencoder_512x512.param.txt

Tracing with inputshape2=[1,4,64,64]f32 just produces an invalid ncnn param: autoencoder_ncnn.param.txt

The network is the latent->image decoder from ldm ( https://github.com/CompVis/latent-diffusion ). The .pt file is produced by export_torchscript() in this script: autoencoder_pnnx3232.py.txt

k70, k73, k74 are manually added shape/size() ops.

DavidHiggis avatar Aug 18 '22 06:08 DavidHiggis
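A toy example of why a Shape-like op matters: any forward() that reads the runtime size of its input, as the decoder above does via size(), gets that size baked into the graph when traced at one fixed inputshape. The module below is hypothetical and only illustrates the pattern:

```python
# Hypothetical module illustrating the problem: upsampling back to the
# runtime input size requires reading the tensor's shape at inference time.
import torch
import torch.nn.functional as F


class UpsampleToInput(torch.nn.Module):
    def forward(self, x):
        h, w = x.shape[2], x.shape[3]  # becomes a Shape/Gather pattern, or gets folded to constants
        y = F.avg_pool2d(x, kernel_size=2)
        return F.interpolate(y, size=(h, w), mode="nearest")


# Tracing at a fixed input shape hard-codes h and w, so the exported graph
# only reproduces the 64x64 behaviour; a real Shape op would keep it dynamic.
traced = torch.jit.trace(UpsampleToInput(), torch.rand(1, 4, 64, 64))
```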

If you train your model in PyTorch, please use pnnx to convert it to ncnn. Shape is a glue op; it is not friendly for developers.

To add to why it is needed: a number of onnx models use the Shape and Gather ops (they have been part of that opset since version 1), and pnnx will not help in converting those to ncnn. As stated before, these ops cannot always be optimized away.

More specifically, my use case is that I've rewritten and modified onnx2ncnn in Python to integrate it natively into a Python app, and using external executables is not an option.

Setting aside that unusual case though, there are still circumstances where this is needed, as previously stated.

theflyingzamboni avatar Aug 18 '22 16:08 theflyingzamboni

You could use onnxsim to remove ops like Shape, etc.

LRY89757 avatar Aug 19 '22 08:08 LRY89757

I've tried onnxsim, onnxoptimizer, and nvidia polygraphy with graphsurgeon, and none of them will get rid of these ops for whatever reason.

theflyingzamboni avatar Aug 19 '22 15:08 theflyingzamboni
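For completeness, the graph-surgeon attempt mentioned above looks roughly like this (a sketch; "model.onnx" is a placeholder). As with onnxsim, fold_constants() can only eliminate Shape/Gather chains that do not depend on a dynamic input dimension:

```python
# Sketch of constant folding with onnx-graphsurgeon; Shape/Gather chains that
# depend on a dynamic input dimension survive this pass.
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))
graph.fold_constants().cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_folded.onnx")
```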

I've tried onnxsim, onnxoptimizer, and nvidia polygraphy with graphsurgeon, and none of them will get rid of these ops for whatever reason.

All right. Maybe you could use onnx instead of pnnx, if you decide to use ncnn : )

LRY89757 avatar Aug 19 '22 15:08 LRY89757

All right. Maybe you could use onnx instead of pnnx, if you decide to use ncnn : )

?? I am using onnx.

theflyingzamboni avatar Aug 19 '22 17:08 theflyingzamboni

This is an image of a simplified SwinIR onnx graph. As you can see, there are actually a lot of ops here besides Shape and Gather that ncnn does not support (Mod, Where, Range, ScatterND, etc.). It would be nice to have parity. [image: simplified SwinIR onnx graph]

theflyingzamboni avatar Aug 28 '22 18:08 theflyingzamboni
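A quick way to quantify the gap for a given model is to diff its op types against a list of layers the converter handles. The sketch below uses a stand-in SUPPORTED set and a placeholder filename, not ncnn's actual layer table:

```python
# Sketch: list ONNX op types in a model that a converter does not handle.
# SUPPORTED is a stand-in set for illustration, not ncnn's real layer table.
from collections import Counter
import onnx

SUPPORTED = {"Conv", "Relu", "Add", "MatMul", "Reshape", "Transpose", "Softmax"}

model = onnx.load("swinir_simplified.onnx")  # placeholder filename
ops = Counter(node.op_type for node in model.graph.node)

for op, count in sorted(ops.items()):
    if op not in SUPPORTED:
        print(f"unsupported: {op} x{count}")  # e.g. Shape, Gather, Mod, Where, Range, ScatterND
```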

I also encountered this problem, any update?

duany049 avatar Sep 24 '22 14:09 duany049

I also encountered this problem, any update?

The NCNN devs do not appear to be interested in making their framework comparable to others and compatible with prominent model arches. Prominent SR models like SwinIR and RealESRGAN_x2plus can't even be converted because of all the missing ops and the inability to optimize away or convert unsupported ops. There wasn't even a Mod op in NCNN last time I checked.

theflyingzamboni avatar Oct 22 '22 01:10 theflyingzamboni

How can I even get the input shape from a model, never mind Shape support during inference?

lucasjinreal avatar Dec 05 '22 06:12 lucasjinreal

How can I even get the input shape from a model, never mind Shape support during inference?

The input shape for an NCNN model is dynamic; you may need to record the input shape yourself when you convert your original model to NCNN.

wzyforgit avatar Dec 05 '22 06:12 wzyforgit
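Since the ncnn param file does not generally carry the input dimensions, one way to record them at conversion time is to read them from the source ONNX model, as in the sketch below ("model.onnx" is a placeholder; dynamic axes show up as dim_param instead of a fixed value):

```python
# Sketch: read the declared input shape from the original ONNX model so it can
# be recorded alongside the converted ncnn files. "model.onnx" is a placeholder.
import onnx

model = onnx.load("model.onnx")
for inp in model.graph.input:
    dims = [
        d.dim_param if d.dim_param else d.dim_value  # dim_param marks a dynamic axis
        for d in inp.type.tensor_type.shape.dim
    ]
    print(inp.name, dims)
```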