Add Shape and Gather ops
Many types of models make use of the Shape and Gather ops, but these are still not supported by ncnn. Tools like onnxoptimizer/onnxsim cannot always optimize these ops away. Please add ncnn support for them.
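To make it concrete what these two ops usually do together, here is a tiny made-up graph built with the onnx Python helpers. It is only an illustration of the typical Shape -> Gather glue pattern exporters emit, not a fragment of any real model:

```python
import onnx
from onnx import helper, TensorProto

# Shape reads the runtime dims of the input, Gather picks one of them
# (here dim 0, the batch size). All names here are made up for illustration.
x = helper.make_tensor_value_info("x", TensorProto.FLOAT, ["N", 4, "H", "W"])
batch = helper.make_tensor_value_info("batch", TensorProto.INT64, [])

idx = helper.make_tensor("idx", TensorProto.INT64, dims=[], vals=[0])
nodes = [
    helper.make_node("Shape", ["x"], ["x_shape"]),
    helper.make_node("Gather", ["x_shape", "idx"], ["batch"], axis=0),
]
graph = helper.make_graph(nodes, "shape_gather_demo", [x], [batch], initializer=[idx])
model = helper.make_model(graph)
onnx.checker.check_model(model)
```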
If you train your model in PyTorch, please use pnnx to convert your model to ncnn. Shape is a glue op; it is not friendly for developers.
ncnn DOES need something like Shape though, otherwise dynamic-shape inputs won't work.
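As a minimal sketch of why (the module and filenames below are made up): any forward pass whose result depends on the runtime input size forces the exported graph to query the shape at inference time, which is exactly what Shape/Gather do in ONNX. Whether those nodes survive depends on the exporter version and constant folding.

```python
import torch
import onnx

class FlattenBatch(torch.nn.Module):
    # Toy module: the reshape target depends on the runtime batch size,
    # so the exported graph has to read the input's shape at inference time.
    def forward(self, x):
        return x.view(x.shape[0], -1)

torch.onnx.export(
    FlattenBatch(), torch.rand(2, 3, 8, 8), "flatten.onnx",
    input_names=["x"], output_names=["y"],
    dynamic_axes={"x": {0: "batch"}, "y": {0: "batch"}},
)
# Typically prints something like {'Shape', 'Gather', 'Unsqueeze', 'Concat', 'Reshape'}
# (the exact set depends on the torch exporter version and constant folding).
print({node.op_type for node in onnx.load("flatten.onnx").graph.node})
```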
Here's a case I just encountered:
Trace this: https://huggingface.co/Larvik/temp1/resolve/main/autoencoder_pnnx.pt
with inputshape=[1,4,64,64]f32, and it produces a valid ncnn param, but with a fixed 512x512 output:
autoencoder_512x512.param.txt
Tracing with inputshape2=[1,4,64,64]f32 just produces an invalid ncnn param:
autoencoder_ncnn.param.txt
The network is the latent->image decoder of ldm ( https://github.com/CompVis/latent-diffusion ). The .pt file is produced by the export_torchscript() function in this script: autoencoder_pnnx3232.py.txt
k70, k73, k74 are manually added shape/size() ops.
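A quick way to sanity-check whether the traced .pt itself stayed shape-dynamic (assuming the file from the link above; the second latent size is just an arbitrary smaller one):

```python
import torch

# Load the traced decoder and feed two latent sizes. If the trace kept the
# shape ops, a 32x32 latent should come out around 1x3x256x256; if sizes were
# baked in at trace time, the second call may fail or return a fixed size.
dec = torch.jit.load("autoencoder_pnnx.pt")
dec.eval()

with torch.no_grad():
    out64 = dec(torch.rand(1, 4, 64, 64))   # expected 1x3x512x512
    out32 = dec(torch.rand(1, 4, 32, 32))
print(out64.shape, out32.shape)
```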
To add to why it is needed, there are a number of onnx models that use the Shape and Gather ops (as they have been a part of that opset since version 1), and pnnx will not help in converting those to ncnn. As stated before, these ops cannot always be optimized away.
More specifically for me, my use case is that I've rewritten and modified onnx2ncnn in python to integrate it natively into a python app, and using external executables is not an option.
Setting aside that unusual case though, there are still circumstances where this is needed, as previously stated.
You could use onnxsim to remove ops like Shape, etc.
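For reference, a typical onnxsim run looks roughly like this (the filenames and the input name are placeholders, and the shape-overwrite argument name depends on the onnxsim version):

```python
import onnx
from onnxsim import simplify

# Fixing the input shape is what normally lets constant folding eliminate
# Shape/Gather chains; the kwarg is `overwrite_input_shapes` in onnxsim >= 0.4
# (older releases call it `input_shapes`). As noted in the reply below, it did
# not remove these ops for the models discussed in this thread.
model = onnx.load("model.onnx")
model_simp, ok = simplify(model, overwrite_input_shapes={"input": [1, 3, 256, 256]})
assert ok, "simplified model failed the equivalence check"
onnx.save(model_simp, "model_sim.onnx")
```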
I've tried onnxsim, onnxoptimizer, and nvidia polygraphy with graphsurgeon, and none of them will get rid of these ops for whatever reason.
All right. Maybe you could use onnx instead of pnnx, if you decide to use ncnn : )
?? I am using onnx.
This is an image of a simplified SwinIR onnx graph. As you can see, there are actually a lot of ops here besides Shape and Gather that ncnn does not support (Mod, Where, Range, ScatterND, etc.). It would be nice to have parity.
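If anyone wants to check their own model, dumping the op types the graph actually uses is a one-liner with the onnx Python API (the model path below is just a placeholder):

```python
import onnx

# Collect the distinct op types in the graph, e.g. to compare against the
# layers ncnn implements.
model = onnx.load("swinir_sim.onnx")
print(sorted({node.op_type for node in model.graph.node}))
# e.g. [..., 'Gather', 'Mod', 'Range', 'ScatterND', 'Shape', 'Where', ...]
```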
I also encountered this problem, any update?
The NCNN devs do not appear to be interested in making their framework comparable to others and compatible with prominent model arches. Prominent SR models like SwinIR and RealESRGAN_x2plus can't even be converted because of all the missing ops and the inability to optimize away or convert unsupported ops. There wasn't even a Mod op in NCNN last time I checked.
How can I even get the input shape from the model, let alone have Shape support during model inference?
The input shape for an ncnn model is dynamic; maybe you should record the input shape when you convert your original model to ncnn.
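For example, with the ncnn Python bindings (the param/bin paths and the blob names "in0"/"out0" below are placeholders), the input shape is simply whatever the caller feeds at extraction time, which is why it is not stored anywhere in the converted model:

```python
import numpy as np
import ncnn

net = ncnn.Net()
net.load_param("model.param")
net.load_model("model.bin")

# The caller decides the input shape here; the param file does not record it.
ex = net.create_extractor()
ex.input("in0", ncnn.Mat(np.random.rand(3, 256, 256).astype(np.float32)))
ret, out = ex.extract("out0")
print(ret, out.dims, out.w, out.h, out.c)
```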