MaixPy3
How to use MaixPy3 nn to load custom models
I have some trouble understanding how to use my own models with MaixPy3 nn on the Sipeed MAIX-II Dock (V831). Your example scripts ship a resnet model, and it works well. I want to load other CNN models onto the Sipeed, for example YOLOv3 or MobileNet-SSD for object detection, and also models that come in the formats of popular frameworks (tflite, Keras h5, TensorFlow pb). In your example the model is provided as two files: resnet.bin and resnet.param. How can I convert my model files (h5, pb, or tflite) into this representation so they can be loaded with the nn lib? Maybe there are some open-source scripts for this purpose? I can't find any way (or scripts) to create resnet.param; it contains some parameters that I don't understand.
I would be glad to get your feedback!
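For reference, here is a rough sketch of how the resnet demo seems to load those two files with maix.nn. The paths, input/output names, shapes, and option values are only assumptions taken from that demo, not a definitive API listing, so check the shipped script:

from maix import nn

# paths and shapes below are the resnet demo's; adjust for your own model
model = {
    "param": "/home/model/resnet18_1000_awnn.param",
    "bin": "/home/model/resnet18_1000_awnn.bin",
}
options = {
    "model_type": "awnn",
    "inputs": {"input0": (224, 224, 3)},
    "outputs": {"output0": (1, 1, 1000)},
    "mean": [127.5, 127.5, 127.5],
    "norm": [0.00784313725490196, 0.00784313725490196, 0.00784313725490196],
}
m = nn.load(model, opt=options)
# out = m.forward(img)  # img: a 224x224 RGB frame, e.g. from maix.camera

My problem is only the conversion step that produces the .param and .bin files in the first place.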
I subscribe to this question. How do you train and convert your own network models? OK, I found https://www.maixhub.com/modelConvert. Will only online conversion be possible?
Hi, I have the same question. I've tested the demos and they work, but there is no information on how to use custom network models. This page: https://www.maixhub.com/ModelConvertHelp.html has instructions for creating a ZIP file, but not for creating the .bin and .param files.
I hope somebody can help us, because the Maix-II looks very good and looks like a true replacement for the Raspberry Pi 4, and maybe the Jetson Nano in some cases, for simple image processing.
The doc is not ready yet, but you can read this post (written in Chinese, so you may need to use Google Translate): https://neucrack.com/p/358
It seems that the blog post has translation disabled; the translate option doesn't appear in Chrome or Safari.
Thank you very much.
Google Translate doesn't work, but the source code is in English. By the way, the neural networks are made with PyTorch, not TensorFlow; with the Maix-I, the kmodel format uses tflite as its source.
To help others like me, here is the summary: the example works with PyTorch. First you must create a model, then convert it to ONNX, and finally convert the ONNX model to NCNN.
These are the functions:
import torch

def torch_to_onnx(net, input_shape, out_name="out/model.onnx", input_names=["input0"], output_names=["output0"], device="cpu"):
    # build a dummy input tensor with the expected shape (batch size 1)
    batch_size = 1
    if len(input_shape) == 3:
        x = torch.randn(batch_size, input_shape[0], input_shape[1], input_shape[2], dtype=torch.float32, requires_grad=True).to(device)
    elif len(input_shape) == 1:
        x = torch.randn(batch_size, input_shape[0], dtype=torch.float32, requires_grad=False).to(device)
    else:
        raise Exception("not support input shape")
    print("input shape:", x.shape)
    # torch.onnx._export(net, x, "out/conv0.onnx", export_params=True)
    torch.onnx.export(net, x, out_name, export_params=True, input_names=input_names, output_names=output_names)
onnx_out="out/resnet_1000.onnx"
ncnn_out_param = "out/resnet_1000.param"
ncnn_out_bin = "out/resnet_1000.bin"
input_img = filename
def onnx_to_ncnn(input_shape, onnx="out/model.onnx", ncnn_param="out/conv0.param", ncnn_bin="out/conv0.bin"):
    import os
    # onnx2ncnn tool compiled from ncnn/tools/onnx, found in the build dir
    cmd = f"onnx2ncnn {onnx} {ncnn_param} {ncnn_bin}"
    os.system(cmd)
    # append the input dimensions to the Input layer line of the generated .param file
    with open(ncnn_param) as f:
        content = f.read().split("\n")
        if len(input_shape) == 1:
            content[2] += " 0={}".format(input_shape[0])
        else:
            content[2] += " 0={} 1={} 2={}".format(input_shape[2], input_shape[1], input_shape[0])
        content = "\n".join(content)
    with open(ncnn_param, "w") as f:
        f.write(content)
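For context on what that last step does: an ncnn .param file starts with a magic number, then a layer/blob count line, then one line per layer. onnx2ncnn writes the Input layer without dimensions, so the function appends them as 0=width 1=height 2=channels. For a (3, 224, 224) input the start of the file ends up looking roughly like this (the layer/blob counts and layer names are model-specific and only shown as an example):

7767517
75 83
Input            input0    0 1 input0 0=224 1=224 2=3
Convolution      conv1     1 1 input0 conv1 ...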
First you must have the model and the input shape:
model = xxxxxx               # code to create your custom model
input_shape = (3, 224, 224)  # input shape of my custom model
onnx_out = "model.onnx"
ncnn_out_param = "conv0.param"
ncnn_out_bin = "conv0.bin"
Then you have to call the functions:
torch_to_onnx(model, input_shape, onnx_out, device="cuda:0")
onnx_to_ncnn(input_shape, onnx=onnx_out, ncnn_param=ncnn_out_param, ncnn_bin=ncnn_out_bin)
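Putting it all together, here is a minimal end-to-end sketch that uses a stock torchvision resnet18 in place of a custom network. It assumes torch and torchvision are installed, the onnx2ncnn binary from ncnn is on the PATH, and the two functions above are defined; any other torch.nn.Module should work the same way:

import os
import torch
import torchvision

# torch_to_onnx and onnx_to_ncnn are the two functions defined above

os.makedirs("out", exist_ok=True)

# resnet18 is just a stand-in; replace it with your own network
net = torchvision.models.resnet18(pretrained=True)
net.eval()

input_shape = (3, 224, 224)            # (C, H, W) expected by the network
onnx_out = "out/resnet18.onnx"
ncnn_out_param = "out/resnet18.param"
ncnn_out_bin = "out/resnet18.bin"

torch_to_onnx(net, input_shape, onnx_out, device="cpu")
onnx_to_ncnn(input_shape, onnx=onnx_out, ncnn_param=ncnn_out_param, ncnn_bin=ncnn_out_bin)
# the resulting .param/.bin pair is what goes into the ZIP for the MaixHub online converter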
Which model should I use for transfer learning for my custom detection model? I tried Detectron2 and torchvision and I succeeded in getting my NCNN files, but when I try to upload them to the online converter they exceed the 100 MB constraint.