rknn-toolkit2
Hello, I want to deploy a ConvNeXt model I trained myself on a development board, but I ran into problems when converting and running inference with rknn-toolkit2. With quantization disabled (ret = rknn.build(do_quantization=False)), the model output is NaN, and the following warnings report values outside the FP16 range: "W inference: The range [1.94334077835083, inf] of '/downsample_layers.3/downsample_layers.3.0/ReduceMean_1_output_0' is out of the float16! W inference: The range [1.9433603286743164, inf] of '/downsample_layers.3/downsample_layers.3.0/Add_output_0' is out of the float16!" Based on these messages I located the corresponding layers in the model. Since I could not find an interface in the rknn-toolkit2 documentation for dumping per-layer data in non-quantized mode, I used the original ONNX model (before conversion) to dump the output tensors of the two layers flagged as overflowing, but found no values outside the FP16 range...
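A minimal sketch of the ONNX-side check described above, using standard onnx/onnxruntime calls rather than any rknn-toolkit2 API: the two tensor names come from the warning messages, while the model path and input shape are placeholders.

```python
# Expose intermediate ONNX tensors as extra graph outputs, run them with
# onnxruntime, and check whether any value exceeds the FP16 range.
import numpy as np
import onnx
import onnxruntime as ort

MODEL = "convnext.onnx"  # placeholder path to the pre-conversion ONNX model
TARGETS = [
    "/downsample_layers.3/downsample_layers.3.0/ReduceMean_1_output_0",
    "/downsample_layers.3/downsample_layers.3.0/Add_output_0",
]

model = onnx.load(MODEL)
for name in TARGETS:
    # Promote each intermediate tensor to a graph output so onnxruntime returns it.
    vi = onnx.ValueInfoProto()
    vi.name = name
    model.graph.output.append(vi)
onnx.save(model, "convnext_debug.onnx")

sess = ort.InferenceSession("convnext_debug.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input shape
outs = sess.run(TARGETS, {input_name: dummy})

FP16_MAX = 65504.0
for name, t in zip(TARGETS, outs):
    print(name, "min:", t.min(), "max:", t.max(),
          "exceeds fp16 range:", np.abs(t).max() > FP16_MAX)
```

Note that a random dummy input only probes one activation pattern; running the same check over the real calibration/test images gives a more representative range.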
Is there any effort toward handling 3D operations, since they are required for some spatiotemporal models?
In rknn-toolkit2, it works well: ``` outputs = rknn.inference(inputs=[img_norm], data_format=['nchw']) ``` but in rknn-toolkit-lite2, there is an error: ``` self.rknn_runtime.set_inputs(inputs, data_type, data_format, inputs_pass_through=inputs_pass_through) File "rknnlite/api/rknn_runtime.py", line 1008, in rknnlite.api.rknn_runtime.RKNNRuntime.set_inputs...
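For reference, a minimal sketch of on-board inference with rknn-toolkit-lite2, using only the basic RKNNLite calls; the model path and input shape are placeholders, and whether the data_format argument is accepted may depend on the rknn-toolkit-lite2 version, which is what the error above appears to be about.

```python
# Minimal rknn-toolkit-lite2 inference sketch (runs on the device, not the PC).
import numpy as np
from rknnlite.api import RKNNLite

rknn_lite = RKNNLite()
ret = rknn_lite.load_rknn("model.rknn")   # placeholder: path to the converted model
ret = rknn_lite.init_runtime()            # initialize the NPU runtime on the board

# Placeholder NCHW input; preprocessing should match what was used on the PC side.
img_norm = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = rknn_lite.inference(inputs=[img_norm])

rknn_lite.release()
```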
My current project requires TensorFlow 2.17 and Keras 3.6 (many bugs have been fixed in 2.17), but I can't use them because RKNN requires TF 2.14. Please fix the requirements.
Error: I rknn-toolkit2 version: 2.3.0 W config: Please make sure the model can be dynamic when enable 'config.dynamic_input'! I The 'dynamic_input' function has been enabled, the MaxShape is dynamic_input[0] =...
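For context, a minimal sketch of how dynamic_input is configured in rknn-toolkit2: each entry is one shape combination, listing one shape per model input. The platform, shapes, and model path below are placeholders; the log above reports dynamic_input[0] as the MaxShape, so the largest combination is listed first here.

```python
# Sketch of enabling dynamic input shapes for a single-input model.
from rknn.api import RKNN

rknn = RKNN()
rknn.config(
    target_platform="rk3588",        # placeholder target
    dynamic_input=[
        [[1, 3, 640, 640]],          # shape set 0: largest shape (MaxShape in the log)
        [[1, 3, 320, 320]],          # shape set 1
        [[1, 3, 224, 224]],          # shape set 2
    ],
)
ret = rknn.load_onnx(model="model.onnx")   # placeholder model path
ret = rknn.build(do_quantization=False)
```

The warning in the log is a reminder that the source model itself must support variable input shapes (e.g. exported with dynamic axes); otherwise the build can fail even with this configuration.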
In [rknn-toolkit2](https://github.com/rockchip-linux/rknn-toolkit2/tree/master)/[rknpu2](https://github.com/rockchip-linux/rknn-toolkit2/tree/master/rknpu2)/[examples](https://github.com/rockchip-linux/rknn-toolkit2/tree/master/rknpu2/examples)/[rknn_yolov5_demo](https://github.com/rockchip-linux/rknn-toolkit2/tree/master/rknpu2/examples/rknn_yolov5_demo)/[utils](https://github.com/rockchip-linux/rknn-toolkit2/tree/master/rknpu2/examples/rknn_yolov5_demo/utils)/mpp_decoder.h, the class MppDecoder has a member: size_t packet_size = 2400*1300*3/2; Where do the values 2400 and 1300 come from? Can they be modified, and what would the impact be?
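One plausible reading of that expression, offered as an inference from the YUV420 memory layout rather than a statement from the demo's authors: an NV12/YUV420SP frame needs width*height bytes of luma plus width*height/2 bytes of chroma, i.e. width*height*3/2 bytes, so packet_size looks like a buffer sized for frames up to roughly 2400x1300.

```python
# Worked arithmetic for the 2400*1300*3/2 buffer size under the YUV420SP assumption.
def yuv420sp_frame_bytes(width: int, height: int) -> int:
    # Full-resolution Y plane + half-resolution interleaved UV plane.
    return width * height * 3 // 2

print(yuv420sp_frame_bytes(2400, 1300))  # 4680000 bytes
print(yuv420sp_frame_bytes(1920, 1080))  # 3110400 bytes, e.g. for a 1080p stream
```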
I have already put librknnrt.so under `/usr/lib/`, but when running yolo11 from model_zoo it still looks under `/usr/lib64`. The box has no /usr/lib64 directory at all, only `/usr/lib`. How can I deal with this?
Open-source code: https://github.com/pcb9382/PlateRecognition rk3588: 18 ms, rk3568: 126 ms, rv1126: 62 ms, rv1106: 158 ms
Trained on my own dataset, and CLASSES has already been modified. Traceback (most recent call last): File "test.py", line 316, in boxes, classes, scores = yolov5_post_process(input_data) File "test.py", line 157, in yolov5_post_process b, c, s = process(input, mask, anchors) File...
The official repo provides an example for mobilesam, but I noticed that its img_size is 448, while the original mobilesam model has an image size of 1024. The IoU of the...