SoulProficiency

23 issues by SoulProficiency

Where can I find a list of all operators currently supported by RKNN? I am trying to port EfficientViT-SAM (an encoder-decoder architecture) to the RKNN platform. The officially trained torch model can be exported to an ONNX model, and I now want to convert that ONNX model to an RKNN model, which raises questions such as whether all operators are supported. Below is the conversion code for the encoder:

```
from __future__ import absolute_import, print_function, division
import os
from rknn.api import RKNN

# onnx_model = './resource/onnx/model_256x256_max_mscf_0.924553.onnx'  # G:/6666Ground_segmentation0813
onnx_model = './weights/l0_encoder.onnx'
save_rknn_dir = './weights'

if __name__ == '__main__':
    #...
```
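As a quick sanity check before attempting the conversion, one can list the op types the ONNX graph actually uses and diff them against the supported-operator document shipped with RKNN-Toolkit2. A minimal sketch, where `SUPPORTED` is a hypothetical subset standing in for the real list from the toolkit docs:

```python
# Sketch: diff the op types used by an ONNX graph against a supported set.
# SUPPORTED is a hypothetical subset; the authoritative list is in the
# RKNN-Toolkit2 operator-support document shipped with the toolkit.
SUPPORTED = {"Conv", "Relu", "Add", "MatMul", "Softmax", "Resize", "Transpose"}

def unsupported_ops(op_types):
    """Return the op types not covered by the supported set.

    op_types would normally be read from the ONNX graph, e.g.:
        import onnx
        model = onnx.load("l0_encoder.onnx")
        op_types = [node.op_type for node in model.graph.node]
    """
    return sorted(set(op_types) - SUPPORTED)

# Example op list as might appear in a transformer encoder:
ops = ["Conv", "Relu", "LayerNormalization", "MatMul", "Softmax", "Erf"]
print(unsupported_ops(ops))  # ops to verify against the RKNN docs
```

Any op printed by this check would need a lookup in the toolkit's documentation, or a graph rewrite before conversion.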

Within the time window occupied by a group of write operations, the effective time is T1 and the invalid time is T2, so DDR bandwidth utilization = T1/(T1+T2). I saw that the official platform has a tool called rk-msch-probe-for-user for inspecting memory information. Where can I download it?

Thanks for your work. A few things are confusing for a layman: 1. How do I run inference on an image? Where can we get infer.py, or how can we run inference on a private image?...

How do I export the model as an ONNX file and convert it with onnx2trt?

How can we deploy EfficientViT-SAM on other platforms, such as the Rockchip (RK) platform or the Ascend (Huawei) platform?

Hello, and thank you for the open-source code. I ran into a problem while training an object detection model. Using internimage_t_1k_224.pth as the pretrained model to train on the COCO dataset, I got the following error: ![image](https://github.com/OpenGVLab/InternImage/assets/71921740/0988ed9c-c55c-4bbf-b5d5-263a544d92ed) I then replaced coco_instance.py in mask_rcnn_internimage_t_fpn_1x_coco.py with coco_detection.py, and also tried several other detection configs such as mask_rcnn_r50_fpn.py and fast_rcnn_r50_fpn.py, but all raised the same error. Looking forward to your reply, thanks!

I built a custom vehicle-detection training dataset in YOLO format. It trains without issue on the yolov5-6.2 framework, but training with mindyolo today ran into problems. Platform: Ubuntu 22.04, RTX 3080 Ti, CUDA 12.0; I pulled the CUDA 11.6 MindSpore image via Docker, verified that the GPU build was installed successfully, and then installed mindyolo. Training command: `python train.py --config ./config/yolov5/yolov5s.yaml --device_target GPU` 1. ValueError: invalid literal for int() with base 10: "xxxxxx" Error location: /mindyolo/mindyolo/data/dataset.py line 198: `self.imgIds = [int(Path(im_file).stem) for im_file in self.img_files]` Since I am using a custom dataset, the images are not named as int value.jpg...
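One workaround for the `int(Path(im_file).stem)` failure above is to stop assuming numeric filenames and instead assign stable integer IDs derived from the sorted file list. A minimal sketch (the ID-mapping helper is an assumption for illustration, not mindyolo's own API):

```python
from pathlib import Path

def build_img_ids(img_files):
    """Map arbitrary image filenames to stable integer IDs.

    Replacement for `int(Path(f).stem)`, which raises ValueError on
    non-numeric stems such as "car_0001". Sorting the stems first makes
    the assigned IDs reproducible across runs.
    """
    stems = sorted(Path(f).stem for f in img_files)
    stem_to_id = {stem: i for i, stem in enumerate(stems)}
    return [stem_to_id[Path(f).stem] for f in img_files]

files = ["data/car_b.jpg", "data/car_a.jpg", "data/123.jpg"]
print(build_img_ids(files))  # [2, 1, 0]
```

In dataset.py the list comprehension at line 198 could then read `self.imgIds = build_img_ids(self.img_files)`, leaving numerically named datasets unaffected apart from a possible renumbering.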

Hi, could Hailo provide us with yolov8-seg and yolov5-seg examples with Python code? It seems to differ from other platforms such as NVIDIA and Rockchip. Thanks a lot!

Thanks for your contribution. Is this repo still suitable for EdgeSAM?