CloudGuardian
# Platform (include target platform as well if cross-compiling):
Compiled on Ubuntu, running in an Android phone shell, 64-bit
# Github Version:
MNN-2.8.1.zip, downloaded 24/1/3
(If you downloaded the ZIP package directly, please provide the download date and the git version from the archive comment, which can be obtained by running ``7z l <path to zip>`` and searching the output for ``Comment``, e.g. ``Comment = bc80b11110cd440aacdabbf59658d630527a7f2b``. If you used git clone, please provide the commit id from the first line of ``git commit``...
# Platform (include target platform as well if cross-compiling):
Android platform GPU: Arm Mali-G57
# Github Version:
2-8-1
The code is as follows:
```
std::shared_ptr<MNN::Interpreter> mnnNet;
mnnNet = std::shared_ptr<MNN::Interpreter>(MNN::Interpreter::createFromFile(model_name.c_str()));
mnnNet->setCacheFile(".tempcache");
ScheduleConfig netConfig;
netConfig.type =...
```
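The snippet is cut off at the ScheduleConfig, so for context here is a minimal sketch of how an OpenCL session is typically created from this point, assuming the rest of the setup follows the usual createSession / runSession flow; the model path, thread count, and precision choice are placeholders, not values from the report:

```
#include <memory>
#include <string>
#include <MNN/Interpreter.hpp>
#include <MNN/MNNForwardType.h>

int main() {
    const std::string model_name = "model.mnn";  // placeholder path

    // Same start as the report: create the interpreter and enable the cache file.
    std::shared_ptr<MNN::Interpreter> mnnNet(
        MNN::Interpreter::createFromFile(model_name.c_str()));
    mnnNet->setCacheFile(".tempcache");

    // Schedule onto the OpenCL backend (the Mali-G57 GPU in this report).
    MNN::ScheduleConfig netConfig;
    netConfig.type = MNN_FORWARD_OPENCL;
    netConfig.numThread = 4;  // placeholder; on GPU backends this field acts as a mode hint

    MNN::BackendConfig backendConfig;
    backendConfig.precision = MNN::BackendConfig::Precision_Normal;  // placeholder precision
    netConfig.backendConfig = &backendConfig;

    auto session = mnnNet->createSession(netConfig);

    // Fill the input (e.g. via Tensor::copyFromHostTensor), run, then read the output.
    auto input  = mnnNet->getSessionInput(session, nullptr);
    mnnNet->runSession(session);
    auto output = mnnNet->getSessionOutput(session, nullptr);
    (void)input;
    (void)output;
    return 0;
}
```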
# Platform (include target platform as well if cross-compiling):
Android, Mali-G57 GPU
# Github Version:
2.8.1
As the title says: when I run inference with OpenCL, the results are wrong under Normal and Low precision and the output is all black (all zeros). With Precision_High the result is correct, and CPU and VULKAN also give correct results at every precision level.
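Since the only variable here is the precision mode, a small sketch of how that mode is selected may help reproduce the comparison. The helper name is illustrative, and the note about fp16 is an assumption about why Normal/Low differ from High on the GPU backend:

```
#include <MNN/Interpreter.hpp>
#include <MNN/MNNForwardType.h>

// Build an OpenCL ScheduleConfig with an explicit precision mode.
// Precision_High is the only mode the reporter found to give correct output
// on Mali-G57; Normal/Low typically select fp16 compute on GPU backends,
// which is likely the difference being exercised here.
MNN::ScheduleConfig makeOpenCLConfig(MNN::BackendConfig& backendConfig,
                                     MNN::BackendConfig::PrecisionMode precision) {
    backendConfig.precision = precision;    // Precision_High / Precision_Normal / Precision_Low
    MNN::ScheduleConfig config;
    config.type          = MNN_FORWARD_OPENCL;
    config.backendConfig = &backendConfig;  // backendConfig must outlive createSession()
    return config;
}
```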
NaN values appear in the model's weights during training, resulting in NaN inference results. Why does this problem occur?
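A generic way to narrow this kind of problem down (a debugging sketch, not MNN-specific API) is to scan the weight or activation buffers after each training step and record the first iteration at which a non-finite value appears; that usually points at the learning rate, loss scaling, or bad input samples rather than the inference engine:

```
#include <cmath>
#include <cstddef>
#include <cstdio>

// Returns true if any element of the buffer is NaN or Inf.
// Run it on the weights (or the loss) after every optimizer step to find
// the first iteration where the values blow up.
bool hasInvalidValue(const float* data, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i) {
        if (std::isnan(data[i]) || std::isinf(data[i])) {
            std::printf("non-finite value at index %zu: %f\n", i, data[i]);
            return true;
        }
    }
    return false;
}
```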
My model: converted from ONNX to MNN with: MNNConvert -f ONNX --modelFile XXX.onnx --MNNModel XXX.mnn --bizCode biz --optimizeLevel 2 --fp16
MNN file size: 260 MB
Config:
```
ScheduleConfig netConfig;
netConfig.type = MNN_FORWARD_CPU; // MNN_FORWARD_OPENCL; // MNN_FORWARD_VULKAN; // MNN_FORWARD_CPU
netConfig.numThread = 1;...
```
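The config snippet is truncated; the commented-out backend types suggest the backends are being swapped and compared, which is usually done by completing the config with a BackendConfig and timing runSession. The sketch below illustrates only that common pattern; the loop count, precision, and helper name are assumptions, not values from the report:

```
#include <chrono>
#include <cstdio>
#include <memory>
#include <MNN/Interpreter.hpp>
#include <MNN/MNNForwardType.h>

// Average latency of `runs` consecutive inferences, in milliseconds.
static double averageLatencyMs(MNN::Interpreter* net, MNN::Session* session, int runs) {
    const auto begin = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < runs; ++i) {
        net->runSession(session);
    }
    const auto end = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(end - begin).count() / runs;
}

int main() {
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("XXX.mnn"));  // placeholder path, as in the report

    // Same config as the snippet above, completed with a BackendConfig.
    MNN::ScheduleConfig netConfig;
    netConfig.type      = MNN_FORWARD_CPU;  // or MNN_FORWARD_OPENCL / MNN_FORWARD_VULKAN
    netConfig.numThread = 1;

    MNN::BackendConfig backendConfig;
    backendConfig.precision = MNN::BackendConfig::Precision_Normal;  // assumed, not from the report
    netConfig.backendConfig = &backendConfig;

    auto session = net->createSession(netConfig);
    std::printf("average latency: %.2f ms\n", averageLatencyMs(net.get(), session, 10));
    return 0;
}
```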