ncnn-android-yolov7
Video showing but no object detections
I cloned this repository and I replaced the model in "app\src\main\assets" with one I trained myself on YOLOv7. I trained it using the "yolov7.pt" weights and the results were great. I followed the official WIKI to get the ONNX and then NCNN model: https://github.com/Tencent/ncnn/wiki/use-ncnn-with-pytorch-or-onnx
Basically I used torch.onnx._export() to create the ONNX model, then onnxsim to create the simplified version, and then onnx2ncnn to get the NCNN version. So I got the .bin and .param files and I renamed them "yolov7-tiny.bin" and "yolov7-tiny.param" so that I could just paste over and replace the existing files in "app\src\main\assets".
I then just built the project using Android Studio, and my "local.properties" is:
sdk.dir=C:\Users\GiannisM\AppData\Local\Android\Sdk
ndk.dir=C:\Users\GiannisM\AppData\Local\Android\Sdk\ndk\25.0.8775105
cmake.dir=C:\Users\GiannisM\AppData\Local\Android\Sdk\cmake\3.22.1
The build was successful so I got the .apk file and I installed it on my phone. The app works and I can switch between the front and back camera as well as CPU or GPU. I can see what the camera sees, but object detection isn't working; it's just like a camera app - nothing is being detected.
When I build it with the original "yolov7-tiny.bin" and "yolov7-tiny.param", however, it does draw bounding boxes and it shows the labels. I should mention that when I evaluate my trained model in Python, it does draw bounding boxes and labels - it's just that it doesn't work in the Android app. Any idea why that is?
~~Lastly, I should say that I couldn't get onnx2ncnn to work on Windows, so I installed and used it from WSL (Linux inside Windows 11). I found this post (https://programs.wiki/wiki/622f14b67fea9.html) that mentions things which the official WIKI doesn't mention: it says that you should have protobuf and opencv installed before you install ncnn. I tried it as described on the official WIKI, without installing protobuf and opencv, and no errors occurred (but like I said, when I open the app I get no detections). I then uninstalled ncnn and installed protobuf, but the opencv installation fails. Not sure if this is the problem.~~
Update:
I actually downloaded the pre-compiled Windows binaries that include onnx2ncnn, but the same thing happens: I get the video feed but no detections.
I also managed to build and install opencv on Linux (WSL) and then install ncnn afterwards, and onnx2ncnn works, but I have the exact same problem - no detections.
Also, with the original yolov7-tiny.bin I get 5 FPS on my "Razer Phone 2", while with my model I get 30 FPS, which makes me think it really just isn't doing any forward passes for object detection at all.
I have the same problem, I get video feed but no detections.
Have you solved it? I have the same problem.
No, and I won't be able to solve it myself because I know neither Java nor C, and those are the 2 languages used there.
You can try changing the output names in the yolo.cpp file, i.e. the calls like ex.extract("out0", out);. When I changed out0, out1, out2 to match the three output names in my .param file, it worked (the input name in0 has to match as well). But another problem arose: the detection results were terrible.
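For reference, a minimal sketch of what that looks like in yolo.cpp, assuming the stock blob names in0/out0/out1/out2 used by the bundled yolov7-tiny model; a custom conversion may well produce different names (check your .param file), and those are what you would substitute here:

```cpp
// Sketch of the relevant calls in yolo.cpp -- "yolo" is the ncnn::Net holding the loaded model.
// "in0", "out0", "out1", "out2" are the blob names of the bundled model;
// if your .param file uses other names, put those names here instead.
ncnn::Extractor ex = yolo.create_extractor();

ex.input("in0", in_pad);        // input blob name from the .param file

ncnn::Mat out0, out1, out2;
ex.extract("out0", out0);       // stride 8  output
ex.extract("out1", out1);       // stride 16 output
ex.extract("out2", out2);       // stride 32 output
```

If extract() is asked for a blob name that doesn't exist in the loaded .param, it returns an error and leaves the output Mat empty, so generate_proposals produces nothing; that would explain the video-but-no-boxes behaviour and the higher FPS.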
Hi everybody! And big thanks to Xiang-Wuu for the job!!
I have the same problem on an Android device: with COCO it runs fine on the phone, but when we put in our own weights and model (.bin and .param) and modify the number of classes (nc) and the class names, it builds and runs correctly but draws no bounding boxes on screen.
I found a Chinese issue (https://blog.csdn.net/qq_43268106/article/details/127139216) that talks about editing the .param file (deleting the first ten layers and changing the first layer in0 to image), but I have no idea how to do that...
He talks about "generate_proposals" but I don't understand what it means!
Thanks for your feedback ;-)
Has anyone managed to resolve this, or at least find out what the problem is? Apparently the solution only works with YOLO's own weights; it doesn't work with custom weights. This greatly limits its use.
Can you explain your suggestion better? Apparently out0, out1, out2 seem to be correct and compatible with the parameter file. What would be the necessary change?
{
    ncnn::Mat out;
    ex.extract("out0", out);

    ncnn::Mat anchors(6);
    anchors[0] = 12.f;
    anchors[1] = 16.f;
    anchors[2] = 19.f;
    anchors[3] = 36.f;
    anchors[4] = 40.f;
    anchors[5] = 28.f;

    std::vector<Object> objects8;
    generate_proposals(anchors, 8, in_pad, out, prob_threshold, objects8);

    proposals.insert(proposals.end(), objects8.begin(), objects8.end());
}
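For context, the block quoted above is only the stride-8 branch; yolo.cpp contains two more blocks just like it for strides 16 and 32, extracting "out1" and "out2". A sketch of those two branches, assuming the stock yolov7/yolov7-tiny anchor values (substitute your own anchors if your training config used different ones):

```cpp
// stride 16 branch -- anchor values assumed from the stock yolov7(-tiny) config
{
    ncnn::Mat out;
    ex.extract("out1", out);   // must match the second output blob name in the .param

    ncnn::Mat anchors(6);
    anchors[0] = 36.f;  anchors[1] = 75.f;
    anchors[2] = 76.f;  anchors[3] = 55.f;
    anchors[4] = 72.f;  anchors[5] = 146.f;

    std::vector<Object> objects16;
    generate_proposals(anchors, 16, in_pad, out, prob_threshold, objects16);
    proposals.insert(proposals.end(), objects16.begin(), objects16.end());
}

// stride 32 branch -- anchor values assumed from the stock yolov7(-tiny) config
{
    ncnn::Mat out;
    ex.extract("out2", out);   // must match the third output blob name in the .param

    ncnn::Mat anchors(6);
    anchors[0] = 142.f; anchors[1] = 110.f;
    anchors[2] = 192.f; anchors[3] = 243.f;
    anchors[4] = 459.f; anchors[5] = 401.f;

    std::vector<Object> objects32;
    generate_proposals(anchors, 32, in_pad, out, prob_threshold, objects32);
    proposals.insert(proposals.end(), objects32.begin(), objects32.end());
}
```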
Thanks! For us, the solution (found after a week of searching) is to convert the model and weights with pnnx, and that works - the outputs are good.
Install netron first, open the model file (.param) with netron (https://netron.app/), find the three Convolution layers above the three Permute layers, and click each one to view its details - there you can see its output name. Use those three names in place of the corresponding out0, out1, out2 in the extract calls, and then you can run the detection.
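As an illustration of that last step (the layer names below are made up - yours will be whatever Netron shows for those three Convolution outputs), the change in yolo.cpp would look something like this:

```cpp
// Hypothetical illustration only: suppose Netron shows the outputs of the three
// Convolution layers (just above the three Permute layers) are named
//   /model.77/m.0/Conv_output_0   (stride 8)
//   /model.77/m.1/Conv_output_0   (stride 16)
//   /model.77/m.2/Conv_output_0   (stride 32)
// Then the blob names in the extract calls are replaced accordingly:

ex.extract("/model.77/m.0/Conv_output_0", out);   // was: ex.extract("out0", out);
// ... stride-8 generate_proposals block unchanged ...

ex.extract("/model.77/m.1/Conv_output_0", out);   // was: ex.extract("out1", out);
// ... stride-16 block ...

ex.extract("/model.77/m.2/Conv_output_0", out);   // was: ex.extract("out2", out);
// ... stride-32 block ...
```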