OpenVINO-DeeplabV3
ARM support
Hi
Firstly, great work! Can I ask whether ARM is still unsupported, even though OpenVINO now has ARM support? I see the line "Caution: It does not work on ARM architecture devices such as RaspberryPi / TX2." in the README, but it's followed by an update notice. Does that notice mean DeeplabV3 supports ARM now?
Thanks!
Let me clarify what I meant:
- OpenVINO officially supports armv7l(ARM)
- My test program does not support the NCS2 + ARM combination (RaspberryPi / TX2)
- ArgMax and some other layers in the DeeplabV3+ model are not supported on NCS2
- You need a tricky implementation to get DeeplabV3+ working properly
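To make the last two points concrete: the usual workaround for a layer like ArgMax that the NCS2 cannot run is to cut the network before that layer, pull the raw per-class score maps back from the device, and compute the argmax on the host. The sketch below is illustrative only (pure Python, toy data, `host_argmax` is a hypothetical helper); in practice you would apply the same logic with numpy to the real output blob.

```python
# Sketch of the host-side ArgMax workaround for NCS2.
# scores: one flat score plane per class (CHW layout assumed).

def host_argmax(scores):
    """Return the per-pixel class-index map from per-class score planes."""
    num_pixels = len(scores[0])
    label_map = []
    for p in range(num_pixels):
        # Pick the class whose score plane is highest at this pixel.
        best_class = max(range(len(scores)), key=lambda c: scores[c][p])
        label_map.append(best_class)
    return label_map

# Tiny 2x2 "image", 3 classes:
scores = [
    [0.1, 0.9, 0.2, 0.0],  # class 0
    [0.8, 0.1, 0.1, 0.3],  # class 1
    [0.3, 0.2, 0.7, 0.9],  # class 2
]
print(host_argmax(scores))  # -> [1, 0, 2, 2]
```

This keeps the device-side graph limited to layers the NCS2 supports, at the cost of a little host CPU work per frame.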
I see. Any idea if your program works on a Celeron J1900 then? I'm still thinking about buying the NCS2 or not...
In my benchmarks with OpenVINO, inference is faster on an Intel CPU than on the NCS2. You would gain little by purchasing an NCS2.
LattePanda Alpha + OpenVINO + "CPU (Core m3) vs NCS1 vs NCS2", Performance comparison
By the way, if you want the best performance, I recommend the "Google Edge TPU Accelerator". The following URL is my verification article:
https://github.com/PINTO0309/TPU-MobilenetSSD.git
Thanks for the advice. But isn't Intel Celeron unsupported? According to https://software.intel.com/en-us/articles/OpenVINO-InferEngine the CPU target supports "Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE". Perhaps that's why Gemini91's Celeron result is about 5x slower than NCS2?
It has been confirmed to work on Celeron. However, because Celeron lacks the latest Intel instruction-set extensions, it seems that sufficient performance cannot be achieved. OpenVINO is designed to be optimized for Intel Atom-class or newer CPU architectures. https://ncsforum.movidius.com/discussion/comment/4139/#Comment_4139
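On Linux you can check which of those instruction-set extensions a CPU actually reports by parsing the `flags` line of `/proc/cpuinfo`. The sketch below uses a hard-coded sample string standing in for a Celeron J1900's flags (illustrative, not captured from real hardware); a J1900 supports SSE4.2 but not AVX2, which fits the performance gap discussed above.

```python
# Parse the "flags" line of /proc/cpuinfo-style text into a set of feature flags.
def cpu_flags(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# Illustrative sample in the style of a Celeron J1900 (no AVX2):
sample = "processor : 0\nflags : fpu sse sse2 ssse3 sse4_1 sse4_2\n"
flags = cpu_flags(sample)
print("sse4_2" in flags)  # -> True
print("avx2" in flags)    # -> False
```

To test a real machine, replace `sample` with `open("/proc/cpuinfo").read()`.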
Does OpenVINO work on a Raspberry Pi without an NCS?
@Aaronreb Yes.