Coral USB Accelerator Support
Looking forward to testing your app out, but I was curious whether you had put any thought/effort into supporting Edge TPUs like the Coral USB Accelerator? https://coral.ai/products/accelerator/
I use https://github.com/blakeblackshear/frigate/ for my NVR, and it does a great job with the Coral accelerator, which really speeds up the TensorFlow detection (like a TON).
Would be a great addition to this app!
Hey!
That sounds really cool! I will see whether I'm able to integrate this without sacrificing ease of setup. The main point of this app is to make AI easily usable on your own data, after all. Still, it would of course be cool if people could optionally extend the performance with more config options. I'll first have to figure out GPU support, though (see #67).
It would also be cool to leverage this lib/API somehow to get face detection in Nextcloud as well. https://github.com/jakowenko/double-take
Recognize can detect faces already. It uses your contact pictures as reference pictures.
This issue is essential to enabling fast recognition on low-end devices. Just imagine how long it would take to classify a decent photo library on an old laptop or a Raspberry Pi.
This issue is far more complex than we thought. Recognize uses face-api.js to extract and compare face features, which is not compatible with Google Coral, as indicated by justadudewhohacks/face-api.js#754. However, EfficientNet v2 has a Coral-compatible variant, EfficientNet-EdgeTPU. With all of that said, I think we need a separate branch to experiment with this.
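For context, the Edge TPU only executes models that are fully int8-quantized and then compiled with Coral's `edgetpu_compiler` tool. A rough sketch of that step (the input filename here is just a placeholder, not a model Recognize actually ships):

```shell
# Sketch of the Edge TPU compilation step (input filename is a placeholder).
# edgetpu_compiler only accepts fully int8-quantized TFLite models and emits
# a new file with an "_edgetpu" suffix; unsupported ops fall back to the CPU.
edgetpu_compiler efficientnet-edgetpu-S_quant.tflite
# produces efficientnet-edgetpu-S_quant_edgetpu.tflite
```

Any ops the compiler can't map to the TPU stay on the CPU, so the speedup depends on how much of the graph the Edge TPU actually supports.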
Which is more likely to be done first, GPU support or Coral support? I ask because I have a "low-end" device (my mid-2010 Mac Pro tower) and would love to add this to my Nextcloud so I can back out of Google Photos. It is low-end because it missed the AVX instruction set by one year.
@phirestalker GPU is more likely, IMO. Also note that what is still being called "JS mode" in the UI is now much faster in the latest release, because we're using WASM now.
it missed the AVX instruction set by one year
It's also worth noting that you can compile TensorFlow yourself without requiring AVX. I once set out to automatically build a range of TensorFlow flavors using GitHub Actions, but never quite finished.
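For anyone who wants to attempt this, a rough sketch of such a build, assuming a standard from-source setup (exact flags and targets vary between TensorFlow versions and toolchains):

```shell
# Rough sketch: build TensorFlow from source without AVX/AVX2/FMA,
# so the resulting wheel runs on pre-AVX CPUs. This takes hours.
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure   # answer the interactive prompts
bazel build --config=opt \
  --copt=-mno-avx --copt=-mno-avx2 --copt=-mno-fma \
  //tensorflow/tools/pip_package:build_pip_package
```

The `-mno-avx`/`-mno-avx2`/`-mno-fma` compiler flags keep the optimizer from emitting the instruction sets that older CPUs lack.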
+1 for adding Coral support
I investigated this a bit, and it turns out that you can run TFLite models on Coral, but you have to compile them specifically for the Edge TPU. Here is a code example: https://github.com/tensorflow/sig-tfjs/tree/main/tfjs-tflite-node-codelab/coral_inference_working
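The core of the linked codelab boils down to something like the sketch below. The model filename is a placeholder, and the package/option names should be double-checked against the codelab, since the `tfjs-tflite-node` and `coral-tflite-delegate` APIs may have changed:

```javascript
// Sketch based on the tfjs-tflite-node codelab: run an Edge TPU-compiled
// TFLite model from Node.js via the Coral delegate. Requires a Coral
// accelerator to be plugged in; model path is a placeholder.
const tf = require('@tensorflow/tfjs');
const {loadTFLiteModel} = require('tfjs-tflite-node');
const {CoralDelegate} = require('coral-tflite-delegate');

async function classify(imageTensor) {
  // Passing the Coral delegate offloads supported ops to the USB accelerator;
  // anything unsupported falls back to the CPU interpreter.
  const model = await loadTFLiteModel('./model_edgetpu.tflite', {
    delegates: [new CoralDelegate()],
  });
  return model.predict(imageTensor);
}
```

So the integration work would mostly be in shipping Edge TPU-compiled variants of Recognize's models and wiring this loader in as an optional backend.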
Would also very much like to see a coral implementation. Would be willing to test things out and report on any issues and/or help getting this up and running.
+1 for Coral support here too. It would make the app usable on a lot of low-end devices.
I run Nextcloud in a VM on my home server, so it would be much easier to get an accelerator like the Coral TPU connected to the VM than it will ever be to get any amount of GPU acceleration in the near future: the hardware compatible with ESXi vGPUs is incredibly limited and generally outside my budget just to get speedy image tagging.
+1 for Coral support. Would be great to speed up the image/face tagging.
Did anything come out of this?
We are currently maintaining this app on a limited-effort basis. This means Nextcloud GmbH will not invest further development resources in advancing this app with new features ourselves. That doesn't mean there will be no new features, however: we do review and enthusiastically welcome community pull requests. We would be more than excited if you would like to collaborate with us on this issue. Feel free to reach out here in the comments if you would like to work on this and have questions about how to get started, or would like a short introduction call to the code base. I'm here to help with your questions :v: (See https://github.com/nextcloud/recognize/discussions/779 for more information on this.)