toz
I have a Raspberry Pi Zero W; sound plays OK with aplay, very fast with the default test... I can confirm, still searching for an answer here..
> I'm not sure if a GTX 1650 would outperform a Jetson Nano

I am running DeepStack with a GTX 1660 and the NVIDIA GPU container. DeepStack takes almost forever ... don't think...
@LordNex Are Coral TPUs only for Frigate? Is there any TPU support configured for DeepStack or CompreFace? (I have trouble getting DeepStack-GPU running with the NVIDIA container; it runs fine on CPU.) Will test some...
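Before digging into DeepStack itself, it may help to verify that the NVIDIA container runtime works at all. A minimal check (the CUDA image tag here is just an example; pick one that matches your driver version):

```shell
# If this prints the GPU table, the nvidia-container runtime is fine
# and the problem is on the DeepStack side; if it errors, fix the
# runtime/driver setup first.
docker run --rm --gpus all nvidia/cuda:11.4.3-base-ubuntu20.04 nvidia-smi
```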
Why not use the Coral resources in Frigate? Frigate explicitly uses the Coral (the more, the better), so why not borrow a little of the already-existing Coral capacity from there?
> I don't believe a single Coral can be used by both Frigate and the project referenced in this feature request, though. You would probably need one for each.

you...
Maybe I misunderstood, but Double Take uses "detectors" like CompreFace for face detection, so the detector must have AI hardware for face recognition, right? [But CompreFace doesn't need it, as mentioned in the...
Maybe you want to check whether other models from the CompreFace engine "InsightFace" could run on a Coral (or other AI hardware like a Jetson Nano) as an AI REST server? -> https://github.com/deepinsight/insightface/tree/master/model_zoo
I looked into the Pi REST server. It seems a good option to offload tensor detection to networked devices; at minimum they need to run TensorFlow, maybe on a GPU or on a small...
I have this working, but the connection to DeepStack is unreliable... Maybe comment out your "key" in the config? Or is it a bug?
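For reference, a sketch of what I mean by commenting out the key. The detector section layout follows the Double Take docs; the URL and key values here are placeholders, not your real config:

```yaml
# Double Take config sketch (values are examples).
# If DeepStack was started WITHOUT an API key, a configured "key"
# makes every request fail, so leave it commented out.
detectors:
  deepstack:
    url: http://192.168.1.10:5001
    # key: your-api-key
```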
I started it like this: ```sudo docker run --gpus all -e VISION-FACE=True -v localstorage:/datastore -p 5001:5000 deepquestai/deepstack:gpu``` It has been running on the host perfectly since start. Don't know when it quits......
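If it does quit at some point, a restart policy will bring it back automatically. A docker-compose sketch equivalent to the `docker run` command above (the `restart` line and the GPU reservation block are the only additions; assuming Compose with GPU support):

```yaml
# Equivalent to the docker run invocation above, plus auto-restart.
services:
  deepstack:
    image: deepquestai/deepstack:gpu
    restart: unless-stopped   # restart the container if it ever quits
    environment:
      - VISION-FACE=True
    volumes:
      - localstorage:/datastore
    ports:
      - "5001:5000"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  localstorage:
```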