
[FR] support for nVidia accelerated detection (CUDA/TensorRT)

Open speedst3r opened this issue 3 years ago • 45 comments

TensorRT 7.2.2 (released December 2020) supports Python 3.8.

As per previous comments, this was the blocker to integrating frigate with TensorRT. As it is now supported, could we get an image that uses NVDEC and TensorRT?
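The version gate can be checked from a shell before building anything. This is a sketch of my own, not anything Frigate ships; it assumes the TensorRT Python bindings are installed as the `tensorrt` module and that GNU `sort -V` is available:

```shell
#!/bin/bash
# Sketch: check that the installed TensorRT Python bindings are >= 7.2.2,
# the first release with Python 3.8 support. Requires GNU sort for -V.

version_ge() {
	# True when $1 >= $2, compared as dotted version strings.
	[ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

TRT_VERSION=$(python3 -c 'import tensorrt; print(tensorrt.__version__)' 2>/dev/null)
if [ -n "$TRT_VERSION" ] && version_ge "$TRT_VERSION" 7.2.2; then
	echo "TensorRT $TRT_VERSION is new enough for Python 3.8"
else
	echo "TensorRT not found or older than 7.2.2"
fi
```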

speedst3r avatar Jan 29 '21 13:01 speedst3r

Finally. I will look into it again.

blakeblackshear avatar Jan 29 '21 13:01 blakeblackshear

Great, thanks for the quick response.

Release notes for reference: https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-7.html#rel_7-2-2

speedst3r avatar Jan 29 '21 13:01 speedst3r

Does this apply to the blocking issue for the Jetson's NVIDIA hardware too?


ril3y avatar Jan 29 '21 15:01 ril3y

Would be great if we had cuDNN support!

MEntOMANdo avatar Feb 02 '21 21:02 MEntOMANdo

Not sure if this helps, but this is a script from Zoneminder to build OpenCV that installs cuDNN and CUDA (the user had to download the runtimes themselves because of the nvidia license, blah blah blah):

#!/bin/bash
#
#
# Script to compile opencv with CUDA support.
#
#############################################################################################################################
#
# You need to prepare for compiling the opencv with CUDA support.
#
# You need to start with a clean docker image if you are going to recompile opencv.
# This can be done by switching to "Advanced View" and clicking "Force Update", 
# or remove the Docker image then reinstall it.
# Hook processing has to be enabled to run this script.
#
# Install the Unraid Nvidia plugin and be sure your graphics card can be seen in the
# Zoneminder Docker.  This will also be checked as part of the compile process.
# You will not get a working compile if your graphics card is not seen.  It may appear
# to compile properly but will not work.
#
# The GPU architectures supported with cuda version 10.2 are all >= 3.0.
#
# Download the cuDNN run time and dev packages for your GPU configuration.  You want the deb packages for Ubuntu 18.04.
# You will need to have an account with Nvidia to download these packages.
# https://developer.nvidia.com/rdp/form/cudnn-download-survey
# Place them in the /config/opencv/ folder.
#
CUDNN_RUN=libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb
CUDNN_DEV=libcudnn7-dev_7.6.5.32-1+cuda10.2_amd64.deb
#
# Download the cuda tools package.  Unraid uses 10.2.  You want the deb package for Ubuntu 18.04.
# https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=deblocal
# Place the download in the /config/opencv/ folder.
#
CUDA_TOOL=cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
CUDA_PIN=cuda-ubuntu1804.pin
CUDA_KEY=/var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
CUDA_VER=10.2
#
#
# Github URL for opencv zip file download.
# Current default is to pull the version 4.2.0 release.
#   Note: You shouldn't need to change these.
#
OPENCV_URL=https://github.com/opencv/opencv/archive/282fcb90dce76a55dc5f31246355fce2761a9eff.zip
OPENCV_CONTRIB_URL=https://github.com/opencv/opencv_contrib/archive/4.2.0.zip
#
# You can run this script in a quiet mode so it will run without any user interaction.
#
# Once you are satisfied that the compile is working, run the following command:
#   echo "yes" > opencv_ok
# 
# The opencv.sh script will run when the Docker is updated so you won't have to do it manually.
#
#############################################################################################################################

QUIET_MODE=$1
if [[ $QUIET_MODE == 'quiet' ]]; then
	QUIET_MODE='yes'
	echo "Running in quiet mode."
	sleep 10
else
	QUIET_MODE='no'
fi

#
# Display warning.
#
if [ $QUIET_MODE != 'yes' ];then
	echo "##################################################################################"
	echo
	echo "This script will compile 'opencv' with GPU support."
	echo
	echo "WARNING:"
	echo "The compile process needs 15GB of disk (Docker image) free space, at least 4GB of"
	echo "memory, and will generate a huge Zoneminder Docker that is 10GB in size!  The apt"
	echo "update will be disabled so you won't get Linux updates.  Zoneminder will no"
	echo "longer update.  In order to get updates you will have to force update, or remove"
	echo "and re-install the Zoneminder Docker and then re-compile 'opencv'."
	echo
	echo "There are several stopping points to give you a chance to see if the process is"
	echo "progressing without errors."
	echo
	echo "The compile script can take an hour or more to complete!"
	echo "Press any key to continue, or ctrl-C to stop."
	echo
	echo "##################################################################################"
	read -n 1 -s
fi

#
# Remove log files.
#
rm -f /config/opencv/*.log

#
# Be sure we have enough disk space to compile opencv.
#
SPACE_AVAIL=`/bin/df / | /usr/bin/awk '{print $4}' | grep -v 'Available'`
if [[ $((SPACE_AVAIL/1000)) -lt 15360 ]];then
	if [ $QUIET_MODE != 'yes' ];then
		echo
		echo "Not enough disk space to compile opencv!"
		echo "Expand your Docker image to leave 15GB of free space."
		echo "Force update or remove and re-install Zoneminder to allow more space if your compile did not complete."
	fi
	logger "Not enough disk space to compile opencv!" -tEventServer
	exit
fi

#
# Check for enough memory to compile opencv.
#
MEM_AVAILABLE=`cat /proc/meminfo | grep MemAvailable | /usr/bin/awk '{print $2}'`
if [[ $((MEM_AVAILABLE/1000)) -lt 4096 ]];then
	if [ $QUIET_MODE != 'yes' ];then
		echo
		echo "Not enough memory available to compile opencv!"
		echo "You should have at least 4GB available."
		echo "Check that you have not over committed SHM."
		echo "You can also stop Zoneminder to free up memory while you compile."
		echo "  service zoneminder stop"
	fi
	logger "Not enough memory available to compile opencv!" -tEventServer
	exit
fi

#
# Insure hook processing has been installed.
#
if [ "$INSTALL_HOOK" != "1" ]; then
	echo "Hook processing has to be installed before you can compile opencv!"
	exit
fi

#
# Remove hook installed opencv module and face-recognition module
#
pip3 uninstall -y opencv-contrib-python
if [ "$INSTALL_FACE" == "1" ]; then
	pip3 uninstall -y face-recognition
fi

logger "Compiling opencv with GPU Support" -tEventServer

#
# Install cuda toolkit
#
logger "Installing cuda toolkit..." -tEventServer
cd ~
if [ -f  /config/opencv/$CUDA_PIN ]; then
	cp /config/opencv/$CUDA_PIN /etc/apt/preferences.d/cuda-repository-pin-600
else
	echo "Please download CUDA_PIN."
	logger "CUDA_PIN not downloaded!" -tEventServer
	exit
fi

if [ -f /config/opencv/$CUDA_TOOL ];then
	dpkg -i /config/opencv/$CUDA_TOOL
else
	echo "Please download CUDA_TOOL package."
	logger "CUDA_TOOL package not downloaded!" -tEventServer
	exit
fi

apt-key add $CUDA_KEY >/dev/null
apt-get update
apt-get -y upgrade -o Dpkg::Options::="--force-confold"
apt-get -y install cuda-toolkit-$CUDA_VER

echo 'export PATH=/usr/local/cuda/bin:$PATH' >/etc/profile.d/cuda.sh
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/lib:$LD_LIBRARY_PATH' >> /etc/profile.d/cuda.sh
echo 'export CUDADIR=/usr/local/cuda' >> /etc/profile.d/cuda.sh
echo 'export CUDA_HOME=/usr/local/cuda' >> /etc/profile.d/cuda.sh
echo "/usr/local/cuda/lib64" > /etc/ld.so.conf.d/cuda.conf
ldconfig

#
# check for expected install location
#
CUDADIR=/usr/local/cuda-$CUDA_VER
if [ ! -d "$CUDADIR" ]; then
	echo "Failed to install cuda toolkit!"
	logger "Failed to install cuda toolkit!" -tEventServer
	exit
elif [ ! -L "/usr/local/cuda" ]; then
	ln -s $CUDADIR /usr/local/cuda
fi

logger "Cuda toolkit installed" -tEventServer

#
# Ask user to check that the GPU is seen.
#
if [ -x /usr/bin/nvidia-smi ]; then
	/usr/bin/nvidia-smi >/config/opencv/nvidia-smi.log
	if [ $QUIET_MODE != 'yes' ];then
			echo "##################################################################################"
			echo
			cat /config/opencv/nvidia-smi.log
			echo "##################################################################################"
			echo "Verify your Nvidia GPU is seen and the driver is loaded."
			echo "If not, stop the script and fix the problem."
			echo "Press any key to continue, or ctrl-C to stop."
			read -n 1 -s
	fi
else
	echo "'nvidia-smi' not found!  Check that the Nvidia drivers are installed."
	logger "'nvidia-smi' not found!  Check that the Nvidia drivers are installed." -tEventServer
fi
#
# Install cuDNN run time and dev packages
#
logger "Installing cuDNN Package..." -tEventServer
#
if [ -f /config/opencv/$CUDNN_RUN ];then
	dpkg -i /config/opencv/$CUDNN_RUN
else
	echo "Please download CUDNN_RUN package."
	logger "CUDNN_RUN package not downloaded!" -tEventServer
	exit
fi
if [ -f /config/opencv/$CUDNN_DEV ];then
	dpkg -i /config/opencv/$CUDNN_DEV
else
	echo "Please download CUDNN_DEV package."
	logger "CUDNN_DEV package not downloaded!" -tEventServer
	exit
fi
logger "cuDNN Package installed" -tEventServer

#
# Compile opencv with cuda support
#
logger "Installing cuda support packages..." -tEventServer
apt-get -y install libjpeg-dev libpng-dev libtiff-dev libavcodec-dev libavformat-dev libswscale-dev
apt-get -y install libv4l-dev libxvidcore-dev libx264-dev libgtk-3-dev libatlas-base-dev gfortran
logger "Cuda support packages installed" -tEventServer

#
# Get opencv source
#
logger "Downloading opencv source..." -tEventServer
wget -q -O opencv.zip $OPENCV_URL
wget -q -O opencv_contrib.zip $OPENCV_CONTRIB_URL
unzip opencv.zip
unzip opencv_contrib.zip
mv $(ls -d opencv-*) opencv
mv opencv_contrib-4.2.0 opencv_contrib
rm *.zip

cd ~/opencv
mkdir build
cd build
logger "Opencv source downloaded" -tEventServer

#
# Make opencv
#
logger "Compiling opencv..." -tEventServer

#
# Have user confirm that cuda and cudnn are enabled by the cmake.
#
cmake -D CMAKE_BUILD_TYPE=RELEASE \
	-D CMAKE_INSTALL_PREFIX=/usr/local \
	-D INSTALL_PYTHON_EXAMPLES=OFF \
	-D INSTALL_C_EXAMPLES=OFF \
	-D OPENCV_ENABLE_NONFREE=ON \
	-D WITH_CUDA=ON \
	-D WITH_CUDNN=ON \
	-D OPENCV_DNN_CUDA=ON \
	-D ENABLE_FAST_MATH=1 \
	-D CUDA_FAST_MATH=1 \
	-D WITH_CUBLAS=1 \
	-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
	-D HAVE_opencv_python3=ON \
	-D PYTHON_EXECUTABLE=/usr/bin/python3 \
	-D PYTHON2_EXECUTABLE=/usr/bin/python2 \
	-D BUILD_EXAMPLES=OFF .. >/config/opencv/cmake.log

if [ $QUIET_MODE != 'yes' ];then
	echo "######################################################################################"
	echo
	cat /config/opencv/cmake.log
	echo
	echo "######################################################################################"
	echo "Verify that CUDA and cuDNN are both enabled in the cmake output above."
	echo "Look for the lines with CUDA and cuDNN." 
	echo "You may have to scroll up the page to see them."
	echo "If those lines don't show 'YES', then stop the script and fix the problem."
	echo "Check that you have the correct versions of CUDA ond cuDNN for your GPU."
	echo "Press any key to continue, or ctrl-C to stop."
	read -n 1 -s
fi

make -j$(nproc)

logger "Installing opencv..." -tEventServer
make install
ldconfig

#
# Now reinstall face-recognition package to ensure it detects GPU.
#
if [ "$INSTALL_FACE" == "1" ]; then
	pip3 install face-recognition
fi

#
# Clean up/remove unnecessary packages
#
logger "Cleaning up..." -tEventServer

cd ~
rm -r opencv*
rm /etc/my_init.d/20_apt_update.sh

logger "Opencv compile completed" -tEventServer

if [ $QUIET_MODE != 'yes' ];then
	echo "Compile is complete."
	echo "Now check that the cv2 module in python is working."
	echo "Execute the following commands:"
	echo "  python3"
	echo "  import cv2"
	echo "  Ctrl-D to exit"
	echo
	echo "Verify that the import does not show errors."
	echo "If you don't see any errors, then you have successfully compiled opencv."
	echo
	echo "Once you are satisfied that the compile is working, run the following"
	echo "command:"
	echo '  echo "yes" > opencv_ok'
	echo
	echo "The opencv.sh script will run when the Docker is updated so you won't"
	echo "have to do it manually."
fi

jaburges avatar Feb 10 '21 03:02 jaburges

Yes, I have cuDNN + OpenCV/CUDA running on another machine, and it works well. It's going to be a challenge to package that up as a docker container, though, because when building OpenCV from source as shown above you need to provide the graphics-card-specific architecture number (-D CUDA_ARCH_BIN=x.y). I suppose you could script it; the user would need to look up their numbers first on the nVidia site. In addition, there's a conflict with drivers and the default nouveau X Window manager that has to be dealt with as well. Not sure how that would work within a docker.
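The lookup could be scripted along these lines. This is a sketch of my own (the helper name is mine), and it assumes a driver new enough that `nvidia-smi` exposes the `compute_cap` query field; older drivers do not have it, in which case the user still has to look the number up:

```shell
#!/bin/bash
# Sketch: detect the GPU compute capability so the -D CUDA_ARCH_BIN=x.y value
# does not have to be looked up manually on the nvidia site. Assumes a driver
# new enough that nvidia-smi exposes the compute_cap query field.

detect_cuda_arch_bin() {
	# $1 (optional) overrides the query command; defaults to nvidia-smi.
	# Prints the first GPU's compute capability, e.g. "7.5".
	"${1:-nvidia-smi}" --query-gpu=compute_cap --format=csv,noheader 2>/dev/null | head -n1
}

ARCH_BIN=$(detect_cuda_arch_bin)
if [ -n "$ARCH_BIN" ]; then
	echo "cmake flag: -D CUDA_ARCH_BIN=$ARCH_BIN"
else
	echo "Could not detect compute capability; set CUDA_ARCH_BIN manually." >&2
fi
```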

MEntOMANdo avatar Feb 10 '21 05:02 MEntOMANdo

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Mar 12 '21 06:03 stale[bot]

Not stale

blakeblackshear avatar Mar 12 '21 12:03 blakeblackshear

Thanks Blake! A few of us lurkers are still here!

ril3y avatar Mar 12 '21 14:03 ril3y

Really hoping for a solution to be able to use an Nvidia GPU instead of a coral. Fingers crossed that you'll succeed Blake :)

NicolaiOksen avatar Mar 20 '21 12:03 NicolaiOksen

If I understand correctly, we need TensorRT, and TensorRT needs cuDNN.

To elaborate on https://github.com/blakeblackshear/frigate/issues/659#issuecomment-776409336: a few days ago I tested this approach on https://github.com/dlandon/zoneminder.machine.learning, which has almost the same OpenCV build script: https://github.com/dlandon/zoneminder.machine.learning/blob/master/zmeventnotification/opencv.sh. It works, but if you ever remove your container, or have another reason to create a new one, you have to do everything again, because the cuDNN installation and the OpenCV compilation happen in the container, not in the image build.

It's not clear to me whether OpenCV in this project would benefit from cuDNN support. If yes, we could build our own image, using blakeblackshear/frigate:stable-amd64nvidia as a base image, with our own nVidia files adapted to our GPU, providing the files downloaded from nVidia ourselves.

Something like:

./docker-compose.yml:

services:
  frigate:
    build: ./frigate/images/frigate
    container_name: frigate
    ...
    deploy:
      resources:
        reservations:
          devices:
          - capabilities: [compute, gpu, utility, video] # for ffmpeg + opencv
    ...
    volumes:
      - ./frigate/images/provisioning/libcudnn.deb:/provisioning/libcudnn.deb
      ...

./frigate/images/frigate/Dockerfile:

FROM blakeblackshear/frigate:stable-amd64nvidia
...
COPY provisioning/ /tmp
RUN dpkg -i /tmp/libcudnn.deb # and the rest of the cuDNN installation, then compile OpenCV if necessary
...

./frigate/images/frigate/provisioning/libcudnn.deb: actually my particular libcudnn8_8.0.5.39-1+cuda11.1_amd64.deb, but renamed.

More or less the same logic could be applied to the TensorRT installation...

Wish it was easier!

guix77 avatar Apr 22 '21 21:04 guix77

It seems there could be a much easier way; look at https://github.com/DeepQuestAI/DeepStack-Base/blob/master/cuda/Dockerfile. Basically, the .deb files for cuDNN are in fact publicly available! For Ubuntu 20.04: https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/

Also, there is https://hub.docker.com/r/nvidia/cuda that could be used, like Doods does.
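A sketch of what fetching from that public repo could look like. The package file name below is a placeholder (my own cuDNN version, as mentioned earlier in the thread); pick the real one from the repo index for your CUDA version:

```shell
#!/bin/bash
# Sketch: install cuDNN from NVIDIA's public Ubuntu 20.04 repo instead of a
# manual developer-account download. The .deb file name is a placeholder;
# choose the correct one for your CUDA version from the repo index.

NV_REPO=https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64
CUDNN_DEB=libcudnn8_8.0.5.39-1+cuda11.1_amd64.deb

cudnn_url() {
	# Full download URL for the chosen cuDNN package.
	printf '%s/%s\n' "$NV_REPO" "$CUDNN_DEB"
}

# Actual fetch + install (commented out here):
#   wget -q "$(cudnn_url)" -O "/tmp/$CUDNN_DEB" && dpkg -i "/tmp/$CUDNN_DEB"
echo "Would fetch: $(cudnn_url)"
```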

guix77 avatar May 02 '21 09:05 guix77

Is this alive? :) I'd like to replace my Coral, which drops off the bus more often than I'd like, with a GPU :)

gurkburk76 avatar Jul 05 '21 12:07 gurkburk76

The Jetson Nano relies heavily on GStreamer instead of ffmpeg, but AI should be perfect on the Jetson Nano... and on NVIDIA cards. The nvidia container is already done; "only" TensorRT is missing, I guess.

ozett avatar Aug 14 '21 18:08 ozett

Now that Corals are almost impossible to get in the US and elsewhere (with either gigantic lead times or just straight up "out of stock"), and since TensorRT is up to Python 3.8, is this back on the table at all? #145 seemed to almost get across the finish line if not for the version issues.

pokemane avatar Sep 16 '21 01:09 pokemane

Thanks for that #145 link.

Great source for more links on the Jetson Nano, CUDA, TensorRT, and approaches to bring all of this to Frigate:

DOODS: https://github.com/snowzach/doods
WATSOR: https://github.com/asmirnou/watsor
AI-Person-Detector: https://github.com/wb666greene/AI-Person-Detector


Now that Corals are almost impossible to get in the US and elsewhere

Mouser says the M.2 is going out of production (EOL). Maybe it's time to also try beefier models with full TensorFlow/TensorRT on the Jetson Nano, or to empower ffmpeg with CUDA decoding?
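On the ffmpeg side, the CUDA-decoding idea could be sketched like this. The command line is illustrative only: it assumes an ffmpeg build with CUVID/NVDEC enabled, and the rtsp:// URL is a placeholder:

```shell
#!/bin/bash
# Sketch: assemble an ffmpeg command line that decodes an H.264 RTSP stream
# on an NVIDIA GPU via the h264_cuvid decoder. Assumes an ffmpeg build with
# CUVID/NVDEC support compiled in; the rtsp:// URL is a placeholder.

build_decode_cmd() {
	local url=$1
	# Decode on the GPU, emit raw frames on stdout for a downstream detector.
	printf 'ffmpeg -c:v h264_cuvid -i %s -f rawvideo -pix_fmt yuv420p pipe:' "$url"
}

build_decode_cmd "rtsp://camera.local:554/stream"
echo
```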


ozett avatar Sep 16 '21 19:09 ozett

@ozett Didn't they just release the dual M.2 one not long ago? I doubt they're taking the regular M.2 one out of production, maybe just up for a refresh.

jasonmhite avatar Sep 16 '21 21:09 jasonmhite

@jasonmhite The Mouser page is linked from the Google Coral page when you click the buy button. I was wondering about the EOL information, but it seems plausible when you see that the M.2 is mostly out of stock. Let's wait and see what comes.

ozett avatar Sep 17 '21 05:09 ozett

So TensorFlow, which is used by Frigate, can already use the NVIDIA GPU in my PC? So the CPU is only used by Frigate for the non-video/non-AI work? I want to load the CPU as little as possible and use the powerful GPU instead.

Ignorant bonus question: NVIDIA Jetson embedded GPU solutions can now also be used by TensorFlow? If that is the case, why can't Frigate just run on these? Or is there more to it than changing some TensorFlow libraries/config to make it use the GPU/CUDA?

And what about OpenCL and Vulkan? And OpenMP?

strarsis avatar Nov 10 '21 02:11 strarsis

Would love to use an Nvidia GPU for TensorFlow and other models, but it's not implemented yet.

I would also love to test Jetson Nano performance for RTSP decoding and TensorRT, but besides heavy use of GStreamer and some ffmpeg optimizations it's not fully supported yet: https://github.com/blakeblackshear/frigate/issues/1175#issuecomment-944991978. There is also an overall comparison between Coral and Nano: https://github.com/blakeblackshear/frigate/issues/2179#issuecomment-964581717

It would be great if someone experienced enough jumps in to help here 👍

ozett avatar Nov 10 '21 08:11 ozett

This project, Watsor, might be good to check out: https://github.com/asmirnou/watsor

It supports both Nvidia GPUs (using TensorRT) and Coral devices for object detection:

https://github.com/asmirnou/watsor/blob/master/watsor/detection/tensorrt_gpu.py https://github.com/asmirnou/watsor/blob/master/watsor/detection/edge_tpu.py

Would have to rework the object detection into a more general class to support different devices.

slackr31337 avatar Dec 01 '21 18:12 slackr31337

I will also look at Watsor. I tried Shinobi and Viseron today on the Nano, but in most projects ffmpeg has no hardware decoding support for RTSP streams, and without that the rest is no fun.

DeepStream on the Nano looks great, but it also seems to depend heavily on GStreamer and TensorRT.

ozett avatar Dec 01 '21 20:12 ozett

I have seen watsor, and that's similar to the planned (eventually) approach.

blakeblackshear avatar Dec 02 '21 00:12 blakeblackshear

So should I hold off on trying to install Frigate on my Nano? I guess for now I could just use the Home Assistant add-on and pass person detection to the Nano for DeepStack facial recognition.

LordNex avatar Dec 08 '21 16:12 LordNex

Lead times on Coral are 1+ year now; any update on GPU support?

supernovae avatar Jan 17 '22 20:01 supernovae

There is an outstanding PR specific to the Nano that may make it possible sooner. I have heard of some users finding m.2 coral versions in stock recently on various sites.

blakeblackshear avatar Jan 17 '22 20:01 blakeblackshear

There is a guy on eBay and Amazon selling them for about $170. That's how I got mine, and it arrived in a few days. It's the same guy in both places. If you need it I can find the Amazon link, but you shouldn't have any trouble finding it.

I'm running Frigate on an RPi4 8GB 64-bit aarch64, then running DeepStack on a Nano, and CompreFace and Double Take on my Home Assistant cluster. It works fairly well, but I still run into encoding and decoding issues. I'm in the process of separating the video traffic onto its own VLAN and then trunking that over to the main network through NAT. I'm hoping that will work better, especially once I VLAN off my IoT devices as well. And just a suggestion if anyone's looking into a new firewall: the Firewalla Gold is a beast and has already cleaned up my network a ton. It's a little pricey, but there's no monthly charge.

All in all, the design of the network, especially if you're using wireless cameras, really has a huge impact on the performance you're going to get. Add a few 4K cameras and you can easily overload most consumer network equipment.


LordNex avatar Jan 18 '22 05:01 LordNex

Coral is out of stock everywhere, in every variation, except very expensive devboards/SoMs that are basically less useful than a standalone TPU.

It's all gone.

Any supported alternative to Coral would be a great blessing.

$170 for a single accelerator does not scale well when you need even 2 of them - worse if more, and becomes prohibitive if you need multiple local installs in multiple places.

toxuin avatar Feb 06 '22 03:02 toxuin

We have been having success with a Jetson Nano.

discussion

jmorris644 avatar Feb 06 '22 12:02 jmorris644

Sorry if I'm not understanding, but why would you need 2 TPUs? I have 1 USB Coral running with Frigate and it's handling multiple streams while barely touching it.

Also, there are some scattered around eBay, but yes, they are expensive. The only option that looks interesting is the RockPi:

https://www.ebay.com/itm/ROCK-PI-3A-2-4-8GB-SBC-Rockchip-RK3568-Single-Board-Computer-Support-Coral-TPU-/284609958572?mkcid=16&mkevt=1&_trksid=p2349624.m46890.l49286&mkrid=711-127632-2357-0

It has an integrated GPU, TPU, VPU, and NPU for running TensorFlow or other AI stacks.

My current setup is Frigate on an RPi4 8GB with a USB Coral handling the main feeds from the cameras and storing the footage on an OpenMediaVault NAS. I then have Frigate send its event triggers and images to an MQTT topic, with the image cropped and resized to the face. DoubleTake, as an add-on inside Home Assistant, picks that up and forwards it via MQTT to DeepStack and CompreFace running on a Jetson Nano 4GB. If both detectors come back with a score over 70%, it sends actionable notifications to our phones with buttons to unlock doors, turn on lights, or trigger the alarm. So far the setup is working very well. Now I'm just trying to swap out all these crappy Wyze cameras for good PoE or 5GHz wireless cameras.

But as you can see, there are options out there, but none are cheap or easy. This isn't something that most people could set up. If I didn't have 25+ years working in IT I would have been lost. But keep at it, RTFM, and you'll get there.


LordNex avatar Feb 06 '22 13:02 LordNex