AnimalRecognitionDemo
An example of using Redis Streams, RedisGears and RedisAI for Realtime Video Analytics (i.e. filtering cats)
This demo combines several Redis data structures and Redis modules to process a stream of images and filter out the images that contain cats.
It uses:
- Redis Streams to capture the input video stream: all
- RedisGears to process this stream
- RedisAI to classify the images with MobilenetV2
It forwards the images that contain cats to a stream: cats
It uses the RedisAI integration in RedisGears with an asynchronous function, so the server is not blocked while RedisGears triggers an inference session in RedisAI.
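The gear logic lives in gear.py. As a rough sketch of what such a gear might look like (not the demo's actual code; the image field name, the mobilenet:model key, the tensor shape and the is_cat() helper are assumptions made for illustration), a RedisGears function could read frames from all, run the model asynchronously through the redisAI plugin, and forward matches to cats:

# Rough sketch only; the real logic is in gear.py.
import redisAI

async def classify(record):
    # The camera process is assumed to write each frame under an 'image' field.
    img_blob = record['value']['image']

    # 'mobilenet:model' is assumed to be a model key loaded into RedisAI beforehand.
    runner = redisAI.createModelRunner('mobilenet:model')
    tensor = redisAI.createTensorFromBlob('FLOAT', [1, 224, 224, 3], img_blob)
    redisAI.modelRunnerAddInput(runner, 'input', tensor)
    redisAI.modelRunnerAddOutput(runner, 'output')

    # Awaiting the async run keeps the Redis main thread free during inference.
    outputs = await redisAI.modelRunnerRunAsync(runner)
    return record, outputs

def forward(x):
    record, outputs = x
    # is_cat() stands in for mapping the MobilenetV2 output to a label.
    if is_cat(outputs):
        execute('XADD', 'cats', '*', 'image', record['value']['image'])

GB('StreamReader').map(classify).foreach(forward).register('all')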
Architecture
Requirements
Docker and Python 3
Running the Demo
To run the demo:
git clone https://github.com/RedisGears/AnimalRecognitionDemo.git
cd AnimalRecognitionDemo
# If you don't have it already, install https://git-lfs.github.com/ (On OSX: brew install git-lfs)
git lfs install && git lfs fetch && git lfs checkout
To run the demo with make, run:
make start
make camera
Then open the UI to watch the result streams.
To end the demo and stop the containers, run:
make stop
Run make help for a few more options.
To run the demo manually, run:
docker-compose up
If something goes wrong, e.g. you skipped installing git-lfs, force docker-compose to rebuild the containers:
docker-compose up --force-recreate --build
Open a second terminal for the video capture:
pip install -r camera/requirements.txt
python camera/read_camera.py
Or run the camera process in test mode (without streaming from your camera):
ANIMAL=[cat|dog] python camera/read_camera.py --test
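At its core, the capture script pushes webcam frames into the all stream. A minimal sketch of such a loop using OpenCV and redis-py (not the actual read_camera.py; the image field name and the MAXLEN cap are assumptions):

# Minimal capture-loop sketch; the real script is camera/read_camera.py.
import cv2
import redis

r = redis.Redis(host='localhost', port=6379)
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Encode the frame as JPEG before writing it to the stream.
    _, buf = cv2.imencode('.jpg', frame)
    # Field name 'image' and the MAXLEN cap are assumptions for this sketch.
    r.xadd('all', {'image': buf.tobytes()}, maxlen=1000)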
UI
- http://localhost:3000 shows all the captured frames
- http://localhost:3001 shows only the frames with cats
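To inspect the streams directly instead of through the UI (assuming the Redis port 6379 is exposed by docker-compose), you can peek at them with redis-cli:

redis-cli XLEN cats
redis-cli XRANGE cats - + COUNT 5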
Limitations
This demo is designed to be easy to set up, so it relies heavily on Docker. You can get better performance and a higher FPS by running this demo outside Docker. To control the FPS, edit the gear.py file.