open-intelligence
Creepy stalking tool to process security camera motion-triggered images and sort seen objects into different categories, detect license plates and faces. Has a PWA-ready web front end. Meant to make prope...

Open Intelligence processes motion-triggered images from any camera and sorts the detected objects using YOLO. It provides an easy-to-use web front end with rich features so that you can have up-to-date intel on the current status of your property. Open Intelligence uses license plate detection (ALPR) to detect vehicle plates and face detection to detect people's faces, which can then be sorted into person folders and trained so that Open Intelligence can try to identify the people it sees. All of this can be done from the front end interface.
Open Intelligence uses a super resolution neural network to produce super resolution images for improved license plate detection.
The project's goal is to be a useful information gathering tool that provides data for easy property monitoring without the need for expensive camera systems, because any existing cameras are suitable.
I developed this for my own use because I was tired of using existing monitoring tools to go through recorded video. I wanted to know quickly what had been happening.
Click below to watch the promo video.

| Cameras view | Plate calendar |
|---|---|
- It's possible to make the cameras view play sound heard from the camera microphones.
- The calendar view can open the full source detection image by clicking a car plate event.
| Face wall | Face wall source dialog |
|---|---|
- The face wall is one of the creepiest features.
- You can go through the pile of faces and, by clicking one, see its source image.
Open Intelligence can run with Docker, directly on the host, or a mix of both.

Open Intelligence is suitable for anything from private properties to small businesses with medium activity.
Table of contents
- Environment
- Installing with Docker
- Installing manually
- API side
- Build react front end
- Python side (Windows)
- Process drawing
- Project folder structure
- Python Apps
- App.py
- StreamGrab.py
- SuperResolution.py
- InsightFace.py
- SimilarityProcess.py
- Config ini
- Multi node support
- Cuda GPU Support
- Postgresql notes
- Openalpr notes
- Front end development
- Troubleshooting
- Todo
- Authors
- License
Environment
Everything can be installed on one server or split across separate servers, meaning the database can be on server one, the Python application on server two, and the API hosting on server three.

Installing with Docker
Follow the Installation-using-Docker instructions from the Wiki page.
Installing manually
Terminology for terms like "API side" and "Python side":
- "API side" is the `/api` folder containing the Node API process `intelligence.js` and the web user interface served by the same process.
- "Python side" is the project root folder containing the different Python processes.
See Project folder structure for more details about folders.
API side
- Go to the `/api` folder and run `npm install`.
- Install PostgreSQL server: https://www.postgresql.org/
  - To access Postgres you need a tool like pgAdmin (which comes with Postgres), the command line, or an IDE with database tools.
- Rename `.env_tpl` to `.env` and fill in the details.
- Run `intelligence-tasks.js`, or with the PM2 process manager: `pm2 start intelligence-tasks.js`.
- Run `node intelligence.js`, or with the PM2 process manager: `pm2 start intelligence.js -i 2`.
- Running these Node.js scripts will create the database and table structures; if you see an error, run them again.
- Go to the `/api/front-end` folder and rename `.env_tpl` to `.env`.
- At `/api/front-end` run `npm start` so you have both the API and the front end running.
- Access `localhost:3000` if the React app doesn't open a browser window automatically.
- Outdated front end user manual for the old UI version: https://docs.google.com/document/d/1BwjXO0tUM9aemt1zNzofSY-DKeno321zeqpcmPI-wEw/edit?usp=sharing
Build react front end
- Go to `/api/front-end`.
- Check that the `.env` variable `REACT_APP_API_BASE_URL` corresponds to the IP address of the machine where the Node.js API is running.
- Build the React front end by running `npm run build`.
- Copy/replace the `/build` folder contents into the `/api/html` folder so that the API can serve the built web page.
Python side (Windows)
- Download Python 3.6 ( https://www.python.org/ftp/python/3.6.0/python-3.6.0-amd64.exe )
  - Only tested to work with Python 3.6. Newer versions caused problems with packages when tested.
- Activate the Python virtual env: `.\venv\Scripts\activate.bat`
- Install dependencies: `pip install -r requirements_windows.txt`
- Get models using these instructions: https://github.com/norkator/open-intelligence/wiki/Models
- Download PostgreSQL server ( https://www.postgresql.org/ ). I am using version 11.6 but it's also tested with version 12. (Skip this if you already installed it in the API section above.)
- Rename `config.ini.tpl` to `config.ini` and fill in the details.
  - Config.ini content settings are explained in the Config ini section.
  - For multiple nodes, see Multi node support.
- Ensure you have `Microsoft Visual C++ 2015 Redistributable (x64)` installed.
  - This is needed by OpenALPR.
- Separate camera and folder names with a comma, just like in the base config template.
- Run the wanted Python apps; see the Python Apps section.
It's critical to set up the ini configuration right.
Python side (Linux)
- Install the required Python version and create a virtual environment:
  - `sudo add-apt-repository ppa:deadsnakes/ppa`
  - `sudo apt-get install python3.6`
  - `virtualenv --python=/usr/bin/python3.6 ./`
  - `source ./bin/activate`
- Install dependencies: `pip install -r requirements_linux.txt`
- Get models using these instructions: https://github.com/norkator/open-intelligence/wiki/Models
- Download PostgreSQL server ( https://www.postgresql.org/ ). I am using version 11.6 but it's also tested with version 12. (Skip this if you already installed it in the API section above.)
- Rename `config.ini.tpl` to `config.ini` and fill in the details.
  - Config.ini content settings are explained in the Config ini section.
  - For multiple nodes, see Multi node support.
- Separate camera and folder names with a comma, just like in the base config template.
- Run the wanted Python apps; see the Python Apps section.
Process drawing
Overall process flow among the different Python processes of Open Intelligence.

Project folder structure
Default folders
.
├── api # Front end API which is also serving react js based web page
├── classifiers # Classifiers for different detectors like faces
├── docs # Documents folder containing images and drawings
├── libraries # Modified third party libraries
├── models # Yolo and other detector model files
├── module # Python side application logic, source files
├── objects # Base objects for internal logic
├── scripts # Scripts to ease things
Python Apps
This part explains in more detail what each of the base Python app scripts is meant for. Tasks are split across the different processes. App.py is always the main process, the first thing that sees images.
App
- File: `App.py`
- Status: Mandatory
- This is the main app, responsible for processing input images from configured sources; a simplified sketch of its polling loop is shown after this list.
- Cluster support: Yes.
- One computer, multiple instances: just open App.py in multiple shells, like `python .\App.py`
- Multi instance command when run on a network computer: `.\App.py --bool_slave_node True`. This slave node option means that the script uses `config_slave.ini` instead of the stock `config.ini`, because in this case the master node has the database installation. If the database and camera images are accessible elsewhere, for example on another IP with the camera images on an SMB share that has the same mount letter/path, then it's possible to run just `python .\App.py` on multiple individual machines.
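Below is a minimal sketch, in the spirit of App.py, of a polling loop over the configured camera folders. It assumes the `[app]` and `[camera]` settings described in the Config ini section; `detect_objects` is a hypothetical placeholder for the real YOLO step, and the exact location of the /processed folder is an assumption.

```python
# Sketch of an App.py style polling loop (not the project's actual code):
# watch configured camera folders, run detection on new images and then
# move or delete them according to move_to_processed.
import configparser
import glob
import os
import shutil
import time

config = configparser.ConfigParser()
config.read('config.ini')

sleep_seconds = config.getint('app', 'process_sleep_seconds', fallback=4)
move_to_processed = config.getboolean('app', 'move_to_processed', fallback=True)
camera_folders = [f.strip() for f in config.get('camera', 'camera_folders').split(',')]


def detect_objects(image_path):
    """Hypothetical placeholder for the real YOLO detection step."""
    print('detecting objects in', image_path)


while True:
    for folder in camera_folders:
        for image_path in glob.glob(os.path.join(folder, '*.jpg')):
            detect_objects(image_path)
            if move_to_processed:
                processed_dir = os.path.join(folder, 'processed')  # assumed location
                os.makedirs(processed_dir, exist_ok=True)
                shutil.move(image_path, os.path.join(processed_dir, os.path.basename(image_path)))
            else:
                os.remove(image_path)
    time.sleep(sleep_seconds)  # sleep between batches as configured
```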
StreamGrab
- File: `StreamGrab.py`
- Status: Optional
- If you don't have cameras that output images, you can configure multiple camera streams with this stream grabber tool to create a constant feed of input images; a rough OpenCV sketch follows after this list.
- Cluster support: No.
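The sketch below shows the general idea of a stream grabber using OpenCV; the stream URL, output folder, and interval are made-up example values, not the actual StreamGrab.py implementation.

```python
# Read frames from an RTSP/HTTP stream with OpenCV and write them as
# timestamped JPEGs into a camera input folder.
import os
import time
import cv2

STREAM_URL = 'rtsp://user:pass@192.168.1.10:554/stream1'  # hypothetical stream
OUTPUT_FOLDER = 'D:/cameras/front_yard'                   # hypothetical input folder
GRAB_INTERVAL_SECONDS = 2

capture = cv2.VideoCapture(STREAM_URL)
os.makedirs(OUTPUT_FOLDER, exist_ok=True)

while True:
    ok, frame = capture.read()
    if ok:
        filename = time.strftime('%Y%m%d_%H%M%S') + '.jpg'
        cv2.imwrite(os.path.join(OUTPUT_FOLDER, filename), frame)
    else:
        # Re-open the stream if reading fails
        capture.release()
        capture = cv2.VideoCapture(STREAM_URL)
    time.sleep(GRAB_INTERVAL_SECONDS)
```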
SuperResolution
- File: `SuperResolution.py`
- Status: Optional
- This tool processes super resolution images and runs new detections for these processed SR images. It is in no way mandatory for the process; an illustrative upscaling sketch follows after this list.
- Cluster support: No.
- Mainly meant for improved license plate detection.
- Testing: use the command `python SuperResolutionTest.py --testfile="some_file.jpg"`, which will load the image with the given name from the `/images` folder.
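For illustration only, the snippet below upscales an image with OpenCV's `dnn_superres` module (from opencv-contrib-python). The project ships its own super resolution network, so treat this purely as a sketch of the upscaling step.

```python
# Upscale an image with a pretrained super resolution model via OpenCV's
# dnn_superres module; the EDSR model file is downloaded separately.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel('EDSR_x4.pb')   # path to a pretrained model file
sr.setModel('edsr', 4)       # model name and upscale factor

low_res = cv2.imread('images/some_file.jpg')
high_res = sr.upsample(low_res)
cv2.imwrite('images/some_file_sr.jpg', high_res)
```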
InsightFace
- File: `InsightFace.py`
- Status: Optional
- Processes the faces page 'face wall' images using the InsightFace retina model. This is currently for testing use; a rough usage sketch follows after this list.
- Cluster support: No.
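A rough sketch of detecting faces with the insightface package is shown below; the exact API and bundled models depend on the installed insightface version, so this is not a copy of InsightFace.py.

```python
# Detect faces in an image with the insightface FaceAnalysis helper.
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis()
app.prepare(ctx_id=-1)  # -1 = CPU, 0 = first GPU

image = cv2.imread('images/face_wall_source.jpg')  # hypothetical test image
faces = app.get(image)
for face in faces:
    print('bbox:', face.bbox)  # bounding box of a detected face
```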
SimilarityProcess
- File: `SimilarityProcess.py`
- Status: Optional
- Compares the current day's images for close duplicates and deletes images determined to be duplicates with no higher value (no detection result). Processes images in one hour chunks; a simplified sketch of the idea follows after this list.
- Cluster support: No.
- The process tries to save some disk space.
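The sketch below illustrates the near-duplicate idea with perceptual hashes from the imagehash package; the real SimilarityProcess.py works in one hour chunks and only removes images without detections, so this is a simplified illustration.

```python
# Flag near-duplicate images by comparing perceptual hashes.
import glob
from PIL import Image
import imagehash

HASH_DISTANCE_THRESHOLD = 5  # smaller distance = more similar

seen_hashes = []
for path in sorted(glob.glob('processed/*.jpg')):  # hypothetical folder
    current_hash = imagehash.phash(Image.open(path))
    if any(current_hash - previous <= HASH_DISTANCE_THRESHOLD for previous in seen_hashes):
        print('near duplicate, could be deleted:', path)
    else:
        seen_hashes.append(current_hash)
```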
Config ini
This section explains the config.ini file contents, which are used by the Python processes.
The config.ini starts with contents like below.
[app]
move_to_processed=True
process_sleep_seconds=4
cv2_imshow_enabled=True
...
- `move_to_processed` => When set to True, the processed input image is moved into the /processed folder, otherwise the file is deleted.
- `process_sleep_seconds` => After every batch of files, the process will sleep this number of seconds.
- `cv2_imshow_enabled` => Set to True to show a window with the processed images, bounding boxes and more.
- `ignored_labels` => Labels to ignore; for example, if you get a lot of false positives from umbrellas and you don't care about saving any images of umbrellas anyway, ignore that label.
- `camera_names` => Camera name/location name, up to you. Must be separated with a comma `,`.
- `camera_folders` => Input image folders. Camera names and folders must be in the same order and the same count.
- postgresql section => Fill in the database credentials. This should not need explaining.
Other parameters are case specific.
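As a minimal sketch, the Python processes can read these settings with the standard library configparser; the assumption here is that the camera settings live in a `[camera]` section, as referenced under Multi node support.

```python
# Read config.ini with configparser; option names follow the template above.
import configparser

config = configparser.ConfigParser()
config.read('config.ini')

process_sleep_seconds = config.getint('app', 'process_sleep_seconds')
move_to_processed = config.getboolean('app', 'move_to_processed')
camera_names = [name.strip() for name in config.get('camera', 'camera_names').split(',')]
camera_folders = [path.strip() for path in config.get('camera', 'camera_folders').split(',')]

# Camera names and folders must be in the same order and the same count.
assert len(camera_names) == len(camera_folders)
```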
Multi node support
Multi node support requires a little more work to configure, but it's doable. Follow the instructions below.
- Each node needs to have access to the source files hosted by one main node via a network share.
- Create the configuration file `config_slave.ini` from the template `config_slave.ini.tpl`.
- Fill in the Postgres connection details, with the server running Postgres as the target location.
- Fill in the [camera] section folders; these should be behind the same mount letter+path on each node.
- Point your command prompt at the network share folder containing `App.py` and the other files.
- On each slave node, run `App.py` with the argument `.\App.py --bool_slave_node True` (a sketch of how this switch selects the configuration file follows after this list).
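A sketch of how such a switch can pick the configuration file is shown below; the argument handling is hypothetical and only illustrates the `config_slave.ini` versus `config.ini` selection.

```python
# Select the configuration file based on a --bool_slave_node style switch:
# slave nodes read config_slave.ini, the master keeps using config.ini.
import argparse
import configparser

parser = argparse.ArgumentParser()
parser.add_argument('--bool_slave_node', type=str, default='False')
args = parser.parse_args()

config_file = 'config_slave.ini' if args.bool_slave_node.lower() == 'true' else 'config.ini'

config = configparser.ConfigParser()
config.read(config_file)
print('running with', config_file)
```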
Cuda GPU Support
CUDA only works with some processes, like super resolution and InsightFace. The requirements are:
- NVIDIA only; GPU hardware compute capability: the minimum required CUDA capability is 3.5, so old GPUs won't work.
- CUDA toolkit version: the Windows link for the correct 10.0 version is https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Windows&target_arch=x86_64
- Download cuDNN "Download cuDNN v7.6.3 (August 23, 2019), for CUDA 10.0" from https://developer.nvidia.com/rdp/cudnn-archive
- Place the cuDNN files inside the corresponding CUDA toolkit installation folders; the cuDNN archive mirrors that folder structure. A quick GPU visibility check is sketched after this list.
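Assuming a TensorFlow backed process (check the requirements files for the frameworks actually in use), a quick sanity check that Python can see the GPU might look like this:

```python
# Verify that the CUDA/cuDNN installation is visible to TensorFlow.
import tensorflow as tf

print('GPU available:', tf.test.is_gpu_available())
```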
Postgresql notes
All datetime fields are inserted without a timezone, so that:
File : 2020-01-03 08:51:43
Database : 2020-01-03 06:51:43.000000
Database timestamps are shifted on use based on the local time offset, as in the example below.
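As a worked example of that shift, assuming a UTC+2 local offset to match the numbers above:

```python
# A local file time of 2020-01-03 08:51:43 in a UTC+2 zone is stored as
# 06:51:43 UTC, and the local offset is applied again when it is read back.
from datetime import datetime, timezone, timedelta

local_zone = timezone(timedelta(hours=2))  # example offset matching the numbers above
file_time = datetime(2020, 1, 3, 8, 51, 43, tzinfo=local_zone)

stored = file_time.astimezone(timezone.utc).replace(tzinfo=None)
print(stored)     # 2020-01-03 06:51:43 -> value written to the database

displayed = stored.replace(tzinfo=timezone.utc).astimezone(local_zone)
print(displayed)  # 2020-01-03 08:51:43+02:00 -> shifted back for the user
```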
Openalpr notes
These notes are for Windows. The current Docker setup makes this installation automatic.
Got it running with the following steps.
Downloaded the 2.3.0 release from https://github.com/openalpr/openalpr/releases
- Unzipped `openalpr-2.3.0-win-64bit.zip` to the `/libraries` folder.
- Downloaded and unzipped `Source code (zip)`.
- Navigated to `src/bindings/python`.
- Ran `python setup.py install`.
- From the resulting `build/lib`, moved the contents to the project's `libraries/openalpr_64/openalpr` folder.
- In the license plate detection file, imported the contents with `from libraries.openalpr_64.openalpr import Alpr`
Now it works without any Python site-package installation; a minimal usage sketch follows below.
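A minimal usage sketch of the bundled bindings is shown below; the paths to `openalpr.conf` and the `runtime_data` folder, and the `eu` country code, are assumptions to adjust for your own unzip location.

```python
# Recognize plates in one image with the bundled OpenALPR bindings.
from libraries.openalpr_64.openalpr import Alpr

alpr = Alpr('eu', 'libraries/openalpr_64/openalpr.conf', 'libraries/openalpr_64/runtime_data')
if not alpr.is_loaded():
    raise RuntimeError('Error loading OpenALPR')

results = alpr.recognize_file('images/some_file.jpg')
for plate in results['results']:
    print(plate['plate'], plate['confidence'])

alpr.unload()
```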
Front end development
There is a separate README for this side; see ./api/front-end/README.md
Troubleshooting
Refer to the troubleshooting wiki.
Authors
- Norkator - Initial work - norkator
Note that the /libraries folder contains Python applications made by other people.
I have needed to make small changes to them, which is why they are included here.
License
See LICENSE file.
