Multispectral Processing - Multi-modal Data Processing and Implementation for Vineyard Analysis
Multispectral Processing is an implementation in ROS Melodic for Multi-modal Data Processing and Implementation for Vineyard Analysis. The main focus of the project is the development of a method for the registration of multi-modal images in order to obtain a three-dimensional reconstruction of the vine enriched with photometric or radiometric data. Furthermore, an artificial intelligence module is developed to jointly process images from the different modalities for a detailed analysis of the plant's condition.
Table of Contents
Requirements
Pipeline
Packages Installation
Source Files
Launch Files
Resources
Execution
Demo Experiments
Figures
License
Requirements
Software
- ROS Melodic Morenia.
- Ubuntu 18.04.5 LTS.
Hardware
- CMS-V GigE Silios Multispectral Camera.
- Microsoft Kinect V2 Sensor: RGB-D Camera.
Pipeline
Packages Installation
Make sure that you have installed the Melodic version of the packages below.
- ueye_cam: ROS package that wraps the driver API for uEye cameras by IDS Imaging Development Systems GmbH.
$ cd ~/catkin_ws/src
$ git clone https://github.com/anqixu/ueye_cam.git
$ cd ~/catkin_ws
$ catkin_make
- iai_kinect2: Package that provides tools for Kinect V2, such as a bridge between the Kinect and ROS, camera calibration, etc.
- libfreenect2: Drivers for Kinect V2.
- image_pipeline: This package is designed to process raw camera images into useful inputs to vision algorithms: rectified mono/color images, stereo disparity images, and stereo point clouds.
$ cd ~/catkin_ws/src
$ git clone https://github.com/ros-perception/image_pipeline.git
$ cd ~/catkin_ws
$ catkin_make
- rosbridge_suite: ROS package that provides a JSON API to ROS functionality for non-ROS programs.
$ sudo apt-get install ros-melodic-rosbridge-server
- rviz: 3D visualization tool for ROS.
- rtabmap_ros: An RGB-D SLAM approach with real-time constraints.
$ sudo apt-get install ros-melodic-rtabmap-ros
Source Files
- band_separator.cpp: C++ node for multispectral image separation, pre-processing and processing. Provides GUI for multiple tasks.
- band_separator.py: Python node for multispectral image separation, pre-processing and processing. Provides GUI for multiple tasks.
- backup.cpp: C++ node for saving single frames or stream of frames.
- backup.py: Python node for saving single frames or stream of frames.
- experiments.cpp: This node publishes images to the topic that the band_separator node subscribes to. It can be used when no camera is available.
- offline_registration.cpp: Node for publishing the image topics for a simulation of offline image registration.
- features_registrator.cpp: C++ node that detects features in 2 images and aligns them.
- features_registrator.py: Python node that detects features in 2 images and aligns them.
- corners_registrator.cpp: C++ node that detects chessboard corners in 2 images and aligns them.
- corners_registrator.py: Python node that detects chessboard corners in 2 images and aligns them.
- synchronizer.cpp: C++ node that subscribes to image topics and publishes them after synchronization.
- synchronizer.py: Python node that subscribes to image topics and publishes them after synchronization.
- calibrator.py: Node that performs calibration.
- stereo_calibrator.py: Node that performs stereo calibration.
- tf_node.cpp: Transformation node in C++.
- tf_node.py: Transformation node in Python.
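The band_separator nodes split each raw multispectral frame into its individual bands. As an illustration of the idea only (not the node's actual implementation), here is a NumPy sketch that de-interleaves a hypothetical 3x3 snapshot-mosaic sensor; the real mosaic pattern and band count come from the manufacturer parameters in parameters.yaml:

```python
import numpy as np

def separate_bands(raw, pattern=3):
    """De-interleave a mosaic image into pattern*pattern band images.

    Assumes band (r, c) of the mosaic occupies pixels raw[r::pattern, c::pattern].
    """
    h, w = raw.shape
    # Crop so the image is an exact multiple of the mosaic period.
    h -= h % pattern
    w -= w % pattern
    raw = raw[:h, :w]
    return [raw[r::pattern, c::pattern] for r in range(pattern) for c in range(pattern)]

# Example: a 6x6 synthetic frame yields nine 2x2 band images.
frame = np.arange(36).reshape(6, 6)
bands = separate_bands(frame)
```

Each output image has 1/9 of the sensor resolution, which is why the registered multispectral topics are smaller than the raw frame.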
Launch Files
- cms_cpp.launch: Run multispectral camera, pre-processing functionalities with C++, connection between camera and controller.
- cms_py.launch: Run multispectral camera, pre-processing functionalities with Python, connection between camera and controller.
- camera_configurator.launch: Run multispectral camera, connection between camera and controller.
- kinect2_bridge.launch: Run kinect.
- point_cloud_generator.launch: Generate point clouds from given topics.
- ueye_camera_gige.launch: Run multispectral nodelet to turn on the camera.
- stereo_calibration.launch: Calibration node for stereo cameras.
- calibration.launch: Run the calibration node for the multispectral camera.
- registration_approach1_cpp.launch: Image registration launch file for C++ node with approach 1 (feature detection).
- registration_approach1_py.launch: Image registration launch file for Python node with approach 1 (feature detection).
- registration_approach2_cpp.launch: Image registration launch file for C++ node with approach 2 (corner detection).
- registration_approach2_py.launch: Image registration launch file for Python node with approach 2 (corner detection).
Resources
- fps_log.yaml: Log file for FPS.
- parameters.yaml: Manufacturer parameters such as serial number, crosstalk correction coefficients, etc.
- multispectral_camera.yaml: Calibration parameters for multispectral camera.
- homography1.yaml: This file contains the perspective transformation matrix between the images for approach 1 (feature detection).
- homography2.yaml: This file contains the perspective transformation matrix between the images for approach 2 (corner detection).
- wr_coefficients: White reference coefficients file.
- data folder: This folder contains multiple images for experiments, etc.
Execution
Functionalities
- Multispectral Camera:
  - Acquisition & Band Separation.
  - Flat-field Correction.
  - White Balance Normalization.
  - Crosstalk Correction.
  - Vegetation Indices Calculation (NDVI, MCARI, MSR, SAVI, TVI, etc.).
- Both Cameras:
  - Cameras Geometric Calibration.
  - Multi-modal Image Registration.
  - 3D Reconstruction.
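As an illustration of the vegetation-index step, a minimal NumPy sketch of NDVI, the most common of the listed indices (the node's own implementation may differ in details such as clipping or scaling):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index, computed per pixel.

    nir and red are arrays of the same shape; eps avoids division by zero.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Healthy vegetation reflects strongly in NIR, so its NDVI approaches 1;
# a neutral pixel (equal NIR and red) gives NDVI near 0.
nir = np.array([[0.8, 0.5]])
red = np.array([[0.1, 0.5]])
index = ndvi(nir, red)
```

The other listed indices (MCARI, MSR, SAVI, TVI) follow the same pattern with different band combinations.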
Permissions
Make all Python files executable with the commands below:
$ roscd multispectral_processing/src
$ chmod +x *.py
Preparation for Image Acquisition
Follow the steps below to achieve the best image acquisition.
- Connect the multispectral camera, the Kinect V2 sensor, and a PC with ROS installed.
- Align the sensors.
- Adjust the optics:
  - Adjust the "Focus or Zoom" of the lens on an object at the same distance as the vine.
  - Adjust the "Aperture" of the lens.
- Set acquisition parameters:
  - Gain (not auto gain; lower is better).
  - Exposure time (adjust as convenient).
  - Framerate.
  - Others.
- Set pre-processing parameters:
  - White balance, white reference.
  - Crosstalk correction on or off.
  - Flat-field correction on or off.
- Start one of the registration approaches as described below to register homographies (rotations, translations, scale). Be sure that the sensors are fixed: "DO NOT TOUCH SENSORS".
- Save a single frame or a stream of frames when running image registration in no-capture mode, by using one of the commands below:
$ rosrun multispectral_processing backup.py
or
$ rosrun multispectral_processing backup
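The flat-field and white-reference steps above follow standard radiometric formulas; the sketch below is a hedged NumPy illustration of both (the node's exact implementation may differ, e.g. in how dark frames are handled):

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Classic flat-field correction: remove per-pixel gain and offset.

    flat is an image of a uniformly lit target, dark a frame with the lens capped.
    """
    gain = flat.astype(np.float64) - dark
    # Scale by the mean gain so the corrected image keeps its overall level.
    return (raw.astype(np.float64) - dark) * gain.mean() / np.maximum(gain, 1e-6)

def white_balance(band, white_ref):
    """Normalize a band by its white-reference coefficient (reflectance estimate)."""
    return band.astype(np.float64) / max(white_ref, 1e-6)

# Correcting the flat frame by itself yields a uniform image at the mean gain.
flat = np.array([[1.0, 2.0], [1.0, 2.0]])
corrected = flat_field_correct(flat, flat, np.zeros((2, 2)))
```

The white-reference coefficients used by the nodes are stored in the wr_coefficients resource file.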
Image Registration
The whole implementation is provided in both C++ and Python; every node exists in a C++ and a Python version. Image registration is performed between the multispectral camera and the Kinect V2 camera.
- Multispectral camera and Kinect camera image registration via feature detection (C++). Edit args="capture" to start corners capturing or args="nocapture" to start publishing.
<node name="features_registrator" pkg="multispectral_processing" type="features_registrator" args="nocapture" output="screen"/>
and run
$ roslaunch multispectral_processing registration_approach1_cpp.launch
- Multispectral camera and Kinect camera image registration via feature detection (Python). Edit args="capture" to start corners capturing or args="nocapture" to start publishing.
<node name="features_registrator" pkg="multispectral_processing" type="features_registrator.py" args="nocapture" output="screen"/>
and run
$ roslaunch multispectral_processing registration_approach1_py.launch
- Multispectral camera and Kinect camera image registration via chessboard corners detection (C++). Edit args="capture" to start corners capturing or args="nocapture" to start publishing.
<node name="corners_registrator" pkg="multispectral_processing" type="corners_registrator" args="nocapture" output="screen"/>
and run
$ roslaunch multispectral_processing registration_approach2_cpp.launch
- Multispectral camera and Kinect camera image registration via chessboard corners detection (Python). Edit args="capture" to start corners capturing or args="nocapture" to start publishing.
<node name="corners_registrator" pkg="multispectral_processing" type="corners_registrator.py" args="nocapture" output="screen"/>
and run
$ roslaunch multispectral_processing registration_approach2_py.launch
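Both approaches ultimately estimate a 3x3 perspective transformation (the homography stored in homography1.yaml or homography2.yaml) from matched point pairs. For illustration only, a pure-NumPy sketch of the underlying direct linear transform (DLT); the nodes themselves presumably rely on OpenCV:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT algorithm.

    src, dst: (N, 2) arrays of matched pixel coordinates, N >= 4, not collinear.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply a homography to (N, 2) points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return pts_h[:, :2] / pts_h[:, 2:3]

# Example: recover a pure translation from four chessboard-like correspondences.
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
dst = src + [5, -3]
H = fit_homography(src, dst)
```

Once estimated in capture mode, the homography is saved and then reapplied to every frame in no-capture mode, which is why the sensors must not move afterwards.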
3D Reconstruction
For mapping by using the rtabmap_ros package:
- Run one of the registration approaches with args="nocapture".
- Run the command to start the rtabmap_ros package:
$ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" rgb_topic:=/multispectral/image_mono depth_topic:=/multispectral/image_depth camera_info_topic:=/multispectral/camera_info approx_sync:=false
or for external odometry use:
$ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" rgb_topic:=/multispectral/image_mono depth_topic:=/multispectral/image_depth camera_info_topic:=/multispectral/camera_info approx_sync:=false visual_odometry:=false odom_topic:=/my_odometry
and replace odom_topic:=/my_odometry with the external odometry topic.
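For the reconstruction, the registered depth image is back-projected through the pinhole camera model to obtain 3D points. A minimal NumPy sketch with hypothetical intrinsics (the real values come from the camera calibration files, e.g. multispectral_camera.yaml):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) to an (H*W, 3) point cloud.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack([x, y, z]).reshape(-1, 3)

# A flat wall 2 m away, with hypothetical intrinsics centered on a 4x4 image.
depth = np.full((4, 4), 2.0)
cloud = depth_to_points(depth, fx=365.0, fy=365.0, cx=2.0, cy=2.0)
```

Because the multispectral and depth images are registered beforehand, each point can be enriched with the per-band radiometric values at the same pixel.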
Demo Experiments
General
These experiments include only the images of the multispectral camera and the associated processing. Run experiments with the already captured images located in the /data/simulation folder and follow the steps below:
- Comment out the includes below in the cms_cpp.launch or cms_py.launch file.
<!-- <include file="$(find multispectral_processing)/launch/kinect2_bridge.launch"/> -->
<!-- <include file="$(find multispectral_processing)/launch/ueye_camera_gige.launch"/> -->
- Uncomment the include of the experiments.cpp node.
<node name="experiments" pkg="multispectral_processing" type="experiments" args="2 2020511" output="screen"/>
where args=<folder id> <prefix of images>
- Choose the dataset that you want by changing the "args" value.
- Run the cms_cpp.launch or cms_py.launch file.
Offline Image Registration
Perform offline image registration with all approaches. Run experiments with the already captured images located in the /data/simulation folder and follow the steps below:
- Comment out the includes below in the selected .launch file of the examined approach:
<!-- <include file="$(find multispectral_processing)/launch/kinect2_bridge.launch"/> -->
<!-- <include file="$(find multispectral_processing)/launch/ueye_camera_gige.launch"/> -->
<!-- <node name="band_separator" pkg="multispectral_processing" type="band_separator" args="nodebug" output="screen"/> -->
<!-- <node name="tf_node" pkg="multispectral_processing" type="tf_node"/> -->
<!-- <node name="static_transform_publisher" pkg="tf" type="static_transform_publisher" args="0 0 0 -1.5707963267948966 0 -1.5707963267948966 camera_link kinect2_link 100"/> -->
- Uncomment the include of the offline_registration.cpp node.
<node name="offline_registration" pkg="multispectral_processing" type="offline_registration" args="1 2020511" output="screen"/>
where args=<folder id> <prefix of images>
- Choose the dataset that you want by changing the "args" value.
- Run the launch file of the image registration approach.
- Visualize the results by using
$ rviz
with Fixed Frame="multispectral_frame"
and use the published topics:
- /multispectral/image_color: Registered Kinect RGB image.
- /multispectral/image_mono: Registered multispectral image.
- /multispectral/image_depth: Registered depth image.
or use
$ rqt_image_view
Figures
Sensors Position
Robotnik Summit Equipped with the Sensors in Vineyard
Multispectral Image Pixels
Captured Image & Bands by the Multispectral Camera and OpenCV UI
Captured RGB Image by the Kinect V2 Sensor, Captured Bands by the Multispectral Camera
NDVI calculation, Colored vegetation, Colored Vegetation After Crosstalk Correction
Background Subtraction by using Otsu's method
Image Registration with Feature Matching
Image Registration with Corner Matching
3D Reconstruction
License
This project is licensed under the MIT License - see the LICENSE file for details.