ORB_SLAM2
How can I process my own video using ORB SLAM2?
I have taken a video of my desk with my mobile phone (Redmi Note 4G, model number HM NOTE 1LTE) and converted the video into a stream of images (.png). I thought I would just have to save them into the folder /cam3/data and write a timestamp file Examples/Monocular/EuRoC_TimeStamps/sample.txt. But in order to edit the Examples/Monocular/EuRoC.yaml file I need to know the following:
Camera.fx, Camera.fy, Camera.cx, Camera.cy, Camera.k1, Camera.k2, Camera.p1, Camera.p2, Camera.fps
Camera.RGB: the color order of the images (0: BGR, 1: RGB; ignored if the images are grayscale)
My question is: how and from where can I find all this information about my mobile phone camera?
The documentation states the following:
"8. Processing your own sequences You will need to create a settings file with the calibration of your camera. See the settings file provided for the TUM and KITTI datasets for monocular, stereo and RGB-D cameras. We use the calaibration model of OpenCV. See the examples to learn how to create a program that makes use of the ORB-SLAM2 library and how to pass images to the SLAM system. Stereo input must be synchronized and rectified. RGB-D input must be synchronized and depth registered."
But I am still not able to find out the camera calibration parameters.
Any help regarding this is appreciated.
Go ask Lei Jun (Xiaomi's CEO).
You should calibrate your camera.
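To make that concrete, below is a minimal calibration sketch using OpenCV's chessboard calibration. It is only an illustration, not part of ORB_SLAM2: the 9x6 inner-corner chessboard, the calib/*.jpg image path, and the use of Python instead of the C++ API mentioned later in this thread are my own assumptions, so adjust them to your setup. Take 15-20 photos of a printed chessboard from different angles with the same camera mode and resolution you will use for your sequence, then run:

# Sketch: estimate the intrinsics and distortion coefficients that go into the
# ORB_SLAM2 settings YAML. Assumes a 9x6 inner-corner chessboard and photos in calib/*.jpg.
import glob
import cv2
import numpy as np

pattern_size = (9, 6)  # inner corners per chessboard row and column
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None

for fname in glob.glob("calib/*.jpg"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
k1, k2, p1, p2 = dist.ravel()[:4]

# These values map directly onto the settings file keys:
print("Camera.fx:", K[0, 0])
print("Camera.fy:", K[1, 1])
print("Camera.cx:", K[0, 2])
print("Camera.cy:", K[1, 2])
print("Camera.k1:", k1)
print("Camera.k2:", k2)
print("Camera.p1:", p1)
print("Camera.p2:", p2)

Camera.fps is simply the frame rate at which you recorded (or extract) the images, and Camera.RGB depends on how you load them.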
Here is a way, though non-ideal, to get started.
- Save each frame of your video as a separate image in PNG format at a resolution of 640x480 (according to the TUM1.yaml settings file).
- Generate a text file like the one in the TUM (or any other) dataset. The file contains the timestamp and filename of each image (frame). You can generate your own timestamps (again, this is not the ideal way to approach the problem) by taking the first timestamp from one of the existing text files, using it as the timestamp of the first frame, and then adding 40 ms for each subsequent frame until the last one. You could write a simple program to do this; see the sketch after these steps.
- The images should be saved in a folder named rgb inside the main folder (let us name it test_dataset), and the text file should be named rgb.txt and saved in the test_dataset folder.
- Then go to the ORB_SLAM2 folder, launch a terminal (I am assuming you use Ubuntu) and execute the following command:
./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM1.yaml PATH_TO_SEQUENCE_FOLDER
where PATH_TO_SEQUENCE_FOLDER could be /home/username/test_dataset.
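To avoid typing the timestamps by hand, here is a small Python sketch for the text-file step above. It is only a sketch under some assumptions of mine: the frames are already in test_dataset/rgb, their filenames sort in playback order, and the video is roughly 25 fps (40 ms between frames); starting the timestamps at 0.0 instead of reusing one from an existing TUM file works just as well.

# Generate a TUM-style rgb.txt for frames already extracted into test_dataset/rgb.
# Assumes the filenames sort in playback order and ~25 fps (40 ms between frames).
import os

dataset = "test_dataset"
frame_interval = 0.040  # seconds between consecutive frames

images = sorted(os.listdir(os.path.join(dataset, "rgb")))

with open(os.path.join(dataset, "rgb.txt"), "w") as f:
    f.write("# colour images\n")
    f.write("# file: '%s'\n" % dataset)
    f.write("# timestamp filename\n")
    timestamp = 0.0
    for name in images:
        f.write("%.6f rgb/%s\n" % (timestamp, name))
        timestamp += frame_interval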
Did any of you try processing your own sequences with the method suggested by @9friday? Doesn't it need depth images as well? And what about the timestamps? There are many of them...
No depth images needed.
I found that I was able to process video from an iPhone camera. The advice from 9friday is good, but I found I didn't need to change the resolution of the images.
Also, the time it takes ORBSLAM2 to process the video depends on the timestamps: if consecutive timestamps are 1 second apart, it will only process one frame per second.
Hi BW25, how did you process the video from the iPhone camera? And how did you get the iPhone calibration parameters?
You can calibrate the camera with OpenCV C++ calibration code that is available online. To process the video you need to put the images in the rgb folder and the timestamps in the rgb.txt file.
My earlier post was incorrect. ORBSLAM will run without changing the resolution, but the feature tracking doesn't quite work right. I wrote the following bash script to process a video into resized .PNGs and create the timestamp file at the same time. The script assumes the video is a standard iPhone .MOV file sitting in the same folder as the script itself, and it needs to be run from the terminal. I found that video shot in portrait orientation doesn't work, so the script also gives the option to rotate. If it doesn't work for you, I hope it at least gives you a framework to work from.
Note that this script relies on ffmpeg, which came preinstalled with Ubuntu for me. If not, you can install it with apt-get.
Hope this helps!
echo echo "This program processes video for ORBSLAM2" echo
echo -n "Write the name of the video file in this folder: " read inputname echo
if [ ! -e "$inputname" ] then echo "File not found. Make sure it is in the same folder as VideoProcessor.sh" echo exit 1 fi
echo -n "What would you like the output file to be called: " read outputname echo
mkdir -p $outputname/rgb
echo -n "Choose frames per second. Between 10 and 25 is preferable: " read fps echo
echo "The video must be wider than it is long" echo -n "Would you like to rotate it (y/n): " read rotchoice echo
if [ $rotchoice = "n" ] || [ $rotchoice = "N" ] then ffmpeg -i $inputname -r $fps -vf scale=-1:320 $outputname/rgb/img%04d.png
elif [ $rotchoice = "y" ] || [ $rotchoice = "Y" ] then ffmpeg -i $inputname -r $fps -vf scale=320:-1,"transpose=1" $outputname/rgb/img%04d.png
else
echo "Invalid choice. Choose y/n"
echo
exit 1
fi
#Counts the number of output files imgnum=$(ls $outputname/rgb | wc -l)
echo "# colour images" > $outputname/rgb.txt
echo "#file: '$outputname'" >> $outputname/rgb.txt
echo "# timestamp filename" >> $outputname/rgb.txt
#Uses bc to calculate timestamp increment to 6 places #No spaces around = frameTime=$(bc <<< "scale=6; 1.0/$fps") timestamp=0.000000
for i in $(seq -f "%04g" $imgnum) do echo $timestamp rgb/img$i.png >> $outputname/rgb.txt timestamp=$(bc <<< "scale=6; $timestamp+$frameTime") done
mv $inputname $outputname
echo echo "Your files are ready, and have all been put in a single folder." echo "Please place this folder in ~/Desktop/ORBSLAM2 datasets/our datasets." echo
Thanks! I will try it!
Hey! Would this process be any different for ORB_SLAM3?