
How to get the camera model on an iPhone XS mounted with a fisheye lens

skypu3 opened this issue 4 years ago · 11 comments

Hi team,

I want to find the pinhole camera model for an iPhone XS mounted with a fisheye lens. I get the intrinsic matrix, and from it fx, fy, cx, cy, as described here: https://developer.apple.com/documentation/avfoundation/avcameracalibrationdata/2881135-intrinsicmatrix

I use the camera-calibration-ios project to find the distortion coefficients: https://github.com/thorikawa/camera-calibration-ios

Here is the camera model I get:

#==============#
# Camera Model #
#==============#

Camera.name: "iPhone"
Camera.setup: "monocular"
Camera.model: "fisheye"

Camera.fx: 6190.428
Camera.fy: 6190.428
Camera.cx: 1493.424
Camera.cy: 1972.364

Camera.k1: -3.2828323353358818e-01
Camera.k2: 7.8798773365756081e-02
Camera.k3: -3.0927226354753820e-03
Camera.k4: 8.6781907034207198e-03

Camera.fps: 30.0
Camera.cols: 900
Camera.rows: 1600
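For reference, the intrinsicMatrix documented at the Apple link above is a 3x3 pinhole matrix with fx, fy on the diagonal and the principal point in the last column, so fx, fy, cx, cy can be read straight out of it. Below is a minimal numpy sketch of that mapping (not Apple or OpenVSLAM code; the capture resolution used for scaling is a made-up placeholder), including how the values would be rescaled when the video fed to SLAM is a plain resize of the image the intrinsics refer to:

```python
# Minimal sketch: read fx, fy, cx, cy out of a 3x3 pinhole intrinsic matrix
# like the one reported by AVCameraCalibrationData, then rescale them if the
# SLAM video resolution differs from the resolution the intrinsics refer to.
import numpy as np

# K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], filled with the values quoted above.
K = np.array([[6190.428,      0.0, 1493.424],
              [     0.0, 6190.428, 1972.364],
              [     0.0,      0.0,      1.0]])

fx, fy = K[0, 0], K[1, 1]
cx, cy = K[0, 2], K[1, 2]

# If the intrinsics were computed at a larger capture resolution than the
# 900x1600 video given to OpenVSLAM (Camera.cols x Camera.rows), the values
# must be scaled down; otherwise cx/cy can land outside the image.
calib_w, calib_h = 3024, 4032   # hypothetical resolution the intrinsics refer to
video_w, video_h = 900, 1600    # Camera.cols, Camera.rows from the .yaml above

sx, sy = video_w / calib_w, video_h / calib_h
print("scaled:", fx * sx, fy * sy, cx * sx, cy * sy)
```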

However, we can't get OpenVSLAM's video SLAM to work normally. Does anyone know how to set up the camera model on iOS?

skypu3 · Jul 17 '19

Would you set Camera.cols and Camera.rows properly in the configuration?

shinsumicco · Jul 18 '19

@shinsumicco, Camera.cols and Camera.rows are the video resolution, right?

skypu3 · Jul 24 '19

Hi, did you get OpenVSLAM to run on an iPhone, with localization? If so, could you briefly describe your toolset and pipeline, and how you managed to get it running on iOS?

Best regards,
Lukas

lukasrandom · May 06 '20

Hi @skypu3, did you just use this example (https://github.com/thorikawa/camera-calibration-ios)? I tried to run the app, but it does not show the values you indicated.

andrewcccc · Jul 13 '20

@andrewcccc

I did it this way: https://github.com/xdspacelab/openvslam/issues/368#issuecomment-649209592

(It was not with an iPhone, but the source doesn't matter, since all you need is some frames/pictures.)

mirellameelo · Jul 13 '20

Hi @mirellameelo. Thanks for the response! I am trying that with an iPad. However, when I run the app, the output in Xcode just shows "not found". I am not sure how to get the camera_parameter.yml with all these camera values. Do you have a moment to explain?

Best regards,

andrewcccc · Jul 13 '20

@andrewcccc Sure. You can find a description of what I did in the comment https://github.com/xdspacelab/openvslam/issues/368#issuecomment-649209592

Let me know if I wasn't clear on any of these steps.

mirellameelo · Jul 13 '20

@mirellameelo, thanks for the response! I will dig in a bit more. But does the example code apply to Swift as well? It seems to be written for Python and C++ only.

andrewcccc · Jul 13 '20

@andrewcccc You want to get your camera parameters and then build your .yaml file, am I right? If yes, print a chessboard and record a video of it while moving around, always keeping the entire board in view. Then bring the video onto your PC. Is that possible? If yes, read the video with the OpenCV library and save some of its frames; 15-20 frames from different positions should be enough. Note: you can do it with pictures instead, as long as they have the same resolution as the video. But from what I remember, Apple is quite restrictive about camera configurations, so using the video is safer.
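A minimal sketch of that frame-extraction step, assuming OpenCV is installed (opencv-python) and the chessboard video has already been copied to the PC; the file and folder names are placeholders:

```python
# Read the chessboard video and save roughly 20 evenly spaced frames
# for later calibration. "chessboard.mp4" and "calib_frames" are placeholders.
import os
import cv2

video_path = "chessboard.mp4"
out_dir = "calib_frames"
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
step = max(total // 20, 1)  # aim for about 20 saved frames

idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % step == 0:
        cv2.imwrite(os.path.join(out_dir, f"frame_{saved:03d}.png"), frame)
        saved += 1
    idx += 1
cap.release()
print(f"saved {saved} frames to {out_dir}")
```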

Now, with the frames/pictures of the chessboard in different positions, you can follow the tutorial I mentioned and get the parameters. Or do you need this to be developed in Swift? If so, then I have no idea how to do it, sorry.

mirellameelo · Jul 13 '20

@mirellameelo, I tried to do the same task using an iPhone 6 camera. I recorded a video, converted it into images, and when I ran the code provided at https://www.learnopencv.com/camera-calibration-using-opencv/, cv2.findChessboardCorners returned false and a null value, which basically means the code couldn't detect the chessboard. I don't know whether my dataset is unsuitable for this code or something else isn't working. Can you help me out?

I'm attaching an RGB color image from my dataset. I've even converted the images to grayscale, and cv2.findChessboardCorners still returns false and a null value. [attached image: index]
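For what it's worth, one frequent reason cv2.findChessboardCorners returns false is that the pattern size passed in counts squares rather than inner corners, or that the board is partly cut off or blurred in the frame. A minimal check on a single image is sketched below; the 9x6 inner-corner pattern and the file path are assumptions, so adjust them to your board and dataset:

```python
# Try chessboard detection on one frame. pattern_size must be the number of
# INNER corners per row and column (a 10x7-square board has 9x6 inner corners).
import cv2

pattern_size = (9, 6)                            # hypothetical board
img = cv2.imread("calib_frames/frame_000.png")   # placeholder path
assert img is not None, "image not found"
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

found, corners = cv2.findChessboardCorners(
    gray, pattern_size,
    flags=cv2.CALIB_CB_ADAPTIVE_THRESH | cv2.CALIB_CB_NORMALIZE_IMAGE)

print("found:", found)
if found:
    # Draw the detected corners to visually confirm the pattern size is right.
    vis = cv2.drawChessboardCorners(img.copy(), pattern_size, corners, found)
    cv2.imwrite("detected.png", vis)
```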

ma2b0043 · Oct 21 '20

@mirellameelo, I also wanted to ask: is there a way to automatically generate a config.yaml file at runtime, using a normal video of 3D space rather than a 2D object like this checkerboard?

ma2b0043 · Oct 21 '20