Feature Request: Provide Camera Intrinsics
ARCore doesn't seem to provide a camera intrinsics matrix. Tango and ARKit both provide intrinsic and distortion parameters, and all photogrammetry or marker-based applications require camera intrinsics. If ARCore does provide them, how can we access them?
This is definitely on our radar. For now you can back out a pinhole model from the projection matrix and transformed UVs, but we hope to provide a directly-defined pinhole model with distortion parameters in the future.
@nbsrujan or anyone at google, is this also on your radar for the ARCore Unity SDK and the other platforms as well?
@eisenbruch I've asked and we expect the Unity and Unreal engine integrations to offer feature parity on this topic.
As far as I understand it, using the projection matrix to derive a pinhole model is problematic, since ARCore adjusts it for crop before returning it to the app. This effectively scales one of the axes, and it cannot be corrected because the app doesn't know the actual values of fx and fy. Am I missing something (and if so, could you please provide a hint)?
@andrdmi That's why I mentioned "and transformed UVs". Frame.transformDisplayUvCoords() lets you discover the current crop factor.
Note: transformDisplayUvCoords may not use the entire 0-1 texture range, even when the texture is not cropped. Some Android camera drivers always generate a square texture, for example.
@inio Indeed, thanks!
@inio, @andrdmi The projection matrix gets us to normalized device coordinates. Could you please explain how we can use transformDisplayUvCoords to transform normalized device coordinates into window coordinates? I am confused because the output of transformDisplayUvCoords seems to be in the 0-1 range, whereas I was expecting pixel coordinates. Thank you!
@namnov Apologies for the super pseudocodey logic below, but hopefully it's enough to follow:
First, call transformDisplayUvCoords with the point set (0,0), (1,0), (0,1) and call the output point set (a,b), (c,-), (-,d), where - is a don't-care. From this you can form the matrix M_uv_disp:
c-a 0 0 a
0 d-b 0 b
0 0 1 0
0 0 0 1
which transforms normalized display coordinates into UV coordinates.
Now you just need to compute M_uv_camera = M_uv_disp * scale(0.5) * translate(1, 1, 0) * proj_matrix. The scale and translate are needed to convert clip-space coordinates (-1, 1) into normalized display coordinates (0, 1).
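If it helps, here's a rough Java sketch of the recipe above. It is untested; the method name is made up, and the near/far planes passed to getProjectionMatrix are arbitrary (they don't affect the x/y intrinsics):

```java
import android.opengl.Matrix;
import com.google.ar.core.Camera;
import com.google.ar.core.Frame;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

static float[] uvFromClipMatrix(Frame frame, Camera camera) {
  // Probe the display->UV transform with the corners (0,0), (1,0), (0,1).
  FloatBuffer in = ByteBuffer.allocateDirect(6 * 4)
      .order(ByteOrder.nativeOrder()).asFloatBuffer();
  in.put(new float[] {0f, 0f, 1f, 0f, 0f, 1f}).rewind();
  FloatBuffer out = ByteBuffer.allocateDirect(6 * 4)
      .order(ByteOrder.nativeOrder()).asFloatBuffer();
  frame.transformDisplayUvCoords(in, out);
  float a = out.get(0), b = out.get(1);  // image of (0,0)
  float c = out.get(2);                  // x of the image of (1,0)
  float d = out.get(5);                  // y of the image of (0,1)

  // M_uv_disp from the matrix above, in column-major order
  // (the convention android.opengl.Matrix uses).
  float[] mUvDisp = {
      c - a, 0f,    0f, 0f,   // column 0
      0f,    d - b, 0f, 0f,   // column 1
      0f,    0f,    1f, 0f,   // column 2
      a,     b,     0f, 1f};  // column 3

  // scale(0.5) * translate(1,1,0): clip space [-1,1] -> display [0,1].
  float[] clipToDisp = new float[16];
  Matrix.setIdentityM(clipToDisp, 0);
  Matrix.scaleM(clipToDisp, 0, 0.5f, 0.5f, 1f);
  Matrix.translateM(clipToDisp, 0, 1f, 1f, 0f);

  float[] proj = new float[16];
  camera.getProjectionMatrix(proj, 0, 0.1f, 100f);  // near/far arbitrary here

  // M_uv_camera = M_uv_disp * clipToDisp * proj
  float[] tmp = new float[16];
  Matrix.multiplyMM(tmp, 0, clipToDisp, 0, proj, 0);
  float[] mUvCamera = new float[16];
  Matrix.multiplyMM(mUvCamera, 0, mUvDisp, 0, tmp, 0);
  return mUvCamera;
}
```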
@inio Thanks!
Seconding this. If ARCore could just give us the calibration file it is using, that would be nice.
I know it's an old issue, but I think @inio's example is incomplete for cases where the screen is rotated. For that, you need the off-axis elements too, using "g" and "h" instead of the don't-care values.
So given the outputs (a,b), (c,g), (h,d), the matrix would be something like this:
c-a h-a 0 a
g-b d-b 0 b
0 0 1 0
0 0 0 1
This is untested and may be in the wrong order; I just wanted to follow up in case it helps someone.
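Relative to the Java sketch earlier in the thread, this would only change how M_uv_disp is built (equally untested; `out` is the probe-output buffer from that sketch):

```java
// Probe outputs: (0,0)->(a,b), (1,0)->(c,g), (0,1)->(h,d).
float a = out.get(0), b = out.get(1);
float c = out.get(2), g = out.get(3);
float h = out.get(4), d = out.get(5);

// M_uv_disp with the off-axis terms kept, column-major:
float[] mUvDisp = {
    c - a, g - b, 0f, 0f,   // column 0
    h - a, d - b, 0f, 0f,   // column 1
    0f,    0f,    1f, 0f,   // column 2
    a,     b,     0f, 1f};  // column 3
```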
Trying a few things inspired by the first step described here, I was able to find the intrinsic parameters for the supported devices in the ARCore APKs. You need to get apktool and run this in the same folder where you downloaded the ARCore APK (v1.2 in this example):
apktool -q d -s -r ARCore_1_2.apk -o ARCore_1_2
This will create a folder called ARCore_1_2. Then go to ./lib/arm64-v8a and unpack libdevice_profile_loader.so (with 7zip, for instance) into a new folder for convenience. Then open the file .rodata in the newly created folder with a text editor. Most of the file's content is formatted as XML; just look for the name of the phone you're working with and you'll find the intrinsic parameters within the <camera> tag. There is also some IMU calibration data and extrinsics, but it's not clear how they're used.
I haven't tested the parameters myself, but will do so in the next few days...
@inio I found the function you mentioned in https://github.com/google-ar/arcore-android-sdk/blob/master/samples/hello_ar_java/app/src/main/java/com/google/ar/core/examples/java/common/rendering/BackgroundRenderer.java#L133. However, c-a and d-b are 0 or 1e-8 when I run the code on a Google Pixel. Is that normal?
ARCore 1.3 adds access to a simple pinhole model for both the GPU texture and the CPU image. It does not include distortion parameters yet, so I'm leaving this open.
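For anyone landing here, a minimal sketch of reading those (assuming ARCore 1.3+ and a frame that is tracking; the method name is made up):

```java
import com.google.ar.core.Camera;
import com.google.ar.core.CameraIntrinsics;
import com.google.ar.core.Frame;

// Read the pinhole parameters for the CPU image;
// use getTextureIntrinsics() instead for the GPU texture.
static void logIntrinsics(Frame frame) {
  Camera camera = frame.getCamera();
  CameraIntrinsics intrinsics = camera.getImageIntrinsics();
  float[] focal = intrinsics.getFocalLength();        // {fx, fy} in pixels
  float[] principal = intrinsics.getPrincipalPoint(); // {cx, cy} in pixels
  int[] dims = intrinsics.getImageDimensions();       // {width, height}
  android.util.Log.d("Intrinsics", "fx=" + focal[0] + " fy=" + focal[1]
      + " cx=" + principal[0] + " cy=" + principal[1]
      + " size=" + dims[0] + "x" + dims[1]);
}
```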
@rmonroy84 Do you know how to use the IMU calibration from ARCore?
IMU↔camera extrinsics are now available (once tracking starts) by comparing Frame.getAndroidSensorPose and Camera.getPose (but note bug #535).
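For example (a sketch; only meaningful once the tracking state is TRACKING):

```java
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;

// Both poses are expressed in ARCore's world frame, so
// sensor_T_camera = inverse(world_T_sensor) * world_T_camera.
Pose worldFromSensor = frame.getAndroidSensorPose();
Pose worldFromCamera = frame.getCamera().getPose();
Pose sensorFromCamera = worldFromSensor.inverse().compose(worldFromCamera);
```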
@gao-ouyang I've only used the distortion coefficients and intrinsics for the camera. In the file I mentioned here, there are some tags related to IMU intrinsics (b_w_b_a, intrinsics, gyro_noise_sigma, gyro_bias_sigma, accel_noise_sigma, accel_bias_sigma), but I wouldn't know how to use them.
I believe this issue can be closed now since the camera intrinsics are now available. Please open a new issue if you still have questions about how to use these APIs.
Reopening, as the original request for distortion parameters has not been resolved.
@inio Are distortion parameters obtainable now?
@inio Still relevant in 2020.
This may be a linked issue with the current method via ARCore: https://github.com/google-ar/arcore-android-sdk/issues/836
@gblikas Hello, I went ahead in my case and assumed the CPU image is undistorted. Are you using that same assumption as well?
@alexs7 No, I am not. For my application, that is not a good assumption to make. My hope was that ARCore itself is undistorting the image in its API and exposing that; however, everything I read implies that ARCore doesn't manipulate the texture (which it shouldn't).
However, I am not sure why exposing distortion coefficients has been put off for so long. It's such an essential feature in computer vision that it makes little sense for it not to be one of the first things surfaced.
@alexs7 It also seems like the arcore-for-all project was accepted with zero consideration for the accuracy of the AR on "all" devices. There is also no warning about this, other than the visual inaccuracies.
@gblikas I am working on localization against an offline map, and I am using the CPU image, which is 480 by 640 pixels. The reason I didn't use the texture is that sending a 1080 by 1920 image to a server caused some delays.
I am not sure, but is this something you might find relevant? https://developers.google.com/ar/reference/java/arcore/reference/com/google/ar/core/ImageMetadata#LENS_RADIAL_DISTORTION
I remember looking at distortion as well and finding that, but I couldn't get my head around those LENS_RADIAL_DISTORTION params. If I understood it right, those apply to the actual camera frame, but in ARCore you get a CPU image or a texture...
@alexs7 I tried the LENS_RADIAL_DISTORTION ImageMetadata accessor way back when it first came out. At the time, I was only ever receiving 0's from it; has this changed?
@gblikas It's been ages; I don't even remember trying it, to be honest.
Hi, I'm new to ARCore and I want to know how to extract the camera's intrinsic parameters. I tried using LENS_INTRINSIC_CALIBRATION, but that just returned an array of zeros. I also tried camera.getImageIntrinsics() to extract the intrinsics of the CPU image, but it does not provide the axis skew parameter, which my application requires. How do I extract this parameter?
I'm using CameraManager.getCameraCharacteristics(), but it returns valid lens characteristics only for Pixel devices...
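For reference, this is roughly what that looks like with plain Camera2 (a sketch, not ARCore-specific; LENS_DISTORTION needs API 28+, LENS_INTRINSIC_CALIBRATION API 23+, and either key may come back null or all zeros on devices that don't publish calibration):

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;

// Query lens calibration for every camera on the device.
// LENS_INTRINSIC_CALIBRATION is {fx, fy, cx, cy, s} (s = axis skew);
// LENS_DISTORTION holds five distortion coefficients.
static void dumpLensCalibration(Context context) throws CameraAccessException {
  CameraManager manager =
      (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
  for (String id : manager.getCameraIdList()) {
    CameraCharacteristics chars = manager.getCameraCharacteristics(id);
    float[] intrinsics = chars.get(CameraCharacteristics.LENS_INTRINSIC_CALIBRATION);
    float[] distortion = chars.get(CameraCharacteristics.LENS_DISTORTION);
    // Either array may be null when the driver doesn't provide calibration.
    android.util.Log.d("LensCal", "camera " + id
        + " intrinsics=" + java.util.Arrays.toString(intrinsics)
        + " distortion=" + java.util.Arrays.toString(distortion));
  }
}
```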
@devbridie, any update on this? LENS_RADIAL_DISTORTION is not working, and there is no clue as to how to access the distortion parameters through the ARCore API. This would be a very useful feature, since the texture images returned by ARCore are highly distorted (at close distances).