Mobile-SDK-Android-V5
KeyThermalSpotMetersurePoint with offset
Drone: Mavic 3 Thermal
SDK Version: 5.8.0
I am attempting to use CameraKey.KeyThermalSpotMetersurePoint. I understand how to use the key itself (and have it working), but I'm unsure how to properly account for the margins around the FPV feed caused by the aspect ratio.
Starting in 5.8.0, the camera feed is handled by MediaDataCenter.getInstance().cameraStreamManager?.putCameraStreamSurface(). I assume this lets the SDK handle any video-feed resizing caused by the camera's aspect ratio, which means I can only make a "best guess" at how to size the "interaction" view.
How can I accurately find the exact offset values of the feed so that I can use CameraKey.KeyThermalSpotMetersurePoint correctly?
Screenshot for clarity (from the sample app; the red notes/marks are my post-capture edits)
Agent comment from yating.liao in Zendesk ticket #103739:
The following code shows how the SDK internally adjusts the camera image size when the ScaleType is CENTER_INSIDE; the core logic is an orthographic projection.
You can use this code to determine which area of the screen the camera image will occupy.
    public static void fillProjectionMatrix(
            @NonNull float[] matrix, // length is 16
            int frameWidth, int frameHeight,
            int surfaceWidth, int surfaceHeight,
            @NonNull ICameraStreamManager.ScaleType scaleType) {
        if (matrix.length < 16) {
            throw new IllegalArgumentException("The length of matrix must be at least 16");
        }
        float aspectFrame = frameWidth / (float) frameHeight;
        float aspectSurface = surfaceWidth / (float) surfaceHeight;
        float left = -1;
        float top = 1;
        switch (scaleType) {
            case CENTER_CROP:
                if (aspectFrame > aspectSurface) {
                    left = -aspectSurface / aspectFrame;
                } else {
                    top = 1 / aspectSurface * aspectFrame;
                }
                break;
            case CENTER_INSIDE:
                if (aspectFrame > aspectSurface) {
                    top = 1 / aspectSurface * aspectFrame;
                } else {
                    left = -aspectSurface / aspectFrame;
                }
                break;
            default:
                break;
        }
        Matrix.orthoM(matrix, 0, left, -left, -top, top, 1, -1);
    }
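Going the other way, a touch point on the surface can be mapped back to a point inside the video frame by inverting the CENTER_INSIDE branch above. The sketch below is not SDK code: the class and method names are my own, and it assumes the spot-metering point is expected as coordinates normalized to [0, 1] within the video frame.

```java
// Illustrative helper (not part of the DJI SDK): inverts the CENTER_INSIDE
// letterboxing to map a touch on the surface to normalized frame coordinates.
public final class CameraStreamCoordinateMapper {

    /**
     * @return {x, y} normalized to the video frame in [0, 1], or null if the
     *         touch landed in a letterbox/pillarbox margin outside the video.
     */
    public static double[] surfaceToFrameNormalized(
            double touchX, double touchY,
            int frameWidth, int frameHeight,
            int surfaceWidth, int surfaceHeight) {
        double aspectFrame = frameWidth / (double) frameHeight;
        double aspectSurface = surfaceWidth / (double) surfaceHeight;

        // Size of the area the video actually occupies on the surface.
        double displayedWidth;
        double displayedHeight;
        if (aspectFrame > aspectSurface) {
            // Video is wider than the surface: fills the width,
            // letterbox bars above and below.
            displayedWidth = surfaceWidth;
            displayedHeight = surfaceWidth / aspectFrame;
        } else {
            // Video is taller: fills the height, pillarbox bars at the sides.
            displayedHeight = surfaceHeight;
            displayedWidth = surfaceHeight * aspectFrame;
        }
        double marginX = (surfaceWidth - displayedWidth) / 2.0;
        double marginY = (surfaceHeight - displayedHeight) / 2.0;

        double x = (touchX - marginX) / displayedWidth;
        double y = (touchY - marginY) / displayedHeight;
        if (x < 0 || x > 1 || y < 0 || y > 1) {
            return null; // touch hit a margin, not the video itself
        }
        return new double[] {x, y};
    }
}
```

For example, with a 1280x720 frame displayed CENTER_INSIDE on a 1000x1000 surface, the video occupies a 1000x562.5 band centered vertically, so a touch at (500, 500) maps to (0.5, 0.5) and a touch at (500, 100) falls in the top margin.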
I understand the values of surfaceWidth and surfaceHeight, but where do the values of frameWidth and frameHeight come from?
Agent comment from yating.liao in Zendesk ticket #103739:
A callback similar to ReceiveStreamListener pushes the width and height of the video frames. Unless the camera switches camera modes or video sources, the frame width and height usually remain unchanged.
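As a concrete illustration of that callback pattern, the sketch below just caches the latest frame size the stream pushes. The DJI registration shown in the comment (addReceiveStreamListener with a StreamInfo carrying the frame size) is my assumption of the v5 API surface; only the plain cache is spelled out.

```java
// Sketch: cache the latest frame size pushed by the stream callback, so it
// can be fed into the projection/offset math whenever a touch arrives.
public final class FrameSizeCache {
    private volatile int frameWidth = -1;
    private volatile int frameHeight = -1;

    // In the real app, call this from the stream listener, e.g. (assumed API):
    // MediaDataCenter.getInstance().getCameraStreamManager()
    //     .addReceiveStreamListener(ComponentIndexType.LEFT_OR_MAIN,
    //         (data, offset, length, info) ->
    //             onFrame(info.getWidth(), info.getHeight()));
    public void onFrame(int width, int height) {
        frameWidth = width;
        frameHeight = height;
    }

    public boolean hasSize() {
        return frameWidth > 0 && frameHeight > 0;
    }

    public int getFrameWidth() {
        return frameWidth;
    }

    public int getFrameHeight() {
        return frameHeight;
    }
}
```

Since the agent notes the size rarely changes mid-session, caching the most recent value and re-reading it on each touch is usually enough.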
Thank you.