GearVRf
Obtaining Video frame being rendered onto the mesh
Hello. I am currently working on an application where I will need to process video frames being rendered onto a mesh. I want to capture the render in the user's FOV and then process it for further information. Is there a function in the framework to obtain the data (image/render) in the user's FOV?
I have tried using the captureScreenRight(GVRScreenshotCallback) function of GVRContext, but all I am getting from it is an all-black image. This is how I am calling it (saveImage is a self-written function to save bitmaps as JPEGs on the device):
mGVRContext.captureScreenRight(new GVRScreenshotCallback() {
    @Override
    public void onScreenCaptured(Bitmap bitmap) {
        saveImage(bitmap);
    }
});
EDIT: I have tried all three: captureScreenCenter, captureScreenRight and captureScreenLeft.
@sam16222 We use this functionality in our automated tests and to my knowledge it works. Will double-check. Are you using the MediaPlayer or the ExoPlayer for video playback? Something else? What is your phone model and the Android version? Thanks.
I am using MediaPlayer for video playback. Just to be clear, I don't want the entire frame. I just want the part of the frame currently visible to the user (it's a 360 video playing on a spherical mesh). Phone Model is Galaxy S7 Edge SM-G935V, Android version 7.0.
Do you have any sample available where such a thing is done? It would be really helpful.
I'm not 100% sure I understand what you're looking for; let me try rephrasing it and see if I understand:
You're looking for which portion of the video texture (the video being played back is actually a texture that is mapped onto the sphere) is within the viewer's view frustum?
You can get the view direction by calling the GVRCameraRig's getLookAt() method. You can get the intersection point of where the user's gaze intersects the mesh using the picking routines (@NolaDonato may have to elaborate here).
Using either or both of the above, you can probably do some math and figure out the corners of where the user's FOV intersects the sphere.
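For example, something along these lines (a rough sketch; the exact picking API, e.g. GVRPicker.pickObjects and getHitLocation, is from memory and may differ between GVRf versions):

import org.gearvrf.GVRCameraRig;
import org.gearvrf.GVRContext;
import org.gearvrf.GVRPicker;
import org.gearvrf.GVRScene;

void queryGaze(GVRContext gvrContext) {
    GVRScene scene = gvrContext.getMainScene();
    GVRCameraRig rig = scene.getMainCameraRig();

    // World-space direction the user is currently looking in.
    float[] lookAt = rig.getLookAt();

    // Cast a ray from the camera origin along the view direction and see what it hits.
    GVRPicker.GVRPickedObject[] picked = GVRPicker.pickObjects(
            scene, 0f, 0f, 0f, lookAt[0], lookAt[1], lookAt[2]);
    if (picked.length > 0) {
        // The intersection point of the gaze ray with the sphere mesh.
        float[] hit = picked[0].getHitLocation();
    }
}

Repeating that for rays through the corners of the FOV would give you the patch of the sphere (and hence of the video texture) the user is seeing.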
Does that help at all?
I figured as much. But due to my lack of experience dealing with textures via GVRf, I was hoping for a little more detail on how to go about it, like extracting the texture and processing it as a bitmap. Do you have any idea about captureScreen?
Sorry, I think I should've read your initial post first :). I was thinking you wanted to do some processing along the lines of something that could be done in a fragment shader. In that case, you could do something similar to the following: https://github.com/Samsung/GearVRf/issues/1047
If you're looking to save the image, you can capture an eye buffer (an eye buffer is a buffer that the left/right eye is drawn to and is then passed to the Oculus or Daydream backend for warping, distortion correction and display). It looks like you found the API for that, and it should work since, as Mihail said, we use it in our automated tests. Do you get a non-black image if you are not displaying a video and instead display something else? Perhaps don't draw the sphere at all and instead set the camera's background color to blue? I'm half wondering if DRM for the video is kicking in and preventing you from reading back the video image.
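For the background-color test, something like this should do (assuming the usual GVRCamera.setBackgroundColor API; adjust to your GVRf version):

import android.graphics.Color;
import org.gearvrf.GVRCameraRig;

GVRCameraRig rig = mGVRContext.getMainScene().getMainCameraRig();
// Clear both eye buffers to a solid color so a captured screenshot should come back blue.
rig.getLeftCamera().setBackgroundColor(Color.BLUE);
rig.getRightCamera().setBackgroundColor(Color.BLUE);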
I tried removing the mesh and setting the background blue, but it's still giving me an all-black image. Am I doing anything wrong? This is my saveImage code:

private static void saveImage(Bitmap finalBitmap) {
    String root = Environment.getExternalStorageDirectory().getAbsolutePath();
    File myDir = new File(root + "/saved_images");
    myDir.mkdirs();
    String timeStamp = new SimpleDateFormat("ddMMyyyy_HHmm").format(new Date());
    String fname = "Image-" + timeStamp + ".jpg";
    File file = new File(myDir, fname);
    if (file.exists()) file.delete();
    try {
        FileOutputStream out = new FileOutputStream(file);
        finalBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.flush();
        out.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
The screenCapture function should give me the eye buffer, right?
I'll explain my use case again; that way you might be able to suggest the best approach. I want to capture the FOV of the user from a 360 video and process the image to obtain data from it. I am aware of how to do that from a real-time camera feed, but obtaining it from a 360 video is unexplored territory for me. I am going through the framework to find the appropriate way to do that, and screenCapture seems like the way to go as of now. Please do suggest any better alternatives, keeping in mind I have just started GVRf development :)
@sam16222 We have a sample that captures screenshots amongst other things. When I had it capture screenshots it produced black results for me too. Will look into it. The sample is GearVRf-Demos/old/gvr-sample.
@sam16222 For now, in your gvr.xml file specify multiSamples="1", although you may not like how things look.
In Java_org_gearvrf_OvrViewManager_readRenderResultNative, glBlitFramebuffer returns GL_INVALID_OPERATION otherwise. This will be investigated.
@sam16222 To use multiSamples>1 you can try https://github.com/Samsung/GearVRf/pull/1054
@liaxim Screenshots started working fine after changing multiSamples to 1. I am not quite sure I understand how to make it work with #1054. I am still using the dev snapshot and haven't updated the app to work with the on-disk framework, and thus cannot make changes to the code.
Can you explain why it doesn't work for >1?
Can you tell me how to obtain access to the eyebuffer being drawn onto the texture on each frame, before it is even drawn? That way I can make my own optimizations as per the use-case. Waiting for the screenshot callback would be a little slow when we would like to process each frame.
@sam16222 One of these days the fix will be reviewed and merged. Then we will update the development snapshot so you'll get the fix automatically.
There is an (unnecessary) glBlitFramebuffer operation performed when taking a screenshot. If multiSamples>1, it fails because "GL_INVALID_OPERATION is generated if the value of GL_SAMPLE_BUFFERS for the draw buffer is greater than zero." Even after I ensured the draw framebuffer has 0 sample buffers, it would still fail with GL_INVALID_OPERATION, most likely due to a format mismatch between the read and the draw framebuffers. If we just use glReadPixels without the intermediate framebuffer, it works just fine.
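For reference, a direct blocking readback into a Bitmap looks roughly like this (an illustrative sketch, not the actual framework code):

import java.nio.ByteBuffer;
import android.graphics.Bitmap;
import android.opengl.GLES30;

Bitmap readCurrentFramebuffer(int width, int height) {
    ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
    // Blocks until the GPU has finished rendering the frame being read back.
    GLES30.glReadPixels(0, 0, width, height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, pixels);
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(pixels);
    // Note: GL's origin is bottom-left, so the image may come out vertically flipped.
    return bitmap;
}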
Thanks! But isn't glReadPixels a little slow? In the future I plan to process every frame of the video, and using glReadPixels would lead to a significant lag in the application. Is there a better way to directly obtain the buffer?
Yes, our screen capturing uses the blocking variant. The better way would be to use a PBO (https://www.khronos.org/opengl/wiki/Pixel_Buffer_Object) in conjunction with glReadPixels.
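A sketch of the PBO approach, in case it helps (plain GLES30 calls, not a GVRf API; buffer sizes assume RGBA):

import java.nio.ByteBuffer;
import android.opengl.GLES30;

public class PboReader {
    private final int[] pbo = new int[1];
    private final int width;
    private final int height;

    public PboReader(int width, int height) {
        this.width = width;
        this.height = height;
        GLES30.glGenBuffers(1, pbo, 0);
        GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
        // Reserve room for one RGBA frame; GL_STREAM_READ hints at frequent readback.
        GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, width * height * 4, null,
                GLES30.GL_STREAM_READ);
        GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
    }

    // Start an asynchronous copy of the currently bound read framebuffer into the PBO.
    public void startRead() {
        GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
        // With a PBO bound, the last argument is a byte offset into the buffer,
        // so this call returns without waiting for the GPU.
        GLES30.glReadPixels(0, 0, width, height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
        GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
    }

    // Map the PBO a frame or two later, when the transfer has (hopefully) completed.
    public ByteBuffer finishRead() {
        GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
        ByteBuffer mapped = (ByteBuffer) GLES30.glMapBufferRange(
                GLES30.GL_PIXEL_PACK_BUFFER, 0, width * height * 4,
                GLES30.GL_MAP_READ_BIT);
        // Copy out before unmapping; the mapped buffer is invalid afterwards.
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
        pixels.put(mapped);
        pixels.rewind();
        GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
        GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
        return pixels;
    }
}

With two PBOs you can alternate: kick off the read for frame N while mapping the result of frame N-1, which hides most of the readback latency.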
The thing is, captureScreen gives a 1024x1024 bitmap with quite some pixelation, due to which the QR codes (the objects of interest) become unrecognizable. I am using a high-res 360 video which looks great via the HMD. I want a way to extract the buffer in its simplest form and then process it for information. Is there a way to obtain that?
Try increasing the eye buffer resolution. It is not going to be for free performance-wise but there will be a value that works for you. Modify gvr.xml like this:
<lens name="N4">
    <vr-app-settings ...>
        ...
        <eye-buffer-parms ...
            resolutionWidth="2048"
            resolutionHeight="2048"/>
        ...
    </vr-app-settings>
</lens>
The bitmap you get is identical to what you see in the HMD. You could take a screenshot via the phone and via the GVRF facility and compare them. Let us know if you see discrepancies.
Correct, for better fidelity you'll need to increase the eye buffer resolution. For faster readback, you'll need to use PBOs. We used to have some code, used by an internal project delivered a while back, that did that. I think we'll need to rearchitect it for a more general use case.
I can't afford any more performance lag. How do I go about working with PBOs? I need to tap into the framework and get the buffer before it is rendered onto the texture.
I will look into adapting our code that uses PBOs for use by the screen capturing facility. It might be easier to just reuse bits. Generalizing that code might be better done after the major Vulkan-related changes are in.
@sam16222 FYI, I will get to it, but I can't promise it will happen very soon.