
Capture-type texture resource

ggcrunchy opened this issue 2 years ago · 1 comment

This is a new type of texture resource.

We can create a texture of a certain size—typically the viewable content—and spawn "event" objects: rather than draw to the screen, these will capture its contents to the texture. We can then sample this texture for our own images and effects.

Events "happen" (rather than draw) according to where they are in the display hierarchy: back-to-front, first-to-last. The event should be found before an object that wants to use it.

Kan posted a nice example on Discord, based on a preview version.


It's implemented with glBlitFramebuffer, when available; otherwise glCopyTexSubImage2D.

Seems to pass muster on Mac (M1), Windows, Android (emulator and device), iPhone, and tvOS (emulators).


I piggybacked the new classes, TextureResourceCapture and TextureResourceCaptureAdapter, into the corresponding canvas texture files. This was only to avoid adding new files and updating projects / makefiles; it would be fine to migrate them into their own files, if desired.

I made GetParent() a public method of LuaLibDisplay so that the capture event object could also use it. (This is also in the adapter file.)


I added some ContentToScreen() variants that don't do rounding. These might also have applicability elsewhere, e.g. the display.capture() family.

As mentioned in the next section, there is still a tiny bit of shimmer if capturing and immediately redisplaying in-place, possibly just a pixel in either dimension. It might be possible to solve even this, but didn't seem pressing.


Here's a sample:

capture.zip

The part in the green outline may be dragged around. This will drag the outline, the capture event object, and a circle that immediately redraws the captured contents: apart from a little shimmer, it resembles the background, as it should.

The captured contents are also drawn to a couple of outlined rects along the right side.

Further, the rounded rect on the left captures one of those capture-displaying rects along with part of its neighborhood. The event object is later in the hierarchy than the green outline, so it will show that too, if it is in view.

Finally, the part up top shows a different area; however, its capture texture is the same as that for the rounded rect. This demonstrates that we can capture more than once, using separate event objects: an image will use the current result. (Due to its sequencing, this capture can also pick up the green outline.)
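
To make the sequencing concrete, here is a rough sketch of that multiple-capture setup, using the newCaptureEvent API documented below; the sizes, positions, and object names are illustrative assumptions rather than the sample's actual code:

```lua
-- One capture texture shared by two capture events at different points in the hierarchy.
local tex = graphics.newTexture{ type = "capture", width = 256, height = 256 }

-- First capture event: when "drawn", it grabs the content region around (200, 200).
local eventA = tex:newCaptureEvent( 200, 200 )

-- Sequenced after eventA but before eventB, so it displays the first capture.
local imageA = display.newImageRect( tex.filename, tex.baseDir, 128, 128 )
imageA.x, imageA.y = 600, 200

-- Second capture event: overwrites the texture with a different region.
local eventB = tex:newCaptureEvent( 200, 500 )

-- Sequenced after eventB, so it picks up the most recent capture instead.
local imageB = display.newImageRect( tex.filename, tex.baseDir, 128, 128 )
imageB.x, imageB.y = 600, 500
```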


I also removed a few // STEVE CHANGE things that snuck into past PRs.


Some rough docs:

* system.getInfo, new option:

“gpuSupportsScaledCaptures”:

	If true, capture events are able to scale the region they capture. (TODO: at the moment this is disabled.)

and

* graphics.newTexture, new option:

	If you provide “capture” as the type, you will get a capture-type texture.

	This requires “width” and “height” parameters: these are the content dimensions, starting from the upper-left corner, that you might want to capture. (Most usefully, this will be the viewable content width and height.)

	These properties may be read, as well as "pixelWidth" and "pixelHeight", i.e. the true texture dimensions. The following method is also provided:

	* newCaptureEvent( [group, ]x, y ): this is called like display.*(), except invoked as a method (colon syntax).

	The result is a display object, in the sense that it resides in the display hierarchy and so is “drawn” at the expected time: the region of the content it occupies—centered on the event’s position, with the capture texture’s dimensions—is sent to the capture texture. (If offscreen, this is a no-op; if partially onscreen, only visible parts are updated.)

	If scaled captures are supported, the xScale and yScale may be used to scale the capture dimensions. (TODO: at the moment this is disabled.)

	Paints may then sample the capture texture to use the results. In particular, those sequenced after a capture event will pick up the most recent update.

	The event object is neither culled nor responsive to hit tests.

	GOTCHA: From Solar’s point of view, the capture will be upside-down.
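
To tie the rough docs together, a minimal usage sketch; the parameter table, method, and properties follow the docs above, while the flipped-fill workaround and the specific sizes are assumptions:

```lua
local tex = graphics.newTexture{
    type = "capture",
    width = display.viewableContentWidth,   -- content region to capture,
    height = display.viewableContentHeight  -- measured from the upper-left corner
}

print( tex.width, tex.height )            -- the requested content dimensions
print( tex.pixelWidth, tex.pixelHeight )  -- the true texture dimensions

-- The capture event lives in the display hierarchy: whatever is drawn before it
-- gets captured; anything sequenced after it can sample the result.
local event = tex:newCaptureEvent( display.contentCenterX, display.contentCenterY )

-- Sample the capture texture like any other texture resource.
local image = display.newImageRect( tex.filename, tex.baseDir, 200, 200 )
image.x, image.y = display.contentCenterX, display.contentCenterY

-- GOTCHA above: the capture is upside-down from Solar's point of view, so flip
-- the fill vertically (assumption: flipping the bitmap fill is enough here).
image.fill.scaleY = -1

-- Scaled captures, once enabled, would be gated on the new getInfo option.
if system.getInfo( "gpuSupportsScaledCaptures" ) then
    event.xScale, event.yScale = 0.5, 0.5
end
```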

About the disabled scaling:

On Mac, where I first tested everything, the view surface is created with multisampling enabled. Other platforms might do likewise.

Unfortunately, rescaled draws require that the target texture carry matching multisample settings. It seemed annoying to try to make the two sides agree, hence the disabling for now.

It did appear to work when the multisampling was stripped from the view surface configuration.


On Android, there seems to be a GL_EXT_blit_framebuffer_params extension. (My own phone had neither this nor the ANGLE and NV extensions I'm checking.) The only reference to this I've seen online was... in extension lists. sigh

It certainly sounds relevant, and I have guesses about what it might do, but probably will have to ask in a Khronos forum / server or something for actual details. At any rate, the glCopyTexSubImage2D approach seemed to run okay.

ggcrunchy · Mar 20 '22 02:03

I was discussing the Vulkan backend with somebody earlier and I just realized that HasFramebufferBlit() probably should not return the "can rescale captures" property: that makes sense in GL but I think in Vulkan you would forgo the FBOs and use the copy / blit image commands. I was hoping to submit a fix for that (with separate CanRescaleCaptures() methods) but my synchronization is a bit of a mess. 😄

Anyhow, they're fairly trivial changes and I have them at hand if needed.

ggcrunchy · Mar 20 '22 05:03