
Expose a new function for ViewTarget and MainTargetTextures

Open aojiaoxiaolinlin opened this issue 8 months ago • 11 comments

Objective

  • This PR makes the creation of ViewTarget public, allowing it to be used without being bound to a Camera entity. This enables multiple post-processing steps on a single texture, which is particularly useful when working with Render to Texture. When generating a large number of Camera entities, the FPS can drop significantly, causing lag.

Solution

  • Added a new function for ViewTarget and MainTargetTextures while keeping the visibility of their internal fields unchanged.
  • Exposed the pipeline cache ID field of ViewUpscalingPipeline.

Testing

  • The changes are minimal, and the CI should catch any issues. There were no errors when running locally.

Showcase

Here is an example showcasing the result:

In the animation, the glowing effect utilizes multiple post-processing steps on a single texture, with different parameters applied to each glowing part.

aojiaoxiaolinlin avatar Apr 27 '25 03:04 aojiaoxiaolinlin

Welcome, new contributor!

Please make sure you've read our contributing guide and we look forward to reviewing your pull request shortly ✨

github-actions[bot] avatar Apr 27 '25 03:04 github-actions[bot]

I'm not sure that I get the point of this. What's the difference between this, and implementing double buffering between two textures yourself?

JMS55 avatar Apr 27 '25 05:04 JMS55

I'm not sure that I get the point of this. What's the difference between this, and implementing double buffering between two textures yourself?

Hmm... there probably wouldn't be much difference. If I were to implement it myself, I'd basically have to copy ViewTarget anyway, haha.

aojiaoxiaolinlin avatar Apr 27 '25 05:04 aojiaoxiaolinlin

I saw your question on Discord: https://discord.com/channels/691052431525675048/1331533973133725696/1331813878333444240. For multiple post-processing effects, you just need to add nodes like in the Bevy example. As for applying post-processing to specific objects, I looked it up online — maybe you can use the stencil buffer for that?

Touma-Kazusa2 avatar May 28 '25 15:05 Touma-Kazusa2

I saw your question on Discord: https://discord.com/channels/691052431525675048/1331533973133725696/1331813878333444240. For multiple post-processing effects, you just need to add nodes like in the Bevy example.

I haven't fully solved the problem during our discussions on Discord. The current implementation of Flash filter rendering is based on how it's done in the Ruffle project, as shown in the RenderDoc captures below. If it's also possible to achieve this using the stencil buffer, I'd greatly appreciate any insights or guidance you could share.

[RenderDoc captures: render-01, render-02, render-03, render-04]

aojiaoxiaolinlin avatar May 28 '25 17:05 aojiaoxiaolinlin

Sorry for my lack of clarity earlier. What I meant was: I’m trying to understand how your code connects to your goal. Specifically, how does exposing the pipeline cache ID field of ViewUpscalingPipeline help enable multiple post-processing steps on a single texture and reduce the stuttering when using many cameras?

Maybe a small demo could help illustrate the effect?

From the image you showed, I think I now get what you're aiming for. The real goal is not applying multiple post-processing steps to regions of a single texture, but rather rendering multiple textures in real-time and using them as sprites, right? Let me know if I misunderstood.

If that’s the case, it might be helpful to state your goal more explicitly so others can better follow what you're trying to achieve.

Touma-Kazusa2 avatar May 29 '25 08:05 Touma-Kazusa2

First, a single shape needs to be constructed from multiple Mesh2d instances (each made from parsed vertex data), because a single visual element may consist of multiple materials. Then, the required texture size for applying filter effects to that shape is calculated. Since I need to perform multiple post-processing steps on the shape (using intermediate textures and not relying on a camera), I use ViewTarget to enable double-buffered rendering for that purpose. Finally, the resulting texture of this structure is composited onto the main render texture.
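The texture-sizing step described above can be sketched in plain Rust. The `filter_texture_size` helper and the pad-by-filter-radius rule are illustrative assumptions for this comment, not code from the PR or from Ruffle:

```rust
/// Hypothetical helper: compute the offscreen texture size needed to apply a
/// filter with the given radius to a shape's bounding box. Padding each side
/// by the radius is an assumed rule so the blur can bleed outward without
/// being clipped at the texture edge.
fn filter_texture_size(shape_w: u32, shape_h: u32, filter_radius: u32) -> (u32, u32) {
    (shape_w + 2 * filter_radius, shape_h + 2 * filter_radius)
}

fn main() {
    // A 100x50 shape with a glow blur radius of 8 needs a 116x66 texture.
    let (w, h) = filter_texture_size(100, 50, 8);
    println!("{}x{}", w, h); // prints "116x66"
}
```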

aojiaoxiaolinlin avatar May 29 '25 08:05 aojiaoxiaolinlin

So the goal of the code is to implement multiple post-processing steps? If that's the case, like I mentioned in my initial comment, you can simply add more nodes — view_target.post_process_write() already takes care of double buffering for you.
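For readers unfamiliar with the pattern, here is a minimal CPU-side sketch of the ping-pong scheme that `post_process_write()` implements on the GPU. The `PingPong` type and its `Vec<f32>` buffers are stand-ins for GPU textures, purely for illustration:

```rust
/// Two buffers; each pass reads from one (the "source") and writes to the
/// other (the "destination"), then the roles flip. This is the double
/// buffering that `ViewTarget::post_process_write()` handles for you.
struct PingPong {
    buffers: [Vec<f32>; 2],
    // Index of the buffer the next pass will read from.
    read: usize,
}

impl PingPong {
    fn new(initial: Vec<f32>) -> Self {
        let len = initial.len();
        Self { buffers: [initial, vec![0.0; len]], read: 0 }
    }

    /// Run one "post-processing pass": apply `f` to every texel of the
    /// source buffer, writing into the destination, then swap roles.
    fn pass(&mut self, f: impl Fn(f32) -> f32) {
        let write = 1 - self.read;
        for i in 0..self.buffers[0].len() {
            self.buffers[write][i] = f(self.buffers[self.read][i]);
        }
        self.read = write;
    }

    fn result(&self) -> &[f32] {
        &self.buffers[self.read]
    }
}

fn main() {
    let mut target = PingPong::new(vec![1.0, 2.0, 3.0]);
    target.pass(|x| x * 2.0); // first effect: double brightness
    target.pass(|x| x + 1.0); // second effect: add a constant
    assert_eq!(target.result(), &[3.0, 5.0, 7.0]);
}
```

Adding another post-processing node just means calling `pass` again: neither pass needs to know which physical buffer it is reading from.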

Touma-Kazusa2 avatar May 29 '25 09:05 Touma-Kazusa2

However, I'm currently unable to apply this process to a single texture only — I don't need to apply any post-processing to the entire main render target. Would it be possible to achieve this using the stencil buffer? The goal is to render many graphical effects within a single frame.

aojiaoxiaolinlin avatar May 29 '25 09:05 aojiaoxiaolinlin

I'm not sure — maybe. I don't have much experience with this kind of thing. Also, are the different positions possibly using the same post-processing steps but with different parameters? If so, that sounds like it could get a bit complicated...

Touma-Kazusa2 avatar May 29 '25 09:05 Touma-Kazusa2

Each shape requires a different processing pipeline. For example, the glow filter in Flash involves first rendering a blur effect, then compositing the result back onto the texture. In contrast, a color filter only needs to compute and apply a corrected color. Even when the processing steps are the same, the parameters can vary. What’s more, in Flash, these filter parameters can change every frame, making the system highly dynamic. Currently, I'm using a custom render graph to handle this functionality.
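The per-shape pipelines described above can be sketched as data-driven filter passes. The `Filter` enum, the 1-D `Vec<f32>` buffers, and the additive glow composite are simplified assumptions for illustration, not actual code from Ruffle or this PR:

```rust
/// Simplified Flash-style filters with per-frame parameters.
#[derive(Clone)]
enum Filter {
    /// Glow: blur the input, then composite (here, add) the blur back on top.
    Glow { radius: usize },
    /// Color: scale every value (a stand-in for a full color transform).
    Color { gain: f32 },
}

/// Box blur over a 1-D buffer, averaging a window of `radius` on each side.
fn box_blur(src: &[f32], radius: usize) -> Vec<f32> {
    (0..src.len())
        .map(|i| {
            let lo = i.saturating_sub(radius);
            let hi = (i + radius).min(src.len() - 1);
            let window = &src[lo..=hi];
            window.iter().sum::<f32>() / window.len() as f32
        })
        .collect()
}

fn apply(filter: &Filter, src: &[f32]) -> Vec<f32> {
    match filter {
        Filter::Glow { radius } => {
            let blurred = box_blur(src, *radius);
            // Composite the blur back onto the original texel values.
            src.iter().zip(&blurred).map(|(s, b)| s + b).collect()
        }
        Filter::Color { gain } => src.iter().map(|s| s * gain).collect(),
    }
}

fn main() {
    let shape = vec![0.0, 1.0, 0.0];
    // Parameters can change every frame; two frames with different gains.
    assert_eq!(apply(&Filter::Color { gain: 0.5 }, &shape), vec![0.0, 0.5, 0.0]);
    assert_eq!(apply(&Filter::Color { gain: 2.0 }, &shape), vec![0.0, 2.0, 0.0]);
    // Glow: blur with radius 1, then add back; the center gets brighter.
    let glow = apply(&Filter::Glow { radius: 1 }, &shape);
    assert!(glow[1] > shape[1]);
}
```

Because parameters live in plain data, rebuilding a shape's filter chain every frame is cheap; only the passes themselves run on the GPU.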

aojiaoxiaolinlin avatar May 29 '25 09:05 aojiaoxiaolinlin