Refactor high-level integration of VFX in the engine
- [ ] currently all particle components / emitters are updated (simulated) at the start of the frame, which means even off-screen particles are always simulated
- [ ] in order for the particle system to use a few camera properties (specifically, I think only its position, for sorting), it depends on the hacky way the forward renderer sets up `_activeCamera` - note that this works as expected for a single camera only. Ideally, we would create multiple mesh instances / meshes inside the emitter to allow per-camera sorting?
- [x] there is a comment related to this in the particle-emitter suggesting that, due to not having a camera during the first update (before the forward renderer sets `_activeCamera`), an incorrect shader might be compiled initially; this is fixed later when the camera changes. Fixed in https://github.com/playcanvas/engine/pull/6804
- [x] the emitter calls `material.getShaderVariant()` without any parameters, and so we don't have access to camera rendering settings (gamma, tone mapping) - Fixed in https://github.com/playcanvas/engine/pull/6804
- [ ] when a so-called `complex property` of the ParticleComponent is changed, it rebuilds the entire emitter from scratch. This is not particularly efficient, but what is worse is that if the same property is changed multiple times per frame, the emitter gets rebuilt each time. Ideally, the emitter's resources would only be destroyed at that point and lazily recreated inside `addTime` at the start of the next frame (see the sketch below). When the resources are destroyed, that includes the mesh instance, and since the component uses the mesh instance to set a few properties on it, a simple solution does not work. We could perhaps change the MeshInstance API to allow it to be created without a mesh, with the mesh supplied later. The emitter would then keep the meshInstance permanently, allowing the component to set its properties at any time.
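A minimal sketch of this lazy-rebuild idea, assuming the MeshInstance API change proposed above (so it is not runnable against the current engine); `_rebuildRequired`, `_destroyResources` and `_rebuildResources` are hypothetical names, not existing engine API:

```js
// Sketch only: defer emitter rebuilds to the next simulation step, so N
// complex-property changes per frame cost one rebuild instead of N.
class ParticleEmitter {
    constructor(material) {
        this._rebuildRequired = false;
        this._mesh = null;
        // Assumes the proposed MeshInstance change: created without a mesh,
        // with the mesh supplied later. The component can then hold a
        // permanent reference and set properties on it at any time.
        this.meshInstance = new MeshInstance(null, material);
    }

    // Called by ParticleComponent for every complex property change.
    // Cheap to call repeatedly: resources are destroyed once, the actual
    // rebuild is deferred.
    onComplexPropertyChanged() {
        if (!this._rebuildRequired) {
            this._destroyResources();  // free GPU resources once
            this._rebuildRequired = true;
        }
    }

    addTime(dt) {
        // Lazily recreate resources at the start of the next frame.
        if (this._rebuildRequired) {
            this._rebuildRequired = false;
            this._rebuildResources();            // recreates this._mesh etc.
            this.meshInstance.mesh = this._mesh; // late mesh assignment
        }
        // ... regular simulation step follows ...
    }
}
```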
Ideally:
- This should work similarly to skinned/morphed meshes and splats, where the forward renderer executes culling first and the expensive update only takes place for the visible visuals. This should at least be the case for procedural particles, where the bounds can be estimated.
- the user might need an option to either keep simulating off-screen emitters or disable their simulation entirely (see the sketch below)
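A rough sketch of how the post-cull update could be gated, assuming culling results are exposed the way they are for skinned meshes via the engine's internal `visibleThisFrame` flag on MeshInstance; `simulateOffscreen` is a hypothetical per-emitter option:

```js
// Sketch only: run the expensive simulation only for emitters whose mesh
// instance survived culling, mirroring the post-cull update of skinned and
// morphed meshes.
function updateParticleEmitters(emitters, dt) {
    for (const emitter of emitters) {
        if (emitter.meshInstance.visibleThisFrame || emitter.simulateOffscreen) {
            emitter.addTime(dt); // expensive step, visible emitters only
        }
        // Off-screen emitters are skipped. This requires culling against
        // estimated bounds, which works for procedural particles.
    }
}
```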
This issue reminds me a bit of Gaussian splats, which also have both global (shared) state and per-view state.
In the engine, meshes and materials are assumed identical in all camera views (except for the shader pass and matrices), and it would be nice if we had a formal way of creating and updating arbitrary per-view state (sketched below).
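As a sketch only, such a mechanism could look something like a per-camera state map owned by the visual; every name below is hypothetical, not engine API:

```js
// Sketch only: shared state stays on the instance; per-camera state is
// created on demand and updated just before each view renders.
class PerViewResource {
    constructor() {
        this.shared = {};             // simulation buffers, material, ...
        this._viewStates = new Map(); // camera -> per-view state
    }

    // Called by the renderer for each camera this resource is visible in.
    viewState(camera) {
        let state = this._viewStates.get(camera);
        if (!state) {
            state = { sortedIndices: null }; // e.g. per-camera sort order
            this._viewStates.set(camera, state);
        }
        return state;
    }

    // Per-view update hook, e.g. sort particles/splats by distance to
    // this camera's position.
    updateForView(camera) {
        const state = this.viewState(camera);
        // ... fill state.sortedIndices using the camera position ...
        return state;
    }
}
```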
Is there a way to tell PlayCanvas which is the "primary" camera in this instance? I have a use case that may require gsplat sorting in multiple cameras; however, only one of these cameras at a time will suffice in the short term.
The engine reports all cameras that the splat is visible for, and at some point, if the tech allows us, we would like to sort them per camera. At the moment the code simply picks the first camera, as the engine has no idea which camera is the main one. You could patch the engine and pick a camera based on its name or similar (a sketch follows the links below).
https://github.com/playcanvas/engine/blob/ac8a10f44e5abf2bdaa6c09c55f884198f457585/src/scene/gsplat/gsplat-instance.js#L219-L222
https://github.com/playcanvas/engine/blob/ac8a10f44e5abf2bdaa6c09c55f884198f457585/src/scene/gsplat/gsplat-instance.js#L179-L185
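For example, such a patch could look roughly like this; the `'MainCamera'` name and the exact shape of the `cameras` list are assumptions, so check the linked `gsplat-instance.js` code for the list the engine actually maintains per frame:

```js
// Sketch only: prefer the camera whose entity is named 'MainCamera' over
// "first reported", falling back to the current engine behaviour.
function pickSortCamera(cameras) {
    const primary = cameras.find((c) => c.node && c.node.name === 'MainCamera');
    return primary || cameras[0];
}
```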
The VFX could use the same system.
I thought that might be the case. Thanks for the heads up; I'll bear that in mind when the time comes. Our other use case is creating reflection probes: you can see the splats are broken in the reflection, but it's good enough for scene lighting.