Improve tile size detection
With the current design it is crucial that the autotiling and splitting work correctly and robustly. Currently the detected tile size seems to be too big in some cases.
This might be related to the following observation: the computed delta time is quite noisy, which causes the tile size to be updated very often. The new, improved tile size computation is also quite sensitive to this noise, so the tile sizes keep coming out differently each time. This probably also causes splitting to be omitted for a few frames.
A possible solution might be averaging, but that adds latency, which could itself make the tile size too big because adaptation becomes too slow. (Ideally the tile size should not be too big for even a single frame.)
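One way to get the benefit of averaging without the slow adaptation could be to filter asymmetrically: accept a slower measurement immediately, but average faster measurements in only gradually. The following is just a rough sketch of that idea, assuming an exponential moving average; `DeltaTimeFilter` and its members are made-up names, not part of the shady code.

```cpp
// Sketch: smooth the noisy per-frame delta time with an asymmetric
// exponential moving average. Slowdowns take over immediately so the tile
// size can shrink within a single frame, while speedups are averaged in
// slowly to suppress the noise that otherwise retriggers the tile size
// computation every frame. (Hypothetical names, not shady API.)
class DeltaTimeFilter {
public:
    explicit DeltaTimeFilter(double smoothing = 0.1) : alpha(smoothing) {}

    double update(double measuredDelta) {
        if (!initialized || measuredDelta > smoothedDelta) {
            // A slower frame wins immediately: never underestimate the cost.
            smoothedDelta = measuredDelta;
            initialized = true;
        } else {
            // Faster frames only pull the estimate down gradually.
            smoothedDelta += alpha * (measuredDelta - smoothedDelta);
        }
        return smoothedDelta;
    }

private:
    double alpha;
    double smoothedDelta = 0.0;
    bool initialized = false;
};
```

With such a filter the tile size would still react within one frame to a shader getting slower, while random frame-to-frame jitter would no longer change the computed size every frame.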
The problem is that I currently compute the fps for every tile separately. I should instead take the slowest tile of a whole frame (the maximum tile time), because the computation time sometimes varies locally within a frame.
Commit 7fbdf42c0be5bfd0863b75695a590b4bb583d6be changed the tile size detection to depend on the maximum time of a tile in a frame and therefore to only decrease the tile size within a frame.
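Roughly, that scheme could look like the sketch below; all names are illustrative and not taken from the actual commit. The decision is driven by the slowest tile of the frame, and while a frame is in flight the tile size can only shrink.

```cpp
// Sketch of per-frame tile size adaptation based on the slowest tile.
// (Illustrative names only, not the shady implementation.)
#include <algorithm>
#include <vector>

struct TileTiming { double seconds; };

int adaptTileSize(int currentTileSize,
                  const std::vector<TileTiming>& frameTiles,
                  double frameBudgetSeconds,
                  int minTileSize)
{
    // Take the maximum tile time of the whole frame, not per-tile fps,
    // so that locally expensive regions dominate the decision.
    double worst = 0.0;
    for (const TileTiming& t : frameTiles)
        worst = std::max(worst, t.seconds);

    int newSize = currentTileSize;
    if (worst > frameBudgetSeconds) {
        // Halve the tile size until a single tile fits into the budget;
        // growing again is deferred to the start of the next frame.
        newSize = std::max(minTileSize, currentTileSize / 2);
    }
    return newSize;
}
```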
This comes with a new problem though: when the tile size is once computed too small, it takes quite some time to grow back, even for the quickest shaders, because of the increased per-tile overhead.
An important example of this is right after compiling: the tile size is set to a small value because the rendering performance of the newly compiled shader is completely unknown and rendering a too large tile must be avoided. A fix could be to render a few test tiles in this situation.
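A minimal sketch of that test-tile idea, assuming the tile cost scales roughly with tile area; the render callback and all names are placeholders, not existing shady API.

```cpp
// Sketch: right after a shader is compiled, render and time a few small
// probe tiles and extrapolate a safe initial tile size from the slowest
// measurement. (Hypothetical helper, not part of shady.)
#include <algorithm>
#include <chrono>
#include <cmath>
#include <functional>

int probeInitialTileSize(const std::function<void(int /*tileSize*/)>& renderTile,
                         double frameBudgetSeconds,
                         int probeTileSize = 64,
                         int maxTileSize = 1024)
{
    using clock = std::chrono::steady_clock;

    // Keep the slowest of a few probes, mirroring the per-frame maximum
    // used by the regular tile size detection.
    double worst = 0.0;
    for (int i = 0; i < 3; ++i) {
        auto start = clock::now();
        renderTile(probeTileSize);
        double seconds =
            std::chrono::duration<double>(clock::now() - start).count();
        worst = std::max(worst, seconds);
    }

    // Assume cost is roughly proportional to tile area, so scale the probe
    // size by the square root of the remaining budget headroom.
    double scale = std::sqrt(frameBudgetSeconds / std::max(worst, 1e-6));
    int size = static_cast<int>(probeTileSize * std::min(scale, 16.0));
    return std::clamp(size, probeTileSize, maxTileSize);
}
```

Note that timing GPU work with a CPU clock is only meaningful if the GL calls are actually synchronized (e.g. via glFinish) or replaced by GPU timer queries; otherwise the probe would mostly measure command submission.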
Sometimes shaders stay black after compiling until the shadertoy area is resized. This is most likely due to incorrect tile size detection together with #85.