Use scissoring to improve efficiency on tiled renderers
I'm suggesting to the Raspbian folks that they use compton for their desktop compositing with the open source graphics stack. Things are looking good with:
compton --backend glx --unredir-if-possible \
--glx-swap-method buffer-age \
--glx-no-stencil --paint-on-overlay --vsync opengl
However, compton could be a lot more efficient with a small change. The Raspberry Pi has a tiled renderer, so for each 64x64 tile of the screen, we load it from memory, render any primitives that affect that area, and store it back once all the primitives are done. This means that even if you don't draw to the whole screen, we still eat the memory bandwidth cost of loading and storing the whole thing (the load is skipped if you glClear(), but you still pay for the store).

If, however, you set a glScissor() rectangle around your rendering, we can use those bounds to skip loading and storing the tiles it doesn't touch, so the bandwidth cost of updating the screen scales with the area actually being rendered.
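To make that concrete, here's a minimal sketch of what scissored repainting looks like from the application side. The assumptions are mine, not compton's actual code: damage is tracked as a single rectangle in X11 coordinates (top-left origin), and draw_windows() is a hypothetical function that redraws the windows overlapping it.

/* Scissored repaint sketch for a compositor on a tiled GPU. */
#include <GLES2/gl2.h>

struct rect { int x, y, w, h; };

void draw_windows(const struct rect *damage); /* hypothetical redraw */

void paint_damaged_region(const struct rect *damage, int screen_h)
{
    /* glScissor() uses window coordinates with a bottom-left origin,
     * so flip the Y axis of the X11-style damage rectangle. */
    GLint gl_y = screen_h - (damage->y + damage->h);

    glEnable(GL_SCISSOR_TEST);
    glScissor(damage->x, gl_y, damage->w, damage->h);

    /* With the scissor set, the driver only has to load and store the
     * 64x64 tiles inside the rectangle; the clear is also clipped to
     * the scissored area. */
    glClear(GL_COLOR_BUFFER_BIT);
    draw_windows(damage);

    glDisable(GL_SCISSOR_TEST);
}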
Hi! I don't know if I'm too late to ask, but I'll give it a try.
I've been using i3 and compton on my Raspberry Pi recently, but fading looks very laggy. This is my config; any recommendations?
paint-on-overlay = true;
glx-no-stencil = true;
glx-no-rebind-pixmap = true;
vsync = true;
fading = true;
fade-delta = 5;
fade-in-step = 0.02;
fade-out-step = 0.02;