CRT "afterglow" emulation?
Hello. I apologize if this has been asked/covered before and I missed it in the documentation.
C64 demos use a lot of color cycling to create nice "pulsating" colors out of the limited palette. This effect works best when there is a little bit of "afterglow" from the old color/picture to smooth over the color change. TFTs (usually) don't have that built-in smoothing. I was wondering whether such a thing could be implemented in RGBtoHDMI. Like, on the output frame, mix a small percentage of the previous frame into it before outputting, so merging the current and previous frame at maybe 75%+25%. Or a "three frames" memory, with something like 70%+25%+5% for an additional "smoothing" frame.
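The weighted mix could be sketched roughly like this. The 75%/25% and 70%/25%/5% weights are the ones suggested above; the function names and per-channel integer math are just illustrative, not anything from RGBtoHDMI itself:

```python
def blend_two(curr, prev, w_curr=0.75):
    """Mix the previous frame's pixel into the current one (per channel, 0-255)."""
    return int(curr * w_curr + prev * (1.0 - w_curr))

def blend_three(curr, prev1, prev2, weights=(0.70, 0.25, 0.05)):
    """Three-frame variant with an additional 'smoothing' frame."""
    w0, w1, w2 = weights
    return int(curr * w0 + prev1 * w1 + prev2 * w2)

# A pixel that just switched from bright (255) to dark (0) keeps a faint
# trace of the old value, softening the color change:
print(blend_two(0, 255))         # 63
print(blend_three(0, 255, 255))  # 76
```

In practice this would need one (or two) extra frame buffers and a read-modify-write pass per output frame, which is where the resource question below comes in.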
A possible place for this might be after all de-interlacing has been done, but I don't know whether there are enough resources left. Also, I have no idea about the math that would be required...
But I think that might be an interesting effect. Sure, the picture would be a bit less sharp and movement would blur, but if it is switchable, one has the choice.
What do you think?
@Purplegopher
I was going to look at adding PAL artifacting for the C64 at some point, where two colours used on adjacent lines get averaged into a new colour by the PAL decoder. Is this something in addition to that?
Hello @IanSB Thanks for replying. I think I should have opened a discussion instead of an issue. Sorry about that.
I am not familiar with PAL artifacting, but if it happens within the same frame (just on adjacent lines) then it is something different from what I mean. Your artifacting suggestion would probably be great for interlaced colors, where you generate more colors than the C64 has by switching them on each half picture.
My suggestion is more a "timed decay" where the color of the same pixel in the previous frame is remembered and a tiny bit of it is mixed into the same pixel on the current frame. It works best when the text or image is "pulsating" and not moving. A bit like the upper part here: https://youtu.be/2xXpXsan0qs?t=2178 Or the text "Editorial" or the cursor here: https://youtu.be/Amxu1BpUzC4?t=949
I made a video of the extremely long phosphor persistence on my Leading Edge DR-1240 green monitor, running in Hercules text mode and maximum brightness. It takes nearly 15 seconds before the afterimage isn't really noticeable in person (the black crush in the video does not represent this fully). The decay seems to follow an exponential curve.
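The exponential decay mentioned above can be modelled as I(t) = I0 * exp(-t / tau). The time constant here is purely an assumed value chosen so the afterimage drops below roughly 1% of peak after about 15 seconds, matching the observation; it is not a measured figure for the DR-1240:

```python
import math

def phosphor_decay(i0, t, tau=3.0):
    """Exponential phosphor model. tau (seconds) is an assumed time
    constant, not a measured value for this monitor."""
    return i0 * math.exp(-t / tau)

# With tau = 3 s, a full-brightness pixel (255) falls below ~1% of peak
# (about 2.55) after roughly 15 seconds:
print(round(phosphor_decay(255, 15.0), 2))  # 1.72
```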
I think it may be possible to simulate this without double buffering by applying a decay function (possibly just division) to every pixel for the current frame, and then or-ing the new pixel values with the current ones instead of overwriting them.
I don't know for sure if the beam can outrace the decay and ramp up the phosphor energy over multiple scans, but I think it can. It looks like it may take two scans to reach peak fluorescence on my monitor. I think this could still be simulated by adding the new pixel values (which must have an upper limit below peak HDMI brightness) after applying the decay function to the current ones but I'm not sure.
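The single-buffer idea from the two paragraphs above might look roughly like this. The divide-by-two decay, the OR variant, and the saturating-add variant with the new value capped below peak brightness are all taken from the description; the specific shift amount, cap value, and names are illustrative assumptions:

```python
def decay_or(stored, new):
    """Decay the stored pixel (simple division by two), then OR in the
    new pixel value instead of overwriting it."""
    return (stored >> 1) | new

def decay_add(stored, new, new_max=200):
    """Decay, then add the new value, capped below peak brightness and
    saturated at 255. Because 'new' can't reach 255 on its own, repeated
    scans ramp the stored energy up over multiple frames."""
    return min(255, (stored >> 1) + min(new, new_max))

# A lit pixel fades over successive blank frames:
px, trail = 255, []
for _ in range(4):
    px = decay_or(px, 0)
    trail.append(px)
print(trail)  # [127, 63, 31, 15]

# Repeated scans of the same pixel reach peak only after a couple of frames:
px = 0
for _ in range(3):
    px = decay_add(px, 255)
print(px)  # 255
```

One caveat with the OR variant is that it only behaves like a decay on raw intensity values; with packed or palettized pixel formats the OR would mix channel bits unpredictably, so a per-channel add is probably the safer of the two.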
https://github.com/hoglet67/RGBtoHDMI/assets/210/b7cb37a1-0ec0-42a7-8d9c-4443a1d0bd11
There is also some bloom when the monitor is driven hard but that's a bit harder to simulate. There may be some algorithmic ideas here: https://github.com/Swordfish90/cool-retro-term