
[Feature request] Toggle between 24-bit and 15-bit color mode

Stormkyleis opened this issue 4 years ago · 7 comments

Some consoles such as the SNES and the GBA use 15-bit palettes: in this color range, the darkest black is RGB(0, 0, 0) or #000000 and the brightest white is RGB(248, 248, 248) or #f8f8f8. However, some emulators such as mGBA stretch this range to 24-bit, making the brightest white RGB(255, 255, 255) or #ffffff. If you pay close attention, you can even see the difference in sites like The Spriters Resource, where certain spritesheets are slightly lighter than others depending on the emulator chosen by the ripper at the time (example: notice how the white in this spritesheet is #f8f8f8 while the white in this spritesheet is #fffbff). This is default behavior and cannot be changed.


I was writing the screenshot guidelines for a wiki and this turned out to be a bit of a problem, since it created a discrepancy with other emulators. For the sake of accuracy and consistency, we have to recommend one emulator over the other. Everyone in the team uses mGBA, but we would rather have the 15-bit color range, which is presumably more console-accurate.

This is why I would like to request an option to toggle between 24-bit and 15-bit palettes, if it's not too much of a hassle.

I'm aware that, according to this old thread, going with 24-bit colors was a conscious decision based on popular demand. You suggested applying a shader to simulate 15-bit colors, but I'm afraid that asking our users to do that would be unintuitive and would result in colors that aren't 100% accurate to the raw values.

Thank you for reading.

Stormkyleis · May 22 '21

I'm not sure that presumption of console-accuracy is correct.

With 15-bit color, each of RGB gets 5 bits, allowing a value from 0-31. In other words, colors range from 0/31 to 31/31.

With 24-bit color, it's 8 bits each which means a value from 0-255. So this is 0/255 to 255/255.

A simple way to convert a 5-bit value to an 8-bit value is to multiply by 8. This is a convenient round number, much like saying a circle is about 3/4 the area of the square around it: close enough for casual use, but an approximation. When you multiply 31 by 8, you get 248. This is pretty close to 255, so good enough for most purposes.

But just because this is a convenient and fast way to produce an 8-bit color from a 5-bit color does not imply that the original console display used 8-bit colors and converted in this way. For the SNES, this would've been an analog output - so there'd be no reason to convert to 8 bits. For the GBA, the display likely wasn't capable of 24-bit color and likely took 15-bit directly.

Rather, you should view the color as a percentage of full intensity. 30/31 is 96.77% intensity; expressed in 8-bit digital form, 96.77% of 255 is approximately 247. And of course 31/31 is 100%, which is 255.

To better understand this, imagine a future where all graphics, displays, etc. use 10-bit RGB values (i.e. 30 bits per pixel). In this world, 100% white would be 1023/1023. The easiest way to convert from 8-bit to 10-bit would be to multiply by 4, so 255/255 would become 1020/1023. Would it be correct to say we lack the technology today to output actual white, and could only do so if we converted everything to 10-bit (or 16-bit, or 100-bit)? Of course not: just as with 5 -> 8 bits, the inaccuracy of 1020/1023 is only an artifact of a math shortcut - multiplying by 4.

If you're looking for accuracy in your color values, it's probably best to account for what colors were actually produced on displays the developers of these games were targeting. That means, for the GBA, how it would've actually looked on a physical GBA screen (which is not as simple as multiplying by 8 OR using the percentage, unfortunately.)

-[Unknown]

unknownbrackets · Jun 06 '21

The reply is accurate. Neither is "more console-accurate", since the console has no concept of colors deeper than 15 bits*, so the "best" you can do is either the linear scaling mentioned in the reply or a LUT matching the color reproduction of the display. The LUT approach is significantly more expensive at runtime, so I opted for scaling. The remaining difference comes down to pedantry, not to any objective measure. If you care, you're better off doing post-processing to drop the low bits of each channel. Making this a configurable runtime option would also be very expensive (though at worst it would just become the LUT approach), although it might be possible to apply it only to screenshots instead of the displayed output.

If I can figure out a fast way to manage a LUT, it would be somewhat reasonable to replace the current implementation with one (and it would satisfy a lot of other concerns), but it also runs the risk of making rendering much, much slower.

*According to kevtris, the rendering hardware is actually 16 bits, with one extra green bit, but this bit is dropped on output to the display.

endrift · Jun 09 '21

Assuming the most accurate colors have different ramps per R, G, and B, you could upload a 3x32 texture (using only R/A) and use a post-processing pass to map each component through it. In PPSSPP, we have to handle rendered framebuffers being processed via CLUTs, and that's similar to what we do (except it's an actual CLUT, so it's 1 pixel tall). We also use a similar approach for alpha-test lookups where bitwise ops aren't supported.

It is a bit slow (especially when a game triggers it in annoying ways, masking R, G, then B for the full screen) but assuming this is done on the output as a single pass before upscaling, it shouldn't be that expensive. We do it at full render resolution in PPSSPP even on Android.

-[Unknown]

unknownbrackets · Jun 09 '21

LUTs are cheap on GPUs, which is why the recommended solution is a shader. I'm talking about software rendering, where iterating over pixels is serial (or ideally SIMD, but SIMD and LUTs don't get along).

endrift · Jun 09 '21

"Console-accurate" was probably the wrong term. Better put, our goal is for our screenshots to be identical to the majority of Nintendo's official screenshots. Here are some examples ranging from 2002 to 2006:

(Screenshots omitted.) Sources: Nintendo Gamer's Summit 2002 Press Kit (×2); E3 2003 Press Kit (×4); E3 2004 Press Kit (×2); E3 2005 Press Kit; E3 2006 Press Kit.

It may seem pedantic, and well, it is. It's just the last step before our screenshots are indistinguishable from Nintendo's. The goal of this request is to provide users with an easy way to achieve this, if you see it fit.

Stormkyleis · Jun 10 '21

Huh, why would they do that? Imho the correct way is to scale up to 8 bits per channel so the colors can be displayed correctly on 24-bit displays. AGB_FIRM on the 3DS also outputs 5 bits shifted left by 3 (0xF8), but it corrects that via the scaling hardware.

profi200 · Jun 10 '21

I think you could add a palette selector like the one in BizHawk, which uses your emulator core. I prefer the 248-style colors too.

Yave-Yu · Jul 07 '21