Can (and should) we avoid the hue shifting that happens at the edge of light sources?
It could be possible to achieve this with 4 color channels instead of 3.
With 3 color channels there are 8 base colors: black, white, red, green, blue, yellow, cyan, and magenta.
4 color channels would allow having 16 base colors. We currently have 20 colors in the colored block palette, but some of those might be considered darker variants of others, like crimson, indigo, viridian, and the grays.
I don't know if it will actually produce a sensible color space, but it is at least mathematically possible to create a linear color space with 4 color channels from the remaining 15 colors.
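To make the 4-channel idea concrete, here is a minimal sketch of what such a linear color space would look like at render time: each light channel carries an RGB appearance, and the displayed color is their weighted sum. The primaries below (including the orange 4th channel) are hypothetical placeholders, not an actual proposed basis.

```python
# Sketch: a 4-channel light value maps to RGB via a 3x4 matrix whose
# columns are the RGB appearance of each primary. The primaries here are
# hypothetical, not Cubyz's actual palette.
PRIMARIES = [
    (1.0, 0.0, 0.0),  # channel 0: red
    (0.0, 1.0, 0.0),  # channel 1: green
    (0.0, 0.0, 1.0),  # channel 2: blue
    (1.0, 0.5, 0.0),  # channel 3: hypothetical 4th primary (orange)
]

def to_rgb(light4):
    """Convert a 4-channel light value to RGB by summing weighted primaries."""
    return tuple(
        min(1.0, sum(light4[i] * PRIMARIES[i][c] for i in range(4)))
        for c in range(3)
    )

# A light that only uses the 4th channel renders as that primary's color:
print(to_rgb((0.0, 0.0, 0.0, 1.0)))  # (1.0, 0.5, 0.0)
```

Since each channel still floods independently, this keeps the flood-fill math unchanged; only the final channel-to-RGB conversion differs.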
As for performance, adding another color channel for lighting should be pretty cheap. The only expensive bit would be the conversion before uploading the mesh, but that's tiny compared to the cost of interpolating the lights.
Configuration might be a bigger problem though, as absorption and emission values would need to use an unintuitive 4-channel color code.
In my opinion: yes. Lighting should be improved in the future.
You could probably have RGB plus a darkness scale
With your suggestion I would need to mix the colors whenever two light sources of different colors are near each other, and it's unclear how the mixed color would propagate. Flood fill lighting only works if all channels are treated independently.
I changed the priority to "Long-Term Goals" because with the new changes that are cooking, hue shifting is a lot more noticeable and distracting.
I found out how we can possibly remedy this issue without a total rewrite. And it's called Lightmaps/Tonemaps!
Basically, light colors are referenced from an image like this. This is the 5-bit color spectrum, which contains every light color in Cubyz.
We can change this image to adjust the color space.
Here's an example that would make the lights more vibrant and polarizing.
Another example that would make colored lights vibrant, but desaturate as they dim.
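The tonemap idea above can be sketched as a lookup table indexed by the stored 5-bit light color. The particular mapping below (keep the hue at full brightness, desaturate toward gray as the light dims, like the second example) is illustrative only; the real behavior would come from whatever image is authored.

```python
# Sketch of the tonemap idea, assuming 5-bit (0..31) light channels.
# This particular curve desaturates colors as they dim; the actual
# mapping would be read from the tonemap image.
def tonemap(r, g, b):
    """Map a stored 5-bit light color to a displayed color."""
    brightness = max(r, g, b) / 31.0
    gray = (r + g + b) / 3.0
    # At full brightness keep the color; as it dims, blend toward gray.
    return tuple(round(c * brightness + gray * (1.0 - brightness))
                 for c in (r, g, b))

# Precompute the full 32x32x32 table once, like baking the image:
LUT = {(r, g, b): tonemap(r, g, b)
       for r in range(32) for g in range(32) for b in range(32)}

print(tonemap(31, 15, 0))  # full-brightness orange keeps its hue
print(tonemap(16, 0, 0))   # dim red gets pulled toward gray
```

Note this is purely a display-time remap: it changes how stored light values look, not which values the flood fill produces, which is why it can't recover information the attenuation already destroyed.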
The root issue remains: there is no way to distinguish a faint red light from e.g. a far away orange light. No matter how you tone-map them, you will either end up turning dark red into orange, or orange into dark red, or both of them into a faint gray.
Also the issue post already contains a solution that has a better chance of working.
Currently the issue is that an orange light (31, 15, 0) gets attenuated to (16, 0, 0), which is indistinguishable from dark red.
First of all, we could halve the light radius or add an extra bit to make orange into (31, 15, 0) + (32, 32, 32), so that the attenuated orange will be (48, 32, 32) and still distinguishable from red. Then we face the other hue shifting issue, which is due to the ratio r/g not being constant, and which we can solve with math or tonemapping.
Alternatively, even if we leave the hue shifting, it might be worth offering tonemapping for artistic control over how it happens. As ikabod suggested, we could desaturate colors at the fringes. We could also map dark red/fringe red (16, 0, 0) into something like (12, 4, 0) and explicitly make the tradeoff of supporting red and orange lights.
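To see the indistinguishability concretely, here is a minimal sketch using plain per-channel subtraction as the attenuation model (a simplification; the exact numbers for the offset case differ slightly from the (48, 32, 32) quoted above because this sketch attenuates the offset channels too):

```python
# Flood-fill attenuation subtracts the travelled distance from every
# channel independently, clamping at zero.
def attenuate(color, distance):
    return tuple(max(c - distance, 0) for c in color)

orange = (31, 15, 0)
red = (31, 0, 0)

# 15 blocks away, orange has lost its green channel entirely...
print(attenuate(orange, 15))  # (16, 0, 0)
# ...which is exactly what red looks like at the same distance:
print(attenuate(red, 15))     # (16, 0, 0)

# With an extra bit and a white (32, 32, 32) offset, the two sources
# stay distinguishable at the same range:
orange_wide = tuple(c + 32 for c in orange)  # (63, 47, 32)
red_wide = tuple(c + 32 for c in red)        # (63, 32, 32)
print(attenuate(orange_wide, 15))  # (48, 32, 17)
print(attenuate(red_wide, 15))     # (48, 17, 17)
```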
First of all, we could halve the light radius or add an extra bit
A light radius of 15 is too small in my opinion, especially given the scale of our caves; even 32 is still too small sometimes. And adding an extra bit would mean that light data wouldn't fit into a single u32 anymore, making it more expensive to load in the vertex shader (and yes, I have measured in the past that there is a significant difference when adding even just a single additional memory load).
We could also map dark red/fringe red (16, 0, 0) into something like (12, 4, 0) and explicitly make the tradeoff of supporting red and orange lights.
This does not work as then a red light source (32, 0, 0) would hue shift into orange.
This does not work
Well, either we are forced to hue shift orange into red, or we give ikabod options to have it some other way.
Or we implement what I suggested, and avoid any such compromises.
Could having light data in HSV work?
@codemob-dev strictly speaking, storing the data in a different format wouldn't do anything on its own. More generally, it might be possible to modify the floodfill to handle hues better. I explored this on Discord, but I didn't get very far.
The mathematics of floodfill lighting is very limited. It's required that, given a region of cells, recalculating the floodfill function at some block gives the same value again, i.e. the result must be a fixed point. Otherwise, it might get computationally ugly to find the fixed points, or it might not even converge. Something naive like "average the two colors together" wouldn't work, and you can verify this for yourself with two lights 2 blocks apart from each other. Averaging hues via HSV wouldn't change the convergence issue. It seems like a fundamental problem.
Standard floodfill works with 3 color channels, R, G, and B, and essentially builds a voronoi diagram for each channel independently (self = max(neighbors) - 1, i.e. each cell takes its brightest neighbor minus one), then sums the results back together. The only other sane function I could find would be to update the voronoi diagrams in sync with each other, but that prevents color mixing altogether. Though, it would be cool if you could find another one. There might be potential with a function that averages only conditionally on distance, but I can't get that to work consistently.
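The fixed-point requirement can be demonstrated on a 1D strip, assuming the standard rule where each cell takes the maximum of its own source value and its brightest neighbor minus one (a sketch of one channel only):

```python
# Per-channel flood-fill on a 1D strip of cells.
def relax(sources, cells):
    """One relaxation pass; returns the updated cells."""
    out = []
    for i in range(len(cells)):
        neighbors = [cells[j] for j in (i - 1, i + 1) if 0 <= j < len(cells)]
        out.append(max([sources[i]] + [n - 1 for n in neighbors]))
    return out

def floodfill(sources):
    cells = list(sources)
    while True:
        nxt = relax(sources, cells)
        if nxt == cells:  # fixed point reached: relaxing changes nothing
            return cells
        cells = nxt

# Two light sources of strength 5, two blocks apart:
sources = [0, 5, 0, 5, 0, 0, 0]
lit = floodfill(sources)
print(lit)  # [4, 5, 4, 5, 4, 3, 2]
# The defining property: recomputing any cell gives the same value again.
assert relax(sources, lit) == lit
```

Replacing the `max` with an average of the neighbors breaks this property: the two sources keep feeding values back into each other on every pass, which is the convergence problem described above.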
The main issue is that the color channels decrease at the same rate. HSV could make the math easier because you could just decrease the value; the only problem is color mixing.
The trouble is that most interesting ideas make light sources interfere with themselves. You can't make the math easier until it's valid in the first place.
If you look at #2134, you can see the moonlight is a very vivid blue. It's kind of impossible to have a desaturated blue with our current lighting.
#2134 shouldn't be affected by this at all.