Reworking the fog
This thread is dedicated to discussing technical (re)implementations for the fog.
As with other transparent things, fog works differently in naive blending and linear blending.
The current implementation uses, in GLSL, values from the alpha channel of a “fog” image that is generated by the renderer.
I noticed that linearizing it seems to bring interesting results in linear blending, at least for the atcshd map.
Then @slipher said:
The fog image is pretty pointless, all it really does is calculate `sqrt(x)`. We should just do that in GLSL. I think it only exists because someone ported the code from GL1 in an overly mechanical way.
So, regardless of naive or linear blending, we may drop that image and do the computation in GLSL.
Now, about that computation, @slipher said:
Square root doesn't make much sense as a model for the fog; probably they just tried random functions and picked something that looked OK. Instead of `sqrt(a*x)`, `1 - exp(b*x)` would be an obvious model to use (as if the fog is formed from layered alpha blending). `a`/`b` is a constant based on the fog density.
So maybe that square root computation for naive blending is already a workaround for naive blending being broken by design, and linearizing it is just another guess that happens to look pleasing. It is a bit like the known fact that using quadratic light attenuation with naive blending somewhat cancels the mistake, while both light attenuation and blending should be linear to be done properly.
So, we may need two computations:
- one for naive blending (we have to keep `sqrt()` for compatibility I guess)
- one for linear blending
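As a rough sketch of the two candidate curves (written in Python for illustration; the constants `a` and `b` are arbitrary placeholder values, not engine settings):

```python
import math

# Hypothetical illustration of the two fog alpha curves discussed above.
def fog_alpha_naive(x, a=1.0):
    """Legacy Quake 3 style curve: alpha = sqrt(a * x), clamped to [0, 1]."""
    return min(1.0, math.sqrt(a * x))

def fog_alpha_linear(x, b=4.0):
    """Layered-alpha-blending model: alpha = 1 - exp(-b * x)."""
    return 1.0 - math.exp(-b * x)

# x is the fraction of the opaque distance travelled through the fog.
for x in (0.0, 0.25, 0.5, 1.0):
    print(f"x={x:.2f}  sqrt: {fog_alpha_naive(x):.3f}  exp: {fog_alpha_linear(x):.3f}")
```

The exponential form saturates toward full fog on its own instead of needing a clamp, which is why it is a natural fit for layered alpha blending in linear space.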
Once those two computations are defined, we have to decide how to do them. We can do them in a precomputed image the GLSL code picks values from, or we may do the computations in GLSL directly.
Doing the computations in GLSL directly would avoid the image sampling (and then the binding, I guess), but switching the computation would require compiling two different shaders, while using images lets us keep the same compiled GLSL and just switch the image.
Please share any thought you may have on the topic!
- one for naive blending (we have to keep `sqrt()` for compatibility I guess)
- one for linear blending
I don't see why you'd need 2 different ones in the shader when shaders work in the linear space regardless.
Here are some things I want to do with fog. Some of them I have done on a local branch.
- Fix global fog being drawn twice with material system (I already fixed that with the default renderer)
- Fix fog distance to use the real distance, instead of only counting distance along the view axis. When you look directly at something, it is more fogged than if you look at it out of the corner of your eye. Both Quake 3 and global fog are programmed this way.
- Don't use the model coordinate system because this is wrong for scaled models. This bug is just barely noticeable in our assets, for example with the Adv Dragoon which is scaled by 2x.
- NUKE fog image and replace it with a simple computation in the GLSL.
- For "Quake 3" fog, instead of drawing a corresponding fog surface for every opaque surface inside the fog, draw the whole fog with a single draw call, using the depth buffer to find the depth in fog like the global fog shader does. The 6 outer faces of the fog brush will be rendered. For each fragment calculate the intersection with an inner face. The length in fog will be the minimum of the 2 lengths from the intersection and the depth buffer.
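The single-draw-call idea in the last point can be sketched as a small helper (hypothetical names, Python for illustration; in practice this would run per fragment in GLSL):

```python
def fog_path_length(enter_dist, inner_face_dist, opaque_depth):
    """Length of the view ray spent inside the fog volume.

    enter_dist: distance from the viewer to the outer fog face hit by the ray
    inner_face_dist: distance to the intersection with the inner face
    opaque_depth: distance to the nearest opaque surface (from the depth buffer)
    """
    # The ray leaves the fog at whichever comes first: the inner face of the
    # fog brush, or an opaque surface inside the fog.
    exit_dist = min(inner_face_dist, opaque_depth)
    return max(0.0, exit_dist - enter_dist)
```

For example, a ray entering the fog at 10 units, with the inner face at 50 units and an opaque wall at 30 units, travels 20 units through fog.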
So, we may need two computations:
- one for naive blending (we have to keep `sqrt()` for compatibility I guess)
- one for linear blending

Once those two computations are defined, we have to define how to do them. We can do them in a precomputed image the GLSL code picks values from, or we may do the computations in GLSL directly.

Doing the computations in GLSL directly would avoid the image sampling (and then the binding, I guess), but switching the computation would require compiling two different shaders, while using images lets us keep the same compiled GLSL and just switch the image.
Well, for the other shaders that needed this so far you used a uniform u_SRGB rather than having a compile variant. We could do that here too for consistency. Though with the new lazy shaders implementation it doesn't hurt us so much anymore to proliferate compile-time variants.
- one for naive blending (we have to keep `sqrt()` for compatibility I guess)
- one for linear blending
I don't see why you'd need 2 different ones in the shader when shaders work in the linear space regardless.
One of the linear computations is delinearized before display, so we may have to compensate for this transformation.
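For reference, the standard sRGB transfer functions involved in that delinearization step, as a minimal Python sketch (these are the usual IEC 61966-2-1 formulas, not engine code):

```python
def srgb_encode(c):
    """Linear -> sRGB (the 'delinearize before display' step)."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055

def srgb_decode(c):
    """sRGB -> linear; applying this compensates for the encoding."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
```

Compensating correctly means the round trip is (nearly) the identity: `srgb_decode(srgb_encode(x)) ≈ x`.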
I just uncovered a big bug affecting fog in this line: https://github.com/DaemonEngine/Daemon/blob/ed4ffa854867a29d4498ba671c2d62547af9fbf6/src/engine/renderer/tr_shade.cpp#L1634
This is seemingly an epsilon intended to make sure the distance from the viewer to a vertex doesn't go below 0. But it's in a vector scaled by 1 / (8 * distanceToOpaque), which means it can actually be a big distance! For example, if the distanceToOpaque is 4096, this artificially adds 64 units to the distance. I believe this is the major reason why fogs have hard edges instead of fading out gradually.
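The arithmetic behind that example, as a quick check (`fake_distance_offset` is just a name for this illustration):

```python
def fake_distance_offset(distance_to_opaque):
    """Distance, in world units, artificially added by the 1/512 epsilon.

    The epsilon lives in a vector scaled by 1 / (8 * distanceToOpaque),
    so in world units it is worth (1/512) * 8 * distanceToOpaque.
    """
    return (1.0 / 512.0) * 8.0 * distance_to_opaque

print(fake_distance_offset(4096))  # 64.0, matching the example above
```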
If we just remove that, it changes the appearance by a lot. So we may have to keep it for compatibility. Omitting the fake distance boost is something we could change when defining new fog semantics for sRGB maps.
Do we know if that bug was there in Tremulous or Quake 3?
Quake 3
Don't know if that's global fog or not though.
It's not global fog because it doesn't fill the whole map / screen, also global fog is from Wolf:ET, while Q3 has… Q3 fog.
Well, the `fogDistanceVector[3] += 1.0/512;` line is there in ioq3.
@VReaperV What's the name of the map?
By the way if testing with ioq3, make sure to use the GL1 renderer for a fair comparison. I actually tried ioq3 with fog the other day and the GL2 one, besides possibly using different code, was buggy: the fog would visibly change in brightness as you moved toward/away from it.
Yes, I was going to suggest that. 😁️
I expect ioq3 renderer2 to not be as tested as the original one and there may be remaining bugs on some stuff (also, it's now less tested than ours).
Found those screenshots. This is one with GL1 on an older version of atcshd. Edges look pretty hard...
By the way if testing with ioq3, make sure to use the GL1 renderer for a fair comparison. I actually tried ioq3 with fog the other day and the GL2 one, besides possibly using different code, was buggy: the fog would visibly change in brightness as you moved toward/away from it.
I took the screenshot with the original q3, 1.16n.
Found those screenshots. This is one with GL1 on an older version of atcshd. Edges look pretty hard...
The fog opacity looks like what we get with linear blending without quirks. 👀
About fogging equations, I found this:
- http://ultrafil.free.fr/fr/tutoriaux%20opengl%20fr/fog.html
It's in French, and it talks about GL1 builtin fog, so it's not Q3 fog, and I guess it's some kind of global fog.
But it gives various operations that were historically used for fogging:
| Kind | Name | Operation |
|---|---|---|
| Linear | GL_LINEAR | `f = (end - z) / (end - start)` |
| Exponential | GL_EXP | `f = e^(-density × z)` |
| Exponential² | GL_EXP2 | `f = e^(-(density × z)²)` |
They even give some screenshots comparing the three modes (linear, exponential, exponential²), not reproduced here.
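Those three classic fog factors are easy to express directly (Python sketch; `f` is the visibility factor, 1 meaning unfogged, clamped to [0, 1] as GL1 does):

```python
import math

def fog_linear(z, start, end):
    """GL_LINEAR: f = (end - z) / (end - start), clamped to [0, 1]."""
    return max(0.0, min(1.0, (end - z) / (end - start)))

def fog_exp(z, density):
    """GL_EXP: f = e^(-density * z)."""
    return math.exp(-density * z)

def fog_exp2(z, density):
    """GL_EXP2: f = e^(-(density * z)^2)."""
    return math.exp(-(density * z) ** 2)
```

At `z = 0` the exponential factors are exactly 1 (no fog at the eye), and GL_EXP2 falls off faster than GL_EXP at distance, matching the descriptions quoted below.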
Now, why did Quake 3 do something else? I don't know.
Actually it may be good to give mappers those various fog options (translated from that French page):
With linear mode, an object located before the fog begins is unaffected. Between the beginning and the end, its color depends on the z distance. After the fog ends, the object is the color of the fog. With exponential mode, an object is located in fog where visibility is fairly good. With squared exponential mode, an object is located in fog where visibility diminishes fairly quickly.
The first sentence seems to imply that GL1 fog wasn't a global fog??? 🤔️
I don't think it really matters which exact formula it uses or whether it deviates from the original implementation, as long as it looks good and doesn't make existing maps look massively different.
To be clear, those different formulas were just used as approximations so it would actually run at a reasonable framerate on the hardware at the time.
We have the problem I complained about in the ioq3 GL2 renderer too, that fogs look different depending on distance away from them. The 1/512 fake distance discussed above contributes to this, since the fake distance is multiplied by "t" (fraction of distance in fog), so it has a larger effect when close to the fog surface. The distance effect still happens even if I remove that factor though, probably due to the fog texture granularity.
The effect is easy to see on map Habitat since there is plenty of room to fly over the fog.
Actually I no longer think the 1/512 thing was just a simple mistake, because FogFactor subtracts it back out. The issues can be caused by the fog texture having a very low resolution for its purpose. The 0-1 texture coordinates are mapped to the range (0, 8 * distanceToOpaque). The texture size in the viewer distance axis is 256. The Habitat fog has a distance of 20000. So one texel of the fog image covers a distance of 20000 * 8 / 256 = 625. Over 600 qu per texel!
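That texel-size estimate, spelled out (`fog_texel_size` is just an illustrative name):

```python
def fog_texel_size(distance_to_opaque, texture_size=256):
    """World-unit distance covered by one texel of the fog image along the
    viewer-distance axis: the 0..1 coordinate maps to 8 * distanceToOpaque."""
    return distance_to_opaque * 8 / texture_size

print(fog_texel_size(20000))  # 625.0 qu per texel, as computed above
```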
So anyway I will proceed with my efforts to get rid of the fog texture.
Let's discuss new fog features.
Configurable alpha curve
Quake 3 uses an equation like sqrt(d / distanceToOpaque), where d is the distance in fog, for determining the alpha of the fog blended over the surface. For an sRGB compatible curve, I will try out c2 * (1 - exp(-c1 * d / distanceToOpaque)) where c1 is some constant, and c2 is a number slightly greater than 1, chosen to make the function go to 1 at the opaque distance. We could make this configurable in the shader by fogCurve sqrt or fogCurve exponential.
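A sketch of the two candidate `fogCurve` options, with `c2` derived so the exponential curve reaches exactly 1 at the opaque distance (the value of `c1` here is an arbitrary placeholder, not a tuned constant):

```python
import math

def fog_curve_sqrt(d, dist_opaque):
    """Legacy Quake 3 style curve."""
    return math.sqrt(d / dist_opaque)

def fog_curve_exp(d, dist_opaque, c1=4.0):
    """Exponential curve rescaled so that alpha hits 1 at d = dist_opaque."""
    c2 = 1.0 / (1.0 - math.exp(-c1))
    return c2 * (1.0 - math.exp(-c1 * d / dist_opaque))
```

Choosing `c2 = 1 / (1 - exp(-c1))` is what makes the function "go to 1 at the opaque distance": at `d = dist_opaque` the bracket equals `1 - exp(-c1)`, which `c2` cancels exactly.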
Density gradient
To make the fog not appear to have such a hard edge when viewed at angles nearly tangential to the fog plane, we could try tapering off the density near the surface of the fog. Let f(t) be the density of the fog at distance t under the surface, and F be the antiderivative of f. Then to calculate a modified distance in fog for use in the fog curve, use abs( F(t1) - F(t2) ) / cos(a) where t1 and t2 are the distances below the fog where the line from the viewer to the point being rendered enters/exits the fog, and a is the angle between that line and the fog surface's normal. (Would need to be modified for when viewing exactly tangentially to avoid division by 0.)
So for f(t) we would want to think of something that goes to 0 at t = 0, and goes to 1 for large t. Syntax fogGradient <something to be determined>. Legacy fog is using f(t) = 1.
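As an illustration with a hypothetical gradient `f(t) = 1 - exp(-t / k)` (which is 0 at the fog surface and approaches 1 deep inside), the modified-distance formula looks like:

```python
import math

def F(t, k=100.0):
    """Antiderivative of the hypothetical gradient f(t) = 1 - exp(-t / k)."""
    return t + k * math.exp(-t / k)

def modified_fog_distance(t1, t2, cos_a, k=100.0, eps=1e-6):
    """abs(F(t1) - F(t2)) / cos(a), guarded against tangential views
    (cos(a) near 0) to avoid division by zero."""
    return abs(F(t1, k) - F(t2, k)) / max(cos_a, eps)
```

Sanity check: with the legacy `f(t) = 1` we would have `F(t) = t`, and the formula reduces to the plain path length through the fog; with the gradient above, path segments near the surface contribute almost nothing.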
x1 and x2
t1 and t2?
So for `f(t)` we would want to think of something that goes to 0 at t = 0, and goes to 1 for large t
`log(t + 1) * modifier` with `modifier` dependent on how large t needs to be for f(t) = 1, perhaps.
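A sketch of that suggestion, with the `modifier` chosen so `f(t_full) = 1` at some hypothetical depth `t_full`, plus a clamp so the density never exceeds 1 beyond it (the log curve keeps growing on its own):

```python
import math

def f_gradient(t, t_full=500.0):
    """log(t + 1) scaled so f(t_full) = 1, clamped to at most 1 past t_full."""
    modifier = 1.0 / math.log(t_full + 1.0)
    return min(1.0, math.log(t + 1.0) * modifier)
```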