learn-wgpu
Why is color interpolation the default, and how can I override it?
While following this GREAT tutorial, I found a part that really confuses me, and it is often omitted in other shader tutorials as well:
The classic rainbow triangle -- we all know what to expect from the shader: it takes the colors of the three vertices and averages them at each pixel/fragment according to its position relative to those vertices, great!
But what is written in the shader file does not map easily to that behavior:
```wgsl
@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
    return vec4<f32>(in.color, 1.0);
}
```
from here
So when do we specify it to behave this way? This question can be broken down into several smaller ones:
- How do we specify which three vertices to interpolate from?
- If finding the primitive that encloses the current fragment is the default behavior, when does this "primitive search" happen?
- I imagine this part happens somewhere internally, and by the time this search happens, every position is in 2D screen coordinates. So when does this conversion happen? Is this search costly in terms of performance? After all, there could be many triangles on the screen at the same time.
- Can we specify arbitrary vertices to use based on the fragment's location (which we did not use either)?
- Why does it have to be three? Can we make it four or five?
- If so, how are they passed into the fragment shader?
- Language-wise, why is `fs_main`'s argument a single `VertexOutput`?
- How does returning the `in.color` determine the color of the fragment? It is supposed to be a vertex color.
- Can we fill the primitive with a different scheme other than interpolation? Maybe nearest neighbor? Can we fill somewhere outside of the primitive? Maybe I just want to draw the stroke and not fill it.
- Maybe related: I noticed that the triangle we rendered at this point is kind of jagged at the edge. Maybe there's something in the shader that we can do to change that.
Maybe these issues are addressed later in the tutorial, in parts I haven't read yet, but I do suggest giving a more top-down view of the grammar of the language at this point. And I would appreciate it a lot if someone could answer some of the questions above!
https://www.reddit.com/r/rust_gamedev/comments/1inwpg3/need_some_explanations_on_how_shader_works_which/
From reading your reddit post, it seems you're mainly concerned with drawing text from Bezier curves. Generally, though, you render text by converting it to a bitmap font. Basically, you store all the characters you want to support in a texture (a glyph atlas) like this:
You then draw a series of quads with texture coordinates sampling the glyph texture. If you need different text sizes and don't want scaling artifacts, you can have multiple textures at higher or lower resolutions.
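In WGSL, sampling the glyph texture in the fragment shader could look something like the sketch below. The binding names (`t_atlas`, `s_atlas`) and the single-channel coverage layout are assumptions for illustration, not code from the tutorial.

```wgsl
// Sketch only; binding names and group layout are placeholders.
@group(0) @binding(0) var t_atlas: texture_2d<f32>;
@group(0) @binding(1) var s_atlas: sampler;

@fragment
fn fs_main(@location(0) tex_coords: vec2<f32>) -> @location(0) vec4<f32> {
    // Each quad's UVs point at one glyph's cell in the atlas; the atlas stores
    // coverage in a single channel, which we use as alpha here.
    let coverage = textureSample(t_atlas, s_atlas, tex_coords).r;
    return vec4<f32>(1.0, 1.0, 1.0, coverage); // white text; tint as needed
}
```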
You can use a signed distance field (SDF) based approach instead of the simple texture if you want more flexibility in rendering at different text sizes. This article covers the basic idea.
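For intuition, the fragment-shader side of the SDF approach usually boils down to a threshold plus a bit of antialiasing. Here is a minimal sketch, assuming a single-channel SDF texture with the glyph edge at 0.5 (binding names are placeholders):

```wgsl
// Sketch only; binding names are placeholders.
@group(0) @binding(0) var t_sdf: texture_2d<f32>;
@group(0) @binding(1) var s_sdf: sampler;

@fragment
fn fs_main(@location(0) tex_coords: vec2<f32>) -> @location(0) vec4<f32> {
    let dist = textureSample(t_sdf, s_sdf, tex_coords).r;
    // 0.5 is the conventional glyph edge in an SDF texture; fwidth keeps the
    // antialiased edge roughly one pixel wide regardless of zoom level.
    let w = fwidth(dist);
    let alpha = smoothstep(0.5 - w, 0.5 + w, dist);
    return vec4<f32>(1.0, 1.0, 1.0, alpha);
}
```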
If you absolutely need to draw the font from bezier curves, here's a good video on text rendering.
Let me know if you have any more questions.
@sotrh Thanks for the reply! What I want to do is actually different. I asked about glyph rendering because I thought it was similar, so I used it to probe how this would work. I didn't expect that they use rasterized textures to boost performance.
What I actually want is to render strokes from sampled points stored as vectors (with info like pressure, tilt, speed of the pen, etc.) to explore simulating brushes and different pens. So I want to determine the value of each pixel based on the neighboring sampled points. It's more like drawing the heatmap or contour map of a 2D mathematical function, and I just learned that this is not how the fragment shader works. I can explore the following options:
- I do it in software on the CPU, pixel by pixel. But I don't know how that would perform if I want a decent refresh rate; I may want to do rapid scaling as well. That's why I asked this question.
- I do it with the fragment shader, but generate a primitive for every stroke, or every 5 points, that covers where the stroke is to be drawn. In that case:
- Are the vertices generated on the CPU or GPU?
- Can I access the sampled point's coordinate and fragment's coordinate from the fragment shader?
- I do it on the GPU with other kinds of shaders. Maybe a compute shader? I'm still looking up what it does, but it seems to be able to write directly into a GPU buffer. Can I just write into that buffer and present it as pixels, then?
You can port this to wgpu and Rust. This project rasterizes glyphs on the GPU.
* Are the vertices generated on the CPU or GPU?
You can generate the vertices on the CPU or the GPU, but they'll get uploaded to the GPU if you want to leverage the GPU's compute power.
* Can I access the sampled point's coordinate and fragment's coordinate from the fragment shader?
It depends on how your pipeline is set up. I can think of a few ways to do this.
- Generate the vertices of the curve and store them in a vertex buffer. You can do this on the CPU, or on the GPU with a compute shader. The CPU route is likely easier, and as long as you're only uploading the new vertices it should be fairly performant. The compute shader variant seems like more trouble than it's worth: if you're only creating a few new vertices every frame, a CPU-based approach isn't much more expensive.
- Store the points on the curve and calculate the signed distance field of the curve. This method is the most flexible as you won't have to resize the mesh if you want to zoom in. Rendering is a bit more expensive as you basically do all the math for every curve every frame, but with modern GPUs that shouldn't be that big of an issue.
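To make the second option concrete, here is a rough sketch of what the fragment-shader side could look like: the sampled stroke points are bound as a storage buffer, a quad (or coarse mesh) covering the stroke is drawn, and each fragment computes its distance to the nearest segment. All names (`StrokePoint`, `stroke`, `frag_pos`) and the point layout are hypothetical, not code from this project.

```wgsl
// Hypothetical layout: one stroke's sampled points in a storage buffer.
struct StrokePoint {
    pos: vec2<f32>,   // stroke-space position of the sampled point
    radius: f32,      // brush radius, e.g. derived from pen pressure
    _pad: f32,        // explicit padding so the CPU-side layout is obvious
};

struct Stroke {
    count: u32,                 // number of valid points
    points: array<StrokePoint>, // runtime-sized array, must be the last member
};

@group(0) @binding(0) var<storage, read> stroke: Stroke;

// Distance from point p to the segment a-b (assumes a != b).
fn dist_to_segment(p: vec2<f32>, a: vec2<f32>, b: vec2<f32>) -> f32 {
    let ab = b - a;
    let t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return length(p - (a + t * ab));
}

@fragment
fn fs_main(@location(0) frag_pos: vec2<f32>) -> @location(0) vec4<f32> {
    // frag_pos is assumed to come from the vertex shader in the same
    // coordinate space as the stroke points.
    var coverage = 0.0;
    for (var i = 0u; i + 1u < stroke.count; i = i + 1u) {
        let a = stroke.points[i];
        let b = stroke.points[i + 1u];
        let d = dist_to_segment(frag_pos, a.pos, b.pos);
        let r = 0.5 * (a.radius + b.radius);
        // Soft 1-unit falloff at the edge of the brush; tweak for softer brushes.
        coverage = max(coverage, 1.0 - smoothstep(r - 1.0, r, d));
    }
    return vec4<f32>(0.0, 0.0, 0.0, coverage); // black ink, alpha = coverage
}
```

The loop over every segment per fragment is the cost mentioned above; if strokes get long, you can limit the covering mesh to a band around the curve so most fragments never run.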
Here's an example of the SDF approach I've used here
Here's the SDF of the strokes
Ultimately it's up to you what you want to do.
Closing for now