p5.js-website
Using rect(0,0,0,0) in shader examples
Currently in the documentation, when a shader is used with the intent of creating a flat image spanning the full canvas (canvas.width by canvas.height), it is suggested to use rect(0, 0, width, height) as a way to provide the vertex and fragment shaders with geometry. For example (src, live):
function draw() {
  // shader() sets the active shader with our shader
  shader(theShader);
  // rect gives us some geometry on the screen
  rect(0, 0, width, height);
}
I think this approach creates a false mental model. One might get the impression (at least I did) that if you change the width and height parameters to rect, then you can control the dimensions of the image generated by the shader. In actuality, the width and height are completely irrelevant: using values of 0 yields an identical result.
function draw() {
  // shader() sets the active shader with our shader
  shader(theShader);
  // rect gives us some geometry on the screen
  rect(0, 0, 0, 0);
}
Digging into the source code a bit, I think this is because the only thing that matters is the uv coordinates. (The width and height are used only for scaling before rendering; they are not part of the vertex data (aPosition) generated for the rect geometry and passed to the vertex shader.)
p5.RendererGL.prototype.rect = function(args) {
  ...
  for (let i = 0; i <= this.detailY; i++) {
    ...
    for (let j = 0; j <= this.detailX; j++) {
      ...
      const p = new p5.Vector(u, v, 0);
      this.vertices.push(p); // <- same value
      this.uvs.push(u, v);   // <- same value
    }
  }
  ...
  // Only a single rectangle (of a given detail) is cached: a square with
  // opposite corners at (0,0) & (1,1).
  // Before rendering, this square is scaled & moved to the required location.
  ...
  try {
    this.uMVMatrix.translate([x, y, 0]);
    this.uMVMatrix.scale(width, height, 1);
    this.drawBuffers(gId);
  }
  ...
};
Using rect(0, 0, 0, 0) in the examples can help avoid a false mental model with regard to scaling of the image generated by the shader.
.----------------------------------------------------------
So, what would one actually need to do if they wanted to scale the generated image, i.e. have the image respond to the width and height parameters?
One possible approach, consistent with the current p5 implementation, is to make the shaders aware of the transformations applied by the modelView matrix.
vertex shader
For the vertex shader, the only change that needs to happen is to modify how gl_Position is generated, as shown below. The current basic.vert:
attribute vec3 aPosition;
attribute vec2 aTexCoord;

varying vec2 vTexCoord;

void main() {
  vTexCoord = aTexCoord;
  vec4 positionVec4 = vec4(aPosition, 1.0);
  positionVec4.xy = positionVec4.xy * 2.0 - 1.0;
  gl_Position = positionVec4;
}
Modified basic.vert:
attribute vec3 aPosition;
attribute vec2 aTexCoord;

uniform mat4 uProjectionMatrix;
uniform mat4 uModelViewMatrix;

varying vec2 vTexCoord;

void main() {
  vTexCoord = aTexCoord;
  vec4 positionVec4 = vec4(aPosition, 1.0);
  gl_Position = uProjectionMatrix * uModelViewMatrix * positionVec4;
}
fragment shader
For the fragment shader, nothing has to be changed if all the coordinates used are based on the uv coordinates provided by the vertex shader. For example, the code in basic.frag works as-is:
precision mediump float;

varying vec2 vTexCoord;

void main() {
  vec2 coord = vTexCoord;
  gl_FragColor = vec4(coord.x, coord.y, (coord.x + coord.y), 1.0);
}
For fragment shaders that use coordinates based on gl_FragCoord, a minor adjustment can be made to use uv coordinates instead.
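For instance, a hypothetical fragment shader that normalizes gl_FragCoord by a resolution uniform (uResolution is an assumed uniform name that the sketch would have to supply via setUniform) could read the interpolated uv directly:

precision mediump float;

varying vec2 vTexCoord;

void main() {
  // Before: vec2 uv = gl_FragCoord.xy / uResolution;
  // After: use the texture coordinate interpolated from the vertex shader
  vec2 uv = vTexCoord;
  gl_FragColor = vec4(uv.x, uv.y, 0.0, 1.0);
}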
.----------------------------------------------------------
Here is a live demo covering everything mentioned above.
@aferriss I'm tagging you here, since you did most of the work on these shader examples. What do you think?
Hey @JetStarBlues
I think this is probably my fault. I wrote the non-matrix-aware version as a way to write the shader more simply for my examples repo. But you're absolutely right that it creates a false mental model. I think changing the vertex shader to reflect a matrix-aware model would be the right move. I think these demos might have been pulled in from the website that was created at ITP, which was partially informed by some of my work.
Since it's not going to break anything, and only clears up the documentation examples, I'd suggest going ahead and making a PR with the changes you've proposed.
Thanks for taking the time to sniff this out!
Thank you for the feedback! Will give it a go.
Hi, just wanted to check in and mention that I do plan on creating a pull request for this issue. I haven't been able to find as much time as I would like to tackle this.
@aferriss @stalgiag Do you have any insight as to why the y-axis increases "downwards" in the p5 coordinate system?
OpenGL's coordinate system is such that the y-axis increases "upwards". That is, if you want to move an element "upwards" from the origin (0, 0, 0), you increase its y position, e.g. (0, 100, 0) to move it up by 100 units.
In p5, however, the inverse is true. If you want to move an element "upwards" from the origin (0, 0, 0), you decrease its y position, e.g. (0, -100, 0) to move it up by 100 units.
Consider this sketch:
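(A minimal reconstruction of the sketch based on the description below; not the original demo code:)

let cam;

function setup() {
  createCanvas(400, 400, WEBGL);
  cam = createCamera();
  // To test the modified up-direction, flip the sign of upY:
  // cam.camera(0, 0, 800, 0, 0, 0, 0, -1, 0);
}

function draw() {
  background(220);
  noStroke();

  // Red square at (0, -100, 0): rendered *above* the origin with the default camera
  push();
  translate(0, -100, 0);
  fill(255, 0, 0);
  plane(50, 50);
  pop();

  // Green square at (0, 100, 0): rendered *below* the origin with the default camera
  push();
  translate(0, 100, 0);
  fill(0, 255, 0);
  plane(50, 50);
  pop();
}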
Using the default camera, which has a default up-direction of (0, 1, 0), the red square, which has a position of (0, -100, 0), is rendered above the origin. The green square, which has a position of (0, 100, 0), is rendered below the origin.
When we change the camera's up-direction to (0, -1, 0), the red square renders below the origin and the green above, which is more in line with the standard OpenGL coordinate system.
Is the camera's up-direction at fault here? Or is something else the reason that p5's WEBGL y-coordinate is inverted?
Relation to shaders:
uv coordinates typically (*) have their (0, 0) origin at the bottom-left. However, when we use the default matrices to calculate gl_Position (via gl_Position = uProjectionMatrix * uModelViewMatrix * position), the origin moves to the top-left.
The gradient in the background of the two images above is generated by a frag shader that sets gl_FragColor = vec4(vec3(uv.y), 1.0). The gradient is not "correct" with the default up-direction: the expected result is that the gradient is darkest at the "bottom" (where uv.y is 0) and gets lighter towards the top. (Sidenote: using the modified up-direction also affects uv.x, moving the uv origin to the bottom-right, which makes me think the camera up-direction might be a red herring.)
I am not sure why applying the matrices to gl_Position affects the uv coordinate system. When the matrices are ignored (as in the current examples), the uv origin is at the expected bottom-left. (Live example here.)
Knowing the cause would be helpful in finding a good workaround. E.g. whether every fragment shader should have a line like this:
// TODO, explanation
uv.y = 1.0 - uv.y;
Or whether the solution lies elsewhere...
(*) Maybe it's just an OpenGL convention?
Hi @JetStarBlues, nice research! This decision was made before my time as a contributor, but I believe it was made to keep the y-axis similar to the 2D renderer. Why that is the case, and why the xy-origin is not the top-left corner, has always been a mystery to me.
"made to keep the y-axis similar to the 2D renderer"
Ah, I see. Thank you for the insight!
Thinking of including the following explanation in the updated .frag files. It's a bit fuzzy because I don't quite understand what happens behind the scenes. Would love some thoughts on the explanation.
/* p5 uses y-coordinates that decrease as you go "upwards".
   However, conventional(?) y-coordinates *increase* as you go "upwards".
   As a result, fragment shaders are typically written with the assumption
   that the origin (0, 0) for the uv coordinates lies at the bottom-left.
   Below, we "flip" uv.y so that it follows this convention of increasing upwards. */
uv.y = 1.0 - uv.y;
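For context, a full fragment shader applying the flip might look like this (a sketch to illustrate placement, not the proposed PR code):

precision mediump float;

varying vec2 vTexCoord;

void main() {
  vec2 uv = vTexCoord;
  // Flip uv.y so that (0, 0) is at the bottom-left, per the explanation above
  uv.y = 1.0 - uv.y;
  // Gradient: darkest at the bottom, lighter towards the top
  gl_FragColor = vec4(vec3(uv.y), 1.0);
}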