GeometryBasics.jl
Dealing with normals of sharp edges
For a smooth underlying shape, you want normals to vary smoothly as well. That way shading restores some of the smooth curvature that is lost in triangulation. If you have sharp edges though, this effect is undesired. In a cube, for example, you want normals to jump when moving from face to face, even if you're looking at the same position.
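To make the distinction concrete, here is a minimal sketch in plain Julia (not using any GeometryBasics internals) that computes a flat normal per face and a smooth per-vertex normal by averaging adjacent face normals; at a sharp crease the averaged normal smears the edge, which is exactly the undesired effect:

```julia
using LinearAlgebra

# Two triangles meeting at a 90° fold along the edge 2-3 (a sharp edge).
positions = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]]
faces = [(1, 2, 3), (2, 4, 3)]

# Flat shading: one normal per face, shared by all of its (duplicated) vertices.
face_normal(f) = normalize(cross(positions[f[2]] - positions[f[1]],
                                 positions[f[3]] - positions[f[1]]))
flat_normals = [face_normal(f) for f in faces]

# Smooth shading: one normal per vertex, averaging the normals of adjacent faces.
# At vertices 2 and 3 this blends the two face normals and rounds off the crease.
smooth_normals = [normalize(sum(flat_normals[i] for (i, f) in enumerate(faces) if v in f))
                  for v in eachindex(positions)]
```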
One way to deal with this is to duplicate vertices (i.e. position and normal) like I did in #36 a while ago. This is the case now for some primitives, like `Rect3D` or `Pyramid`.
(The bottom face has normals in the wrong direction here...)
Other primitives don't do this, like for example `Cylinder`.
We could go through all the mesh generating functions and duplicate vertices where needed, but I don't think that's a good idea. My concerns are the following:
- We should avoid repeating data. Currently a `Rect3D` generates 24 coordinates instead of 8. There are also only 6 unique normals, but we generate 24. That's a lot of extra fat...
- It's weird to get duplicate positions from `coordinates` as a user. For example, in MakieTeX I needed to transform a 3D bounding box and thought I could just do `map(v -> transform * v, coordinates(bbox))`, and then remembered that that is 3x more work than it should be.
I think the way to deal with this is to detach positions, normals, etc., for example by changing faces to have one index per attribute like in obj files. Or by adding some remapping functionality that can map vertex indices (1:24 for `Rect3D`) to attribute indices (1:8 for positions, 1:6 for normals). Making these changes should also mean that reading obj files (and probably other formats?) becomes easier. (I think https://github.com/JuliaIO/MeshIO.jl/pull/64 would become unnecessary and we could revert it, for example.)
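As a rough illustration of the "one index per attribute" idea (all names below are hypothetical; nothing here exists in GeometryBasics): a cube can be described by 8 positions, 6 normals, and faces that index each attribute separately, and the flat 24-vertex layout can still be reconstructed from that when needed.

```julia
# Hypothetical sketch: a quad face stores one index per attribute, as in obj files.
struct MultiIndexQuad
    position_indices::NTuple{4, Int}
    normal_indices::NTuple{4, Int}
end

# The 8 unique corners and 6 unique face normals of a unit cube.
cube_positions = [(Float32(x), Float32(y), Float32(z)) for z in 0:1 for y in 0:1 for x in 0:1]
cube_normals = [(-1f0, 0f0, 0f0), (1f0, 0f0, 0f0), (0f0, -1f0, 0f0),
                (0f0, 1f0, 0f0), (0f0, 0f0, -1f0), (0f0, 0f0, 1f0)]

cube_faces = [
    MultiIndexQuad((1, 2, 4, 3), (5, 5, 5, 5)),  # z = 0 face, normal -z
    MultiIndexQuad((5, 6, 8, 7), (6, 6, 6, 6)),  # z = 1 face, normal +z
    MultiIndexQuad((1, 2, 6, 5), (3, 3, 3, 3)),  # y = 0 face, normal -y
    MultiIndexQuad((3, 4, 8, 7), (4, 4, 4, 4)),  # y = 1 face, normal +y
    MultiIndexQuad((1, 3, 7, 5), (1, 1, 1, 1)),  # x = 0 face, normal -x
    MultiIndexQuad((2, 4, 8, 6), (2, 2, 2, 2)),  # x = 1 face, normal +x
]

# Reconstructing the flat, duplicated layout (24 positions and 24 normals):
flat_positions = [cube_positions[i] for f in cube_faces for i in f.position_indices]
flat_normals   = [cube_normals[i]   for f in cube_faces for i in f.normal_indices]
```

The flat arrays are what would go to the GPU; everything else could keep working with the 8 unique corners.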
I am not very familiar with this, but I believe there are conflicting desiderata for the representation.
- A general search for "openGL normal per face not per vertex" suggests that per-vertex normal vectors are important for graphics performance. For GPUs, flat representations generally seem to win, sometimes even if they are redundant by e.g. a factor of 3x.
- But in fact the geometry of `Rect3D` does contain 8 vertices. If 24 is needed to make GPUs happy, that's fine. But 8 is needed to make the mind's eye happy, and as you point out, it is better for other computations.

So it seems beneficial to support both.
> I think the way to deal with this is to detach positions, normals, etc., for example by changing faces to have one index per attribute like in obj files.
How are `obj` files loaded into the GPU? Does the representation need to be flattened before sending?
> A general search for "openGL normal per face not per vertex" suggests that per-vertex normal vectors are important for graphics performance. For GPUs, flat representations generally seem to win, sometimes even if they are redundant by e.g. a factor of 3x.
I'm not suggesting that this is the representation that should end up on a GPU. AFAIK you usually upload your mesh once and try to do all the transformations on the GPU, so reconstructing the flat representation from the compressed one would be a one-time cost. I also think the verbose version should still be around and usable if possible.
> How are `obj` files loaded into the GPU? Does the representation need to be flattened before sending?
The current process is to use MeshIO/FileIO to load it into a GeometryBasics mesh and to put that on the GPU (in GLMakie). Generally an obj file represents faces as `f pos/uv/normal pos/uv/normal ... pos/uv/normal`, where `pos/uv/normal` are 3 indices (or sometimes it's just `pos`, or `pos/uv`). These can be different and refer to indices into separate position/uv/normal arrays. So depending on the file you may need to remap these indices and duplicate normals, uvs, and/or positions to get a valid GeometryBasics mesh. This remapping process was added in the PR I linked.
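For illustration, here is a minimal sketch of that kind of remapping (this is not MeshIO's actual code; all names are made up). Each obj-style corner carries its own position/uv/normal indices, and every distinct index combination becomes one vertex in the output, duplicating attributes where necessary:

```julia
# One corner as parsed from an obj `f pos/uv/normal` entry (indices into separate tables).
const ObjCorner = NamedTuple{(:pos, :uv, :normal), NTuple{3, Int}}

function remap(obj_faces::Vector{NTuple{3, ObjCorner}}, positions, uvs, normals)
    vertex_of = Dict{ObjCorner, Int}()   # distinct index combination -> new vertex index
    new_positions = similar(positions, 0)
    new_uvs       = similar(uvs, 0)
    new_normals   = similar(normals, 0)
    new_faces     = NTuple{3, Int}[]
    for face in obj_faces
        push!(new_faces, map(face) do corner
            # Reuse the vertex if this exact pos/uv/normal combination was seen before,
            # otherwise append (and thereby duplicate) the attributes.
            get!(vertex_of, corner) do
                push!(new_positions, positions[corner.pos])
                push!(new_uvs, uvs[corner.uv])
                push!(new_normals, normals[corner.normal])
                length(new_positions)
            end
        end)
    end
    return new_faces, new_positions, new_uvs, new_normals
end
```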
> I'm not suggesting that this is the representation that should end up on a GPU.
But "this" representation is what currently ends up on the GPU, right? That's the reason why Rect3D
contains 24 vertices currently?
Pretty much, yea. https://github.com/JuliaPlots/GLMakie.jl/blob/dc6c460f1f86af5f856a290887ddd919a9f8615c/src/GLAbstraction/GLUtils.jl#L174
I'm thinking of something like this (view this as pseudocode): `normal_buffer = GLBuffer(mesh.normals[mesh.normal_indices])`, where `GLBuffer` is just a reference to the GPU array, so the intermediate array isn't kept.
I think we should be able to do this with `normals = view(normals, normal_indices)`.
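As a quick plain-Julia illustration of the `view` idea (no GPU involved, and the array contents are made up): indexing a view with the index array expands the unique normals to the per-vertex layout lazily, without allocating the duplicated array.

```julia
normals = [(0f0, 0f0, 1f0), (0f0, 0f0, -1f0)]   # unique normals (shortened for the example)
normal_indices = [1, 1, 1, 1, 2, 2, 2, 2]        # one entry per (duplicated) vertex
expanded = view(normals, normal_indices)         # lazy SubArray; no duplicated copy is made
@assert expanded[5] == normals[2]
@assert length(expanded) == length(normal_indices)
```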