Primitive Shapes
Can there be fixed point versions of these primitives for deterministic collision purposes?
Did you mean a proposal for 2d and 3d geometric primitives? :)
Although orthogonal to the representation of these geometric primitives, I presume it would be logical to include mesh generation support for all of these. As such, would it be preferred to distinguish between a geometric primitive and a mesh generator, or keep them together? That is to say, either this:
```rust
let bounding = bevy_geom::D3::Sphere;
// _and_
let mesh: Mesh = bevy_render::shape::Sphere::default().into();
```
Or
```rust
let primitive = bevy_geom::D3::Sphere;
// optionally, if necessary
let mesh = primitive.make_mesh();
```
In my opinion, giving `bevy_geom`'s primitives the ability to generate the corresponding meshes would be the best option, instead of creating two separate `Sphere`/`Torus`/etc.
Note that `bevy_geom::2d`/`3d` are not valid module names; hence why I substituted with `D3`.
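As a rough illustration of the single-type approach (a sketch only; the module path, field names, and the `From` impl here are assumptions, not the RFC's API), the primitive itself could be the one public type, with meshing provided as a conversion:

```rust
use bevy_render::mesh::Mesh;
use glam::Vec3;

// Illustrative primitive; real field names are up to the RFC.
#[derive(Clone, Copy)]
struct Sphere {
    origin: Vec3,
    radius: f32,
}

// Meshing lives behind a conversion, so the same value can be used for
// bounding/collision and, only when needed, turned into a renderable mesh.
impl From<Sphere> for Mesh {
    fn from(_sphere: Sphere) -> Mesh {
        // ... generate UV-sphere vertices/indices here ...
        todo!()
    }
}

// let bounding = Sphere { origin: Vec3::ZERO, radius: 1.0 };
// let mesh: Mesh = bounding.into();
```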
In my opinion, giving `bevy_geom`'s primitives the ability to generate the corresponding meshes would be the best option, instead of creating two separate `Sphere`/`Torus`/etc.
Definitely agree. To keep the attached PRs small, I'll probably split this up:
- Add the primitives
- Refactor mesh generation
- Refactor other systems?
Note that `bevy_geom::2d`/`3d` are not valid module names; hence why I substituted with `D3`.
Nice catch, I always forget about that with "2d" and "3d".
Can there be fixed point versions of these primitives for deterministic collision purposes?
This should be feasible; we'd want to pair it with a change to the Transform system to support this too, though. I'm genuinely unsure about how we should best split the alternative coordinate representations overall. It could be done with generics, basic code duplication, feature flags, an external crate...
I was about to say the same thing. I'm not sure I know what the best approach would be here.
Some options in order of implementation difficulty:
- Don't implement fixed point, all representations stay as float types. If someone needs fixed point for simulation, that can be handled internally in their physics system, which can just convert the outputs to floats.
- Don't implement fixed point yet and make it a separate RFC. The scope of that change might benefit from it. Downside is that the longer it waits, the harder it will be to implement.
- Implement fixed point for primitives and transforms. This will almost certainly complicate the API, unless someone has a clever/existing solution without that drawback (one possible shape of such a solution is sketched below).
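Purely as a sketch of what the generic route might look like (none of these names are settled; the fixed-point scalar type is a hypothetical stand-in), a scalar type parameter with an `f32` default would keep the float path unchanged:

```rust
// Sketch: scalar type parameter with an f32 default. Float users are
// unaffected, while a deterministic simulation could substitute a
// fixed-point scalar type from an external crate.
#[derive(Clone, Copy, Debug)]
struct Sphere<T = f32> {
    origin: [T; 3], // stand-in for a Vec3<T>-style vector type
    radius: T,
}

// The common case reads exactly as before:
type SphereF32 = Sphere;

// Hypothetical fixed-point usage (type name is illustrative only):
// type SphereFixed = Sphere<I32F32>;
```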
Don't implement fixed point yet and make it a separate RFC. The scope of that change might benefit from it. Downside is that the longer it waits, the harder it will be to implement.
I think this is the way to go; the scope is large, it cuts across quite a few domains and it deserves its own motivation.
Is this the relevant issue from the main bevy repo? https://github.com/bevyengine/bevy/issues/1680
I recently had a use case for drawing points/single pixels.
Should this be considered a special case of... several things? I mean a line with length one, a 1x1 rect, circle with diameter 1 etc. could probably be considered points.
But perhaps having a separate type (in both `bevy_geom::2d` and `bevy_geom::3d`) for it might allow for better semantics and/or optimizations.
I think a separate type for points is better: the semantics are different, and the optimizations are very different.
I'm also of the opinion that it's a valid geometric primitive too: it's used in a lot of different places in modelling and you can approximate stuff quickly by treating them as points.
I think this is ready for review. Happy to discuss any suggestions!
I don't know if name bikeshedding is wanted at this point. If not, feel free to disregard this.
With that said, I think `Ray` is a less confusing name than `Axis`, both because it is more common and because it's mathematically unambiguous. To me, `Axis` implies a basis vector, which does not necessarily have a specific origin. This is the meaning used by the `Quat::from_axis_angle` method, for example, and I think overloading the meaning of "axis" in different parts of the bevy API would be a mistake.
I might also rename `Line` to `LineSegment`, because "line" usually implies an infinite line, but I don't feel particularly strongly about that one.
I also think it makes sense to rename the 3d `Circle` to `Sphere`, but I feel even less strongly about it.
I'm very happy about making a distinction between points and directions. This will make using the transform API much harder to mess up.
These are great points @Benjamin-L, thank you! Name bikeshedding is definitely welcome; I almost called the axis a `Ray` but was unsure if that would be unclear.
I also think it makes sense to rename the 3d Circle to Sphere, but I feel even less strongly about it.
The intent with the 3d versions of planar geometries (circle, etc.) was to have a way of describing a circle oriented in 3d space, not a sphere. For example, drawing a circle to make debug shapes like this:
I've also been thinking about how this interacts with the `Transform` object. The first thing I noticed is that we can change the type of the `translation` field from `Vec3` to `Direction`. There are almost certainly other parts of the code where `Vec3`s can be replaced with a more specific type. We will want to make a decision about which of these replacements we want to do, and part of the implementation of this RFC should involve going through and finding all of these. I would lean towards making the replacements for every existing instance.
Another thing that might be worth including in this RFC is a `Transformable` trait. All of the types declared here except for `Circle`, `Arc`, `Aabb`, and `Oobb` can be passed through a transformation and produce an object of the same type. There are a few options I can think of for this, with different tradeoffs of API complexity vs granularity.
The simplest option is something like this:
```rust
trait Transformable {
    fn transform(&self, transform: &Transform) -> Self;
}

// implementations for everything except Circle, Arc, Aabb, and Oobb
```
Notably, the reason that `Circle`, `Arc`, and `Oobb` cannot be transformed to the same type is that `Transformation`s can include nonuniform scale. (For `Oobb` it's because of shear, which is effectively a consequence of nonuniform scale.) We could decide to split up the `Transform` type to make a distinction between isometries (rotation + translation), similarities (uniform scale + rotation + translation), and affine transformations (nonuniform scale/shear + rotation + translation). All three of those shape types still work with isometries and similarities. The API could look something like this:
```rust
struct Similarity {
    scale: f32,
    rotation: Quat,
    translation: Direction,
}

struct Transformation {
    scale: Vec3,
    rotation: Quat,
    translation: Direction,
}

impl From<Similarity> for Transformation { ... }
impl TryFrom<Transformation> for Similarity { ... }

trait TransformableBy<T> {
    fn transform(&self, transform: &T) -> Self;
}

impl<T: TransformableBy<Transformation>> TransformableBy<Similarity> for T { ... }

// impl TransformableBy<Transformation> for everything except Circle, Arc, and Oobb
// impl TransformableBy<Similarity> for Circle, Arc, and Oobb
```
Finally, `Aabb` is in a weird position. The reason that it can't be transformed even by an isometry is that it needs the orientation to be axis-aligned. Despite this, there is an operation of type `(Transform, Aabb) -> Aabb` that is pretty useful: computing the tightest possible AABB around a transformed AABB. Note that, for AABBs constructed as the bounds around another primitive, the result of this operation is not the same thing as computing the tightest bounds around the transformed primitive itself. It is also not the same thing as transforming the AABB itself, an operation which results in an OOBB. I think the thing that makes the most sense here is to add an `Output` associated type to the `TransformableBy` trait, allow AABBs to be transformed by a similarity to produce an OOBB, and make a method called `Aabb::tightest_transformed_bounds` (or something like that). This also opens up the opportunity for an ellipse primitive and a `TransformableBy<Transformation, Output = Ellipse>` impl for `Circle`.
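A minimal sketch of that `Output` associated type idea, reusing the `Similarity`, `Sphere`, `Aabb`, and `Oobb` types discussed above (names and impls are illustrative, not settled API):

```rust
// Sketch only: the associated Output type lets a transform change the
// shape's type when it has to (e.g. Aabb -> Oobb under rotation).
trait TransformableBy<T> {
    type Output;
    fn transform(&self, transform: &T) -> Self::Output;
}

// A sphere stays a sphere under a similarity (uniform scale preserves it).
impl TransformableBy<Similarity> for Sphere {
    type Output = Sphere;
    fn transform(&self, _t: &Similarity) -> Sphere {
        todo!()
    }
}

// An axis-aligned box transformed by a similarity is, in general, oriented.
impl TransformableBy<Similarity> for Aabb {
    type Output = Oobb;
    fn transform(&self, _t: &Similarity) -> Oobb {
        todo!()
    }
}
```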
The intent with the 3d versions of planar geometries (circle, etc.) was to have a way of describing a circle oriented in 3d space, not a sphere. For example, drawing a circle to make debug shapes like this
In this case we need a way of specifying the orientation of the circle. Following the representation for `Plane`, it would probably be a `normal: Direction` field. I also think that there should probably be a `Sphere` type either way.
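Concretely, that could look something like the following (field names are only illustrative, reusing the RFC's `Point` and `Direction` types):

```rust
// A circle oriented in 3d space, following Plane's normal-based
// representation, alongside a separate Sphere type.
struct Circle3d {
    center: Point,
    radius: f32,
    normal: Direction, // orientation of the plane the circle lies in
}

struct Sphere {
    center: Point,
    radius: f32,
}
```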
More generally, several of the primitives that are defined for 3D have a mapping from 2D, which we could present in the API. Maybe something like this:
```rust
trait To3D {
    type Output;
    // More general mapping from the 2D plane to arbitrary planes in 3D
    fn to_3d(&self, origin: Point, plane_normal: Direction) -> Self::Output;
    // Less general mapping to only the XZ plane
    fn to_3d_xz(&self) -> Self::Output;
}
```
In this case we need a way of specifying the orientation of the circle. Following the representation for `Plane`, it would probably be a `normal: Direction` field. I also think that there should probably be a `Sphere` type either way.
Yep, this was a mistake on my part when copying the 2d->3d types. I have some edits in the works right now to correct this.
Okay, I swear this is the last one (for now):
This representation of the frustum is kinda weird. My general preference for designing types like this is that there should be a single source of truth for what the object is, at least from the perspective of the public API. Additionally, whenever possible, any values for the fields that make up this "source of truth" should represent a valid object. Another way of stating the same idea is "illegal states should be unrepresentable" or "objects should not be overconstrained".
From that perspective, the definition of a frustum that we actually care about is a cone between two parallel rectangles where the vector between the centers of the rectangles is normal to the planes that the rectangles lie in. The current representation in the RFC is a set of six arbitrary planes. I think a representation more like this would be better:
```rust
struct Frustum {
    near_plane_center: Point,
    near_plane_half_extents: Vec2,
    normal: Direction,
    far_plane_distance: f32,
    far_plane_half_extents: Vec2,
}
```
With this definition, any values for the type's fields will represent a valid frustum. It also makes me want to consider adding a `HalfExtent2D` and `HalfExtent3D` type.
I initially thought that this representation would be bad because it wouldn't be as efficient to do comparisons against, but I actually don't think that is the case. The most efficient representation for doing intersection tests is actually a perspective projection matrix. You can just multiply by the matrix and then check that `-1 < x < 1`, `-1 < y < 1`, `near < z < far`.
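For a single point, that test might look roughly like the following (a sketch using glam; it assumes a 0..1 clip-space depth range and compares against `w` instead of performing the perspective divide):

```rust
use glam::{Mat4, Vec3};

/// Point-in-frustum test in clip space. `view_proj` is assumed to map world
/// space into clip space with a 0..1 depth range.
fn point_in_frustum(view_proj: Mat4, point: Vec3) -> bool {
    let clip = view_proj * point.extend(1.0);
    clip.x.abs() <= clip.w
        && clip.y.abs() <= clip.w
        && clip.z >= 0.0
        && clip.z <= clip.w
}
```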
I initially thought that this representation would be bad because it wouldn't be as efficient to do comparisons against, but I actually don't think that is the case. The most efficient representation for doing intersection tests is actually a perspective projection matrix. You can just multiply by the matrix and then check that -1 < x < 1, -1 < y < 1, near < z < far.
I'm actually not sure that's true. From what I've seen when I looked into frustum culling, often times plane comparisons are used because you can end early depending on the order of your plane checks. E.g. many games don't have much verticality, so a left and right plane test is enough for the majority of entities. With that said, I don't have any supporting evidence for that claim, but I do know the 6-plane representation of a frustum was somewhat common when I looked into this.
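For reference, a plane-based sphere test with that early-out behaviour could be sketched like this (assumed representation: six inward-facing unit normals with plane offsets, not the RFC's actual `Frustum` API):

```rust
use glam::Vec3;

struct CullingPlane {
    normal: Vec3, // assumed unit length, pointing into the frustum
    d: f32,       // plane equation: normal.dot(p) + d = 0
}

/// Sphere-vs-frustum test that can exit on the first separating plane.
/// Ordering the planes (e.g. left/right first) lets most entities exit early.
fn sphere_in_frustum(planes: &[CullingPlane; 6], center: Vec3, radius: f32) -> bool {
    for plane in planes {
        // Entirely outside any single plane means the sphere is culled.
        if plane.normal.dot(center) + plane.d < -radius {
            return false;
        }
    }
    true // conservative: spheres near edges/corners may still be reported visible
}
```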
My general preference for designing types like this is that there should be a single source of truth for what the object is, at least from the perspective of the public API. Additionally, whenever possible, any values for the fields that make up this "source of truth" should represent a valid object. Another way of stating the same idea is "illegal states should be unrepresentable" or "objects should not be overconstrained".
I completely agree with this. However, we can make the struct fields private and force users of the API to go through setter methods to prevent them from creating invalid frustums. The same reasoning should be applied to `Direction`s, which we will want to always be normalized. Also, the `Frustum` struct you propose doesn't allow for oblique frustums. I've gone back and forth on the definition of the frustum struct, and would need a compelling reason to change from what I've landed on. I think the (great) points you brought up are all addressable with the current definition.
I'm actually not sure that's true. From what I've seen when I looked into frustum culling, often times plane comparisons are used because you can end early depending on the order of your plane checks.
Oh, that's a very good point! With that in mind, I think the current representation is good, but I agree that we should make the fields private and interact with them via getters/setters. If we want to make efficient incremental updates with setters, we might have to store some additional data in the private fields, but we could also go with having the only way to modify the frustum be to construct a new one, which I think is acceptable.
Putting this back in draft state to incorporate review feedback.
Thanks @bitshifter, appreciate this review. I don't have your experience here, but I can see it being advantageous to lean on the type system at the game engine level, while it might not make sense at the `glam` library level.
I'm going to wait on @cart's feedback before adding any depth to the API. I'm feeling good about the design/layout of the types at this point, but I want to be mindful of scope.
I think it's worthwhile to generate bounding volumes (AABB) for shapes that aren't boxes/spheres. Most game engines will use this information to cull meshes using frustum culling. It would be slightly more performant to generate these when building out the primitive meshes or when loading in a specific mesh.
I think it's worthwhile to generate bounding volumes (AABB) for shapes that aren't boxes/spheres. Most game engines will use this information to cull meshes using frustum culling. It would be slightly more performant to generate these when building out the primitive meshes or when loading in a specific mesh.
@StarArawn I talk about this a bit here and here. I've built a bounding prototype and frustum culling prototype, and I think these are out of scope of this RFC. The types have been designed to facilitate that use case, but I think the implementation detail is better left for another RFC - there is a lot to consider there.
I'd also like to point out the distinction between colliders and bounding volumes I discuss in the link above. From my research into existing implementations, AABB and bounding spheres are really the only two bounding volumes used for broad phase bounding and culling - nothing else (including OBB) matches their size and performance.
So I think the `NormalizedDirection` wrapper type is the only thing we need to get right at this stage. Once that's done this has my seal of approval.
IMO this functionality should live in a distinct `bevy_geometry` crate, as it will be depended on by both `bevy_render` and `bevy_physics`, while you may want one of those without the other in either combination.
@alice-i-cecile I think some of these primitives will already be desirable to use in `bevy_render` and `bevy_physics`, namely `AABB`, `BoundingSphere`, and `Frustum`. I don't think I agree with pulling just `Direction` out into its own crate.
My thought on normalizing `Direction` (or `NormalizedDirection`) is that the constructors should run `.normalize()` on the input any time it is mutated. This prevents the need to constantly `normalize()` your `Vec3` when using it as a direction in `nalgebra` or `glam` operations that expect a normalized vector, and makes it a fixed cost of once per mutation. I can specify these methods if you think that is valuable and in scope.
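A minimal sketch of that normalize-on-mutation idea (method names are illustrative, not the proposed API):

```rust
use glam::Vec3;

/// Direction that pays the normalization cost once per mutation instead of
/// at every use site.
#[derive(Clone, Copy, Debug)]
pub struct Direction(Vec3);

impl Direction {
    pub fn new(v: Vec3) -> Self {
        Direction(v.normalize())
    }

    pub fn set(&mut self, v: Vec3) {
        self.0 = v.normalize();
    }

    /// Always returns a unit vector; callers never need to re-normalize.
    pub fn get(&self) -> Vec3 {
        self.0
    }
}
```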
I don't think I agree with pulling just Direction out into its own crate.
Oh! I meant pull the whole batch of them out as a cohesive whole!
My thought on normalizing Direction (or NormalizedDirection) is that the constructors should run .normalize() on the input any time it is mutated. This prevents the need to constantly normalize() your Vec3 when using it as a direction in nalgebra or glam operations that expect a normalized vector, and makes it a fixed cost of once per mutation. I can specify these methods if you think that is valuable and in scope.
My suspicion is that there may be applications for which normalizing the vectors is wasted work that you may want to be able to skip sometimes. If you think that's unlikely, I'd stick to only using `Direction` and note that it's normalized on assignment/mutation.
I might have a way to do this in a zero-cost way using the typestate pattern. To absolutely minimize the use of `normalize`, we could memoize the result only when accessed. We don't want to make Direction an enum, as that will add a discriminant, and we don't want to have multiple types of normalized and unnormalized versions of all our primitives. So instead, we could do:
```rust
struct Plane {
    point: Point,
    normal: &impl Direction,
}

struct NonNormalizedDirection {
    data: Vec3, // private
}
impl Direction for NonNormalizedDirection {}

struct NormalizedDirection(Vec3);
impl Direction for NormalizedDirection {}
```
When a `Plane` is mutated or built, the normal will be a `NonNormalizedDirection`. Once it is accessed, the `Plane`'s getter method will normalize the direction and swap it out with a `NormalizedDirection`. Now the normalized direction is memoized in the `Plane` without increasing the size of the type!
Edit: This complexity can be completely hidden from users; they will only be exposed to the `Plane::normal()` and `Plane::set_normal()` methods.
Oh interesting! That's the sort of thing I'll want to see realistic benchmarks for before getting quite that clever :p
Has the usage of Parry been considered? Given that Rapier will be the go-to physics library for anything that uses (3d) physics, shouldn't we aim for maximum interoperability and seriously consider just using Parry?
I realize that the type parameters of nalgebra are considered a pain, but everybody who wants physics is now going to have to deal with that on their own, rather than dealing with it at a fundamental level?
At the very least, the choice to not use Parry should be documented.
Has the usage of Parry been considered? Given that Rapier will be the go-to physics library for anything that uses (3d) physics, shouldn't we aim for maximum interoperability and seriously consider just using Parry?
I think interoperability with rapier is very important, but then I am using it and expect to continue using it. It seems to me to be a strong option and the best option for Rust projects.
I realize that the type parameters of nalgebra are considered a pain, but everybody who wants physics is now going to have to deal with that on their own, rather than dealing with it at a fundamental level?
There are mint conversions, I think, so all you need should be a `.into()` in most cases?
At the very least, the choice to not use Parry should be documented.
I agree with this. By that I don't mean to lean toward choosing it or not. I just mean that I think we should make a choice and document why that choice was made in that way.
I agree with this. By that I don't mean to lean toward choosing it or not. I just mean that I think we should make a choice and document why that choice was made in that way.
Exactly what I meant.
A couple questions to keep this discussion moving forward:
- What parts of this RFC are already covered by Parry, what would we have to implement additionally?
- Is there anything in Parry that is incompatible with this RFC?
My take is that in general, bounding and collision implementation detail and discussion are out of scope.
To reframe - the critical question I want to answer is: Are the provided types in this RFC appropriate for their probable use in meshing, bounding, and collisions? The traits I've provided in this RFC are for reference only, they exist to illustrate how the types themselves might be used and to hopefully uncover deficiencies we can solve.
Whether or not we use Parry in Bevy is inconsequential to me. The only useful discussion I see around this topic is "are these types fundamentally incompatible with Parry's types?" I doubt this is the case because I used Parry/Rapier as one of my primary references.
I've outlined here why I think we need to have Bevy-owned primitive shape types.
- What parts of this RFC are already covered by Parry, what would we have to implement additionally?
The only overlap lies in the primitive types themselves.
- Is there anything in Parry that is incompatible with this RFC?
Not that I'm aware of. This RFC stakes no claim on how physics or bounding should be implemented. It only provides the types needed to allow people to start making prototypes to test what the Bevy-official implementation might look like. The most fundamental building blocks (AABB/OBB/BSphere) can be implemented using these types without overhead, as discussed in the RFC.
@cart, this is ready for review. I'd like to get your feedback before making any further changes.
I've outlined here why I think we need to have Bevy-owned primitive shape types.
That paragraph is:
Rationale and alternatives
An argument could be made to use an external crate for shape primitives; however, these types are so fundamental that it's important they are optimized for the engine's most common use cases, rather than coming from a generalized solution. In addition, the scope of this RFC does not include the implementation of features like bounding, collision, raycasting, or physics. These are acknowledged as areas that (for now) should be carried out in external crates.
I don't understand, though. Wouldn't choosing to use Parry for bounding, collision, raycasting or physics imply having to use its types? And having separate Bevy types then means converting between them every time you cross the boundary?
Why then, if Parry seems to be the likely candidate (there's hardly alternatives, except custom built stuff), would we even want to own these types?
Alternatively, if we expect people to plug in their own thing, shouldn't we just not include these types at all except maybe as a non-default plugin/library so that the different plugins for bounding, etc can use their own appropriate types?
I've outlined here why I think we need to have Bevy-owned primitive shape types.
That paragraph is:
Rationale and alternatives
An argument could be made to use an external crate for shape primitives; however, these types are so fundamental that it's important they are optimized for the engine's most common use cases, rather than coming from a generalized solution. In addition, the scope of this RFC does not include the implementation of features like bounding, collision, raycasting, or physics. These are acknowledged as areas that (for now) should be carried out in external crates.
I don't understand, though. Wouldn't choosing to use Parry for bounding, collision, raycasting or physics imply having to use its types? And having separate Bevy types then means converting between them every time you cross the boundary?
Why then, if Parry seems to be the likely candidate (there's hardly alternatives, except custom built stuff), would we even want to own these types?
Alternatively, if we expect people to plug in their own thing, shouldn't we just not include these types at all except maybe as a non-default plugin/library so that the different plugins for bounding, etc can use their own appropriate types?
Considering Parry uses nalgebra under the hood, even if we were going to use those types we would need to convert to glam types to interoperate with the rest of the engine anyway. In that sense, owning the types doesn't add overhead to conversion, we would just own the process. Parry plugins already exist for bevy. Adding these primitive types wouldn't break those plugins, it just adds the ability to interoperate.
Using Parry types would also mean every Bevy crate that wants to implement some geometry-focused feature now needs to use both the nalgebra and glam math types, which doesn't make any sense to me.
In addition, parry types are opinionated for physics/raycasting use. The goals of those types simply don't align with the goals I've outlined. Our shape primitives should be able to be used, for example, for meshing in 2d and 3d, our internal frustum culling implementation, and perhaps clustered rendering. If we want to add a 2d arc type for UI, we would need to convince parry to start adding primitives that are completely orthogonal to its goals.
That's exactly the kind of explanation for why not Parry that I meant. Thanks for that, @aevyrie !
I'll leave it up to you whether to put it in the RFC or leave it just here in the discussion. It can at least be searched for.
Just passing through to note that the bounding volume functionality will also be useful for fitting orthographic projections for directional light shadow receivers within part of the camera's view frustum and the shadow casters between that set and the light source.
I'm very confused by the code example in the Direction Normalization section, as `impl Trait` is not permitted or planned to be permitted (afaik) in `struct`s, and it's not clear what it's intended to signify here. Whether or not a direction is normalized has to be tracked somewhere, but this RFC seems to reject both places it could be stored:
If the state is tracked at compile time, `Direction` would have a type parameter representing whether or not it has been normalized. This is the typestate pattern the RFC references. But by necessity such a type parameter would have to be propagated to all types storing a `Direction`, which seems incompatible with the statement that "we don't want to have normalized and unnormalized types for all our primitives." Also, this method would require all operations on `Direction`, including conversion to a `Vec3`, to take ownership and return a new `Direction`, since the type(state) of a value can never change in-place.
If the state is tracked at runtime, `Direction` would be an enum. The in-place change implied by "Once it is accessed, the `Direction`'s getter method will normalize the direction, and swap it out with a NormalizedDir." suggests this solution, but the RFC states that "We don't want to make Direction an enum, as that will add a discriminant." It is worth noting that making `Direction` an enum would sacrifice shared access to a `Direction`, either completely if the getter took `&mut self` to perform the swap, or across threads (`Direction: !Sync`) if interior mutability were used.
The only other possibility I can think of would be for `Direction` to store a `dyn Normalizable` trait object, but this solves none of the enum solution's problems and introduces a steep performance cost. A size advantage would only exist on 32-bit platforms, in return for incurring the cost of dynamic dispatch and possibly allocation on basic math operations, which seems like a poor tradeoff.
What is the intended interpretation of this section?
@alphamodder
I'm very confused by the code example in the Direction Normalization section, as `impl Trait` is not permitted or planned to be permitted (afaik) in `struct`s, and it's not clear what it's intended to signify here. Whether or not a direction is normalized has to be tracked somewhere, but this RFC seems to reject both places it could be stored:
Yup, you're right, that was an outright mistake on my part!
But by necessity such a type parameter would have to be propagated to all types storing a `Direction`, which seems incompatible with the statement that "we don't want to have normalized and unnormalized types for all our primitives."
Yes, that is definitely confusing. My intent was to say that we don't need to expose normalized and unnormalized types for all primitives. E.g. it would be unwieldy if users had to handle those types themselves instead of it being handled internally through `Direction`'s interface. The goal of that proposal was to use the typestate pattern under the hood so we could express normalization constraints at compile time without adding runtime overhead.
If the state is tracked at runtime, `Direction` would be an enum. The in-place change implied by "Once it is accessed, the `Direction`'s getter method will normalize the direction, and swap it out with a NormalizedDir." suggests this solution, but the RFC states that "We don't want to make Direction an enum, as that will add a discriminant." It is worth noting that making `Direction` an enum would sacrifice shared access to a `Direction`, either completely if the getter took `&mut self` to perform the swap, or across threads (`Direction: !Sync`) if interior mutability were used.
These observations are very helpful.
The only other possibility I can think of would be for `Direction` to store a `dyn Normalizable` trait object, but this solves none of the enum solution's problems and introduces a steep performance cost. A size advantage would only exist on 32-bit platforms, in return for incurring the cost of dynamic dispatch and possibly allocation on basic math operations, which seems like a poor tradeoff.
Trait objects seem to be a poor choice for this, I agree. Do you have some suggestions on a better solution?
@aevyrie I'm afraid I don't know of any way to perfectly solve this problem at compile time. In practice, I would provide an `unsafe` escape hatch for normalization checks/renormalization and use it to implement as many common operations that provably do not alter the normalization of a vector as possible (e.g. linear interpolation, multiplication by a quaternion, cross product, etc.). In theory, if enough of the common operations on directions were provided this way, most users could avoid worrying about normalization. And if a user was performing some weird calculation not covered by the above, and profiling revealed that normalization checks were a bottleneck, the `unsafe` escape hatch would still be there for them.
EDIT: That said, the above is coming from the perspective of writing a math library, and might be a bit heavy/opinionated for a set of 'lightweight interoperability types.'
If I had to pick a runtime solution, I'd use an enum with an `&mut self` getter and either expect copying when mutable access is not possible or also provide a secondary `&self` getter.
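A rough sketch of that escape-hatch approach (all names are illustrative; whether a given operation truly preserves unit length would need to be verified per operation):

```rust
use glam::{Quat, Vec3};

/// Direction with a checked constructor for the general case, an unsafe
/// unchecked constructor as the escape hatch, and operations that preserve
/// unit length returning `Direction` without re-normalizing.
#[derive(Clone, Copy)]
pub struct Direction(Vec3);

impl Direction {
    pub fn new(v: Vec3) -> Self {
        Direction(v.normalize())
    }

    /// # Safety
    /// The caller must guarantee that `v` is already unit length.
    pub unsafe fn new_unchecked(v: Vec3) -> Self {
        Direction(v)
    }

    /// Rotating by a unit quaternion preserves length, so no check is needed.
    pub fn rotated_by(self, rotation: Quat) -> Self {
        Direction(rotation * self.0)
    }
}
```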
I've now tried to address the concrete actionable feedback that I've been able to identify from this discussion:
| Status | Action | Source | PR |
|---|---|---|---|
| Merged | Shapes should use `Shape2d` and `Shape3d` naming convention | https://github.com/bevyengine/rfcs/pull/12#discussion_r680267479 | https://github.com/aevyrie/rfcs/pull/2 |
| Merged | Remove `Point` as a component | https://github.com/bevyengine/rfcs/pull/12#discussion_r680269220 | https://github.com/aevyrie/rfcs/pull/4 |
| Merged | Remove `Angle` as a component | https://github.com/bevyengine/rfcs/pull/12#discussion_r680261521 | https://github.com/aevyrie/rfcs/pull/5 |
| Merged | Merge Transform & Bounding and collision sections | https://github.com/bevyengine/rfcs/pull/12#discussion_r680266155 | https://github.com/aevyrie/rfcs/pull/6 |
There are still some remaining unresolved discussions, and I'm not quite sure to what degree they need to be resolved in order to push this RFC forward.
- If / how to guarantee normalized `Direction`? Compile-time vs runtime ++
- By what descriptors should primitives be created? User-friendly vs algorithm-friendly: specifically surrounding `Box`, and whether to use `min-max` or `half_extents` (thread)
- Which shapes should have a `Meshable` trait or not? (thread)

There might be other important discussions I've missed.
A few comments:
- There exist 3d shapes that can be defined in terms of 2d shapes. Think of a pyramid with an arbitrary base, or a "cylinder" with an arbitrary shape as its base.
- A 2d shape in 2d space is different from a 2d shape in 3d space. I think it's relevant to make that distinction.
The problem I've encountered trying to implement my shapes library, at least following this RFC design (with many `struct`s rather than one massive fat `enum`), is finding and defining the proper conversion methods. If a `Pyramid` is defined in terms of a 2d shape, what do I store it as in the `struct Pyramid`? Which traits to define, and which one to use for those specific cases? This is not just about pyramids; as @cart points out, there are many ways to mesh a sphere, how do we handle that? It's actually a hard problem! This PR only hand-waves in `Bounding`, `Collider`, and `Meshable`, and doesn't really answer how they are to be used.
Maybe this doesn't need to be answered now! It can be subject to new RFCs. I don't think the "abstract" shapes proposed in the RFC are controversial at all, I like all the propositions everyone made here. Going from abstract shape to concrete mesh or line collection can be discussed in a new RFC. I'd even suggest that the way to handle contributions of new shapes should be discussed in some RFC.
@alice-i-cecile with so much churn on this RFC, I'd like to update my implementation attempt, and we can collect concrete feedback there. I have some thoughts on further modifications to the RFC, but at this point it's been reviewed so many times I don't think further review will be helpful unless we have an implementation to look at.
Here is my stupid simple shapes library https://github.com/nicopap/bevy-cool-shapes/blob/main/bevy_cool_shapes/src/lib.rs . I use it to show outline gizmos on screen like here https://github.com/nicopap/bevy-cool-shapes/tree/main/bevy_cool_shapes_render
Here are my thoughts:
- The initial RFC should only consider convex shapes. Concave shapes are much more difficult to work with, and will require convex decomposition for many uses.
- I suggest focusing on 3D closed shapes, such that all normals are pointing outward. 2D shapes are an entirely different thing, even in 3D - they can't really be closed, lack volume, a point can't be inside, etc.
- Focus on reworking existing shapes (boxes, spheres, and frustums) in the engine to be more consistent, rather than introducing shapes of questionable use. This will make it clear what operations are needed on these shapes.
Methods that would be valuable to me, and probably others:
- Compute the support point and normal for a given direction. This is the farthest point on a shape's surface along a given vector (such that it has the maximum dot product), and the normal of the surface at that point. This is all that GJK and similar algorithms need for fast distance calculation, penetration, etc. (a minimal support-function sketch follows below).
- For shapes that don't need GJK (e.g. AABBs), accelerated intersection tests.
- For simple shapes (box, sphere), finding the smallest shape enclosing another more complex shape
A lot of interesting operations on shapes can be reduced to checking for intersections.
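To illustrate the support-point query mentioned above (a sketch only, using ad-hoc shape structs rather than the RFC's types, and returning just the point without the surface normal):

```rust
use glam::Vec3;

struct Sphere {
    center: Vec3,
    radius: f32,
}

struct Aabb {
    min: Vec3,
    max: Vec3,
}

impl Sphere {
    /// Farthest point on the surface along `dir` (assumed non-zero).
    fn support(&self, dir: Vec3) -> Vec3 {
        self.center + dir.normalize() * self.radius
    }
}

impl Aabb {
    /// The corner that maximizes the dot product with `dir`.
    fn support(&self, dir: Vec3) -> Vec3 {
        Vec3::new(
            if dir.x >= 0.0 { self.max.x } else { self.min.x },
            if dir.y >= 0.0 { self.max.y } else { self.min.y },
            if dir.z >= 0.0 { self.max.z } else { self.min.z },
        )
    }
}
```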