A curve trait for general interoperation
This RFC describes a trait API for general curves within the Bevy ecosystem, abstracting over their low-level implementations.
It has a partial implementation in this draft PR. Integration with `bevy_animation` has a proof-of-concept prototype in this draft PR. Integration with `bevy_math`'s cubic splines has a proof-of-concept prototype in this draft PR.
The proposal looks great so far. Some thoughts:
- Since these curves are general purpose rather than specifically targeting animation, I see a lot of value in being able to express curves over any arbitrary range, e.g. `-1.0..=1.0`, `f32::NEG_INFINITY..=0.0`, `100.0..=200.0`. Instead of storing a duration, what about a `domain: RangeInclusive<f32>`?
- I may have missed it in the RFC, but how are ranges sampled outside of bounds? For a range with duration 1.0, what happens when you sample -1 or 2? Is it defined by extrapolating using the derivative or clamped to the nearest value? This is something that comes up a lot in animation curves.
Might have to have option/result samples. I think the general domain is worth considering.
Also, do we want this to be general enough to capture parametric surfaces? Because I think we can do this if `T` is not `Ord` (a `Vec2`, for example).
> The proposal looks great so far. Some thoughts:
>
> 1. Since these curves are general purpose rather than specifically targeting animation, I see a lot of value in being able to express curves over any arbitrary range, e.g. `-1.0..=1.0`, `f32::NEG_INFINITY..=0.0`, `100.0..=200.0`. Instead of storing a duration, what about a `domain: RangeInclusive<f32>`?
> 2. I may have missed it in the RFC, but how are ranges sampled outside of bounds? For a range with duration 1.0, what happens when you sample -1 or 2? Is it defined by extrapolating using the derivative or clamped to the nearest value? This is something that comes up a lot in animation curves.
Ah, I'm glad you brought this up. The basic form of `Curve<T>` itself is actually based on this animation draft RFC, so it has inherited some properties which might not be totally appropriate in a general setting. I think that your suggestion in (1) is quite reasonable. My main qualm is actually that Rust's `Range` types are not `Copy`, which is insanely annoying in situations like this where we really have no interest in actually using them as iterators (and this is a burden we pass on to the consumer).

As for (2) — good question. Once again, the animation RFC I linked suggested that sampling would always return a value, but that the way that out-of-bounds samples might be formed would be left implementation-specific. Perhaps @james7132 might chime in with some insight there. I think that it's tempting to believe that returning an error or `None` when the sample point is out of range is just enforcing invariants, but I wouldn't be surprised if there is a compelling rationale to the contrary. For curves defined algebraically, extrapolating outside of the given bounds also generally makes sense anyway (e.g. Hermite interpolation between two points), so I might be in favor of that kind of thing regardless.
> Also, do we want this to be general enough to capture parametric surfaces? Because I think we can do this if `T` is not `Ord` (a `Vec2`, for example).
I guess I'm a little confused by this, since there are no `Ord` constraints anywhere as far as I'm aware. I imagine that what you have in mind is encoding a two-parameter surface as a one-parameter family of curves; I would think that the main barrier there is the constraint that you need `T: Curve<Point>` where `T` itself is actually `Interpolable` in some reasonable fashion, but perhaps this is actually less challenging than I imagined when I started writing this response; e.g. a variant of `FunctionCurve<T>` (the thing holding a function as a wrapper to make a `Curve<T>`) could probably do the trick if you store a weight field (and use this to pass on the interpolation to the results of evaluation).
> Rust's `Range` types are not `Copy`
I had no idea, that's pretty unfortunate.
What if sampling outside was done by means of a wrapper? For example, a `Curve` that wraps another but has a domain over all reals? Sampling could look like `Clamp(my_curve).sample(...)` or `Hermite(my_curve).sample(...)`.
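A minimal sketch of what such a wrapper might look like, assuming a hypothetical pared-down `Curve` trait for illustration (the trait shape, the `(f32, f32)` domain representation, and the `Clamp` name are all assumptions, not the RFC's actual API):

```rust
// Hypothetical minimal trait for illustration; the RFC's actual trait differs.
trait Curve<T> {
    /// The closed domain as (start, end).
    fn domain(&self) -> (f32, f32);
    fn sample(&self, t: f32) -> T;
}

/// Wrapper extending a curve's domain to all reals by clamping the sample point.
struct Clamp<C>(C);

impl<T, C: Curve<T>> Curve<T> for Clamp<C> {
    fn domain(&self) -> (f32, f32) {
        (f32::NEG_INFINITY, f32::INFINITY)
    }
    fn sample(&self, t: f32) -> T {
        let (start, end) = self.0.domain();
        self.0.sample(t.clamp(start, end))
    }
}

/// Toy curve: y = x on [0, 1].
struct Line;
impl Curve<f32> for Line {
    fn domain(&self) -> (f32, f32) { (0.0, 1.0) }
    fn sample(&self, t: f32) -> f32 { t }
}

fn main() {
    let clamped = Clamp(Line);
    assert_eq!(clamped.sample(2.0), 1.0);  // clamped to the right endpoint
    assert_eq!(clamped.sample(-1.0), 0.0); // clamped to the left endpoint
    assert_eq!(clamped.sample(0.5), 0.5);  // in-domain samples pass through
}
```

An analogous `Hermite` wrapper would differ only in how it forms out-of-domain values (extrapolating from boundary derivatives instead of clamping).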
A random thought on the range business: we could implement our own `Copy` version of ranges (say, `T`), and then use `impl Into<T>` or `impl TryInto<T>` together with a corresponding implementation for certain Bevy ranges in order to still allow for the use of the `..=` syntax.
> What if sampling outside was done by means of a wrapper? For example, a `Curve` that wraps another but has a domain over all reals? Sampling could look like `Clamp(my_curve).sample(...)` or `Hermite(my_curve).sample(...)`.
I think something like that could work, but it would still be a little annoying to have to unwrap its sampling output all the time despite knowing it would always succeed.
> I think something like that could work, but it would still be a little annoying to have to unwrap its sampling output all the time despite knowing it would always succeed.
I was assuming `sample` would just return a `T`, not an `Option`/`Result`. If it won't always succeed, we could embed information for extrapolating in the result's `Err` variant instead, then maybe provide some convenience method for extrapolating:
```rust
fn clamp<T>(x: Result<T, OutOfBounds<T>>) -> T {
    match x {
        Ok(x) => x,
        // `out_of_bounds` has all the info we need to extrapolate out-of-bounds sampling
        Err(out_of_bounds) => todo!(),
    }
}

let curve = function_curve(0.0..=1.0, |x| x);
let out_of_bounds_sample = clamp(curve.sample(2.0)); // returns 2.0
```
> Rust's `Range` types are not `Copy`
FYI, this is slated to change over either the 2024 or 2027 edition boundary: https://github.com/pitaj/rfcs/blob/new-range/text/3550-new-range.md
> FYI, this is slated to change over either the 2024 or 2027 edition boundary: https://github.com/pitaj/rfcs/blob/new-range/text/3550-new-range.md
That's good to know! Honestly, I tinkered around and found that `RangeInclusive` is kind of poor anyway (it doesn't enforce any invariants and lacks virtually any useful methods other than `contains`...), so I'm leaning towards making our own `Interval` type with a `TryFrom<RangeInclusive<f32>>` implementation regardless.

(This wouldn't be very heavy, but it would enforce non-emptiness and include methods like `length`, `is_finite`, and `intersect`, and it would be `Copy`.)
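A rough sketch of what such an `Interval` could look like, based only on the properties listed above; the exact method names, the error type, and the strict `start < end` invariant are guesses for illustration:

```rust
use std::ops::RangeInclusive;

/// Sketch of the proposed `Interval`: a non-empty closed interval, possibly
/// unbounded on either side, that is `Copy`. Names here are illustrative.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Interval {
    start: f32,
    end: f32,
}

impl Interval {
    /// Rejects empty and NaN-bounded intervals, enforcing the invariant at construction.
    fn new(start: f32, end: f32) -> Result<Self, ()> {
        if start < end { Ok(Self { start, end }) } else { Err(()) }
    }
    fn length(self) -> f32 {
        self.end - self.start
    }
    fn is_finite(self) -> bool {
        self.length().is_finite()
    }
    /// The overlap of two intervals, failing if it is empty or degenerate.
    fn intersect(self, other: Self) -> Result<Self, ()> {
        Self::new(self.start.max(other.start), self.end.min(other.end))
    }
    fn contains(self, t: f32) -> bool {
        self.start <= t && t <= self.end
    }
}

/// Allows `(0.0..=1.0).try_into()` while keeping the invariant checks.
impl TryFrom<RangeInclusive<f32>> for Interval {
    type Error = ();
    fn try_from(range: RangeInclusive<f32>) -> Result<Self, ()> {
        Self::new(*range.start(), *range.end())
    }
}

fn main() {
    let unit: Interval = (0.0..=1.0).try_into().unwrap();
    assert!(unit.is_finite());
    assert_eq!(unit.length(), 1.0);
    assert!(unit.contains(0.5));

    let half_line = Interval::new(f32::NEG_INFINITY, 0.0).unwrap();
    assert!(!half_line.is_finite());

    // [0, 1] and (-inf, 0] only share the single point 0, which this
    // strict-inequality sketch treats as degenerate.
    assert!(unit.intersect(half_line).is_err());

    assert!(Interval::try_from(1.0..=0.0).is_err()); // empty range rejected
}
```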
Okay, here again to explain a couple of changes:

Firstly, I added `Interval`, which is a lightweight type representing a non-empty closed interval that is possibly infinite in either direction. This has three main benefits:
- Its invariants are enforced at the type level, which allows many cases of input sanitization to be removed from the API.
- It allows the provision of a large number of useful methods; these turned out to be more numerous than I expected while refactoring, so there are a number of those now.
- It is `Copy`.
Secondly, with the addition of `sample_checked` and `sample_clamped`, I now have a belief about the suite of `sample` functions that I am willing to defend: while we might alter what is named what here, I think that this is what the API should actually look like. My reasoning is mainly as follows:
- The existence of zero-overhead (i.e. fully unchecked) sampling in the trait definition is non-negotiable, since sampling is one of the main paths in applications and some may be performance-sensitive.
- I think that `sample_checked` and `sample_clamped` cover almost all use-cases to handle sampling outside of the domain.
Outside of this, I don't really care whether `sample_checked` gets renamed to `sample` and `sample` becomes `sample_unchecked` or something.
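To make the proposed suite concrete, here is a hedged sketch of how the three methods could relate, assuming a minimal hypothetical trait in which the curve knows its closed domain (the `(f32, f32)` domain representation and the default-method structure are illustrative, not the RFC's actual definitions):

```rust
/// Sketch of the proposed sampling suite; names follow the discussion
/// (unchecked `sample`, plus checked and clamped variants).
trait Curve<T> {
    /// The closed domain as (start, end).
    fn domain(&self) -> (f32, f32);

    /// Zero-overhead sampling; behavior outside the domain is
    /// implementation-defined (extrapolation, garbage, etc.).
    fn sample(&self, t: f32) -> T;

    /// Returns `None` outside the domain instead of extrapolating.
    fn sample_checked(&self, t: f32) -> Option<T> {
        let (start, end) = self.domain();
        (start <= t && t <= end).then(|| self.sample(t))
    }

    /// Clamps the sample point to the domain before sampling.
    fn sample_clamped(&self, t: f32) -> T {
        let (start, end) = self.domain();
        self.sample(t.clamp(start, end))
    }
}

/// Toy curve: y = t^2 on [0, 2].
struct Square;
impl Curve<f32> for Square {
    fn domain(&self) -> (f32, f32) { (0.0, 2.0) }
    fn sample(&self, t: f32) -> f32 { t * t }
}

fn main() {
    assert_eq!(Square.sample_checked(1.5), Some(2.25));
    assert_eq!(Square.sample_checked(3.0), None); // out of domain
    assert_eq!(Square.sample_clamped(3.0), 4.0);  // clamped to t = 2.0
}
```

The point of the default methods is that implementors only write the unchecked fast path, while both safe variants come for free from the domain.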
Read through the updates. Looks really good.
The interval type makes sense, and I agree `Range` doesn't seem like quite the right fit. Among other things, allowing curves over arbitrary intervals could simplify thinking about segments of splines as curves in their own right. I think we may want to investigate functions for sticking/appending curves together based on their domains or splitting them apart.
I do worry that requiring an interval domain will cause problems if we want to work with derivatives. It would be nice to be able to represent the first and second derivatives as "functional curves" where they can be determined analytically (e.g. when working with splines), but most Bezier splines aren't generally C2 (and some aren't even C1). I suppose a tangent curve could be a `Curve<Option<Vec3>>`, but this seems a bit clunky and generally at odds with the direction of `sample_unchecked`. This is a problem for moving frames as well.
One possible solution might be to allow curves over arbitrary measurable sets (or otherwise generalize `Interval`), but I'm not sure what that would look like, and I'd rather be able to work with derivatives without that sort of theoretical equipment.
Setting derivatives aside, I agree with your points about domain checking. My preference would be for `sample_unchecked` as the main implementation point, and then having the trait provide checked wrappers `sample` and `sample_clamped`. New users (who may be prone to bounds errors) will probably reach for `sample` first. Implementers will know not to write redundant bounds checking because of the function name.
By the way, I can think of two other traits that also provide interpolation: `Mix` and `Animatable`. You've suggested adding a blanket implementation for `VectorSpace`, and we could do the same here, but I wonder if we might want to simply generalize `Mix` and then have all the others extend it. It seems like there is pretty much one correct way to interpolate any given type, so the only reason to have multiple versions in different traits is name ergonomics.
Also, please disregard my earlier comment about `Ord`. I tossed this off in a hurry and confused it with something else I was thinking about with the RFC (I was playing with `sample(t: D) -> R` where `D: Field, R: Interpolable`).
> The interval type makes sense, and I agree `Range` doesn't seem like quite the right fit. Among other things, allowing curves over arbitrary intervals could simplify thinking about segments of splines as curves in their own right. I think we may want to investigate functions for sticking/appending curves together based on their domains or splitting them apart.
Yeah, this could be a good idea; I am a bit wary of scope creep at this point, since this seems like something that can be adjudicated outside of the RFC proper, but I agree that it's definitely worth investigating and seriously considering things in this direction.
> I do worry that requiring an interval domain will cause problems if we want to work with derivatives. It would be nice to be able to represent the first and second derivatives as "functional curves" where they can be determined analytically (e.g. when working with splines), but most Bezier splines aren't generally C2 (and some aren't even C1). I suppose a tangent curve could be a `Curve<Option<Vec3>>`, but this seems a bit clunky and generally at odds with the direction of `sample_unchecked`. This is a problem for moving frames as well.
My thoughts on this have yet to materialize into concrete implementation details, but as far as I can tell, we are mostly in the clear; the main thing I would like to do with our spline code in order to adapt it to this interface is to differentiate classes of curves a bit more at the level of types. All of the spline constructions that we currently support, for instance, are globally C1; in my ideal world, this would mean that they naturally produce something that looks like `Curve<(Vec3, Vec3)>`, packaging the points and derivatives together (or some struct that does the same thing).
Then, for instance, for the B-spline construction, the produced type would actually be slightly different from the previous one; in addition to implementing the `Curve` trait for the C1 data, it would also implement `Curve<C2Data>` or whatever. (Both of these concrete return types could also provide access to the underlying Bézier segments as well, not through the `Curve` interface itself, and those would have as much information as you could wish for regardless.)
I am unsure on the finer points of what that type-level differentiation would look like (kind of next on my list to investigate), but I guess my angle here is this: most of the things you would want to do with spline output are only going to care about the `Curve` interface anyway and not the concrete return types; for instance, most things you would dream of doing with curve geometry really only care that you start with a `Curve<C1Data>` (positions and derivatives), and both of the concrete return types would get you there.
So, to be brief, my vision for actual implementation involves reifying the quality of spline constructions more strongly at the type level; I hope that makes sense.
> I am a bit wary of scope creep at this point, since this seems like something that can be adjudicated outside of the RFC proper
Quite right. There's very little I would add to the proposal at this point, and I'm not pushing for any of this to be added to the RFC. But I do think it's worth noting future work items here as well as evaluating the sorts of things the RFC lets us build. Please let me know if you think I'm derailing the discussion, that's not my intention.
> All of the spline constructions that we currently support, for instance, are globally C1.
Bezier splines are only C1 within curve segments, and may or may not have smooth transitions between segments depending on the position of the control points. I don't think we can or should assume all our curves will be C1.
Part of the problem is that "tangents" can mean like four different things depending on the underlying curve: "functional curves" are generally going to be either C1 or piecewise C1, which is the difference between a `T` and an `Option<T>`. Sampled curves can have two sources of approximate tangents: finite differences, or samples from a derivative.
As I see it:
- We need to be able to specify continuity information as part of a parameter type bound,
- Depending on the bound, the tangents/acceleration may or may not be guaranteed to exist,
- And we need to be able to sample both position+tangent and position+tangent+acceleration as single functions for efficiency purposes.
> Then, for instance, for the B-spline construction, the produced type would actually be slightly different from the previous one; in addition to implementing the `Curve` trait for the C1 data, it would also implement `Curve<C2Data>` or whatever.
The idea of multiple implementations of `Curve` is interesting, but I don't love the fully qualified syntax required to disambiguate between them.
```rust
fn foo<C>(curve: C)
where
    C: Curve<C1<Vec3>> + Curve<C2<Vec3>>,
{
    let (pos, vel) = <C as Curve<C1<Vec3>>>::sample(&curve, t);
    let acc = <C as Curve<C2<Vec3>>>::sample(&curve, t);
}
```
Would something like the following work?
```rust
// These all have blanket implementations of Curve and the other Cn/Pn traits.
trait C2C1Curve<T> { ... } // C2 + C1: Curve<(T, T, T)>
trait P2C1Curve<T> { ... } // Piecewise C2 + C1: Curve<(T, T, Option<T>)>
trait P2P1Curve<T> { ... } // Piecewise C2 + piecewise C1: Curve<(T, Option<T>, Option<T>)>
trait C1Curve<T> { ... }   // C1: Curve<(T, T)>
trait P1Curve<T> { ... }   // Piecewise C1: Curve<(T, Option<T>)>

fn foo(curve: impl P2C1Curve<Vec3>) {
    if let (pos, vel, Some(acc)) = curve.sample(t) {
        ...
    }
    let acc = curve.sample_acc(t);
    let pos = curve.sample_pos(t);
}
```
New curves which provide tangents/acceleration would implement the strongest trait they can. The types are a bit cumbersome but I think it would avoid the qualified syntax.
We don't have to spec anything out as part of the RFC, but I'd like a better idea of what this would look like to make sure we aren't locking ourselves out of anything in the future.
> Please let me know if you think I'm derailing the discussion, that's not my intention.
You're quite fine. :)
> Bezier splines are only C1 within curve segments, and may or may not have smooth transitions between segments depending on the position of the control points. I don't think we can or should assume all our curves will be C1.
It seems I actually misspoke, since the cubic NURBS construction is only C0 (in full generality). What I was trying to get at was really that only the `CubicCurve` type itself really says 'C0' for the Bézier constructions; I'm not saying we shouldn't support C0 data, by any means! What I'm really trying to say is that my view is that we should do some differentiation at the level of the curve constructors (the first thing that came to mind was genericity with respect to marker types of some kind, which it seems you have also thought about).
> Part of the problem is that "tangents" can mean like four different things depending on the underlying curve: "functional curves" are generally going to be either C1 or piecewise C1, which is the difference between a `T` and an `Option<T>`.
Yeah, I agree here. My main thing is that I would prefer for, e.g., `Option<Vec3>` not to appear as a curve output for a "maybe tangent" directly — rather, I would prefer a system where we give access to as much output as possible that we know is valid globally, while also providing type-level tools (rather than through `Curve` per se) for consumers to access data that might not be 'valid' (e.g. continuous across the curve) based on type constraints alone.
For instance, with how things are currently set up, something like `CubicCurve` might only have a `Curve<Vec3>` implementation on its own, but we could provide access to
- the curve segments, which would be able to give the derivative data individually, and/or
- a "promotion" procedure, wherein we give the derivative data globally, with the user-level understanding that it may not be globally continuous.
> Sampled curves can have two sources of approximate tangents: finite differences, or samples from a derivative.
This is true, but I suppose I see "finite differences" as a "promotion" procedure, something like:
```rust
fn numerical_derivative(position_curve: SampleCurve<Position>) -> SampleCurve<PositionAndVelocity>
```

hence `impl Curve<Position> -> impl Curve<PositionAndVelocity>`, if that makes sense; so they are at least the same kind of `Curve`.
When you start with a `Curve<PositionAndVelocity>` to begin with, you can `map` down to just a `Curve<Position>` and then `resample` and call `numerical_derivative`, and you should get something similar to what you would get if you just called `resample` to begin with (at least if `numerical_derivative` is any good), so this all sort of makes sense to me.
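As an illustration of finite differences as a "promotion" procedure, here is a sketch using a hypothetical boxed function curve; the `FunctionCurve` struct, the `(f32, f32)` domain, and the `numerical_derivative` signature below are simplified stand-ins, not the draft library's actual types:

```rust
/// Minimal stand-in for a function-backed curve; illustrative only.
struct FunctionCurve<T> {
    domain: (f32, f32),
    f: Box<dyn Fn(f32) -> T>,
}

impl<T> FunctionCurve<T> {
    fn sample(&self, t: f32) -> T {
        (self.f)(t)
    }
}

/// "Promotes" a position-only curve to a position+velocity curve by central
/// finite differences with step `eps`.
fn numerical_derivative(
    curve: FunctionCurve<f32>,
    eps: f32,
) -> FunctionCurve<(f32, f32)> {
    let domain = curve.domain;
    FunctionCurve {
        domain,
        f: Box::new(move |t| {
            let position = curve.sample(t);
            // Central difference; a real implementation would fall back to
            // one-sided differences at the domain endpoints.
            let velocity = (curve.sample(t + eps) - curve.sample(t - eps)) / (2.0 * eps);
            (position, velocity)
        }),
    }
}

fn main() {
    let square = FunctionCurve { domain: (0.0, 2.0), f: Box::new(|t| t * t) };
    let with_velocity = numerical_derivative(square, 1e-3);
    let (pos, vel) = with_velocity.sample(1.0);
    assert!((pos - 1.0).abs() < 1e-6);
    assert!((vel - 2.0).abs() < 1e-2); // derivative of t^2 at t = 1 is 2
}
```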
> As I see it:
>
> - We need to be able to specify continuity information as part of a parameter type bound,
> - Depending on the bound, the tangents/acceleration may or may not be guaranteed to exist,
> - And we need to be able to sample both position+tangent and position+tangent+acceleration as single functions for efficiency purposes.
Agreed on all counts.
> The idea of multiple implementations of `Curve` is interesting, but I don't love the fully qualified syntax required to disambiguate between them.
Having sat with it a little, I don't like it much either; I think this is a situation where explicit methods producing `Curve` outputs for certain kinds of data would be better.
> Would something like the following work?
>
> ```rust
> // These all have blanket implementations of Curve and the other Cn/Pn traits.
> trait C2C1Curve<T> { ... } // C2 + C1: Curve<(T, T, T)>
> trait P2C1Curve<T> { ... } // Piecewise C2 + C1: Curve<(T, T, Option<T>)>
> trait P2P1Curve<T> { ... } // Piecewise C2 + piecewise C1: Curve<(T, Option<T>, Option<T>)>
> trait C1Curve<T> { ... }   // C1: Curve<(T, T)>
> trait P1Curve<T> { ... }   // Piecewise C1: Curve<(T, Option<T>)>
>
> fn foo(curve: impl P2C1Curve<Vec3>) {
>     if let (pos, vel, Some(acc)) = curve.sample(t) {
>         ...
>     }
>     let acc = curve.sample_acc(t);
>     let pos = curve.sample_pos(t);
> }
> ```
>
> New curves which provide tangents/acceleration would implement the strongest trait they can. The types are a bit cumbersome but I think it would avoid the qualified syntax.
>
> We don't have to spec anything out as part of the RFC, but I'd like a better idea of what this would look like to make sure we aren't locking ourselves out of anything in the future.
I think this is reasonable, although I am a little wary of actually putting `Option` types in curve return values, because I'm skeptical that we can make interpolation work well for them. (I wrote more on this in the earlier portion of this reply.)
I think what I'd like to do now is to sit down when I have time and actually prototype this (at least to the point where we can convince ourselves we won't be stepping on our own toes); I think we are mostly on the same page, though.
> I think this is reasonable, although I am a little wary of actually putting `Option` types in curve return values, because I'm skeptical that we can make interpolation work well for them.
It seems to me like that is an issue with interpolation. We shouldn't try to interpolate across a discontinuity, and if we want to represent derivatives directly using curves we can only assume they are piecewise continuous. Maybe the concrete curve representations need to know about their discontinuities and treat them as boundaries for interpolation.
Does that make sense? I'll try to expand on this more when I have time.
> It seems to me like that is an issue with interpolation. We shouldn't try to interpolate across a discontinuity, and if we want to represent derivatives directly using curves we can only assume they are piecewise continuous. Maybe the concrete curve representations need to know about their discontinuities and treat them as boundaries for interpolation.
>
> Does that make sense? I'll try to expand on this more when I have time.
It does make sense, and I am curious where this line of inquiry leads — especially in the matter of how such a thing would be distinct from a vector of curves.
I highly approve of the technical and design aspects of this proposal. It should simplify learning and support, and encourage compatibility between ecosystem crates which I love. All in all, it's extremely strong and well-thought-out.
That said, I feel we are lacking a broader plan for integration with the existing codebase. I would love to see:
- A list of all of the ad-hoc curve/interpolation implementations already in the engine
- and a brief discussion of if/how each should be ported to support this api.
Off the top of my head, we currently have:
- Animation curves: `AnimationClip`/`AnimationCurves`/`VariableCurve`
- Spline curves: `CubicCurve`/`RationalCurve`
- Color interpolation: `Mix`
- Animation interpolation: `Animatable`
- Vector space interpolation: `VectorSpace`
`VariableCurve`s are keyframe-based, and not supporting them is the main thing I think would halt this proposal at this point.
> - Animation curves: `AnimationClip`/`AnimationCurves`/`VariableCurve`
Yeah, `VariableCurve` is currently the primary thing keeping me up at night (so to speak). As I see it, a `VariableCurve` is something that "wants to" be either:

- A `Curve<Transform>`, although there are some barriers to making this literally a reality, because each individual component gets to declare its own interpolation mode, or
- A `Curve<MorphWeights>` (or perhaps a collection of `Curve<f32>`, one for each morph target, really)
In the first case, there isn't really any getting around storing interpolation mode metadata for the translation, rotation, and scale components, which really introduces a number of different kinds of curves (although the translation and scale components can presumably share everything).
Actually, the main thing of interest in the first case is that cubic spline interpolation does not produce tangents (at least, in the GLTF spec they are not produced) but it does use them. This is an interesting wrinkle, because the keyframe data being stored is actually different from the data being interpolated, so it doesn't fit quite so neatly into the `Interpolable` shell. However, that doesn't mean that it can't use `Curve`; there are a few options available in this regard:
- Since cubic spline interpolation does produce tangents mathematically speaking, we can put those into a type implementing `Interpolable`, then use all the machinery as one would expect (creating an `UnevenSampleCurve` directly) and throw away the output tangent data at the end;
- Implement `Curve<Vec3>` (for example) using an essentially irrelevant interpolation mode on `Vec3` (or a wrapper thereof) but internally storing tangent data at keyframes and using it for interpolation when sampling;
- Retool the definitions of `Curve`/`Interpolable` somehow to accommodate situations like this, where the data being sampled is actually different from (typically a projection of) the data that it is interpolated from.
My thoughts on these are as follows: The first is most closely in line with the present vision of this RFC (but may have some negative performance implications worth investigating). The second is kind of sacrilegious, but only mildly so (after all, the way that a `Curve` computes its data is actually not really the API's business), mostly leading to negative consequences for any attempts at using resampling. The third is interesting and, I think, worth investigating, but it also risks overcomplicating things for little actual benefit; actually, as I'm writing this, it's kind of impossible to imagine an API for such a thing where something like `resample` is even possible — if you can't get the full data intermediately, it's something you simply cannot perform.
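For concreteness, the second option might look roughly like the following sketch, where keyframes carry glTF-style in/out tangents that are used internally for cubic Hermite interpolation but never appear in the sampled output; all names and the scalar-valued simplification are illustrative:

```rust
/// Sketch of option 2: keyframes store values plus in/out tangents, and
/// sampling does cubic Hermite interpolation internally, exposing only the
/// interpolated value. Field names here are illustrative.
struct Keyframe {
    time: f32,
    value: f32,
    out_tangent: f32,
    in_tangent: f32,
}

struct CubicKeyframeCurve {
    keyframes: Vec<Keyframe>, // assumed sorted by time, at least two entries
}

impl CubicKeyframeCurve {
    /// Clamps outside the keyframe range; Hermite-interpolates within it.
    fn sample(&self, t: f32) -> f32 {
        let i = match self.keyframes.iter().position(|k| t < k.time) {
            Some(0) => return self.keyframes[0].value,
            None => return self.keyframes.last().unwrap().value,
            Some(i) => i,
        };
        let (a, b) = (&self.keyframes[i - 1], &self.keyframes[i]);
        let d = b.time - a.time;
        let s = (t - a.time) / d;
        let (s2, s3) = (s * s, s * s * s);
        // Standard cubic Hermite basis, with tangents scaled by segment length.
        (2.0 * s3 - 3.0 * s2 + 1.0) * a.value
            + (s3 - 2.0 * s2 + s) * d * a.out_tangent
            + (-2.0 * s3 + 3.0 * s2) * b.value
            + (s3 - s2) * d * b.in_tangent
    }
}

fn main() {
    let curve = CubicKeyframeCurve {
        keyframes: vec![
            Keyframe { time: 0.0, value: 0.0, out_tangent: 1.0, in_tangent: 1.0 },
            Keyframe { time: 1.0, value: 1.0, out_tangent: 1.0, in_tangent: 1.0 },
        ],
    };
    // With matching unit tangents this segment reduces to the identity line.
    assert!((curve.sample(0.5) - 0.5).abs() < 1e-6);
    assert_eq!(curve.sample(2.0), 1.0); // clamped past the last keyframe
}
```

The mismatch the comment describes is visible here: the stored keyframe data (value plus tangents) is strictly richer than the sampled output, so a naive resample would lose the tangents.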
Now, for that `MorphWeights` business: actually, the tricky thing here is that the size of the set of morph targets is unknown statically (since it is basically determined by the length of a vector loaded through GLTF), which is why I mentioned a collection of `Curve`s rather than a `Curve<MorphWeights>` (which, when used directly, would suffer issues with allocation — interpolating data for such a thing would require allocating), since, for instance, a `Vec` of `Curve<f32>`s reifies the fact that the set of morph weights has constant size. I think this can work, but it brings into scope the question of multiplexing over families of curves in a way which is efficient.
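A sketch of the "collection of scalar curves" idea, where sampling writes into a caller-provided buffer so no per-sample allocation is needed; the `Keyframes` representation and function names are illustrative, not anything from the draft library:

```rust
/// Illustrative scalar keyframe track; linear interpolation, clamped at the ends.
struct Keyframes {
    times: Vec<f32>,  // assumed sorted, same length as `values`, non-empty
    values: Vec<f32>,
}

impl Keyframes {
    fn sample(&self, t: f32) -> f32 {
        match self.times.iter().position(|&time| t < time) {
            Some(0) => self.values[0],                // before the first keyframe
            None => *self.values.last().unwrap(),     // after the last keyframe
            Some(i) => {
                let (t0, t1) = (self.times[i - 1], self.times[i]);
                let s = (t - t0) / (t1 - t0);
                self.values[i - 1] * (1.0 - s) + self.values[i] * s
            }
        }
    }
}

/// One curve per morph target; the target count is fixed per mesh at load time,
/// so the output buffer can be allocated once and reused across samples.
fn sample_morph_weights(curves: &[Keyframes], t: f32, out: &mut [f32]) {
    for (weight, curve) in out.iter_mut().zip(curves) {
        *weight = curve.sample(t);
    }
}

fn main() {
    let curves = vec![
        Keyframes { times: vec![0.0, 1.0], values: vec![0.0, 1.0] },
        Keyframes { times: vec![0.0, 1.0], values: vec![1.0, 0.0] },
    ];
    let mut weights = [0.0; 2];
    sample_morph_weights(&curves, 0.5, &mut weights);
    assert_eq!(weights, [0.5, 0.5]);
}
```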
As a closing thought on this (I should have some more concrete proposals in the near future), I don't think there's really anything at all sacrosanct about the way that `VariableCurve` is presently set up: it's essentially just a mirror in Bevy of how animations are encoded in GLTF, and changing it would mostly involve changing the code in `AnimationTargetContext` to function somewhat differently.
> I don't think there's really anything at all sacrosanct about the way that `VariableCurve` is presently set up
Yeah, that's absolutely true. I think our current animation system was the first thing merged for the 0.14 cycle, and `VariableCurve` wasn't even mutable until yesterday. So now is a really good time to go for further breaking changes.
I just want to make sure we articulate a concrete plan for animation without performance or support degradation.
After reviewing this, the associated PRs, and the animation graph RFC: I'm happy with this. The trait itself is sound and user friendly. The prototype animation PR shows how this can be integrated across the wider engine, and the recent work on exposure curves demonstrates the need for curve abstractions. Everything remaining is just implementation details, and can be handed off to the dedicated working group.
This has now been substantially rewritten. The goals of this rewrite were as follows:
- Confine type-inferred interpolation to a more mathematical core for which it is well-scoped and well-behaved. The imposition of type-inferred interpolation was too opinionated, so the API has been updated to better support explicitly-provided interpolation instead. If `bevy_animation` wants to use general type-inferred interpolation in its internals, its interoperation with `Curve` can be built on top of `SampleCurve` and `UnevenSampleCurve` (which now use explicit interpolators) rather than using some intrinsic notion of interpolation owned by `bevy_math`.
- Absorb some changes that were already being made in early review of the code as part of the working group. Part of this was interpolation-related (it had already been confined to resampling methods rather than the trait itself), but there were some other small changes too; e.g. I no longer think it's a good idea to default to non-lazy `map` behavior for sample-interpolated curves — the non-laziness should just be explicit instead.
I think that this ends up pushing the Curve API to be more flexible and less closely married to the original machinations of `bevy_animation` (ironically, spinning off interpolation from `Animatable` was kind of what spurred this in the first place), but I think the changes are for the better.
Okay. After refactoring the draft library, I ended up with some shared interpolation interfaces which seem pretty useful for implementors. (I used them in the `bevy_animation` rewrite, for example.) I ended up adding a section to the RFC that describes them as a result; I think having something like that is pretty important to lower the implementation barrier if trait-dispatched interpolation is off the table.
I would say that now I'm reasonably happy with this in terms of completeness (with the new approach in mind), at least until someone changes my mind. :)
This RFC seems mainly focused on animation, but I'd like to drop a link to https://www.forrestthewoods.com/blog/tech_of_planetary_annihilation_chrono_cam/ , a blog post which discusses using a time-series `Curve` type as the primary means of communicating state from the server to the client in a multiplayer game. In this approach the client state is a big collection of `Curve<T>`s which get updated according to new data from the server (which runs the actual simulation and tracks all the data persistently) and client-side predictions which can override stale server data.
Perhaps a similar approach could work well with Bevy in the future, it seems very ECS-like (though I ended up using a different approach in my project).