Core Constraint Component(s) à la Unity or Blender
Is your feature request related to a problem? Please describe.
I've tried getting into Resonite a few times since release, but I keep getting bounced by a few issues, mainly the seeming lack of basic constraint components. For beginners, I think it's a bit unreasonable to expect them to learn the visual programming language for features that come out of the box even in professional tools. Adding component(s) that work like those in the above programs would help bridge the gap between using any other 3D system and Resonite.
Note that a key advantage is that, because they are simple and universal, it'd be easy to convert functionality and techniques from other places to Resonite.
Describe the solution you'd like
Blender and Unity have very easy-to-use constraint components that do a lot of heavy lifting, and there's a lot of good UI and functionality in both to draw inspiration from.
Ideally, what I think would work best is a ConstrainTransform component similar to VirtualParent: it would have a source slot, with pointers to position, rotation, and scale individually, and offsets for each (with a button like "active" to set the offsets to match the current orientation). The difference is that a ConstrainTransform component would work separately on position, rotation, and scale, and not require driving the others.
A general, separate-vector ConstrainTransform would be all I'd want, but there are others that should be looked at if they don't have equivalents, like Unity's Look At Constraint or Blender's Damped Track, which do the heavy lifting of pointing a rotation towards another object.
Describe alternatives you've considered
There aren't alternatives. I've tried using VirtualParent, but this only works if you want to constrain the entire transform, not a part. I've tried looking into the programmatic method and it's so cumbersome when I need to do it 20+ times per object, and every competitor / collaborator program has simple, consistent constraints as an option.
Additional Context
No response
Requesters
No response
Related https://github.com/Yellow-Dog-Man/Resonite-Issues/issues/2612
Can you please give some additional context about the issue you're trying to solve / what you're trying to build in Resonite, @Archytas79? Is this specific to wanting to add constraints to an avatar?
I'm trying to make a slot have a relative rotation to another slot, mapping a humanoid rig to a non-humanoid rig using the mechanical principles of levers. I like doing it this way because it's system agnostic as long as there's support for simple constraints.
I take it to a bit of an extreme (I use it to map eye rotation to linear motion), but mostly people use it for mapping human legs to digitigrade legs using parallelograms.
For the legs, at least, people usually use CopyGlobalTransform with the position drive removed.
I keep running into issues where I get told we need constraints (twist bones, for example), but I have never been able to really draft an issue for it.
I'm trying to make a slot have a relative rotation to another slot, mapping a humanoid rig to a non-humanoid rig using the mechanical principles of levers.
Most constraints can be implemented as a series of CopyGlobalTransform, LinearRotationMapper, PositionDeltaDriver, ValueCopy<T>, and LookAt components along with slot transforms. Of course dedicated constraints will ease this and we've been missing them for a while.
Booted up Resonite to try CopyGlobalTransform, but it doesn't really do the same thing as the above. There are no offsets, plus it's really easy to accidentally and irreversibly change the whole transform with the position automatically filling in like that.
I opened a separate issue about that latter problem, but for safety, having it start with all components as null would definitely be best.
I use it to map eye rotation to linear motion
There are already components that exist to drive eyes linearly if that's what you need @Archytas79. It's the EyeLinearDriver component.
The eyes are on separate flat planes that are angled quite a bit apart and are internally asymmetric. If it can handle that, that's pretty sick, but from what I'm seeing it looks like it's pretty specific to symmetric eyes with similar normals?
Either way, it'd be easier to set up the levers, which are system agnostic (with the addition of the proposal), and plug normal eye rotation into them.
BTW, what needs clarifying? I'm not sure where the proposal is too nebulous.
Constraints are pretty useful in general for all kinds of animation, and their absence is a big gap compared to most other game engines. Getting too fixated on their use for player avatars probably misses the wide range of applications they have.
So far I've been able to replicate most constraint stuff from Blender using various components in Resonite, but I think it's worth doing this for the UX alone. Resonite can be pretty intimidating for new users! I think many reach for a component search tool first and go looking by name, or search the wiki. Having components with familiar names and familiar user interfaces is a good improvement for onboarding, even if the functionality is technically achievable in other ways. Maybe when it becomes possible to make our own components, the UX team could build some that wrap built-in functionality in more familiar UI and have them included in the default component set?
I'm bringing this back up, as I (as well as @BlueCyro) have been poking around at how constraints are implemented in Unity, particularly rotation constraints.
All the other constraints are pretty simple to implement in protoflux (probably aside from the parent constraint), but rotations present a unique problem that's been quite difficult to solve.
I don't know how Unity's rotation constraint operates, and every time I've tried to look it up I've found nothing; without knowing the math, I can't hope to replicate it.
What I do know is that Unity's constraints can have an arbitrary number of sources, each with a weight, and that the result is calculated from the weighted average of the respective transform properties. This is simple enough to do with vector3s; rotations, however, I haven't quite figured out.
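For the vector3 properties, a minimal sketch of that weighted averaging (Python/NumPy purely for illustration; the function name and the assumption that the result is normalised by the total weight are mine, not something confirmed from Unity):

```python
import numpy as np

def weighted_position(sources, weights):
    """Sketch of the vector3 part of a constraint: the weighted average of
    the source positions, assuming the sum is divided by the total weight
    (how Unity handles this exactly is not confirmed here)."""
    sources = np.asarray(sources, dtype=float)   # shape (N, 3)
    weights = np.asarray(weights, dtype=float)   # shape (N,)
    return (weights[:, None] * sources).sum(axis=0) / weights.sum()

# Example: two sources, the second counting twice as much.
print(weighted_position([[0, 0, 0], [3, 0, 0]], [1.0, 2.0]))  # -> [2. 0. 0.]
```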
The lack of these constraints is a pretty big blocker for quite a bit of my work, as a lot of my content from other platforms relies heavily on them. I also think having these available as components would not only be more user-friendly to interface with, but also generally faster than what can be done in protoflux, since as far as I can tell the only way to mirror the same behavior is an iterative approach, which can very quickly add a lot of overhead.
Here are the constraints that Unity has:
- Aim Constraint
- Look At Constraint (This can already be kind of achieved with the LookAt component, though the lack of multiple sources and a weight makes this not useful for some cases.)
- Parent Constraint
- Position Constraint
- Rotation Constraint
- Scale Constraint
I think having a similar set of constraints that Unity has would not only be a really nice tool to have on hand, but would also massively benefit more advanced avatar/model rigs, as some of those essentially require constraints in order to function.
Isn't it implemented as the sum of:
- the weighted average of each source element in the list, and
- an offset, which can be set using the [Zero] button?
I'll post the previously analyzed doc if I can find it.
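If that reading is correct, the per-axis result for a position or scale constraint would be something like the following (whether the offset is applied after the average, and whether the sum is divided by the total weight, are assumptions here):

$$ x_{\text{out}} = x_{\text{offset}} + \frac{\sum_{i=1}^{N} w_i\, x_i}{\sum_{i=1}^{N} w_i} $$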
- the weighted average of each source element in the list, and
- an offset, which can be set using the [Zero] button?
This is what I can gather, and for everything else it's pretty simple to implement; I was able to do it myself with one async for loop in protoflux. When it comes to rotations, however, it gets much more complex: averaging quaternions on its own is already very complex, and making it a weighted average seems to change things further.
It's not quite as simple as just replacing the vector3 in other weighted average calculations with a rotation, since the operations are handled very differently for quaternion rotations.
I was able to get a mix of weights working; however, the order of the individual sources determines the final result, which is not how they behave in Unity in my experience.
If you can get a doc or any math on the subject I'd love to take a look at it, but I haven't been able to find anything about the weighted average of rotations.
Unity, as far as I can tell from the only documentation of theirs I could find on this here, uses simple weighted averaging of the quaternions treated as 4D vectors, which will only work for quaternions that are fairly close to one another. This is the fastest way to compute it, but the method breaks down catastrophically when the quaternions are more dissimilar, which is problematic in the generic case.
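A minimal sketch of that naive 4D-vector averaging (Python/NumPy for illustration; the hemisphere flip is a common safeguard I've added, not something confirmed from Unity's implementation):

```python
import numpy as np

def naive_quaternion_average(quats, weights):
    """Weighted average of quaternions treated as plain 4D vectors, i.e. the
    fast approach described above. Only reasonable when the inputs are close
    to one another; it degrades badly as they spread apart."""
    quats = np.asarray(quats, dtype=float)     # (N, 4), each (w, x, y, z)
    weights = np.asarray(weights, dtype=float)
    # Flip any source onto the same hemisphere as the first one so that q and
    # -q (which encode the same rotation) don't cancel each other out.
    signs = np.where(quats @ quats[0] < 0.0, -1.0, 1.0)
    avg = (weights[:, None] * (signs[:, None] * quats)).sum(axis=0)
    return avg / np.linalg.norm(avg)           # renormalise back to a unit quaternion
```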
The more robust solution is described here.
The method is essentially:
- Form a $4 \times N$ matrix $Q$ of all $N$ quaternions $q_i$ which you want to average, multiplied by their weight factor $a_i$:
$$ Q = \begin{pmatrix} a_1 q_{1}^w & a_2 q_{2}^w & \cdots & a_N q_{N}^w \\ a_1 q_{1}^x & a_2 q_{2}^x & \cdots & a_N q_{N}^x \\ a_1 q_{1}^y & a_2 q_{2}^y & \cdots & a_N q_{N}^y \\ a_1 q_{1}^z & a_2 q_{2}^z & \cdots & a_N q_{N}^z \end{pmatrix} $$
- Multiply this matrix $Q$ times its transpose $Q^T$ to get a new $4 \times 4$ matrix $R$:
$$ R = QQ^T $$
- Compute the eigenvectors $v$ and corresponding eigenvalues $\lambda$ of this matrix $R$:
$$ Rv = \lambda v $$
- Choose the largest eigenvalue, $j = \mathrm{argmax}_j(\lambda_j)$, and its corresponding eigenvector $v_j$, then compute the final output $Z$:
$$ Z = \frac{v_j}{||v_j||} $$
This is obviously more computationally expensive to do than simple averaging, but it is precise.
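A minimal NumPy sketch of those steps, assuming unit input quaternions stored as (w, x, y, z) rows and non-negative weights:

```python
import numpy as np

def weighted_quaternion_average(quats, weights):
    """Weighted quaternion average via the largest eigenvector of Q Q^T,
    following the steps listed above. `quats` is an (N, 4) array of unit
    quaternions in (w, x, y, z) order, `weights` an (N,) array of a_i."""
    quats = np.asarray(quats, dtype=float)
    weights = np.asarray(weights, dtype=float)
    Q = (weights[:, None] * quats).T        # 4 x N, each column is a_i * q_i
    R = Q @ Q.T                             # 4 x 4 symmetric matrix
    eigvals, eigvecs = np.linalg.eigh(R)    # eigh: symmetric matrices, ascending eigenvalues
    v = eigvecs[:, np.argmax(eigvals)]      # eigenvector of the largest eigenvalue
    return v / np.linalg.norm(v)            # Z = v / ||v||; note q and -q encode the same rotation

# Example: equal-weight average of identity and a 90-degree rotation about Y,
# which lands near a 45-degree rotation about Y.
q_identity = [1.0, 0.0, 0.0, 0.0]
q_90_y = [np.cos(np.pi / 4), 0.0, np.sin(np.pi / 4), 0.0]
print(weighted_quaternion_average([q_identity, q_90_y], [1.0, 1.0]))
```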
Could have RotationConstraintFast and RotationConstraint components, with a warning on the fast version that it is likely to break as a result of being fast, like how GrabbableSaveBlock has a warning.
Courtesy of LucasRo7, a fairly trivial (and generalizable) way of doing the average of N quaternions is to slerp sequentially from N to N+1 over and over again, which ends up converging to the proper average over about 15-20 iterations. The deviation appears to be very small, so the result is quite accurate.
However, I have yet to figure out a way to do weighted averaging using that sequential slerp method.
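For reference, here's one possible reading of that sequential-slerp idea, sketched in NumPy: fold each source into a running estimate with a shrinking step and repeat the pass several times. The step schedule (1/2, 1/3, 1/4, ...) is my assumption, not a transcription of LucasRo7's exact setup, and this version is still unweighted:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                       # take the shorter arc: q and -q are the same rotation
        q1, dot = -q1, -dot
    if dot > 0.9995:                    # nearly parallel: fall back to normalised lerp
        out = q0 + t * (q1 - q0)
        return out / np.linalg.norm(out)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def sequential_slerp_average(quats, passes=20):
    """Unweighted average by repeatedly slerping a running estimate toward
    each source with a step of 1/k, where k counts every source folded in
    so far. Cycling the list several times lets the influence of each
    source even out (the comment above reports roughly 15-20 passes)."""
    quats = [np.asarray(q, float) for q in quats]
    est = quats[0]
    k = 1
    for _ in range(passes):
        for q in quats:
            k += 1
            est = slerp(est, q, 1.0 / k)
    return est / np.linalg.norm(est)
```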