Add root motion to Skeleton3D to resolve conflicts between Animation and SkeletonModifier, both of which may modify the root bone transform
Closes https://github.com/godotengine/godot-proposals/issues/9470
This is still experimental. It is debatable how the gizmo should be exposed and whether this is a good way to move the Skeleton.
Skeleton3D only exposes the setter set_root_motion() to the core. This is because root motion cannot be computed by looking at the Skeleton3D root bone alone: it must be processed by an AnimationMixer external to Skeleton3D in order to take the root motion of looping animations into account.
It needs to be processed in the skeleton's deferred update process, as the pose is. This means that if you want to manually apply root motion to an arbitrary Node via get_root_motion(), you must subscribe to the root_motion_processed signal (would root_motion_fixed be better naming?).
If the Apply Root Motion option is enabled, the root motion is added to the Skeleton3D transform at runtime; see the class reference.
The root motion of the AnimationMixer and the root motion of the Skeleton3D are different values, because the rotations of the accumulator and of the actual object must be taken into account, as done in https://github.com/godotengine/godot/pull/72931. In other words, Skeleton3D's root motion is a value that can be applied directly.
@BastiaanOlij @Malcolmnixon @dsnopek See if this is helpful and useful for XRBodyModifier's root transformation.
If the Use Root Motion option is disabled, it should move the root bone; if enabled, it should set the delta as the root motion on the skeleton.
However, I do not have an XR debugging environment, so I do not know if the calculations are correct. In particular, I don't know which is processed first in the OpenXR tracker, the rotation or the translation (i.e. whether the root translation takes the rotation into account or not).
Thanks! I'll test it as soon as I can. However, this should also update XRHandModifier3D. We still need to figure out how to handle OpenXRHand.
> We still need to figure out how to handle OpenXRHand.
I didn't realise we changed OpenXRHand as well. I believe we should revert the changes on OpenXRHand and not make it work through SkeletonModifier3D. This is an existing class that is in use, where we now have a serious breaking change on a class that is marked as deprecated, as we want people to use XRHandModifier3D if they do need/want to refactor their solution.
As for the change itself: I haven't followed the whole discussion leading up to these decisions, so I'm not coming from an informed standpoint. But I do fear very much that we've changed the system in a way that only applies to animations. With what we're doing in XR, we're dealing with live tracking data.
We have two scenarios that need to be supported. The first is using the data as-is; this is especially important for AR-type applications where we want the hand to be placed exactly where the tracking system says the hand is. In the original approach (and I'm using the original names we decided on here) we would set up our node tree like this:
XROrigin3D
- XRHandTracking3D (with skeleton property pointing to Mesh/Skeleton)
- MeshInstance3D
- Skeleton3D
The logic would place the XRHandTracking3D node where the hand was, and pose the skeleton in local space relative to this node.
The second scenario is where we stray away from the position of the hand. We either stop the hand from moving through a physics object when the player punches through a wall, or we limit the hand motion based on IK for the avatar the user is using, or some other mechanism causes us to stray from the hand position. A setup for that first example would be:
XROrigin3D
- XRHandTracking3D (with skeleton property pointing to CollisionHand3D/Mesh/Skeleton)
- XRCollisionHand3D (AnimatableBody3D with script and `is_top_level` set to true)
- CollisionShape3D
- MeshInstance3D
- Skeleton3D
The XRCollisionHand3D node has a script that attempts to keep its global position in sync with its parent, but when its motion is obstructed it either stops, or slows down, the hand motion. Examples of this are the user placing their hands on a wall or table, or the user pushing against a heavy object that they are able to move.
We decided to change over to the skeleton modifier approach because we were duplicating a lot of logic here. This was all possible before the change that prompted all this, and it seems the necessities of animating skeletons outside of animations were not taken into account in the decision-making process. I hope the above gives some clarity on the use cases we're dealing with, and on the question of whether these changes mean we need to stop basing our work on other systems in Godot, or whether we can combine them properly. It seems to me there are still things missing.
The XR nodes (OpenXRHand, XRHandModifier3D, and XRBodyModifier3D) had two modes of operation depending on the game:
- Direct Drive - Locate objects (Skeletons, Colliders, Meshes, UIs, etc.) under the XR Node3D and let them move by transform-inheritance
- Indirect Drive - Locate objects elsewhere in the tree (or enable top_level) and let physics processes (move-and-slide, joints, etc.) move them around.
Direct Drive is the training-wheels version people tend to start with, but they abandon it for Indirect Drive the moment they want advanced features such as hand collision, held-object collision, two-handed grabbing, etc. A significant number of articles, videos, and games rely on Indirect Drive approaches, and I'm concerned we may have broken a fair number of them.
@TokageItLab If you want an environment where you can play with XR then all you need is a webcam. This video shows setting up a basic avatar in Godot, and driving it with XR Animator using a webcam.
You'll note in this training video that the scene tree has the XRFaceModifier3D and XRBodyModifier3D separated from the avatar because I swap out the avatar and don't want to delete the modifier nodes:
In other videos I've published I implement dynamically downloading the avatars over the internet and switching them at runtime.
The following is a test of upgrading the Godot XR VMC Tracker repository demo project:
Currently the XRBodyModifier3D nodes contain two things that want to be positioned:
- The avatar
- A targeting marker indicating the player's position and forward direction
It's possible to fix the warnings by moving the XRBodyModifier3D and the marker under the avatar's skeleton, although this does make for a rather complex node tree.
With this updated node structure we get the following behavior:
- Mainline Godot - the targeting ring moves/rotates but the avatar fails to move/rotate
- This PR - the avatar moves/rotates but the targeting ring fails to move/rotate
https://github.com/godotengine/godot/assets/1863707/f9a16669-5281-45ad-9c67-22b8792727a1
The following is a video testing a few different avatars (AvaturnMe, Mixamo, ReadyPlayerMe, and VRM) with Godot mainline and this patch. It looks like the SkeletonModifier3D changes also broke the VRM 3D Avatars asset, based on how the anime character's hair and dress are being torn off the body.
https://github.com/godotengine/godot/assets/1863707/3b65b648-06e4-40cc-833a-84a985dce2b0
UPDATE: The other thing to note is that while the VRM avatar (with the messed-up meshes) can move with this patch, the other three avatars are still stationary. I suspect it's because only the VRM avatar has a "Root" bone; the other three avatars have "Hips" as the root bone. The XRBodyModifier3D used to apply "root motion" to the avatar via the Node3D transform because a root bone is not guaranteed to exist.
@Malcolmnixon
> This PR - the avatar moves/rotates but the targeting ring fails to move/rotate
How would it work if the Apply Root Motion option is enabled? At least in-game, the Skeleton's transform can be changed, so its child Nodes should also move.
The corruption of VRM avatars is likely caused by VRMSpringBone's use of bone_pose_override. Since bone_pose_override was left in place for compatibility, it should work if only bone_pose_override is used, but it is expected to break if SkeletonModifier is used at the same time. We plan to rework VRMSpringBone as a SkeletonModifier.
@BastiaanOlij First of all, it was quite problematic that some potential modifier Nodes were calling set_bone_pose() outside of the animation system in the past. At the very least, they should have used set_bone_global_pose_override(). Because set_bone_pose() has no mechanism like reset_on_save, it can easily rewrite the actual values in the .tscn. There is also the process order issue, as I pointed out in the SkeletonModifier PR; unfortunately, this may not have been taken into account when the design of the XR nodes was started.
With the SkeletonModifier PR, the value is no longer reflected in the actual scene as long as set_bone_pose() is executed inside SkeletonModifier::_process_modification(). This PR attempts to do the same for root motion, but since root motion needs to be reflected in the actual scene, the gizmo has a virtual world to allow previewing.
Also, as @lyuma and others have expressed concern, it may be possible to make Apply Root Motion a NodePath instead of a boolean, to apply it to another Node. This would make it possible to have collisions like those used in XR outside of the Skeleton rather than in its descendants.
However, in any case, anything that modifies bones (or the Skeleton's transform) in the future must be done as a SkeletonModifier that is a direct child of the Skeleton, and anyone currently developing something that does not conform to this should redesign it to conform to SkeletonModifier's design.
> @BastiaanOlij First of all, it was quite problematic that some potential modifier Nodes were calling set_bone_pose() outside of the animation system in the past. At the very least, they should have used set_bone_global_pose_override(). Because set_bone_pose() has no mechanism like reset_on_save, it can easily rewrite the actual values in the .tscn. There is also the process order issue, as I pointed out in the SkeletonModifier PR; unfortunately, this may not have been taken into account when the design of the XR nodes was started.
The root problem here works both ways: just as those focused on the requirements of the animation system have a blind spot regarding the implications of real-time tracking, so do we on the XR side have a blind spot in understanding the needs and requirements of the animation system. I did not come into this conversation until everything was merged, so I'm still playing catch-up on the discussions had on your PR.
My main concern here is that the XR nodes and their behaviour were altered without proper consultation with the knowledge holders, and thus we didn't come up with a solution that works for everyone involved. From what I've been told, the change that broke the camel's back was only done at the end, though possibly planned for a while. Now that this breakage is leaving several people in a position where they can't continue their work until things are fixed, we're starting the conversation that should have been had.
Also, something that puzzles me: how would I know that set_bone_pose behaves this way? If there is a method with 'global' in the name and one without, I would assume one adjusts things in global space and the other in local space, not that one is safe to use and the other may end up changing data saved into my scene.
There is also the issue that this logic originates from Godot 3 days and was ported over, while now being used in a broader sense since we extended it to do body and face tracking and made the system available outside of OpenXR. So there are decisions here that predate some of the newer functionality in the animation system.
I think we have a broader problem here: we're not good at commenting code and adding design information to our documentation, explaining why we do what we do rather than just what it does. Only the latter can be distilled from reading the code; the former is what is missing. That's a broader problem, and we in the XR team are just as guilty of it, as there is very little in the code that would inform you that these changes would cause the issues they have.
@BastiaanOlij
> Also, something that puzzles me: how would I know that set_bone_pose behaves this way? If there is a method with 'global' in the name and one without, I would assume one adjusts things in global space and the other in local space, not that one is safe to use and the other may end up changing data saved into my scene.
No, the one with the suffix "override" was safe. The purpose of pose_override was to override the pose without changing the actual bone pose property, so it does not change the .tscn.
However, this design was quite hacky and also did not resolve the processing order of bone deformation. Since users only ever wanted to use the global pose, it was confusing without knowing what it was supposed to do, just as it was for you. This is why we only recently deprecated the overrides in favor of process modification.
These issues and the proper usage of the overrides were never documented, so only the few people who developed IK and other modifiers had detailed knowledge of them. It is unfortunate that this was not shared with the XR team.
Godot 4 deprecated them, but in Godot 3 in particular, an attempt was made to implement a ModificationStack, to which a local pose override was also added, making the code a nightmare. I am organizing the documentation of the animation API for 4.3, as most of the APIs have finally begun to enter a stable phase recently, and it is unfortunate that the XR modifiers are also being developed for 4.3 in a way that conflicts with it.
Anyway, I have written up the general Skeleton API process flow and how to get/set a modified pose in the Modifier PR description. If you have any questions about the implementation, please ask in the Animation channel of RocketChat or on the V-Sekai Discord server.
@TokageItLab as with everything, it all has a history to get where we need to go :)
The important thing is that we're getting the right people together now to talk about the issues, and hopefully to create a better understanding within both teams what each others requirements are.
My main concern remains the change that SkeletonModifier3D now needs to be a child of the skeleton it modifies. Part of me agrees with that change, and part of me disagrees with it.
The nice thing about having it as the root was that we had a node positioned in the right place to add other mechanisms to (including, as I previously mentioned, being able to affect this location through physics) while the skeleton remained updated in local space. This also allows us to enforce that the XR nodes be children of XROrigin3D, which is incredibly important, as all tracking data is relative to that node: not in global space, and not in the local space of the mesh whose skeleton is updated.
But I also see the appeal of having it as a child node: it is more descriptive and removes the need for an arbitrary property. There is an alternative option of not updating, or optionally updating, the root motion of the skeleton, and moving the placement code to an XRAnchor3D node. That would make it less monkey-proof for XR users, however.
The modifier has to be a child, but the node itself doesn't.
What I am working on doing in VRM is that the vrm_secondary node can be anywhere, and it manages its own internal SkeletonModifier3D child of the Skeleton3D. The only downside of internal children is that you have to pick FRONT or BACK, which basically determines whether you are at the beginning or end of the update order: the user can't really control the update order directly. For VRM I am using BACK, so it updates last in the frame, which should be perfect for physics.
For some nodes that might be OK, but I suspect that for the high-level body tracking nodes the user will want to update some things after the body tracking takes effect, so the only internal mode that can be used is FRONT. That leaves the user little ability to control the update order granularly or to tweak the weight of the modifier (in case they want to smoothly shift from body tracking to animation). This can be mitigated by exposing your own weight property on the XR nodes and forwarding it to the internal modifier node.
We will not go into detailed implementation until we know which direction to go, but I have made some changes based on lyuma's feedback.
For now, I changed the "Apply Root Motion" property to a "Root Motion Target" property that allows specifying a Node3D, such as the Skeleton's parent, as the target to which the root motion is applied.
Also, the "root_motion_processed" signal now sends Vector3 root_motion_position and Quaternion root_motion_rotation values. This will make it slightly clearer for users how to use the signals.
The latest patch seems to work quite nicely - especially with the Godot VRM patch to make many-bone-IK work with SkeletonModifier3D.
https://github.com/godotengine/godot/assets/1863707/e405d05d-a700-4b8d-817c-8f949f8a67df
By default, only the VRM character can move, as it is the only one with a Root bone. In the video above, "Use Root Motion" was enabled on all four XRBodyModifier3D nodes (including the VRM character).
We do seem to have lost XRServer.world_scale control for the driven avatar.
Pre SkeletonModifier3D this would be included in the XRBodyModifier3D transform resulting in all child objects including the avatar being scaled:
```cpp
// Transform to the tracking data root pose. This also applies the XR world-scale to allow
// scaling the avatars mesh and skeleton appropriately (if they are child nodes).
set_transform(transforms[XRBodyTracker::JOINT_ROOT] * ws);
```
This scale, which is applied to all XR-driven systems (including headset IPD), is required for:
- Allowing OpenXR (or other XR systems) which work in meters to operate in worlds designed in other units
- Allowing the player to be dynamically scaled (Alice in Wonderland effect)
UPDATE: In discussing with Bastiaan we may be moving towards deprecating XRServer.world_scale; so this may not be an issue.
There's a problem with how root motion is applied. The following video shows a mocap recording of a user running back and forth; however the avatar keeps running away forever:
https://github.com/godotengine/godot/assets/1863707/d284a5ca-45db-4c3e-a34e-cb70917ad364
This error also highlights the fact that the target node's position is updated:
```cpp
node->set_position(node->get_position() + root_motion_position);
```
As such, even after the transform order is fixed, we may end up with numerical integration errors which will result in character drift over time.
> This scale which is applied to all XR-driven systems (including headset IPD) is required for:
I am still not clear on this, but are you saying that there are other modules that need to use world scale? If so, I think the right way would be to make each module use something like XRServer::get_world_scale() to retrieve it from the singleton.
> There's a problem with how root motion is applied. The following video shows a mocap recording of a user running back and forth; however the avatar keeps running away forever:
This is the reason why I said above that I am not sure my calculations are correct. Perhaps the order of applying rotation and translation is different between glTF animation and OpenXR tracking data, and this needs to be corrected.
> As such even after the transform order is fixed we may end up with numerical integration errors which will result in character drift over time.
My guess is that the drift is not due to tracker shake, but due to the timing of the application of world scale, or something in the XR modifier that has introduced an error in the actual values (meaning the drift must be a simple miscalculation on my part, since I don't know what the OpenXR tracker data values are). If all those calculations are consistent and the drift still occurs, then I think there needs to be some sort of threshold in the XR modifier.
I have tried to fix the calculation based on the problems you pointed out.
- World scale is now calculated earlier in the process
- Regarding the rotation of the root motion, it now extracts only the Y rotation from the hips, since the root bone in the OpenXR input does not seem to contain any rotation. Is this correct?
However, I still don't know if this is correct, so I may need to have @Malcolmnixon take over as soon as we have a clear direction for the root motion implementation.
> I am still not clear on this, but are you saying that there are other modules that need to use world scale?
In discussing with @BastiaanOlij we're going to hold off on XRServer.world_scale and possibly deprecate it.
> This is the reason why I said above that I am not sure my calculations are correct.
I'll see what I can come up with. I think it's a missing (or extra) rotation: the mocap actor is always walking forwards (in the direction they're facing), and I think that's being translated into the skeleton root always moving in a positive direction.
> My guess is that the drift is not due to tracker shake
It's more that we're effectively doing position += delta on each frame. The problem is the numerical precision of IEEE 32-bit floats: for example, adding 0.001f to a position 1000 times does not give exactly 1.0f. A rough guesstimate of single-bit errors of an IEEE 32-bit float, with an approximate +/- 3 meter tracking space at 120 FPS, results in positional drift of around 60 centimeters per day; so it's probably not something anyone will notice, at least not until we have people running VR office applications for 8 hours a day.
> The root bone in the OpenXR input does not seem to contain any rotation, is this correct?
All tracker sources ensure that the Root joint is placed at ground level under the hips with Y+ up and Z+ pointing forwards.
Specifically for OpenXR we do:
```cpp
// Get the hips transform
Transform3D hips = xr_body_tracker->get_joint_transform(XRBodyTracker::JOINT_HIPS);

// Construct the root under the hips pointing forwards
Vector3 root_y = Vector3(0.0, 1.0, 0.0);
Vector3 root_z = -hips.basis[Vector3::AXIS_X].cross(root_y);
Vector3 root_x = root_y.cross(root_z);
Vector3 root_o = hips.origin.slide(Vector3(0.0, 1.0, 0.0));
Transform3D root = Transform3D(root_x, root_y, root_z, root_o).orthonormalized();
xr_body_tracker->set_joint_transform(XRBodyTracker::JOINT_ROOT, root);
```
We have similar code for VRM, Axis Studio, and Movella.
I'm going to need a little help understanding what's going on with some of the code; but I can get it to work if I comment out the "Extract root rotation from hips rotation" block (as it's already done in all body trackers) and apply root motion by:
```cpp
Node3D *node = cast_to<Node3D>(get_node_or_null(skeleton->get_root_motion_target()));
if (node) {
	node->set_transform(transforms[XRBodyTracker::JOINT_ROOT]);
}
```
@Malcolmnixon
```cpp
node->set_transform(transforms[XRBodyTracker::JOINT_ROOT]);
```
This directly sets the node's transform, which is not relative and should not be done. Is something missing for extracting the difference in XRServer?
> This directly sets the node's transform, which is not relative and should not be done
This was just a test to show the tracking data is valid and there is a simple way to apply it which achieves working results.
It's interesting you say "is not relative and should not be done", because relative isn't really important for this type of work: we can get multiple sources of tracking information (headset, controllers, hands, body, trackers strapped to the body, trackers strapped onto props, etc.) that are all in the same XR coordinate space. We actually want to position all of these objects in that same space, with their local transforms being the exact values coming out of the tracking data.
If we want to move the entire space around, we'd have all these nodes childed to some "stage" Node3D and then move that Node3D around, dragging everything else with it. This stage would have to be synchronized with the XROrigin3D if we're doing VR, so that the camera moves with the body.
> It's interesting you say "is not relative and should not be done" because relative isn't really important for this type of work
If the root motion of another Modifier or an AnimationMixer is relative (at least AnimationMixer's has historically been relative) and only the XR modifier sets an absolute value, it will completely override the other Modifiers and break FK/IK blending.
For example, if you have an animation that starts moving from the origin, and you adopt an absolute value for the root motion, the node will snap back to the origin every time the animation is played, and the root motion will not do what it is supposed to do. So root motion must be relative, not just a deformation applied to the root node.
In theory, it is simply a matter of calculating the delta between the tracker's previous and current frames and adding or multiplying it on. If world scale is obsolete and the accumulation of floating-point precision errors can be ignored, then the only remaining issue here is how to calculate the delta of the root bone's rotation value.
@Malcolmnixon I thought there was a problem with how the translations are transformed (xform), so I changed it. How is it now?
```cpp
skeleton->set_root_motion_position((current_q.inverse() * node->get_quaternion() * delta_q).xform(transforms[XRBodyTracker::JOINT_ROOT].origin - prev_root_bone_position));
```

to

```cpp
skeleton->set_root_motion_position(current_q.xform(transforms[XRBodyTracker::JOINT_ROOT].origin - prev_root_bone_position));
```
If it doesn't work, I would like you to try working out the direction in which the translations become correct.
I fixed the XRBodyModifier's direction of translation as @Malcolmnixon told me, using reference_frame; the XR modifier now uses reference_frame as the reference for the accumulator.
> @TokageItLab in the above example, the SkeletonIK3D nodes require a target for IK tracking. What node path would be used in that example?
> We can't require Skeleton3D to be a direct child of XROrigin. An indirect child may be possible, but it would carry the implication that the character's position is the center of my play area, which is both arbitrary and restrictive. Overall, this requirement would lead to major compat breakage across asset import and networking.
Originally posted by @lyuma in https://github.com/godotengine/godot/issues/90645#issuecomment-2062556704
Is there a reason not to add the use_root_motion option to XRHandModifier3D as well? When using two disembodied hands, I personally think it's simpler to move them using an XRNode3D (as enabled in PR https://github.com/godotengine/godot/pull/90645), but it seems like root motion could be used for this as well. Shouldn't we give the developer the option to do that?
@TokageItLab: To respond to something you wrote on the other issue:
> Actually, I have not yet properly grasped the need for XROrigin; I agree that if the Skeleton is split into Left/Right, there needs to be some kind of Node to bind them together, but for use cases where the Skeleton is supposed to be a single one, I don't think it is necessary. As long as the root motion is transcribed to the Skeleton as in https://github.com/godotengine/godot/pull/90361, the XROrigin should never actually need to move.
The XROrigin3D is what's used to control how the virtual space maps to the real world space. If the user teleports or walks with the joystick, we'll move the XROrigin3D to change that mapping. If there are things that need to move along with the XROrigin3D, then that's most easily handled by those things being children of the XROrigin3D. Also, there's a number of things that exist in the coordinate space of XROrigin3D, for example, all the positions/rotations we get from hand tracking and body tracking are relative to the XROrigin3D.
To relate that to your example:
> I agree that if the Skeleton is split into Left/Right, there needs to be some kind of Node to bind them together
The point of XROrigin3D isn't so much to bind the nodes representing the hands together, as it is (1) if we move the XROrigin3D, we need to move the hands too and (2) the tracking data is relative to the XROrigin3D, so if the hands are children of XROrigin3D, we can simply update their local transform to put them in the correct location.
> but for use cases where the Skeleton is supposed to be a single one, I don't think it is necessary
Due to my lack of experience with trying to use it in this context, I don't know the best way to replace that with root motion.
Maybe we could keep the hands or avatar outside the XROrigin3D and move them relatively with root motion, but then how do we move them when we move the XROrigin3D (i.e. when the user teleports or pushes the joystick for smooth motion)? I suppose we could turn that into root motion also? So we'd move the XROrigin3D but then apply the same relative change to the skeleton(s) as root motion?
Alternatively, maybe the hands and avatar stay as children of the XROrigin3D, such that when we move it, it'll move those as well. Then root motion would just be used to move them around locally within the coordinate space of XROrigin3D?
To a large degree, I think we're going to need to implement this, and then take some time to play with it and experiment, in order to find the best way to actually use it.
@TokageItLab It seems my comment above was cross-posted with a new comment from you on the other issue :-)
To attempt to respond to part of that:
> So as far as XROrigin conceivably satisfies the requirement, is the following true for this?
> - XROrigin and other XR Modules currently have no role in moving Nodes. For now, the user needs to extract the movement value from the XRServer and apply it to any Node as needed
By "XR Modules", do you mean the XR*Modifier3D nodes? If so, it is correct that those nodes no longer have any role in moving other nodes. With Malcolm's PR, that would be up to XRNode3D and its children (if desired, you don't have to pair the modifiers with an XRNode3D if you don't want to).
However, the XROrigin can have a role in moving nodes, depending on how the developer has designed locomotion in their game.
> - Skeleton need not be a child of XROrigin, but must have the same coordinates as XROrigin
I think there are valid approaches that don't have the skeleton as a child of the XROrigin. But it depends on the way locomotion works in the game.
> 2.1. When applying a movement value extracted from XRServer to a Skeleton, the same value must be applied to the XROrigin
Hm, I'm not sure I understand this one.
The tracking data that comes from XRServer would be movement in the real world. If you applied the exact same movement in the real world to the XROrigin, you'd either move twice as fast (basically double applying the movement) or stay in the same place (if you applied the inverse motion).
There is a use case for forcing the player to stay in the same place: in some games if you attempt to use your real world motion to walk into a virtual wall, it will apply the inverse motion to the tracking origin, so that you can't walk into the virtual wall in virtual space.
But in most cases, you wouldn't want to move the XROrigin based on tracking data from the XRServer, it'd be in response to teleporting or smooth motion using a joystick.
> 2.2. If the Skeleton is a child of an XROrigin, only the XROrigin needs to be moved
Yep!
> 2.3. If the XROrigin is a child of a Skeleton, only the Skeleton needs to be moved
I'm not totally sure about the use case where the XROrigin3D is a child of the skeleton. You wouldn't want skeletal animations to move the tracking space; this would just jerk the camera around in a way that probably wouldn't be too comfortable. :-) And if those animations came from real-world tracking data (i.e. via XRBodyModifier3D), this would be basically the same situation I mentioned under point 2.1, where we'd double-apply the movement or cancel it out.
> 2.4. If the XROrigin is attached to the Root of the Skeleton by BoneAttachment, only the Skeleton needs to be moved
Hm, again, I don't think we want the skeleton moving XROrigin3D.
@dsnopek
After discussing this with @lyuma, at this stage I think it is possible to postpone this PR to 4.4 without rushing it.
First, one purpose of this PR was to remove that behavior so that the XR modifiers would not bake the wrong movement application into core. Perhaps that was already removed in PR #90645? I remember Malcolm saying something to that effect.
If that is true, then I understand that after PR #90645, the XR modules have no role in moving Nodes and only provide the values to the user. I see no problem with that, since the values can then be modified and used at the user's option. There is no need to rush this PR.
To make this most clear: my request to the XR team at this stage is to ensure that the XR modules do not currently have a role in moving Nodes, in order to avoid future conflicts and compatibility breakage.
> Is there a reason not to add the use_root_motion option to XRHandModifier3D as well? When using two disembodied hands, I personally think it's simpler to move them using an XRNode3D (as enabled in PR https://github.com/godotengine/godot/pull/90645), but it seems like root motion could be used for this as well. Shouldn't we give the developer the option to do that?
I don't know if the hand modifier can provide the movement for that position. Alternatively, it could be possible to extract the movement value from the XR camera as well.
So, the idea I have now is to add a new SkeletonModifier to extract the root motion, which would be named XRRootMotionModifier. It would be able to select which tracker or camera to extract the root motion from, although it would replace this PR.