Animation-Instancing
Layers
Was thinking about the best way to handle layers, looking for some feedback on approaches.
1. Abstract out the concept of a playing animation from AnimationInstancing, so an instance can have multiple animations playing. This would require passing more data to shaders when more than one layer is active, plus logic for masking. It would change a lot of code throughout the entire runtime flow.
2. Handle it at generation time: record each combination you want. This would require some new logic to start/record multiple animations playing, but it seems far simpler than #1. It could potentially use a lot of memory, but I think it would work well for most use cases and seems like a good bang for the buck.
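To make the memory trade-off in #2 concrete: if every layer pairing is baked at generation time, the number of pre-recorded clips is the product of the animation counts per layer. A tiny sketch (the function name is mine, and real usage might whitelist only the pairs that actually occur):

```python
def baked_clip_count(anims_per_layer):
    """Number of pre-recorded combinations needed if every cross-layer
    pairing is baked at generation time (hypothetical helper)."""
    total = 1
    for n in anims_per_layer:
        total *= n
    return total

# e.g. 8 full-body animations x 4 upper-body overlays -> 32 baked clips
```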
One issue with #2 I didn't think of: synchronization. Cross fading would probably be good enough in some cases, but I don't think it would be good enough for anything production quality.
Ok, so another non-obtrusive way that I think might work, even though it's a bit of a hack, is to give AnimationInstancing a layer and then have an AnimationInstancing instance per layer.
MaterialPropertyBlock names would include the layer. The shader would change a bit so that loadMatFromTexture takes the bone texture parameters as arguments.
The manager would have two flow changes.
- Skip bone matrix changes and rendering for inactive layers.
- Render would split into two loops. The first applies property blocks: we take the property-block settings for all layers but apply them only to the layer 0 instance's property block, and instead of issuing draw commands we stash the command parameters in a list, one entry per set of instance layers. After that loop is done, we iterate over the stashed commands and actually issue the draw commands.
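The split render flow above could look something like this Python sketch of the manager logic (the real code is C# around `Graphics.DrawMeshInstanced`; `DrawCommand`, `render`, and `issue_draw` are names I made up, and property blocks are modeled as plain dicts):

```python
from dataclasses import dataclass

@dataclass
class DrawCommand:
    # hypothetical shape of the stashed parameters; the real manager would
    # hold the mesh, material, transforms, and property block for the draw
    mesh_id: int
    instance_count: int
    properties: dict

def render(instance_sets, active_layers):
    """Pass 1: merge every active layer's property-block settings into the
    layer-0 block and stash the draw parameters. Pass 2: issue the draws."""
    stashed = []
    for layers in instance_sets:  # one entry per set of instance layers
        merged = {}
        for index, instance in enumerate(layers):
            if index not in active_layers:
                continue  # skip bone-matrix updates/rendering for inactive layers
            merged.update(instance["properties"])  # fold into the layer-0 block
        stashed.append(DrawCommand(layers[0]["mesh_id"],
                                   layers[0]["count"], merged))
    # second loop: actually issue the stashed draw commands
    return [issue_draw(cmd) for cmd in stashed]

def issue_draw(cmd):
    # stand-in for the real instanced draw call
    return (cmd.mesh_id, cmd.instance_count, sorted(cmd.properties))
```

Deferring the draws this way is what lets all layers contribute their bone textures to a single instanced draw per instance set.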
The masking in the shader should be fairly straightforward. You could just hardcode which bones are in a layer to start with and pass that data into the shader when you set the bone texture data.
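The per-bone masking decision can be sketched outside the shader like this (Python standing in for HLSL; `LAYER_BONES` and `bone_matrix` are hypothetical names, with layer 1 hardcoded to a few upper-body bone indices as the text suggests for a first pass):

```python
# Hardcoded per-layer bone masks: layer 0 owns every bone,
# layer 1 owns a handful of upper-body bones (indices are made up).
LAYER_BONES = [set(range(32)), {10, 11, 12, 13}]

def bone_matrix(layer_matrices, bone_index):
    """Pick the bone matrix from the highest layer whose mask contains this
    bone, falling back to layer 0 -- the per-bone choice the shader would
    make once loadMatFromTexture receives bone texture params per layer."""
    for layer in range(len(layer_matrices) - 1, -1, -1):
        if bone_index in LAYER_BONES[layer]:
            return layer_matrices[layer][bone_index]
    return layer_matrices[0][bone_index]
```

A production version would pass the mask as per-bone data alongside the bone texture rather than branching over hardcoded sets.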
Ok, so that pretty much works. I had to add a parameter to AddMeshVertex so instances can set a unique id for the material block, forcing it to create one for each instance. The shaders are getting all the info they need; I just need to actually handle the masking now.
One thing to consider is whether supporting submeshes and multiple materials really makes sense. Solving that in the skinning system adds far more complexity than simply combining your skinned meshes and creating an atlas.
Thanks for your work. The first one is what I had thought of. I think the second one is less flexible and costs more memory: if we generate the combinations offline, the user needs to generate all the permutations. The third one I'm not clear on for now, but I think there will be gaps between layers while rendering.
A layer per instance is kind of like the first approach; it's just easier to implement without having to change a lot of code. So that's what I did to get the logic down.
The trick is cross fading 'back' into the base animation. If you have two layers, layer1 is the upper body, and you cross fade into an attack, that looks fine. It's cross fading back to match layer0 that is the issue, which is what I think you mean by gaps.
That issue exists regardless of the approach though.
The bigger issue I eventually ran into is that cross fading doesn't blend bone weights, so cross fading between animations that are not similar results in very noticeable distortion/stretching in the mesh. Fixing that requires some of the same changes that adding layers does: mainly, the shader needs information on both animations.
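The distortion is easy to see in miniature: a cross fade that only has the baked result of each animation ends up lerping the bone matrices element-wise, which does not preserve rotation. A hedged sketch (`crossfade_bone` is my name; matrices are plain nested lists):

```python
def crossfade_bone(mat_from, mat_to, t):
    """Naive per-element lerp of two 4x4 bone matrices. Lerping matrices
    like this is what stretches the mesh when the two poses differ a lot;
    blending properly means decomposing to translation/rotation/scale and
    slerping the rotation, which is why the shader needs data for both
    animations rather than one pre-faded result."""
    return [[(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mat_from, mat_to)]
```

Halfway between a rotation and its inverse, for example, this produces a matrix that is no longer a rotation at all, so the skinned vertices collapse toward the bone.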