pyblish-base
Dependencies and Cascading Updates
Goal
Upon updating an asset, automatically update assets which depend on said asset.
Example
A rig may depend on a model. So when the model is updated, so should the rig. Taken further, if an animation uses the rig, then the pointcaches and animation curves generated using this rig should be updated too.
At the End of the Rainbow
Imagine having an asset in a number of shots and needing a change to propagate throughout all of them. Without this feature, you would update the asset and then manually head into each shot, updating the asset there and re-publishing.
What this feature provides is the ability to make such a change and programmatically re-cache each shot, without ever having to enter it.
Push
Either trigger a re-cache of all related shots via push.
$ pyblish publish characterModel.mb --push
# Publishing dependent asset: CharacterRig
# Publishing dependent resource: shot01/Character01/pointcache
# Publishing dependent resource: shot03/Character01/pointcache
# Publishing dependent resource: shot03/Character02/pointcache
# Publishing dependent resource: shot12/Character01/pointcache
# ...
Pushing will look at where an asset is used and propagate the change downstream.
Pull
Alternatively, make note of what is in need of an update, and automatically update instances prior to publishing using pull.
$ pyblish publish shot05.mb --pull
# Updating instance: Character01 (v13 -> v14)
# Updating instance: Character02 (v13 -> v14)
Pulling will look at which assets were used to create this asset and pull for changes, i.e. check whether any more recently published versions exist than the one currently in use.
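The pull check reduces to comparing each loaded instance's version against what has been published since. A sketch under assumptions: the two mappings below are hypothetical representations of how a pipeline might track versions in use versus versions on disk.

```python
def pull(instances, available):
    """Return the updates a scene needs: (name, current, latest) per stale instance.

    `instances` maps instance name -> version currently in use;
    `available` maps asset name -> list of published versions on disk.
    Both mappings are assumptions, not Pyblish API.
    """
    updates = []
    for name, current in instances.items():
        latest = max(available.get(name, [current]))
        if latest > current:
            updates.append((name, current, latest))
    return updates
```

Running this over shot05's instances would yield the "v13 -> v14" style updates shown in the example output above.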
Architecture
# Jargon
ASSET: One or more files on disk
INSTANCE: A loaded asset
Upon each publish, ASSETS used in an INSTANCE are tagged with a dependency. For example, if a model was used in the creation of a rig, then the model is tagged as a dependency of the rig upon the rig being published.
CharacterRig
- CharacterModel
In this example, CharacterRig depends on CharacterModel. When CharacterRig is published, a dependency is added to the ASSET of CharacterModel.
Note that I say ASSET. We are interested in making a persistent mark in the file with which the contained instance was created. In this case, we want the ASSET of CharacterModel, which is located somewhere on disk, to get tagged with a dependency on the ASSET of CharacterRig, which is also located somewhere on disk.
/Character/model/metadata.json
{"dependencies": ["CharacterRig"]}
As such, whenever CharacterModel is published, it can look at which ASSETS depend on it and suggest an update of them too.
Similar to our repair() functionality, this wouldn't be executed by default, but rather be supplied as an option and, at the very least, logged clearly to the user so he could make an educated decision about what to do.
In Practice
For each publish, I'd like a report of all assets that are in need of an update due to this publish, i.e. assets which have been dirtied by this publish.
From there, I'd like to either run a command which handles the cascading update automatically, as best it can, providing feedback via something like a Web Frontend.
Alternatively, I'd like the ability to specify that this should happen silently in the background when I run a publish, similar to auto_repair.
$ pyblish publish my_asset.mb --push --auto_repair
Problems
Of course, not all ASSETS are compatible with such automatic updates. The rig might, for example, change in such a way that the animation must be manually updated, as is the case if a character was given an extra arm or otherwise had its interface modified in a way which broke backwards-compatibility.
However, with dependencies in mind, it would be possible to design a workflow which covers the majority of automatic updates. After all, the vast majority of updates to any ASSET are, in my experience, small and mostly invisible.
Added section "At the End of the Rainbow"
it definitely sounds like a nice feature.
I guess the biggest problem would be how to revert a mistake?
Not necessarily, as each update would increment a version and thus not break anything existing.
Also, I imagine each automatic publish going through the same set of validations as a manual publish would, and thus failure could be reasonably controlled. We could introduce a strictness level on automatic publishes, such that if a publish is automatic it gains an additional set of plug-ins that validate it. Things that would otherwise be a no-brainer when done by a human.
I would also imagine that reverting a version is merely a matter of removing it from disk. Any automatic update could simply look at which versions currently exist, so if any have been discarded, no harm no foul.
Deprecation
Thinking further ahead, I'd imagine there to be a deprecation scheme for versions eventually.
Not only if a version is found to be faulty - in which case it could be marked deprecated and thus excluded from automatic updates (and warn users who end up using them) - but also for versions that are no longer in use due to age. For disk-space optimisations, deprecated versions could be automatically removed after a certain amount of time has passed.
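A deprecation scheme along those lines could be sketched as below. The metadata schema (a "deprecated" mapping with a reason and timestamp) is an assumption for illustration, not an existing Pyblish structure.

```python
import time

def deprecate(metadata, version, reason="faulty"):
    """Mark a version deprecated so automatic updates skip it.

    The schema here is hypothetical: a per-asset metadata dict gains a
    'deprecated' mapping of version -> {reason, timestamp}.
    """
    metadata.setdefault("deprecated", {})[version] = {
        "reason": reason, "at": time.time()}
    return metadata

def purgeable(metadata, max_age, now=None):
    """Versions deprecated longer than `max_age` seconds; candidates
    for automatic removal as a disk-space optimisation."""
    now = now if now is not None else time.time()
    return [v for v, info in metadata.get("deprecated", {}).items()
            if now - info["at"] > max_age]
```

An automatic update would then filter any version present in the "deprecated" mapping, and a periodic job could delete what purgeable() returns.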
Things that would otherwise be a no-brainer when done by a human.
Such as
- does this file open?
- are there any unknown nodes in this file?
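Those two checks could be sketched as plain validator functions collected into a strictness set for automatic publishes. The scene dictionary shape and the validator signatures below are assumptions for illustration; they stand in for real Maya/Pyblish plug-ins.

```python
def run_validators(scene, validators):
    """Run each validator; collect failure messages instead of
    stopping at the first, so a report can show everything at once."""
    failures = []
    for validate in validators:
        try:
            validate(scene)
        except Exception as exc:
            failures.append(str(exc))
    return failures

def file_opens(scene):
    """Does this file open? (Here faked via a flag on the scene dict.)"""
    if not scene.get("opens", True):
        raise ValueError("file does not open")

def no_unknown_nodes(scene):
    """Are there any unknown nodes in this file? (Node names are assumed.)"""
    unknown = [n for n in scene.get("nodes", []) if n.endswith("Unknown")]
    if unknown:
        raise ValueError("unknown nodes: %s" % ", ".join(unknown))
```

An automatic publish would run the manual validator set plus these extras; a human-driven publish could skip them.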
I should have specified my scenario. If I publish a rig where I have broken an arm's IK and all the animation files get updated to use this version, would I have to:
- manually revert all those back to the working version?
- quickly publish the fix, meanwhile the animators wait?
- go back to the working version, and republish this version?
I think this could still happen, even without automated updates. But let's play it out.
- 10 shots
- 10 assets
- 10 animators
One of the assets, the main character's rig, is updated. The rig is broken, but still manages to pass through validation and all shots are pushed with this change.
____________
| |
| IK Rig |
|____________|
____________________|_______________________
______|_____ ______|_____ ______|_____ ______|_____
| | | | | | | |
| Shot 1 | | Shot 2 | | Shot 3 | | Shot 4 |
|____________| |____________| |____________| |____________|
Some things to note:
- As the rig would be the first to update, there would be no waiting time for animators.
- As each shot is independent of each other, they could all run in parallel.
We could remedy this in one of two ways: before or after it happens.
After it happens
Due to the second point above, it shouldn't take long, and a summary of the run could include:
- A full update-cycle to take less than 10 minutes, regardless of the amount of shots.
- Playblasts from each involved shot.
- A node-graph, like Nuke, of where things are/were happening and what caused it.
This way, errors could be spotted after the fact and remedied. The node-graph feature could, for example, be provided via the Web Frontend and get updated in real-time. Each node could visualise its results in the form of a playblast, like Fusion or Shake, making it easier to spot errors.
This also brings up another potential feature of the front-end; linked events.
However, ideally, the error should never have trickled down the pipe to begin with.
Before it happens
As an alternative, there could be validators in place to spot errors before they are ever extracted.
In the case of validating a rig, you could include a dedicated working scene with applied animation; a scene in which the resulting values of each controller are known and can be tested against.
I've heard of this referred to as a "workout". From looking at it, it would simply look like a t-pose character with each individual control moving and having its resulting values tested each frame. Thus, if the IK breaks, the values of the resulting joints would change and trigger a validator to fail.
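The workout comparison could be sketched as below. How the rig is sampled per frame and how the baseline is recorded are assumptions; `evaluate` and `expected` stand in for whatever a real pipeline would use to read joint values out of the scene.

```python
def validate_workout(evaluate, expected, tolerance=0.001):
    """Compare rig output against a recorded 'workout' baseline, frame by frame.

    `evaluate(frame)` returns {joint: value} for the rig under test;
    `expected` maps frame -> {joint: value} recorded from a known-good
    version of the rig. Returns the (frame, joint) pairs that diverge.
    """
    broken = []
    for frame, baseline in expected.items():
        actual = evaluate(frame)
        for joint, value in baseline.items():
            if abs(actual.get(joint, float("inf")) - value) > tolerance:
                broken.append((frame, joint))
    return broken
```

A healthy rig returns an empty list; a broken IK shows up as the exact frames and joints that drifted, which a validator could then report.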
Updated "top-down" and "bottom-up" with Push and Pull.
What would happen to shots that are already in progress? E.g. an animator has a shot open that needs updating.
Hm, could you expand on that a little?
I publish a rig while an animator has a Maya session open with a shot that needs the rig update. The file on disk gets opened by Pyblish, the rig reference updated and saved back to disk. Meanwhile the animator still has the old version of that shot open; they finish their work and save the Maya scene, resulting in the shot file now being saved with the old rig reference.
This could be prevented if you had a reference validation in place along with the updating of dependencies, so the animator will just get prompted to update the rig (or Pyblish will do it automatically while publishing).
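Such a reference validation could be sketched as below. The mappings are assumptions about how references and published versions might be tracked; a real validator would read them from the open scene and from disk.

```python
def stale_references(references, available):
    """References pointing at anything but the latest published version.

    `references` maps asset name -> version referenced in the open scene;
    `available` maps asset name -> versions published on disk.
    Returns {name: (referenced, latest)} for every out-of-date reference;
    a validator built on this would prompt the artist (or update
    automatically) before allowing the publish.
    """
    return {name: (version, max(available[name]))
            for name, version in references.items()
            if version < max(available.get(name, [version]))}
```

An empty result means the scene is current and the publish may proceed.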
Ah, this wouldn't affect development files, only published files.
For example, an animator working on a shot will eventually produce a pointcache. This pointcache will have a version which is potentially used in versions of other assets, such as a simulation. These are the files which would be affected by an automatic update.
Private & Public
I realise that I'm assuming there to be a distinction between development/private and published/public files
- Private files being manually edited and published from.
- Public files being automatically generated (by Pyblish?), shared and immutable.
I've seen studios that aren't making this distinction, using private files directly, e.g. by referencing a rig directly from a user's development file. Though this workflow is fine, it wouldn't be suited to this type of automatic update.
Here's how versions would increment in the event of a manual update with dependencies.
Asset | Updated | From | To |
---|---|---|---|
CharacterModel | manually | 10 | 11 |
CharacterRig | automatically | 8 | 9 |
Shot 5 | automatically | 16 | 17 |
Shot 8 | automatically | 3 | 4 |
Shot 5 | automatically | 6 | 7 |
When separating between private and public files, an animator working on Shot 5 would not be affected, but might be required to update his rig before being allowed to publish. Once published, Shot 5 would simply increment again.
Asset | Updated | From | To |
---|---|---|---|
Shot 5 | manually | 16 | 17 |
More conversation here: https://groups.google.com/forum/#!topic/python_inside_maya/KF2wYEdqe5g
The problem still occurs when affecting only the public files, and even more so.
Say a private shot file is worked on, and it is referencing assetA version 7. If the private files don't get updated, then every time the private files gets published, public files will be referencing version 7 unless you continually update the public files.
but might be required to update his rig before being allowed to publish
do you mean that there will be a validator in place for making sure that you have the latest version when publishing?
Say a private shot file is worked on, and it is referencing assetA version 7. If the private files don't get updated, then every time the private files gets published, public files will be referencing version 7 unless you continually update the public files.
Isn't this still a problem without this type of automation?
It's always the artist's responsibility to ensure he's using up-to-date versions, unless there is a validator in place to check for it. There are a number of ways to deal with it, but it's mostly outside the scope of Pyblish.
- An artist can look for versions prior to publishing anything.
- A notification can be provided to artists upon having an asset they are using become updated.
- An "Instance Manager" could provide artists with an interface to their currently used version and versions available/approved/deprecated.
Are you referring to allowing public files to be edited? That's fine, but isn't suited for this type of automation as it blurs the responsibility between private and public content.
I think if there is a reference version validator in place, then it'll solve the problem.