Workfiles: Add basic workfiles api
Changelog Description
Moved functionality out of the workfiles tool and shifted a lot of responsibility to the host integration.
Additional info
First of all, we did not have functions that would mimic what the workfiles tool did, which made it difficult to use workfiles in automations (for example). At the same time, the functionality in the workfiles tool was "generic", which worked for most DCC integrations, but not all of them -> it was necessary to move the responsibility to the host integration.
Host's responsibility
Host integration now has more targeted functions that are related to what it really does, and is context-aware. Instead of just "open workfile" and "save workfile" it has "Open workfile in context", "Copy workfile to context", "Copy representation as workfile to context", "List workfiles for context" and "List published workfiles for context". These functions are pre-implemented on the IWorkfileHost interface to do what was done before this PR, but they can be overridden by the host integration if necessary.
- NOTE: This causes that IWorkfileHost has pre-implemented methods that should be abstract, but making them abstract would break all existing host integrations, and most of them would implement the exact same logic.
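For illustration, overriding one of these pre-implemented methods could look roughly like this (a minimal sketch; the method name comes from the PR's interface, while the argument names and _prepare_session are assumptions):

```python
from ayon_core.host import HostBase, IWorkfileHost

class MyDCCHost(HostBase, IWorkfileHost):
    def open_workfile_with_context(self, filepath, folder_entity, task_entity):
        # Hypothetical host-specific preparation before the file opens.
        self._prepare_session(folder_entity, task_entity)
        # Fall back to the default behavior pre-implemented on IWorkfileHost.
        super().open_workfile_with_context(filepath, folder_entity, task_entity)
```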
API functions
The logic in the workfiles tool triggered events before/after a workfile was saved and on context change. This functionality was kept, BUT based on my (quick) research it was used only by host integrations to run additional functionality on context/file change -> with the new methods introduced, that could potentially be omitted over time. The event triggers are no longer fully reliable, as they now have to go through the utils functions in the workfiles API.

Please write down comments on what is good/bad, what should be moved elsewhere, what should happen in a different order, etc.
Testing notes:
- The workfiles tool still does its job.
- Using the workfiles API functions does the same job.
> Host integration now has more targeted functions that are related to what it really does, and is context-aware. Instead of just "open workfile" and "save workfile" it has "Open workfile in context", "Copy workfile to context", "Copy representation as workfile to context", "List workfiles for context" and "List published workfiles for context". These functions are pre-implemented on the IWorkfileHost interface to do what was done before this PR, but they can be overridden by the host integration if necessary.
Having dedicated entry points for a save to another context - I like it; it may help simplify some logic that we currently trigger when opening a workfile from another context.

However, from the methods I'm not entirely sure how one would identify that the context is actually a different context than the one the user was/is currently operating in?

Say I wanted to reimplement that method and do something like "if saving to another context then do X". Actually, I'd love it if the event system would emit that too.
> NOTE: This causes that IWorkfileHost has pre-implemented methods that should be abstract, but making them abstract would break all existing host integrations, and most of them would implement the exact same logic.
I think this is absolutely fine actually. Perfect.
> The logic in the workfiles tool triggered events before/after a workfile was saved and on context change. This functionality was kept, BUT based on my (quick) research it was used only by host integrations to run additional functionality on context/file change -> with the new methods introduced, that could potentially be omitted over time. The event triggers are no longer fully reliable, as they now have to go through the utils functions in the workfiles API.

Please do keep the events - they are the easiest way to 'install' any custom studio behavior on save/load, or anything that does not require actually changing host integrations. I actually think it'd be welcome if we'd also start emitting custom events for "save to different context" and "open from different context" (incl. before/after files).
> Please write down comments on what is good/bad, what should be moved elsewhere, what should happen in a different order, etc.
This is a massive PR - it doubles the line count - but it's lacking the production examples / use cases to actually TEST with and confirm the changes against. I think we should find a few cases that warrant these changes and try (as part of testing the PR) to implement those.

Also, I couldn't directly find - API-wise - how to easily set e.g. a workfile "note". Did that API already exist so that this could be easily done? Do you have an example snippet perhaps?
> However, from the methods I'm not entirely sure how one would identify that the context is actually a different context than the one the user was/is currently operating in?

The host is responsible for holding the current context; most of them use the default implementation (using env variables). So you can compare the current context with the context passed in. I don't think, API-wise, that you should pass the host its previous context; the host knows that better than whoever is triggering the method.
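For illustration, a host override could detect a cross-context save roughly like this (a sketch; get_current_context is the existing method for the current context, the rest of the signature is an assumption):

```python
def save_workfile_with_context(self, filepath, folder_entity, task_entity, **kwargs):
    # Compare the host's current context with the target context.
    current = self.get_current_context()
    changing_context = (
        current["folder_path"] != folder_entity["path"]
        or current["task_name"] != task_entity["name"]
    )
    if changing_context:
        ...  # "if saving to another context then do X"
    super().save_workfile_with_context(filepath, folder_entity, task_entity, **kwargs)
```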
> Also, I couldn't directly find - API-wise - how to easily set e.g. a workfile "note". Did that API already exist so that this could be easily done? Do you have an example snippet perhaps?

True, I focused on moving what we had in the workfiles tool. Are there more API functions that come to your mind?

I would say that the host workfiles implementation should also be responsible for the change of current context (because it is the host's responsibility), which is not true now. The reason is that it triggers the events -> we would have to move the responsibility to trigger the events to the hosts.
I got this in Nuke:

```
Traceback (most recent call last):
  File "E:\Ynput\ayon-core\client\ayon_core\tools\workfiles\widgets\files_widget.py", line 269, in _on_workarea_save_clicked
    self._controller.save_as_workfile(
  File "E:\Ynput\ayon-core\client\ayon_core\tools\workfiles\control.py", line 530, in save_as_workfile
    self._workfiles_model.save_as_workfile(
TypeError: save_as_workfile() missing 1 required positional argument: 'description'
```
> I would say that the host workfiles implementation should also be responsible for the change of current context (because it is the host's responsibility), which is not true now. The reason is that it triggers the events -> we would have to move the responsibility to trigger the events to the hosts.

I understand the reasoning - I'm just wondering what we'd gain (in actual use cases as of now) by doing so. Why couldn't some of that just rely on an event getting triggered?

In that setup, core does the same "base logic", but if you want additional things to happen, then you connect to the events? The upside of events is that you can hook into them even from custom addons, instead of only being able to access them from the host addon itself. That way, it's just a set of 'things to respond to' instead of completely changing around how the workfiles API (may) work in each DCC. Is it maybe adding too much code/maintenance for little gain (if any?) in actual production use cases?
What I am missing in the discussion about the Workfile API is the actual list of things it provides, so here it goes, extracted from the code:
- WorkfileInfo dataclass - 🔗 link
- PublishedWorkfileInfo dataclass - 🔗 link
- IWorkfileHost interface - 🔗 link
  - save_workfile(...)
  - open_workfile(...)
  - get_current_workfile(...) -> Optional[str]
  - workfile_has_unsaved_changes(...) -> Optional[bool]
  - get_workfile_extensions(...) -> list[str]
  - save_workfile_with_context(...)
  - open_workfile_with_context(...)
  - list_workfiles(...)
  - list_published_workfiles(...)
  - copy_workfile(...)
  - copy_workfile_representation(...)
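As a rough usage sketch of the interface above (the no-argument calls are assumptions, since the actual signatures are elided in the list):

```python
from ayon_core.pipeline import registered_host

host = registered_host()

# Query helpers from the interface (exact arguments may differ).
extensions = host.get_workfile_extensions()
current_path = host.get_current_workfile()
if current_path and host.workfile_has_unsaved_changes():
    host.save_workfile(current_path)
```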
Maybe not for this PR, but I think we should add some type of workfile localization into the API. So far I think only Nuke has something native. But especially with massive workfiles, being able to transparently localize the scene so it opens much faster is really beneficial. But that adds a lot of complexity (local storage, cleanup, etc.). So just an idea.
> In that setup, core does the same "base logic", but if you want additional things to happen, then you connect to the events? The upside of events is that you can hook into them even from custom addons, instead of only being able to access them from the host addon itself. That way, it's just a set of 'things to respond to' instead of completely changing around how the workfiles API (may) work in each DCC. Is it maybe adding too much code/maintenance for little gain (if any?) in actual production use cases?

It's not "additional"; the host integration is responsible for the "current context" and, with IWorkfileHost, for workfiles handling. And if you "open a workfile", you also "change context" - not because someone called a method to change context, but because the workfile is related to some context. It is not a 2-step operation, it is a 1-step operation.
> Maybe not for this PR, but I think we should add some type of workfile localization into the API. So far I think only Nuke has something native. But especially with massive workfiles, being able to transparently localize the scene so it opens much faster is really beneficial. But that adds a lot of complexity (local storage, cleanup, etc.). So just an idea.

I don't understand what you mean by "workfile localization"?
Where @BigRoy mentioned that creating the work directory like this doesn't fully initialize the work directory and it should happen in a similar manner to
Also, when copying a workfile, we need to consider/check if there are any existing workfiles and upversion accordingly.
There are 2 things:
- Host API: what we require the host class to implement to do what it has to do. This does not have to be the same for each host and may depend on the order in which things happen.
- Pipeline API: might do more "generic" things than the host API.

The host integration should care about what is its responsibility: open a workfile and set its context. It should not care if there is a newer/older version or if the file already exists; it is: "I was told to open/save a file here in this context, so do it."

The pipeline API can do the "additional steps", like creating extra directories.
But what I don't know is where the line is for some of the features we have now.
- Trigger of events before/after workfile open and on context change. This currently lives outside of the host, but it becomes more and more clear that the host should be responsible for it.
- Create workfile entity -> is this something the host should be responsible for?

Is it ok that pipeline functions combine functions that are meant to be called by developers (e.g. in automation) with functions that are meant to be called by the host? Because it is confusing as hell now, even for me, and there is no simple solution; there are also functions that could be called by both. (For example: open workfile in context -> should I call it on the host or via a function in pipeline.workfiles?)
> Also, why do I have to compute and pass dst path? I expected AYON to be able to compute the dst path itself, since it already knows the context I want.

I don't have a straight answer for that. For me that is a util function that wraps the base logic with some cherries on top; if you give me examples of how you want to use it, I can implement it, but we need the "base" functionality first to make these util functions as simple and reliable as possible.

The function in your example can be simplified significantly with this PR, e.g. using save_current_workfile_to would reduce a lot of code, but finding the last workfile and versioning it up is something that could be another utils function.
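As a sketch of what such a utils function could look like - everything here is hypothetical and only illustrates the "find the version token in the filename and bump it" idea:

```python
import os
import re

def find_next_version_path(workfile_path):
    """Return the workfile path with its version token incremented by one."""
    dirname, filename = os.path.split(workfile_path)
    match = re.search(r"v(\d+)", filename)
    if match is None:
        raise ValueError(f"No version token found in {filename!r}")
    next_version = int(match.group(1)) + 1
    padding = len(match.group(1))
    new_filename = (
        f"{filename[:match.start()]}"
        f"v{next_version:0{padding}d}"
        f"{filename[match.end():]}"
    )
    return os.path.join(dirname, new_filename)
```

The result could then be passed to e.g. save_current_workfile_to.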
Events
Another thing related to events is that there are now hosts that are 100% based on the events and their order. They use events to handle workfile -> context changes, which is what this PR is trying to fix to be able to do "Open workfile in this context" and "Save workfile in this context", BUT they would now break if we changed the default order of events.

I think it is ok to trigger the events, but the host integration should not be based on the events; the host integration should trigger them instead.
After some thoughts I came to the conclusion that the "host is responsible" for almost everything. I moved a lot of the code from utils to IWorkfileInterface. I also added more responsibility to HostBase; it is now fully responsible for the change of context.

Added a few "helper" methods to the host integration so the host can do certain things before/after a workfile/context change, so it does not have to listen to events and does not have to override the whole base methods.

The events are still triggered, but in the end should not be used by the host integration. In the ideal scenario the events would be used only by addons that do "additional" stuff different from the host integration.
I'm a bit confused about the version argument that can be passed in optionally. If I don't pass it - is the version 'calculated'? (I couldn't find it.) It seems to just not set it if not passed.
Also, what if I pass just a random version number (not necessarily the version number in the filename)? What happens?

Are we actually using this version on the workfile entity anywhere? If so, shouldn't we formulate the API in such a way that the value is reliable, or?
> I'm a bit confused about the version argument that can be passed in optionally. If I don't pass it - is the version 'calculated'? (I couldn't find it.) It seems to just not set it if not passed.
> Also, what if I pass just a random version number (not necessarily the version number in the filename)? What happens?
> Are we actually using this version on the workfile entity anywhere? If so, shouldn't we formulate the API in such a way that the value is reliable, or?

The version is not calculated automatically; it is not filled in the workfile entity if it is not passed.

The workfile entity is now used to show workfiles created by other users that are not on your disk, and it will probably be used in the future, e.g. in the launcher tool to launch an application with a specific workfile.

From my current point of view, the version is metadata for the UI. We technically do allow creating a workfile without a version, if it's not created by our API for example. And if you, as the developer calling the method, don't fill it, then you probably don't know how to get the value either, so how could we guess it, or?

The version might have a larger meaning in the future, but right now I don't know what the use case would be?
> The version is not calculated automatically; it is not filled in the workfile entity if it is not passed.
> The workfile entity is now used to show workfiles created by other users that are not on your disk, and it will probably be used in the future, e.g. in the launcher tool to launch an application with a specific workfile.
> From my current point of view, the version is metadata for the UI. We technically do allow creating a workfile without a version, if it's not created by our API for example. And if you, as the developer calling the method, don't fill it, then you probably don't know how to get the value either, so how could we guess it, or?
> The version might have a larger meaning in the future, but right now I don't know what the use case would be?

In short, the version isn't actually anything at this point - just unused data, and also 'unreliable' data because it's not set to anything meaningful. Which leaves me to wonder: is there even still any merit in having it?
> I suppose this is because they are not using the host.save_workfile() logic.

Method save_file is kept to "just save" the workfile as before. It has to be changed to use a different method. But it is a good point, because there is no method or helper function to save the current file to the current context (one that does not require folder and task entity).
> Method save_file is kept to "just save" the workfile as before. It has to be changed to use a different method. But it is a good point, because there is no method or helper function to save the current file to the current context (one that does not require folder and task entity).

Why couldn't save_workfile still end up setting up the author metadata accordingly? Isn't that what we're after - that the simplest entry point still does it? (Or ANY saving through the AYON workfiles API generates the metadata we want?)
I would say that it should just pass along the current user as the author - by default.
> Why couldn't save_workfile still end up setting up the author metadata accordingly?

It is the method that is 100% overridden by host integrations (because it is abstract). It was not designed to do any of that. Calling that method can't fill the workfile entity... We would have to monkey-patch it, and with monkey-patching we would also have to handle NOT doing it when it's done via one of the newer methods, because they already do that.

So the remaining question is whether the optional arguments for the new methods are ok, or whether we should wrap them into some object to make our lives easier in the future (talking about project entity, anatomy, workfile entities, etc.).
> So the remaining question is whether the optional arguments for the new methods are ok, or whether we should wrap them into some object to make our lives easier in the future (talking about project entity, anatomy, workfile entities, etc.).

I'd say if they belong together (i.e. they need to be passed together) they should be grouped. Otherwise no.

I've wrapped the optional speed-enhancement arguments into wrappers. All the values are then transformed into dataclasses that are used for the rest of the logic.
> I actually think host.save_file(filepath, workfile_info) would've made more sense, where you could attach for example:

Method save_file is an existing method that can't be easily changed to do more stuff. This must be backwards compatible to some degree, and changing save_file is not an option from my point of view (the first argument is optional).
> Where if e.g. author is not passed, it defaults to the current user?

Yes.
> I'm also still a bit confused by the fact that we can pass version and comment metadata. Because it seems weird that I can pass an arbitrary version and completely take control over the filename, hence not even matching the version info. The same goes for comment, which is the "in-filename" comment string - it seems weird to pass a different comment than what is actually imprinted in the filename. I wonder if that comment and version should even be arguments for metadata to begin with?

version and comment are metadata about the file stored in the workfile entity. The logic really can't parse that information from the path, and the path does not have to follow our templates, so the only information about the metadata can come from the place which created the path.

> Also, I have my doubts about the OptionalWorkfileData design. It's quite unclear to me whether I can initialize it with only some of its arguments, so that the rest still gets auto-populated?

There is no OptionalWorkfileData? I guess you mean the ...OptionalData classes? All of them are optional; each passed argument would make the logic faster though.
> That seemed to work. But admittedly - all I'm really trying to do is save a new workfile version + set a description. Feels a bit complex/verbose for an API.

There are helper functions in ayon_core.pipeline.workfile, e.g. save_workfile_with_current_context, but in this case what you do is better, as you have everything needed prepared. It is verbose because the workfile logic is not as simple as it was.
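For reference, the discussed flow would look roughly like this (a sketch only; the helper name comes from this PR, but the call shape is an assumption):

```python
from ayon_core.pipeline import registered_host
from ayon_core.pipeline.workfile import save_workfile_with_current_context

host = registered_host()
# Assumed call shape: save into the current context with a description,
# without fetching the folder/task entities yourself.
save_workfile_with_current_context(
    host,
    "/path/to/sh010_compositing_v003.nk",
    description="Tweaked the comp",
)
```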
> And furthermore, the before_context_change and after_context_change methods - I'm unsure why that isn't just an emitted event that the host integration can install callbacks against. Why does this need the methods on the Host class itself?

To be honest, I never liked that it is actually done with events, as there is no window for the host to affect the logic in any way. But the logic is sometimes mandatory to happen in sequence, and to e.g. not allow saving in a certain context, or ask the user for some information, or store context metadata in the file metadata, etc. That is just impossible to achieve with events.

Long story short: better explicit than implicit. The methods are meant to move the responsibility for the workfiles logic to the host, because it is needed. Workfiles are not like create, publish and load; workfiles are very closely tied to the host without any plugins, so the methods expect a lot because it is simply mandatory. The main reason is that we should always create a workfile entity when we work with workfiles, to have information about workfiles on the server, even if you save them outside of the workfiles tool. Almost every client is requesting that; how would you achieve it without having all that information passed to the method? This PR won't solve that - there is still a long way to go - but without these changes it would just be chasing our own tail.

There can be helper functions to simplify its usage, but I would like to avoid creating helper functions that would not be used. I think that e.g. save_workfile_with_current_context and save_current_workfile_to are the best examples of what we need 99% of the time when we're working outside of the workfiles tool. We can change save_current_workfile_to to expect folder path and task name, for example.
> To be honest, I never liked that it is actually done with events, as there is no window for the host to affect the logic in any way. But the logic is sometimes mandatory to happen in sequence, and to e.g. not allow saving in a certain context, or ask the user for some information, or store context metadata in the file metadata, etc. That is just impossible to achieve with events.

Maya uses an event callback system for saving, opening, etc., and in the pre-* events, like pre-save, you can set a value on the event (essentially returning a true/false state) for whether the save should continue or not. As such, an event callback system can definitely be made to interrupt and stop the current process if needed.

The same goes for having the events registered in an ordered manner - that's totally possible, and we do have an ordered event system inside AYON too.
> The main reason is that we should always create a workfile entity when we work with workfiles, to have information about workfiles on the server, even if you save them outside of the workfiles tool.

If this is the case - we should strive to make it absolutely clear that for workfiles, save_file should NOT be called, and all existing callers of that method (of which I suspect all are about workfiles) should hence be refactored, similar to: https://github.com/ynput/ayon-maya/pull/311 - However, I still see no reason why the replacement method requires more arguments. We know enough about the current context to always compute them from it, including the user/author, etc.

> We can change save_current_workfile_to to expect folder path and task name, for example.

I think just having more arguments is worse, no? I think perhaps it should just not require folder entity and task entity, and if they are not passed, assume to keep the current context.
> There is no OptionalWorkfileData? I guess you mean the ...OptionalData classes? All of them are optional; each passed argument would make the logic faster though.

For example, with SaveWorkfileOptionalData my IDE seems to be having a really hard time presenting me with the input arguments needed.

It does seem that I'm able to pass only partial info to the class - however, given the info I get from my IDE about what arguments it takes, I'd say this is worse now than having additional arguments on the method itself.
E.g. this view of the arguments is missing this from the parent class:

```python
project_entity: Optional[dict[str, Any]] = None,
anatomy: Optional["Anatomy"] = None,
project_settings: Optional[dict[str, Any]] = None,
```
Pretty sure data classes with inheritance would work to solve that?
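For what it's worth, a sketch of that inheritance idea - the field names are taken from the snippet above, while the parent class name and the subclass field are hypothetical:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class BaseWorkfileOptionalData:
    # Fields from the snippet above.
    project_entity: Optional[dict[str, Any]] = None
    anatomy: Optional["Anatomy"] = None
    project_settings: Optional[dict[str, Any]] = None

@dataclass
class SaveWorkfileOptionalData(BaseWorkfileOptionalData):
    # Hypothetical subclass-specific field.
    workfile_entities: Optional[list[dict[str, Any]]] = None
```

IDEs derive the generated __init__ signature from the dataclass fields, so the inherited fields would show up in autocomplete for the subclass too.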
> version and comment are metadata about the file stored in the workfile entity. The logic really can't parse that information from the path, and the path does not have to follow our templates, so the only information about the metadata can come from the place which created the path.

Then I think we're missing the method that instead takes the individual components, like context, version and comment, and constructs that filepath and saves it?
> Method save_file is an existing method that can't be easily changed to do more stuff. This must be backwards compatible to some degree, and changing save_file is not an option from my point of view (the first argument is optional).

I'd say that due to how this negatively impacts the simplicity of the API - it may have been better to monkeypatch or otherwise work around existing subclasses of the host interface, so that we can just keep the simple host.save_workfile(path) working but still register an author. The only reason I can see NOT to do that is if we want to be able to save a file from a host integration without workfile info (e.g. save a temporary file) with a unified host API.

Anyway, at this point it feels like a hard debate to continue - but I just want to remark again that I think host.save_workfile_with_context (or whatever we'll end up naming it) should be able to just save into my current context without requiring the context arguments - it should just default to my current context?
> Maya uses an event callback system for saving, opening, etc., and in the pre-* events, like pre-save, you can set a value on the event (essentially returning a true/false state) for whether the save should continue or not. As such, an event callback system can definitely be made to interrupt and stop the current process if needed.
> The same goes for having the events registered in an ordered manner - that's totally possible, and we do have an ordered event system inside AYON too.

The event system is not written that way, and it would make sense only if the host were the only one listening to the events. Still, the logic should be part of the host, not event handling. If a workfile is opened and the context is changed at the same time, I should not need to save global state that a workfile is being saved -> ignore the context change event callback -> do something when the workfile save is finished in a different event callback. That kind of obscurity is really weird and hard to follow. With the current approach it is 100% clear what is called when, and it is up to the host to do it. To me the events are unpredictable and in some cases make it impossible to achieve specific results. Event handlers should not have the power to shut down the main logic from which the event was triggered.
> If this is the case - we should strive to make it absolutely clear that for workfiles, save_file should NOT be called, and all existing callers of that method (of which I suspect all are about workfiles) should hence be refactored.

Yep, but how do we tell everyone that they have to change it? I don't know... From my point of view it is about creating issues in the known repositories. Other than that, it is an API change that would be in the changelog. This PR will bump the minor version.
> I think just having more arguments is worse, no? I think perhaps it should just not require folder entity and task entity, and if they are not passed, assume to keep the current context.

I've already changed it; it was changed to expect folder path and task name instead of folder entity and task entity (it fetches them now). The number of arguments did not change. There are save_current_workfile_to and save_workfile_with_current_context; better explicit than implicit about what it does. Having an optional folder path and task name could potentially hide bugs if you wanted to save to a specific context but somehow ended up with both values set to None. From an API point of view it is clearer what to use and how.
> Then I think we're missing the method that instead takes the individual components, like context, version and comment, and constructs that filepath and saves it?

That would force the use of the workfile template, which is also something that is not always necessary. I do understand what you are trying to say, but you can have automation that is e.g. creating 2 different workfiles without using the workfile path template. There are use cases where you don't have to fill it. Again, this can be simplified with helper functions; when you give me a reasonable use case, I'll do it.
> It does seem that I'm able to pass only partial info to the class - however, given the info I get from my IDE about what arguments it takes, I'd say this is worse now than having additional arguments on the method itself.
> E.g. this view of the arguments is missing this from the parent class:
> Pretty sure data classes with inheritance would work to solve that?

Dataclasses were not used on purpose, because we might want to change the arguments in the future. With dataclasses it would be harder to handle deprecation warnings about passed arguments. But I can change it to show up correctly in IDEs.
> We know enough about the current context to always compute them from it, including the user/author, etc.

We don't know the context (folder and task), version, comment and description, which is exactly what the methods are expecting + the optional data that makes it faster and avoids fetching anything more from the server when filled.
> Anyway, at this point it feels like a hard debate to continue - but I just want to remark again that I think host.save_workfile_with_context (or whatever we'll end up naming it) should be able to just save into my current context without requiring the context arguments - it should just default to my current context?

Again, that's what helper functions are for. The implementation of what it does on the host class must be 100% defined this way.
> The event system is not written that way, and it would make sense only if the host were the only one listening to the events. Still, the logic should be part of the host, not event handling.

Actually - I'd argue the opposite. The "workfile lock" profiles could e.g. be built completely using events, from another addon - if it were implemented like Maya does it. With how it's now up to each host addon - there's no way for an addon to customize this behavior from another addon + they'd need to make DCC-specific changes instead.
> Again, that's what helper functions are for. The implementation of what it does on the host class must be 100% defined this way.

I see - fine with me. Just saying, from an API standpoint I'd expect those helpers to be available on the host itself, like host.save() instead of save_file_current_context(host, version=1), since the latter looks off.
> That would force the use of the workfile template, which is also something that is not always necessary. I do understand what you are trying to say, but you can have automation that is e.g. creating 2 different workfiles without using the workfile path template. There are use cases where you don't have to fill it. Again, this can be simplified with helper functions; when you give me a reasonable use case, I'll do it.

Correct - I'm saying the low-level save functionality we have now is fine. But admittedly, from an API standpoint, I think save_next_version() is what 99% of the use cases would most likely use... especially looking at our own codebase.

Anyway, I figured the API was to make it easy to work with workfiles - like defining a nice dev-facing interface to interact with the workfiles in a given host. Now it seems heavily focused on being the "technical interface" that makes it work - which is fine in a way, but it still means you'd end up seeing lots of boilerplate code around all the usage everywhere, because the majority of the use cases will save e.g. a new version in the current context or a specific version in another context. If it wouldn't need to deal with what the filepath looks like, I can imagine it actually being much easier to use for what covers the most prominent use cases of this API.

It's like if host.load_container needed to take tons of additional arguments, even though the majority of the time I just want to load a particular representation with a particular loader. Yes - under the hood there may be much lower-level functionality, but I still think most of the use cases want that simple entry point.

This API is lacking in a similar manner to how the publishing API currently is. E.g. you'd expect to just call publish() on e.g. the CreateContext or whatever interface we have for the publishing logic - but we do not have an API that does so, so we need to push it through pyblish iterators, etc., meaning everyone will still need to go low-level. I just thought this PR would include that higher level (and, more importantly, clear examples of how to use it - which seem lacking). In that sense, testing this PR is like: "here's some code, figure out how it works and use it."
There's a use case inside Fusion where I may need to pass additional information to the save functionality. In Fusion's case it can have multiple comps open at the same time. When publishing, I want to ensure it increments the comp that is being published, and not one of the other ones.
The current logic is here. But if we were to go through host.save* then we'd still need to make sure it triggers through that particular comp. What would be the approach to solve that?
The default host.save_workfile will save the current comp, which may not be the comp being published (it may have gone into e.g. the background in the meantime).

One approach could be to have a Fusion-host-specific function that does e.g.:
```python
import contextlib

class FusionHost(...):
    def __init__(self):
        self._enforced_current_comp = None

    @contextlib.contextmanager
    def enforce_current_comp(self, comp):
        # Temporarily force which comp counts as the "current" one.
        original = self._enforced_current_comp
        self._enforced_current_comp = comp
        try:
            yield
        finally:
            self._enforced_current_comp = original
```
And then update the save logic to do:
```python
def save_workfile(self, dst_path=None):
    # Prefer the enforced comp over Fusion's currently active comp.
    comp = self._enforced_current_comp or get_current_comp()
    comp.Save(dst_path)
```
So that the increment logic could become:
```python
host: FusionHost = registered_host()
with host.enforce_current_comp(comp):
    host.save_workfile(path)
```
So that perhaps we do not need to change the signature of save_workfile. Yet at the same time, functionality like this would disallow potential other use cases, because it couldn't run that logic simultaneously with another save if the host allowed saves to run simultaneously. It may not usually be a production issue, but just pointing out that this 'workaround' may only help solve this one use case.
@iLLiCiTiT any ideas?
> ...but it still means you'd end up seeing lots of boilerplate code around all the usage everywhere, because the majority of the use cases will save e.g. a new version in the current context or a specific version in another context. If it wouldn't need to deal with what the filepath looks like, I can imagine it actually being much easier to use for what covers the most prominent use cases of this API.

I've asked: give me reasonable use cases and I'll add them. I don't know all hosts, I don't know all use cases. Currently this PR is adding a host API to be able to actually affect how workfiles work in the host, which was not possible before, plus the few helper functions I know will be needed.

You have more use cases in mind? Great, write the use case here. I still don't see how more arguments are more of an issue; right now, if you just use save_workfile_with_current_context instead of host.save_workfile, it will immediately handle 90% of the issues we need to handle.
> There's a use case inside Fusion where I may need to pass additional information to the save functionality. In Fusion's case it can have multiple comps open at the same time. When publishing, I want to ensure it increments the comp that is being published, and not one of the other ones.

I don't get what the issue is? You want to save the workfile without opening it and keep the current one opened? We can add open_workfile=True to the arguments, similar to copy_workfile. I actually had it there, but removed it because I didn't have a use case.
> Correct - I'm saying the low-level save functionality we have now is fine. But admittedly, from an API standpoint, I think save_next_version() is what 99% of the use cases would most likely use... especially looking at our own codebase.

This is actually a reasonable example. But again, that is a helper function, which in the end prepares the data for the interface methods.
> Actually - I'd argue the opposite. The "workfile lock" profiles could e.g. be built completely using events, from another addon - if it were implemented like Maya does it. With how it's now up to each host addon - there's no way for an addon to customize this behavior from another addon + they'd need to make DCC-specific changes instead.

I don't think we'd be able to implement locking without implementing it either in the host class or in the host addon.
> You have more use cases in mind? Great, write the use case here. I still don't see how more arguments are more of an issue; right now, if you just use save_workfile_with_current_context instead of host.save_workfile, it will immediately handle 90% of the issues we need to handle.

It solves the issues - yes, but the API is worse to use from a dev/user POV, because you're suddenly flooded with the need to pass more arguments, which in the most common scenario feels like a lot of extra needed things, like the folder entity and task entity - where I can see use cases where someone just wants to save a new version in the current context. So yes, it works, but from a pure "I want to quickly save something in a script" standpoint the API is a bit more cumbersome than you'd expect - you're already in a current context, so why do I need to pass my current task entity, etc.?
> I don't get what the issue is? You want to save the workfile without opening it and keep the current one opened? We can add open_workfile=True to the arguments, similar to copy_workfile. I actually had it there, but removed it because I didn't have a use case.

The issue is that host.save_workfile_with_context will save the currently active comp during publishing; however, the 'current active comp' at that point in time MAY NOT be the comp that the publisher is publishing, simply because during publishing someone may have continued working in another comp (quite common on our end). As such, we'll need to make sure to save the actual relevant comp explicitly, as the pseudocode example showed... enforce passing the actual comp (essentially the opened scene) to save, instead of whatever Fusion at that point in time thinks is the active current comp.
> This is actually a reasonable example. But again, that is a helper function, which in the end prepares the data for the interface methods.

Yes, totally aware - just saying we're lacking "ease of use" in the API. The "interface" on the host class seems fine - but calling the API in a simple manner, that's where it's really lacking ease of use, due to the need for passing the context, etc.
> I don't think we'd be able to implement locking without implementing it either in the host class or in the host addon.

The code that is in Maya would be possible entirely with such an event system if it could block the save from continuing. That's my only point. Whether it's worth it, I'll leave up to others to argue about @antirotor @philippe-ynput. Design-wise it'd allow more than we have now - plus the implementations for the features are not required to be ON the host class itself.
> It solves the issues - yes, but the API is worse to use from a dev/user POV, because you're suddenly flooded with the need to pass more arguments, which in the most common scenario feels like a lot of extra needed things, like the folder entity and task entity - where I can see use cases where someone just wants to save a new version in the current context. So yes, it works, but from a pure "I want to quickly save something in a script" standpoint the API is a bit more cumbersome than you'd expect - you're already in a current context, so why do I need to pass my current task entity, etc.?

I agree, and some of it was part of my initial objections (about the need for so many arguments, etc.). I consider this a low-level API that would really benefit from what we are now calling helper functions - but which from my perspective are just a higher-level API that will probably be used out in the world more than the low-level one?

With the workfile locking - I think the AYON event system could definitely be used for that, maybe better than the file locks. The only real advantage of file locks is that anyone without the pipeline but with access to the filesystem knows that the file is opened (but that's IMO an edge case we shouldn't really consider). Whether the event handler can stop scene opening in the host or not is up to the implementation and capabilities of the individual hosts - but even if it cannot interrupt loading, just displaying a generic warning dialog is enough. The only thing I am missing from events is a TTL that would automatically clear the lock after a certain time. Are we already emitting an event when opening or not? For time tracking etc.
One thing to add there: it would help if we could define here the list of helper/higher-level API functions that would help developers.
> I agree, and some of it was part of my initial objections (about the need for so many arguments, etc.). I consider this a low-level API that would really benefit from what we are now calling helper functions - but which from my perspective are just a higher-level API that will probably be used out in the world more than the low-level one?

To avoid confusion, what do you mean? Because I'm now confused AF. Can you tell me what should happen where? Exactly what, no abstract concepts, exact logical steps.

How do we want to make sure that we handle the stuff that is considered mandatory from now on? There are multiple requests forcing us to "make sure all workfiles have an entry in AYON", "know which host created which workfile (and possibly which version in the future)", etc. Because I have no idea how that would be possible if it did not happen on the host, without breaking compatibility of every host. The order in which things happen is also super important, BTW.
> With the workfile locking - I think the AYON event system could definitely be used for that, maybe better than the file locks. The only real advantage of file locks is that anyone without the pipeline but with access to the filesystem knows that the file is opened (but that's IMO an edge case we shouldn't really consider). Whether the event handler can stop scene opening in the host or not is up to the implementation and capabilities of the individual hosts - but even if it cannot interrupt loading, just displaying a generic warning dialog is enough. The only thing I am missing from events is a TTL that would automatically clear the lock after a certain time. Are we already emitting an event when opening or not? For time tracking etc.

What? The current events logic is there to tell you "hey, this happened", that's it. What you described is completely outside the scope of its and our capabilities (or at least mine). I have no idea how that would be even partially possible, especially if it were not part of the host logic.
> One thing to add there: it would help if we could define here the list of helper/higher-level API functions that would help developers.

Currently there are 2: save_current_workfile_to and save_workfile_with_current_context. As I wrote before, I don't have more use cases that would benefit from a helper function; if you have more use cases, please write them here. The only reasonable use case that came up since then was save_next_version, which would be a "guess" helper function, as it has to parse the filename to get the version and comment. Other than that, no other use case came up.
Dev note
Just to give you an idea of the reason the workfile API is needed: "We have to be able to do what happens in the workfiles tool, but from code, so it is not tied to the workfiles tool." What is implemented here is moving all the code from the workfiles tool to the host. I did a few additional changes to make it a little bit better (e.g. the change of context), but other than that, this is what happens in the workfiles tool.

I've tried to implement all the stuff you're telling me should not happen in the host outside of it; that was the first week I spent on this PR, and I was not able to do it. At the same time, the API for workfiles handling would be even more confusing, and it would not solve any of the issues we have in hosts like Harmony or Silhouette. The current state is the best I can do with all the requirements I know I have to achieve, or be prepared to achieve.
> To avoid confusion, what do you mean? Because I'm now confused AF. Can you tell me what should happen where? Exactly what, no abstract concepts, exact logical steps.
> How do we want to make sure that we handle the stuff that is considered mandatory from now on? There are multiple requests forcing us to "make sure all workfiles have an entry in AYON", "know which host created which workfile (and possibly which version in the future)", etc. Because I have no idea how that would be possible if it did not happen on the host, without breaking compatibility of every host. The order in which things happen is also super important, BTW.

Host implements: host.save_file() (or host.save_workfile or host.save_current_workfile, whatever it's named).

Fine with me. The host doesn't need to care about the context changes in general, and it's easy to implement - just whatever API call is saving inside an integration. Perfect, 99% of the time that's all we really need to override in a host integration (if not ALL the time). This is perfect, it makes Host class implementations extremely easy.
```python
# pseudocode
class Host(...):
    def save_workfile(self, filepath):
        return dcc.save(filepath)

    def open_workfile(self, filepath):
        return dcc.open(filepath)
```
No need to make this more complicated.
Then host.save_file() being lower level and usually not what you'd want to call - I guess that needs to be clearly documented, because it's what I think most people will look for first and expect to just call. There's still reason to have this 'save' abstraction that doesn't necessarily set workfile info, so that we can still abstract away how a host saves without the save necessarily being a workfile. E.g. it can just save an extra file somewhere, which is fine, and it's good that we expose an abstracted API.

There is the annoying side effect that all hosts are currently calling host.save_file directly, so let's refactor that asap to host.save_workfile_to_context (which again, I think should be possible without folder and task entities, so that you can just save to your current working context), but if that needs an additional helper method, I guess that's ok.

Then I think we should at least implement a higher-level helper function that does not take a filepath, but just a context (again defaulting to the current context) and a version number - it then internally constructs the filepath according to anatomy. Like e.g. host.save_new_workfile_version(version=18, description="Fix world"), and potentially even allow an easy increment by one: host.save_new_workfile_version(description="Fix world"). (Which would increment from the latest existing workfile for your context, not your current workfile.)
I have a feeling that saving a 'new version' is the most common use case. And if the user doesn't need to care about filepaths, then that's perfect!
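A sketch of how such a helper could sit on top of the low-level pieces (all names except host.save_workfile and host.get_current_context are hypothetical placeholders):

```python
def get_last_workfile_version(context):
    """Placeholder: find the highest existing workfile version for a context."""
    raise NotImplementedError

def build_workfile_path(context, version, description=None):
    """Placeholder: fill the workfile template from anatomy + context."""
    raise NotImplementedError

def save_new_workfile_version(host, version=None, description=None):
    context = host.get_current_context()
    if version is None:
        # Increment from the latest existing workfile for the context,
        # not from the currently opened file.
        version = get_last_workfile_version(context) + 1
    filepath = build_workfile_path(context, version, description)
    host.save_workfile(filepath)
    return filepath
```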
> What? The current events logic is there to tell you "hey, this happened", that's it. What you described is completely outside the scope of its and our capabilities (or at least mine). I have no idea how that would be even partially possible, especially if it were not part of the host logic.

Our event system is currently blocking; it's not asynchronous and it always waits for all callbacks to have finished. As such, whatever emits the event can just as well take whatever return value from the callbacks, OR even respond to a particular raised error (if you just want to raise instead of setting a return value).

What the Maya API essentially does is pass a mutable data block to the event, like user data, and allow the callbacks themselves to alter that data - like a data.allow = False. Since it's mutable, whatever emitted the event can be aware of this and stop whatever it's doing, like disallowing a scene open or save to continue.
The event system can essentially respond to whatever the 'callbacks' are doing.
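As a generic illustration of that pattern - plain Python, not AYON's actual event API:

```python
class BeforeSaveEvent:
    """Mutable payload passed to callbacks, mirroring Maya's pre-save checks."""
    def __init__(self, filepath):
        self.filepath = filepath
        self.allow = True

def emit_before_save(callbacks, filepath):
    event = BeforeSaveEvent(filepath)
    for callback in callbacks:
        # A callback may flip event.allow to block the save.
        callback(event)
    return event.allow

# Example callback a workfile-lock addon could register:
def block_locked_files(event):
    if event.filepath.endswith(".locked"):  # stand-in for a real lock check
        event.allow = False
```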
> Currently there are 2: save_current_workfile_to and save_workfile_with_current_context. As I wrote before, I don't have more use cases that would benefit from a helper function; if you have more use cases, please write them here. The only reasonable use case that came up since then was save_next_version, which would be a "guess" helper function, as it has to parse the filename to get the version and comment. Other than that, no other use case came up.

save_current_workfile_to and save_workfile_with_current_context to me can both be the same function: save_current_workfile_to without folder_path and task_name could just internally call host.get_current_context() and hence save into the current context. I don't see the need for the separate function there.
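A minimal sketch of that merge (helper names are from the PR; the signature and internals are assumed):

```python
def save_current_workfile_to(host, filepath, folder_path=None, task_name=None):
    """Hypothetical merged helper: default to the current context."""
    if folder_path is None and task_name is None:
        context = host.get_current_context()
        folder_path = context["folder_path"]
        task_name = context["task_name"]
    # ... continue with the existing save logic for that context ...
    ...
```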