OpenUSD
Created usdotio python script to add or extract .otio data from USD
Description of Change(s)
Adds a Python script to add OpenTimelineIO data to, and extract it from, a .usd file. The .otio information is stored as a prim tree, so it is easy to inspect with usdview or similar tools. You can have multiple .otio timelines, so that, for example, a TV set model carries its own .otio timeline separate from the main .otio at the root.
Its usage is:
```
usdotio add input.otio sphere.usda              # sphere.usda is modified in place
usdotio save output.otio sphere_with_otio.usda  # output.otio is created from the .otio data found at the root of the USD file
```
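For illustration only, here is a minimal Python sketch of the kind of round trip the commands above perform, assuming the standard `pxr` and `opentimelineio` APIs. It stores the serialized .otio JSON on a single prim's customData under an assumed `/otio` path and `"otio"` key, whereas the actual script in this PR expands the timeline into a prim tree for inspection in usdview:

```python
# Minimal sketch only -- NOT the usdotio implementation. It stores the
# serialized .otio JSON on a single prim's customData, whereas the script
# in this PR expands the timeline into a prim tree for usdview inspection.
# The prim path "/otio" and the customData key "otio" are illustrative.
import opentimelineio as otio
from pxr import Usd

def add_otio(otio_path, usd_path, prim_path="/otio"):
    timeline = otio.adapters.read_from_file(otio_path)
    otio_json = otio.adapters.write_to_string(timeline, "otio_json")
    stage = Usd.Stage.Open(usd_path)
    prim = stage.DefinePrim(prim_path, "Scope")
    prim.SetCustomDataByKey("otio", otio_json)
    stage.GetRootLayer().Save()        # usd_path is modified in place

def save_otio(otio_path, usd_path, prim_path="/otio"):
    stage = Usd.Stage.Open(usd_path)
    otio_json = stage.GetPrimAtPath(prim_path).GetCustomDataByKey("otio")
    timeline = otio.adapters.read_from_string(otio_json, "otio_json")
    otio.adapters.write_to_file(timeline, otio_path)

add_otio("input.otio", "sphere.usda")
save_otio("output.otio", "sphere.usda")
```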
- [x] I have verified that all unit tests pass with the proposed changes
- [x] I have submitted a signed Contributor License Agreement
Filed as internal issue #USD-9412
❗ Please make sure that a signed CLA has been submitted!
/AzurePipelines run
Azure Pipelines successfully started running 1 pipeline(s).
> ❗ Please make sure that a signed CLA has been submitted!
I did. I got this email back:
redacted
> ❗ Please make sure that a signed CLA has been submitted!
>
> I did. I got this email back:
Great, thank you! You're all good then.
Hi! How can I get this merged? It passed the tests on all OSes but choked on the Azure Pipeline. Is there anything I can do, or can I help debug it?
Hi @ggarra13!
We don't automatically merge PRs. Requests typically go through our internal engineering process, which includes code and architectural reviews. This can take a while, considering the volume of requests we receive and the amount of resources we have available.
For plugins, infrastructure, or additions of new tooling - like in your request - we also discuss whether we want to ship the additional code with the OpenUSD repo: Doing so implies that we take on maintenance of the added code, and this requires discussion beyond just the engineering team.
There is typically also a step that involves engaging with the community. For example, for this PR we would want to verify that the schemas align with how the rest of the community expects the data to be represented in USD. Inclusion with the OpenUSD repo implies that we impart an opinion on how this data should be represented across sites as opposed to it being site-specific data. I am not sure if this has happened with your project, just putting it out there as an example. Here is a good place to initiate the community involvement: https://github.com/PixarAnimationStudios/OpenUSD-proposals
Thanks again for your contribution and for offering your help with getting it merged. A good next step would be for you to initiate a high-level conversation about usdotio on https://forum.openusd.org or submit a proposal to https://github.com/PixarAnimationStudios/OpenUSD-proposals. @jesschimein is a great person to reach out to for progress updates.
> Doing so implies that we take on maintenance of the added code, and this requires discussion beyond just the engineering team.
Sounds reasonable.
> For example, for this PR we would want to verify that the schemas align with how the rest of the community expects the data to be represented in USD.
Again, that sounds fair. I am curious to know whether Pixar or Animal Logic have already added OpenTimelineIO to USD. I know NVIDIA currently uses its proprietary Sequencer schema, albeit for a different goal. And OpenUSD used to support an Audio schema, which seems to have been partially deprecated.
> Here is a good place to initiate the community involvement: https://github.com/PixarAnimationStudios/OpenUSD-proposals
I will let Michael Davey, who originally hired me to implement this, comment on his needs for it. I can open the proposal and discuss it technically, along with my thinking on how it could be used in the future.
Just wanted to correct: UsdMediaSpatialAudio is not deprecated, and is in use in the Apple ecosystem, at least. It's more that OpenUSD itself provides no "rendering" capabilities for it, given the difficulty of cross-platform audio and our limited resources. We are always on the lookout for a good implementation!
> OpenUSD itself provides no "rendering" capabilities for it, given the difficulty of cross-platform audio and our limited resources. We are always on the lookout for a good implementation!
I am not sure about the requirements of UsdMediaSpatialAudio, but have you looked at RtAudio (here on GitHub)? We are using v5.2 (not sure about the latest one). @darbyjohnston and I are using it successfully on Linux, macOS and Windows. I only found a subtle bug on Linux with heavy multithreading, but was able to work around it.
Thanks for the info - that sounds interesting!
RtAudio is a stable and reliable choice for sure. UsdMediaSpatialAudio's requirements are very similar to those of classic OpenAL, and RtAudio provides a superset of the requirements of OpenAL. Qt also has a spatial audio module with capabilities sufficient for UsdMediaSpatialAudio (https://doc.qt.io/qt-6/qtspatialaudio-index.html), so that might be the simplest way to enable audio rendering in usdview when it comes right down to it. The missing piece is the equivalent of Hydra for audio, or at least some custom traversal code, to introspect the scenegraph and create and update a runtime environment reflective of the scenegraph state.
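As a rough illustration of that custom traversal idea, the sketch below (not from this PR) walks a stage using the published UsdMediaSpatialAudio schema and gathers a few authored attributes per audio prim; handing the results to an actual backend (RtAudio, Qt Spatial Audio, or anything else) is only hinted at in the comments, and the file name `shot.usda` is made up:

```python
# Rough sketch of the "custom traversal code" idea -- not from this PR.
# It only introspects the stage; handing the results to a real audio
# backend (RtAudio, Qt Spatial Audio, ...) would replace the print below.
from pxr import Usd, UsdGeom, UsdMedia

def collect_spatial_audio(stage):
    """Gather UsdMediaSpatialAudio prims and a few of their attributes."""
    xform_cache = UsdGeom.XformCache(Usd.TimeCode.Default())
    sources = []
    for prim in stage.Traverse():
        if not prim.IsA(UsdMedia.SpatialAudio):
            continue
        audio = UsdMedia.SpatialAudio(prim)
        sources.append({
            "path": str(prim.GetPath()),
            "file": audio.GetFilePathAttr().Get(),
            "auralMode": audio.GetAuralModeAttr().Get(),
            "playbackMode": audio.GetPlaybackModeAttr().Get(),
            # World transform drives spatialization of 'spatial' sources.
            "xform": xform_cache.GetLocalToWorldTransform(prim),
        })
    return sources

# A runtime layer would diff this list against its current state each
# frame and create/update/destroy backend voices accordingly.
stage = Usd.Stage.Open("shot.usda")   # illustrative file name
for src in collect_spatial_audio(stage):
    print(src["path"], src["file"], src["auralMode"])
```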
/AzurePipelines run
Azure Pipelines successfully started running 1 pipeline(s).