Syphon-Framework
Metal Syphon
Hi,
Here is our updated version of the Syphon implementation for Metal, using the new subclassing refactor.
- The server uses a traditional redraw of the user's texture.
- As an optimisation, when possible, the server performs a texture blit instead (see the sketch below).
- MSAA handling is partially implemented (disabled in this PR); it needs a bit more digging.
Feedback on how to improve the PR is of course welcome!
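For readers unfamiliar with the blit path, here is a minimal sketch of what a blit-based publish can look like in Metal. This is an illustration only, not the PR's code; userTexture, serverTexture and commandQueue are placeholder names, and it assumes both textures share the same size and pixel format.
// Sketch only, not the PR's code. Assumes 'userTexture' and the IOSurface-backed
// 'serverTexture' have the same size and pixel format, which is when a blit can
// replace a full redraw.
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:userTexture
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake(0, 0, 0)
           sourceSize:MTLSizeMake(userTexture.width, userTexture.height, 1)
            toTexture:serverTexture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];
[commandBuffer commit];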
Looking forward to taking a look at this. Could you please move your demo apps out of this PR and into their own project? (They'll be very useful, but this project only contains the framework.)
Thanks! Sorry for the delay! I'm going to take a look at this today! Very much appreciate all the work and support!
Hi.
So firstly, thank you again for this work!
I have a question or two, mostly about how the Metal API is expected to be integrated into other apps.
It looks like when initializing a Metal client or server, a device is passed in and an internal MTLCommandQueue is created, necessitating its own MTLCommandBuffer to be created and submitted.
I am curious why not pass in an existing MTLCommandQueue and use a user-submitted MTLCommandBuffer to publish? This would ensure that Metal GPU commands are synchronized and submitted once, lowering overhead, and no command-queue switching would need to happen.
I may be overlooking some important use case that requires a separate MTLCommandQueue (it's been a minute), but unlike OpenGL we shouldn't need to worry much about stepping on state from managing a context internally.
For the client and server, it might be helpful to manage an MTLHeap from which textures are allocated. I know this helps with the cost of allocations, but to be frank I'm not sure whether IOSurface-backed textures negate any of those gains, or whether it's helpful at all, so feel free to ignore this paragraph :D
I'm going to attempt to integrate your Metal branch as is and report any issues.
Another good note: it looks like it would be very easy to wire in the experimental float support from our old Float branch.
Thanks again!
Oh, one last note. It would be a nice quality-of-life addition if the public headers had some documentation in the code, but I understand that's a big ask without knowing whether a PR will be merged. Maybe we can QA-check this and add it later. More of a running TODO note.
MSAA might be the only functionality blocker I see right now. And to be fair, I've not experimented with that in Metal yet.
Regarding my CommandQueue point:
Pro: Users can create a command queue and a command buffer from it, submit their own commands, and then interleave a server publish command wherever they want. Their app only has a single command encoder (or as many as they need) and a single command buffer for the frame.
Con: Users of the API need to commit their command buffers themselves. Is that a con? Thinking aloud.
Hi,
Regarding the MTLCommandQueue idea, we decided to provide two APIs, leaving the user to choose the best method for their use case:
- API Method 1: the user makes their own MTLCommandBuffer and renders content directly into the Syphon server texture. We use a block-based approach so we can handle texture resizes before the user's rendering, and the server publish is called once the command buffer has completed.
[syphonServer drawFrame:^(id<MTLTexture> syphonTexture, id<MTLCommandBuffer> commandBuffer) {
// Here : user renders into the syphon texture inside this block of code
// The provided commandBuffer is 'myOwnCommandBuffer'
} size:textureSize commandBuffer:myOwnCommandBuffer];
- API Method 2: the user gives an MTLTexture to send and the Syphon framework takes care of everything. We made this API because it's a good one-liner solution and it brings the OpenGL API features to Metal (imageRegion and flip).
[syphonServer publishFrameTexture:theUserTexture];
// alternatively
CGRect region = CGRectMake(0, 0, serverTexture.width/2, serverTexture.height);
[syphonServer publishFrameTexture:theUserTexture imageRegion:region flip:YES];
For the MTLHeap, I'm not sure it would bring anything, since we only allocate textures at startup and when the resolution changes, and as you said, they're IOSurface-backed anyway.
What do you think about this API?
And of course no problem to provide better docs once the API is validated!
Hi there! I wanted to say thank you for all your hard work on this. We're in the process of updating QLab to use Metal, and had been expecting to have a tough slog ahead of us in getting Syphon updated, so this work is super appreciated! We've largely switched to using raw IOSurfaces internally, and we could manage to make that work with the current Syphon implementation for now, but it would be really amazing to be able to link to a pure Metal implementation instead.
Personally, one minor thought I had is that the first API method would look a little more idiomatic if the block were the last argument (e.g. -drawFrameWithSize:commandBuffer:drawingHandler: or something like that). That would particularly help with the Swift interface, which would then be able to use a trailing closure.
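For illustration, a block-last call could look something like this (the selector here is only the suggestion above, not an API that exists in the PR):
// Hypothetical selector taken from the suggestion above, not part of the current PR.
[syphonServer drawFrameWithSize:textureSize
                   commandBuffer:myOwnCommandBuffer
                  drawingHandler:^(id<MTLTexture> syphonTexture, id<MTLCommandBuffer> commandBuffer) {
    // Render into syphonTexture here, encoding onto commandBuffer
}];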
That said, we're using Objective-C, and would be using the -publishFrameTexture: method, so I don't feel super strongly about it either way.
I see what look like maybe hopeful hints of next steps in another PR, but in the meantime, is there anything we can do from our end to help validate this, aside from pulling it in and putting it through its paces?
Hi @siobhandougall
Using this and letting us know what makes sense for you API-wise would be a great help (plus noting any problems you hit). We've all had commitments elsewhere, but I'm aiming to spend some time on this in the next couple of weeks; it would be great to firm up a final API.
I would like to test this out.
- I need to download the code from your branch, build Syphon.framework, and then use it in my project -- is that correct?
- I would like to publish each frame generated by my rendering code, which ends with something like this:
commandBuffer.present(drawable)
commandBuffer.addCompletedHandler { _ in
self.semaphore.signal()
}
commandBuffer.commit()
}
At what point in the pipeline should I publish the texture from the server? I don't actually need to wait for it to be presented; I can publish it when I've finished drawing, right?
Hi akuz,
Two demo apps (server and client) are available on this commit: https://github.com/anome/Syphon-Framework/tree/11dc66758bc9142bb3a497ea774e473fafe5a7e2
Hope this helps!
- [x] Re MTLHeap, I agree with @mto-anomes that it seems overkill, as we only allocate a new surface on size changes. Even if it turns out to be useful for some reason, it's its own issue and shouldn't be part of this PR.
- [x] In OpenGL we don't return from any publish... method before the frame has been published (i.e. IOSurface sync has happened), by calling glFlush(). What's the desirable behaviour here? Whatever the answer, document it (see the sketch after this list).
- [x] We don't need four versions of publishFrameTexture:, each one argument shorter.
- [ ] Decide on the user-draws-to-texture API. I'm not a great fan of it as is. Decide based on what will be most useful and least limiting, i.e. we do internally everything everyone will have to do, but as little as possible that constrains what they can do.
- [x] Decide if we have an internal MTLCommandQueue or not
- [x] I need to tidy up the minor merge conflict in the Xcode project
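As a minimal sketch of the two behaviours weighed in the sync item above (an assumption for discussion, not what the PR currently implements):
// Sketch only: neither option is necessarily what the PR does. 'commandBuffer' is the
// buffer carrying the server's copy/draw work for the published frame.
// Option A: asynchronous - return immediately and observe completion later.
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
    // The GPU work backing the published frame has finished at this point
}];
[commandBuffer commit];
// Option B: synchronous - after committing, block until the work is at least scheduled,
// closest in spirit to glFlush() (waitUntilCompleted would be the stronger, glFinish()-like choice).
// [commandBuffer waitUntilScheduled];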
Hi,
We have updated the PR:
- Fixed conflicts with master
- DrawFrame API removed. We agree there don't seem to be use cases for it. Its only advantage is to save one redraw in case the user only wants to send a texture via Syphon. @siobhandougall do you have a use case to share that would make this API worth keeping?
- We cleaned up the second API to keep only one prototype.
- The internal MTLCommandQueue is removed. Users must provide their own command buffer for the framework to work with and commit the buffer themselves afterwards (see the sketch below this list). This removes the (hypothetical?) risk of multiple command queues getting out of sync, and it also removes any ambiguity about whether the method is synchronous or asynchronous. The framework only encodes GPU commands into the buffer, and the user decides when and how to commit it. Overall this seems more in line with Metal best practices.
- Docs added.
- Methods slightly renamed to be similar to the OpenGL API.
- Safety check added for when the user tries to send a nil MTLTexture.
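To make the flow concrete, a per-frame publish with a user-owned command buffer could look roughly like this (the selector and parameter names here are assumptions; check the PR headers for the exact API):
// Sketch of the user-owned-command-buffer flow described above; names are placeholders.
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
// ... the app encodes its own rendering into commandBuffer, producing theUserTexture ...
[syphonServer publishFrameTexture:theUserTexture
                   onCommandBuffer:commandBuffer
                       imageRegion:NSMakeRect(0, 0, theUserTexture.width, theUserTexture.height)
                           flipped:NO];
// The framework only encodes GPU commands; committing remains the app's responsibility.
[commandBuffer commit];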
Updated demo apps: MetalSyphonDemoApps.zip
Waiting for your feedback
Sorry I missed these questions! For our part, we don't have a use case for the drawFrame: API; the only reason I brought it up was a syntax suggestion. But we're using an offscreen renderer to draw into a texture already, so the publishFrame method works perfectly for us. Thanks!
Hi, I just wanted to put in a plug for this patch. We tested this last week on multiple computers running for an entire week with many simultaneous Syphon servers/clients (e.g. > 30) under very heavy load, and it worked great: more stable than using OpenGL (that's not Syphon's fault in particular, mostly Apple's old OpenGL implementation).
Thanks a lot for putting this together!
I'm guessing that even once this has been merged and people start using it, it will be a long time before client apps can stop also looking for OpenGL servers? I'm assuming the Metal client isn't compatible with OpenGL servers?
What's the strategy here? Is anybody thinking about how to abstract this concern (if that's even possible) to simplify migration?
Providing documentation feels important here, e.g.
- Minimal examples of how to capture OpenGL textures from legacy Syphon servers and convert these to Metal textures (based on this Apple document?) as a stop-gap to dropping OpenGL support in future
- Best practices for consuming both types of server at once
- Examples of how to migrate from OpenGL to Metal for server developers
How much of this migration strategy can be abstracted away from developers and embedded into the framework, and how much is just tricky work that everybody needs to do?
They are compatible. A Metal client can see an OpenGL server and vice versa, due to the underlying IOSurface backing.
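This works because frames are shared through an IOSurface regardless of which API produced them; a Metal client can wrap such a surface in a texture roughly like this (an illustrative sketch, not the framework's exact internals):
// Illustration only; the framework's internals may differ. Requires <Metal/Metal.h>
// and <IOSurface/IOSurface.h>. Wraps an IOSurface published by any Syphon server
// (GL or Metal) in a Metal texture, assuming a BGRA8 surface.
MTLTextureDescriptor *descriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                        width:IOSurfaceGetWidth(surface)
                                                       height:IOSurfaceGetHeight(surface)
                                                    mipmapped:NO];
id<MTLTexture> frameTexture = [device newTextureWithDescriptor:descriptor
                                                      iosurface:surface
                                                          plane:0];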
So both client and server developers can use whichever technology they want, and they'll all be able to see each other? Amazing!
Correct!
Hi...
I'm not sure why this Merge Request seems to have stalled...
As I'm mostly interested in SyphonIOSurfaceServer, I could factor out that rather small change into a separate merge request, which should be fast to review?
@smilingthax where are your IOSurfaces coming from? If you're able to work with the existing SyphonServerBase and the methods in SyphonSubclassing.h you'll see better performance as we can reuse a single IOSurface.
Can SyphonIOSurfaceServer be its own PR anyway? I don't like it though for the above reason ^.
This PR needs to be rebased to remove all the redundant commits with demo apps, etc.
Well, the patch surrounding SyphonIOSurfaceServer seems to allow just that (AFAIUI): to reuse the same IOSurface, cf. the change in SyphonServerConnectionManager.m line 334: only send a new surfaceId if it is different.
In my use case, the contents are rendered using the cairo graphics library, which could go directly to the IOSurface (possibly using CGBitmapContext), so going to GL or Metal would be an extra detour.
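For illustration, drawing CPU-rendered content straight into an IOSurface (which could then be published) might look roughly like this; this is a sketch under assumptions, not code from this PR, and cairo could target the same pixel buffer instead of CoreGraphics:
// Sketch only: fill an IOSurface with CPU-rendered content via CGBitmapContext.
// Assumes a BGRA8 surface; 'surface' is whatever IOSurface the server hands out.
IOSurfaceLock(surface, 0, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(IOSurfaceGetBaseAddress(surface),
                                             IOSurfaceGetWidth(surface),
                                             IOSurfaceGetHeight(surface),
                                             8,
                                             IOSurfaceGetBytesPerRow(surface),
                                             colorSpace,
                                             kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
// ... draw into 'context' here (or point cairo at the same locked base address) ...
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
IOSurfaceUnlock(surface, 0, NULL);
// Then ask the server to publish the frame.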
@smilingthax right, so it sounds like you could use SyphonServerBase and the methods in SyphonSubclassing.h to obtain a surface from the server, then fill it, then publish.
Yes, directly using SyphonSubclassing "just like" a SyphonIOServer works for me. One could say that only the documentation is missing? I already have a C++ wrapper around the Obj-C interface, where I can handle any ownership/resize/... concerns.
Other than that I (reproducibly) get crashes in SimpleClient/Syphon-Framework after hard-exiting any Syphon server... but that belongs in a separate issue.
There is one pitfall: Syphon uses NSDistributedNotificationCenter, e.g. to detect already-running servers, which requires a spinning CFRunLoop (e.g. via NSRunLoop). When writing a non-GUI server one might be tempted to skip this. In my case, using the glib/gtk cross-platform framework, I used Glib::MainLoop (which does not spin the CFRunLoop) instead of Gtk::Main / Gtk::Application (which does).
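For anyone hitting the same pitfall, one possible workaround (a sketch, not something this PR prescribes) is to pump the default run-loop mode regularly from the app's own main loop:
// Sketch: drain pending CFRunLoop sources so NSDistributedNotificationCenter callbacks
// are delivered even when nothing else spins the main run loop. Call this regularly
// (e.g. once per frame) from the app's own loop.
while (CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, true) == kCFRunLoopRunHandledSource) {
    // keep draining handled sources
}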
@smilingthax this has nothing to do with this PR, please open a new issue
I've dropped the pre-subclassing commits from this, as well as the commits with the demo apps and some changes which weren't related to Metal. The final state at cc4b564 is identical to the original, minus the unwanted history.
I've fixed a bunch of leaks of MTL objects (we will move the whole project to ARC after this is merged), added missing documentation, deleted some unused code, and added synchronization around access to the surface in SyphonMetalServer so the class is completely thread-safe.
@mto-anomes and @anome I'm adding copyright notices to the files you've added - it would be good to add your names. How would you like to be credited (ie by what names)?
Hi @bangnoise,
Thank you. You can add these credits: Maxime Touroute & Philippe Chaurand (www.millumin.com)
Best, Philippe
Massive thanks to @mto-anomes and @anome for the initial work on this, and to those who've offered suggestions and feedback. There are a few remaining potential enhancements, which I've opened as:
#85 Use a single SyphonServerRendererMetal per MTLDevice
#86 Possibly allow the host to avoid server Metal pipeline state if they will always blit
#87 Metal drawFrame: API
Any comments welcome on those issues - I'll do #85 at least in the near future. Thanks all!