Plans for Mac/iOS support for media capture
Hello!
I was wondering if there are any existing plans for Mac support for Media Capture and Imaging? It seems like the Linux Imaging component uses SkiaSharp for encoding/decoding, so perhaps that could be used on macOS as well, but it might be worthwhile to go the CoreImage route there?
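For reference, the SkiaSharp encode/decode path itself is platform-agnostic; a minimal round trip (my own illustration of the SkiaSharp calls involved, not code from the psi repo) would look roughly like this:

```csharp
// Minimal SkiaSharp JPEG round trip (illustrative only, not psi code).
using System.IO;
using SkiaSharp;

class SkiaRoundTrip
{
    static void Main()
    {
        // A dummy 640x480 BGRA frame standing in for a captured image.
        using var bitmap = new SKBitmap(640, 480, SKColorType.Bgra8888, SKAlphaType.Opaque);

        // Encode to JPEG at quality 90 (an arbitrary choice for the example).
        using var image = SKImage.FromBitmap(bitmap);
        using var encoded = image.Encode(SKEncodedImageFormat.Jpeg, 90);

        using var stream = new MemoryStream();
        encoded.SaveTo(stream);

        // Decode back into a bitmap.
        stream.Position = 0;
        using var decoded = SKBitmap.Decode(stream);
    }
}
```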
Similarly, the FFMpeg native reader used in the Linux component could also be reused, but it might be better in the long term to use AVFoundation for media capture, because it would also pave the way for capturing from sensors on iOS when writing mobile applications in the future.
Also, I suspect using the native approach would result in more CPU- and/or memory-efficient capture.
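To make the AVFoundation suggestion concrete, the basic capture shape would be an AVCaptureSession wired to a camera input, roughly like the sketch below (written against the Xamarin.Mac bindings from memory, so treat the exact names as approximate):

```csharp
// Rough sketch of macOS camera capture with AVFoundation via the Xamarin.Mac
// bindings (binding names approximate). A psi source component would wrap this
// and post the decoded frames onto an image stream.
using System;
using AVFoundation;
using Foundation;

public class MacCameraCapture
{
    private readonly AVCaptureSession session = new AVCaptureSession();

    public void Start()
    {
        // Pick the default camera; a real component would expose device selection.
        var device = AVCaptureDevice.GetDefaultDevice(AVMediaTypes.Video);
        var input = AVCaptureDeviceInput.FromDevice(device, out NSError error);
        if (error != null)
        {
            throw new InvalidOperationException(error.LocalizedDescription);
        }

        if (this.session.CanAddInput(input))
        {
            this.session.AddInput(input);
        }

        // An AVCaptureVideoDataOutput with a sample-buffer delegate running on a
        // dispatch queue would be added here to actually receive frames.
        this.session.StartRunning();
    }

    public void Stop() => this.session.StopRunning();
}
```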
- I was wondering if the dev team has any thoughts about this?
- If there are no plans, and I write one for my needs, would it be more appropriate to submit a pull request or release as a separate component?
Hi Chirag,
You’re correct that the Linux ImageEncoder/ImageDecoder components are actually not Linux-specific. They’re cross-platform .NET Standard components, and the SkiaSharp dependency should work on Mac (though not tested). Perhaps these components should be moved to Microsoft.Psi.Imaging (sans .Linux). You’re also likely correct that the FFMPEGReader may (again, not tested) work on Mac.
You’re probably right to think that CoreImage and AVFoundation would be more finely tailored to Mac & iOS. We do have examples of platform-specific alternatives (e.g. the Windows Media-based encoder/decoder), in addition to the cross-platform components. The pattern would be to release a Psi.Imaging.Mac assembly, for example.
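Roughly, such a component would mirror the existing encoders: a small class implementing the stream-encoder contract from Microsoft.Psi.Imaging, shipped in its own platform-specific assembly. The sketch below only illustrates that packaging pattern; the IImageToStreamEncoder interface is declared inline as a stand-in for the actual contract, and the native CoreImage/ImageIO call is only indicated by a comment.

```csharp
// Illustrative shape only (not actual psi code). IImageToStreamEncoder is
// declared inline here as a stand-in for the real contract in Microsoft.Psi.Imaging.
using System;
using System.IO;

public interface IImageToStreamEncoder
{
    void EncodeToStream(byte[] pixelData, int width, int height, Stream stream);
}

// Would ship in a Psi.Imaging.Mac assembly, alongside the cross-platform and
// Windows-specific encoder packages.
public class MacJpegEncoder : IImageToStreamEncoder
{
    private readonly int quality;

    public MacJpegEncoder(int quality = 90) => this.quality = quality;

    public void EncodeToStream(byte[] pixelData, int width, int height, Stream stream)
    {
        // A Mac-specific implementation would hand the pixel buffer to
        // CoreImage/ImageIO here and write the encoded bytes to 'stream';
        // the cross-platform components do the equivalent with SkiaSharp.
        throw new NotImplementedException();
    }
}
```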
We don’t have imminent plans to work on a high-performance, Mac-specific implementation, and we would love to have contributions from the community! If you’re motivated to build Mac-specific components, we would certainly be eager to accept your pull request.
On the other hand, it might be nice to exercise the path of community contributed components released separately. The long-term plan is to curate a “catalog” of Psi components, external to the main repo. In fact, we may move our own components out (as well as separating PsiStudio from the runtime) at some point. This would be similar to many other ecosystems such as ROS, Node, and others. Let’s talk about the options over DM.
Hi Ashley!
I hope you're doing great! Thanks for the information. Yeah I'd be happy to write Mac-specific Imaging and Media components; I'd probably start with the Imaging component first to familiarize myself with all the steps of releasing a component before writing the Media component.
Just before we move the conversation offline, for posterity, I wanted to mention that my opinions regarding the CPU and memory benefits of going the CoreImage / AVFoundation route come from ideas presented in this WWDC 2018 presentation: https://developer.apple.com/videos/play/wwdc2018/219/
I'm familiar with using Grand Central Dispatch to run certain encoding/decoding tasks on serial queues; I'm not entirely sure yet how that might work in terms of integrating SkiaSharp. Also, AVFoundation can add and remove devices on a capture session while the session is already running, which might be useful in certain situations. I agree these and a couple of other design considerations are probably best discussed offline.
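For concreteness, here is a rough sketch of both points, again against the Xamarin.Mac bindings (names from memory, so approximate): a serial dispatch queue for encode work, and swapping a device on an already-running capture session.

```csharp
// Sketch of the two points above (binding names approximate): a serial GCD
// queue for encode work, and reconfiguring a running AVCaptureSession.
using AVFoundation;
using CoreFoundation;
using Foundation;

public class CaptureHelpers
{
    // A serial queue: work items submitted here run one at a time, in order.
    private readonly DispatchQueue encodeQueue = new DispatchQueue("encode-queue");

    public void EncodeAsync(byte[] frame)
    {
        this.encodeQueue.DispatchAsync(() =>
        {
            // Encode 'frame' here (e.g. with SkiaSharp); a single serial queue
            // keeps frames ordered without explicit locking.
        });
    }

    // Swap the active camera while the session keeps running.
    public void SwapCamera(AVCaptureSession session, AVCaptureDeviceInput oldInput, AVCaptureDevice newDevice)
    {
        session.BeginConfiguration();
        session.RemoveInput(oldInput);

        var newInput = AVCaptureDeviceInput.FromDevice(newDevice, out NSError error);
        if (error == null && session.CanAddInput(newInput))
        {
            session.AddInput(newInput);
        }

        // The changes take effect atomically when the configuration is committed.
        session.CommitConfiguration();
    }
}
```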
I believe GitHub removed the DM feature, so would you prefer that I send you an email at the address mentioned in your profile, or use the psi support email address from before the GitHub release?