Robert Swain
Here is information about how to add the framework to an Xcode project (choose the section depending on whether you're developing for OS X or iOS): https://github.com/EricssonResearch/openwebrtc/wiki/Building-OpenWebRTC#using-the-results Here is a...
1) `navigator.getUserMedia` should be present, and it seems we also overwrite `navigator.webkitGetUserMedia`: https://github.com/EricssonResearch/openwebrtc/blob/master/bridge/client/webrtc.js#L1224 (see that line and the one just below it, and note the FIXME; a rough sketch of the idea follows below). 2) Not yet but that is a good...
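To illustrate what that overwriting amounts to, here is a minimal sketch, assuming the bridge provides its own implementation (the function name below is hypothetical; the real code is in webrtc.js around the linked line):

```javascript
// Hypothetical sketch, not the actual webrtc.js code: expose the bridge's
// getUserMedia under both the unprefixed and the webkit-prefixed names.
function bridgeGetUserMedia(constraints, successCallback, errorCallback) {
    // The real bridge would forward the request to the native OpenWebRTC side.
    console.log("getUserMedia called with constraints:", JSON.stringify(constraints));
}

navigator.getUserMedia = bridgeGetUserMedia;
// Overwrite the prefixed variant too, so apps that only look for
// navigator.webkitGetUserMedia also end up in the bridge.
navigator.webkitGetUserMedia = bridgeGetUserMedia;
```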
So the getUserMedia API is found now? But TokBox says that the browser is not compatible with WebRTC? Do you know if they do any user agent sniffing?
Does [this getUserMedia test app](http://googlechrome.github.io/webrtc/samples/web/content/getusermedia/) or [our point-to-point audio/video test app](http://demo.openwebrtc.io/) work?
Set the `GST_DEBUG` environment variable to `*dtls*:5`.
Also, when it crashes, can you get a backtrace? All of this requires building Bowser from source, which is not too difficult if you haven't already done it.
There is code in openwebrtc-ios-sdk that injects some simple JavaScript into the WebView, which in turn makes an XHR request to a local HTTP server that serves some more JavaScript...
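For illustration only, that bootstrap idea could look roughly like the following (the URL, port and file name are made up here, not the SDK's actual values):

```javascript
// Hypothetical sketch of the injected bootstrap: fetch the bridge script from
// a local HTTP server and evaluate it in the page. Port and path are assumed.
(function () {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "http://127.0.0.1:10717/bridge.js", false); // synchronous, for simplicity
    xhr.send();
    if (xhr.status === 200) {
        // Run the served JavaScript so the WebRTC bridge API becomes
        // available to the page.
        var script = document.createElement("script");
        script.textContent = xhr.responseText;
        document.documentElement.appendChild(script);
    }
})();
```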
I think the GStreamer camera source element we use on OS X and iOS (avfvideosrc) supports screen capture. I've never tried that functionality though and I don't know what...
Someone needs to implement it. @stefhak implemented this for the Safari extension and that also uses the bridge. I expect that code could be repurposed for camera / microphone selection...
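As a rough page-side sketch of what device selection could look like, using the standard `deviceId` constraint (whether the bridge supports `navigator.mediaDevices` and honours `deviceId` is an assumption here, not something confirmed above):

```javascript
// Illustrative only: pick a specific camera and microphone via standard
// constraints. Bridge support for these APIs is assumed, not confirmed.
navigator.mediaDevices.enumerateDevices().then(function (devices) {
    var cam = devices.find(function (d) { return d.kind === "videoinput"; });
    var mic = devices.find(function (d) { return d.kind === "audioinput"; });
    return navigator.mediaDevices.getUserMedia({
        video: cam ? { deviceId: { exact: cam.deviceId } } : true,
        audio: mic ? { deviceId: { exact: mic.deviceId } } : true
    });
}).then(function (stream) {
    console.log("Got stream from selected devices:", stream.id);
});
```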