
Add example to consume streams

Open • piranna opened this issue 9 years ago • 16 comments

Add an example about how to consume the MediaStream and MediaStreamTrack objects from Node.js, and how to add them programmatically. There are some modules like node-ffmpeg or node-speaker that could be used for this, since you can pipe the streams to them, but I'm not sure if the formats are the same... :-/
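On the node-speaker side, what it consumes is raw PCM, so the format question boils down to getting PCM out of an audio track. A minimal sketch of the piping, assuming a hypothetical trackToPcmStream() helper and an audioTrack obtained elsewhere (node-speaker's constructor options are its real API):

var Speaker = require('speaker'); // node-speaker: plays raw PCM on the soundcard

var speaker = new Speaker({
  channels: 2,       // stereo
  bitDepth: 16,      // signed 16-bit samples
  sampleRate: 44100  // 44.1 kHz
});

// trackToPcmStream() is hypothetical: it would turn an audio
// MediaStreamTrack into a Readable stream of raw PCM bytes.
trackToPcmStream(audioTrack).pipe(speaker);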

piranna avatar Jul 04 '15 09:07 piranna

I'm planning to provide capturer/recorder for png/jpeg/webm/wav from MediaStream / MediaStreamTrack.

prototype example:

var fs = require('fs');                // for the write streams below
var WebRTC = require('webrtc-native'); // this module

WebRTC.getUserMedia({ video: true }, function(stream) {
  var rec = new WebRTC.Recorder(stream, 'image/png');
  var frame = 0;

  rec.on('data', function(data, prop) {
    console.log(prop.width);
    console.log(prop.height);

    // write each frame to disk: img-0.png, img-1.png, ...
    fs.createWriteStream('img-' + frame + '.png').end(data);
    frame += 1;
  });

  setTimeout(function() {
    rec.end();
  }, 5000); // End capturer after 5 sec

  // encode the same stream to webm and write it to a file
  new WebRTC.Recorder(stream, 'video/webm').pipe(fs.createWriteStream('webrtc-video.webm'));
});

vmolsa avatar Jul 04 '15 10:07 vmolsa

Seems nice, but I was asking for something more like node-ffmpeg, which allows piping its stdio to fetch or stream video data.

https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#outputtarget-options-add-an-output-to-the-command https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#pipestream-options-pipe-the-output-to-a-writable-stream

Maybe a helper function/stream object could convert the data from the ffmpeg format (which seems to be the same content that could go to a file) to a MediaStream or a MediaStreamTrack object (which is mostly the same :-) ), something like the sketch below.
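A minimal sketch of that idea, where YuvToTrack is a hypothetical writable stream that would wrap the conversion into a MediaStreamTrack (the fluent-ffmpeg calls themselves are the real API from the links above):

var ffmpeg = require('fluent-ffmpeg');

var track = new YuvToTrack(); // hypothetical helper, not part of any module yet

ffmpeg('input.mp4')
  .noAudio()                           // video track only
  .format('rawvideo')                  // raw frames instead of a container
  .outputOptions(['-pix_fmt yuv420p']) // plain YUV frames
  .pipe(track, { end: true });         // pipe() sends ffmpeg's output into any writable stream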

piranna avatar Jul 04 '15 10:07 piranna

If the input stream for WebRTC is a plain YUV frame stream, then it could be implemented by writing a custom WebRTC capturer. Anyway, this could be implemented quite easily.

http://sourcey.com/webrtc-custom-opencv-video-capture/

vmolsa avatar Jul 04 '15 19:07 vmolsa

And where does the example get the source from the camera? Does it internally use v4l? It seems to be transparent to the user...

piranna avatar Jul 05 '15 08:07 piranna

You know that Google's WebRTC provides native capturers for webcam/screen/window, and that webrtc-native is just an interface/wrapper to Google's WebRTC? And yes, WebRTC is using v4l on Linux.

The default pipeline for a MediaStream is: webcam (v4l) -> capturer -> YUV frame -> MediaStreamTrack -> MediaStream.

For example, if you want to send video.mp4 or an ffmpeg stream to the browser, it could be done by simply rewriting the capturer, but first the stream (or any input) must be decoded to YUV frames before writing it to the MediaStreamTrack:

(source) -> (decoder) -> YUV frame -> MediaStreamTrack -> MediaStream -> PeerConnection -> browser -> MediaStream.

So the source can be anything, as long as it can be decoded to YUV images. And for converting a MediaStreamTrack to video.mp4 or a plain file stream, we need to use VideoRenderer:

MediaStreamTrack -> VideoRenderer -> YUV frame -> encoder -> file.

https://chromium.googlesource.com/external/webrtc/+/master/talk/media/base/videorenderer.h https://chromium.googlesource.com/external/webrtc/+/master/talk/media/base/videocapturer.h

http://sourcey.com/webrtc-custom-opencv-video-capture/ <- Custom capturer for wrapping opencv.
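To make the "decoded to YUV frames" step concrete: for yuv420p, every frame is exactly width * height * 3 / 2 bytes (a full-resolution Y plane plus quarter-resolution U and V planes), so a raw byte stream can be cut into whole frames. A sketch only, where writeFrame() stands in for whatever entry point the custom capturer would expose:

var ffmpeg = require('fluent-ffmpeg');

var width = 640, height = 480;
var frameSize = width * height * 3 / 2; // 460800 bytes per 640x480 yuv420p frame
var pending = new Buffer(0);            // accumulates partial frames

var output = ffmpeg('video.mp4')
  .noAudio()
  .format('rawvideo')
  .outputOptions(['-pix_fmt yuv420p'])
  .size(width + 'x' + height)
  .pipe(); // with no argument, pipe() returns a PassThrough stream of raw bytes

output.on('data', function(chunk) {
  pending = Buffer.concat([pending, chunk]);
  while (pending.length >= frameSize) {
    var frame = pending.slice(0, frameSize);
    pending = pending.slice(frameSize);
    writeFrame(frame); // hypothetical: hand one YUV frame to the custom capturer
  }
});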

vmolsa avatar Jul 05 '15 09:07 vmolsa

You know that Google's WebRTC provides native capturers for webcam/screen/window, and that webrtc-native is just an interface/wrapper to Google's WebRTC? And yes, WebRTC is using v4l on Linux.

Yes, I know; in fact I have been using them very actively at work for the last two years :-P My question was more related to the fact that I wasn't sure how the cam could be fetched in a Node.js app, but if the module is able to do it transparently, the same way I could do it in the browser, then that's cool :-D

(source) -> (decoder) -> YUV frame -> MediaStreamTrack -> MediaStream -> PeerConnection -> browser -> MediaStream.

I think ffmpeg can generate a stream of YUV frames. Since MediaStreamTrack doesn't have a public constructor in the WebRTC spec, one could be added that accepts a Node.js Readable object-mode stream (or whatever node-fluent-ffmpeg emits, or another format if it's simpler for webrtc-native and can be easily integrated with Node.js standards), so we could have a MediaStreamTrack object that can be combined into a MediaStream object. And the same the other way around: reading from the MediaStreamTrack by using a Writable stream object, as in the sketch below. This way it's not necessary to rewrite internal classes, and it can be easily extended by using standard Node.js stream objects. What do you think?
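Something like this, where both constructor signatures are hypothetical extensions to the spec API (nothing below exists in webrtc-native today):

var stream = require('stream');
var WebRTC = require('webrtc-native');

// Hypothetical: a Readable of YUV frame buffers in, a MediaStreamTrack out.
var yuvFrames = new stream.PassThrough({ objectMode: true });
var track = new WebRTC.MediaStreamTrack(yuvFrames, {
  kind: 'video', width: 640, height: 480
});
var mediaStream = new WebRTC.MediaStream([track]); // combine tracks as usual

// ...and the other way around: consume a track through a plain Writable.
var sink = new stream.Writable({
  objectMode: true,
  write: function(frame, encoding, callback) {
    // each chunk would be one YUV frame buffer
    callback();
  }
});
track.pipe(sink); // hypothetical: tracks exposed as standard Node.js streams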

piranna avatar Jul 05 '15 11:07 piranna

This is, obviously, only if the MediaStreamTrack objects need some wrapper on top of the YUV frame stream or some API adjustments; if not, maybe it could be added directly... :-)

piranna avatar Jul 05 '15 11:07 piranna

Then you know the basics :+1: :D

The purpose is that webrtc-native has exactly the same API as in the browser, getUserMedia etc. But due to the lack of an actual window or rendering, we can't use canvas-styled recording or image handling. So we need to implement this rendering part somehow, and for WebRTC the easiest way is using those VideoCapturer / VideoRenderer C++ classes. Audio can be implemented using AudioRenderer.

The Chromium source provides jpeg/png/webm/vp8 sources, so I think it is quite easy to create our own encoding/decoding handling that could be transformed to a MediaStream.

prototype example:

var fs = require('fs');                // for the read/write streams below
var WebRTC = require('webrtc-native'); // this module

function onSuccess(stream) {
  // Convert image / video back to png images
  var source = new WebRTC.MediaSource(stream, 'image/png');
  var index = 0;

  source.on('data', function(data, prop) {
    // write images to disk.. img-0.png, img-1.png ...
    fs.createWriteStream('img-' + index + '.png').end(data);
    index++;
  });

  // if we are creating video
  source.pipe(fs.createWriteStream('video.webm'));
}

// decode an existing webm file into a MediaStream
fs.createReadStream('video.webm').pipe(new WebRTC.MediaSource('video/webm', onSuccess));

// For plain yuv images
var yuv = 'YUV IMAGE DATA'; // placeholder for real YUV frame bytes
var source = new WebRTC.MediaSource('image/yuv', onSuccess);

source.write(yuv); // or just end(yuv);
source.end();

new WebRTC.MediaSource('video/webm')         // <-- creates a decoder
new WebRTC.MediaSource(stream, 'video/webm') // <-- creates an encoder

Because of the WebRTC spec we can't use MediaStream or MediaStreamTrack directly; instead, we need to create another way of managing MediaStream/MediaStreamTrack. MediaSource could do it, and it would be a really simple way to implement it.

vmolsa avatar Jul 05 '15 13:07 vmolsa

Currently, there are some problems. Everything is now running in 3 threads: v8/uv (main), the signaling thread, and the worker thread. Adding more threads, or actually stopping those threads, causes a segfault.

Camera capturing is working only on Windows(?), and camera streaming is not working; see the node2browser example. Audio capturing/streaming is working on all platforms(?).

vmolsa avatar Jul 05 '15 14:07 vmolsa

Hi @vmolsa, it looks like the MediaStreamCapturer class was intended to capture images from a video stream. Is there a particular reason for removing it?

benweet avatar Apr 03 '16 12:04 benweet

@benweet Yes, I was prototyping the capture method, but there is a possibility of a SIGSEGV...

vmolsa avatar Apr 04 '16 16:04 vmolsa

@vmolsa I've seen you've added a new MediaCapturer class. It looks like a more advanced version. I guess I'll play with that. Do you have any JS sample/test code by any chance? Thanks

benweet avatar Apr 04 '16 18:04 benweet

@benweet Sorry, I don't have any. And for now the OSX video capturer isn't working, so I couldn't get any MediaStream source except from the browser. But maybe it would be better to create our own video capturer module that would use libuv threads for processing data (NSRunLoop, GetMessage, event loops) from the camera on all platforms? Re-implementing the video capture module would also allow us to create a custom source from video, images...

vmolsa avatar Apr 04 '16 20:04 vmolsa

I'm afraid I won't be much help on that, as I'm only planning to capture frames from the video stream coming from the browser. I just noticed the MediaCapturer is not exposed by the module. I presume I will have to export it and see how it goes.

benweet avatar Apr 04 '16 21:04 benweet

Any updates on this? Is there a way to get the raw stream from the MediaStream currently?

tonyf avatar Jul 13 '16 18:07 tonyf

+1

mike-aungsan avatar Aug 22 '17 23:08 mike-aungsan