MediaStreamRecorder
Missing timing information in recorded blobs
I'm sending the recorded video/audio data to a server in pieces. Once the recording is done, I'm combining the media files using the following command:
ffmpeg -f concat -i files.txt -codec copy combined.webm
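For reference, files.txt here is the concat demuxer's input list, with one file directive per line (the blob filenames below are just placeholders for however the uploaded pieces are named):

file 'blob-000.webm'
file 'blob-001.webm'
file 'blob-002.webm'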
The issue is that the first few seconds (from the first blob) work fine, but then the video stops and only the audio continues. During the combining I'm seeing a lot of errors like the following:
[concat @ 0x7fa59981cc00] DTS 0 < 2700 out of order
[webm @ 0x7fa599829e00] Non-monotonous DTS in output stream 0:0; previous: 2700, current: 0; changing to 2700. This may result in incorrect timestamps in the output file.
How do you combine the recorded blobs on the server? My current fix is to re-encode the individual files into MP4 before combining them, but that obviously takes a long time.
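Sketched out, that workaround looks roughly like this (filenames and codecs are illustrative; re-encoding regenerates clean timestamps, which is why it works but is slow):

# Re-encode each uploaded blob to MP4, regenerating timestamps (slow but reliable)
for f in blob*.webm; do
  ffmpeg -i "$f" -c:v libx264 -c:a aac "${f%.webm}.mp4"
done
# Then concatenate the normalized pieces with the concat demuxer as before,
# where files_mp4.txt lists the re-encoded files in the same format as above
ffmpeg -f concat -i files_mp4.txt -codec copy combined.mp4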
@muaz-khan I tried various demos using this library in both Firefox and Chrome, but unfortunately none of them works properly. I'm tired of trying to solve this same issue :-1: In Firefox I followed the same approach of sending small video blob chunks to the server and merging them into the final file with FFMPEG at the end, but there is no length information in the chunk files, so FFMPEG creates a corrupted file.
In Chrome, since we get two separate files for audio and video, I am sending those small chunks to the server, converting them to video files, and finally merging the small videos into the final video file. The issue is that the starting bits of each small file are corrupted, so the final file has a "trrrrrrr" sound at each interval. The video also hangs :-1:
I have also asked question on stackoverflow : http://stackoverflow.com/questions/32312505/record-video-and-audio-and-upload-to-the-server
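For the Chrome case above, once each track has been assembled into a single file, one possible mux step is a plain stream copy of the video plus an audio encode (a sketch, assuming the assembled audio is WAV and the video is WebM; filenames are illustrative):

# Mux the assembled tracks into one WebM: video is copied, WAV audio is encoded to Opus
ffmpeg -i final_video.webm -i final_audio.wav -map 0:v -map 1:a -c:v copy -c:a libopus muxed.webm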
@dipaktelangre I'll add a working (NO-trrrrrrr in audio) demo in a few days. That demo will be able to record the longest possible recordings and will solve all sync/noise/echo/silence issues (on desktop platforms only). It will be a Node.js-based demo, though.
Actually I'm trying to write a reusable script that can be integrated into any WebRTC project. That's why it may take a week or two to publish it. I can't share code at the moment, though.
Hi, not sure if this is the same issue, but when using .save(), only the first blob plays back in the resulting WAV file.
Using Chrome on a Mac.
Getting the same behavior when using your online demo: only the first segment plays back.
The file does seem to be larger than just the first blob, so perhaps something is wrong with the timing data?
Thanks
Having a similar issue while concatenating video/audio webm files:
[webm @ 00000000003a5fe0] Codec for stream 0 does not use global headers but container format requires global headers
[webm @ 00000000003a5fe0] Codec for stream 1 does not use global headers but container format requires global headers
Output #0, webm, to 'output4.webm':
  Metadata:
    encoder         : Lavf56.33.101
    Stream #0:0: Video: vp8, yuv420p, 640x480 [SAR 1:1 DAR 4:3], q=2-31, 30 fps, 30 tbr, 1k tbn, 1k tbc
    Stream #0:1: Audio: opus, 48000 Hz, mono
Stream mapping:
  Stream #0:1 -> #0:0 (copy)
  Stream #0:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[concat @ 0000000000361e20] DTS 0 < 14911 out of order
[webm @ 00000000003a5fe0] Non-monotonous DTS in output stream 0:0; previous: 14911, current: 0; changing to 14911. This may result in incorrect timestamps in the output file.
[webm @ 00000000003a5fe0] Non-monotonous DTS in output stream 0:0; previous: 14911, current: 48; changing to 14911. This may result in incorrect timestamps in the output file.
[webm @ 00000000003a5fe0] Non-monotonous DTS in output stream 0:1; previous: 14869, current: 59; changing to 14869. This may result in incorrect timestamps in the output file.
Any idea why?
Hello everybody
I also had the same problem. However, you can concatenate the blobs properly using ffmpeg as follows:
ffmpeg -i blob1 -i blob2 -i blob3 -filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a] concat=n=3:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" combined.webm
The labels in brackets indicate which stream to take from each blob; for example, [0:a] takes the audio stream from blob1 and [1:v] takes the video stream from blob2. The parameter n=3 indicates how many inputs were given, v=1 indicates that the output has a video stream, and a=1 indicates that it has an audio stream.
If you want to re-encode, just add the desired codecs, for example:
ffmpeg -i blob1 -i blob2 -i blob3 -filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a] concat=n=3:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" -acodec libfdk_aac -vcodec libx264 combined.mp4
For example, to concatenate audio-only blobs:
ffmpeg -i blob1 -i blob2 -i blob3 -filter_complex "[0:a][1:a][2:a] concat=n=3:v=0:a=1 [a]" -map "[a]" combined.webm
That was the only way I could correctly concatenate the blobs.
I hope this helps you