Mediacapture-transform Timing Model
This is an overall tracking issue for timing-related issues in mediacapture-transform.
Questions:
- What timing information is provided by the various sources (capture, canvas, etc.)?
- Which VideoFrame/AudioData attributes correspond to that timing information?
- Which timing attributes are preserved between VideoFrame/AudioData and encoded chunks (encoder), and between encoded chunks and VideoFrame/AudioData (decoder)?
- What is done with this timing information in the Web platform? For example, how are MediaStreamTracks synchronized within a MediaStream?
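The expectations in the first questions can be made concrete with a small sketch. The helper names below are hypothetical (not from any spec); the only assumption taken from the platform is that `VideoFrame.timestamp` and `AudioData.timestamp` are microsecond counts, and the monotonicity check reflects one expectation under discussion in #80 rather than a normative requirement:

```javascript
// Returns the inter-frame intervals in microseconds, throwing if the
// sequence is not strictly increasing (one expectation discussed in #80).
function frameIntervalsMicros(timestamps) {
  const intervals = [];
  for (let i = 1; i < timestamps.length; i++) {
    const dt = timestamps[i] - timestamps[i - 1];
    if (dt <= 0) {
      throw new RangeError(`non-monotonic timestamp at index ${i}`);
    }
    intervals.push(dt);
  }
  return intervals;
}

// Rough A/V offset check: difference between the latest video and audio
// timestamps, assuming both count microseconds on the same clock
// (whether they do is exactly what the questions above ask).
function avOffsetMicros(videoTimestampMicros, audioTimestampMicros) {
  return videoTimestampMicros - audioTimestampMicros;
}
```

For example, three frames at a steady 30 fps (~33333 µs apart) yield two equal intervals, while a repeated or reordered timestamp is rejected.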
Timing model discussed at the November Virtual Interim:
VideoFrameCallbackMetadata in the rVFC specification includes receiveTime, captureTime, rtpTimestamp, and mediaTime.
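Part of the difficulty is that these attributes use different units, so relating them to a `VideoFrame.timestamp` (microseconds) involves conversions. A minimal sketch, assuming mediaTime is in seconds, captureTime/receiveTime are DOMHighResTimeStamp milliseconds, and rtpTimestamp ticks at the usual 90 kHz video RTP clock (the function names are illustrative, not from any spec):

```javascript
// Video RTP streams conventionally use a 90 kHz timestamp clock.
const VIDEO_RTP_CLOCK_HZ = 90_000;

// mediaTime (seconds) -> microseconds, the unit of VideoFrame.timestamp.
function secondsToMicros(seconds) {
  return Math.round(seconds * 1e6);
}

// captureTime/receiveTime (DOMHighResTimeStamp, milliseconds) -> microseconds.
function millisToMicros(millis) {
  return Math.round(millis * 1000);
}

// rtpTimestamp (clock ticks) -> microseconds, given the RTP clock rate.
function rtpTicksToMicros(ticks, clockHz = VIDEO_RTP_CLOCK_HZ) {
  return Math.round((ticks / clockHz) * 1e6);
}
```

Note that even after unit conversion the values sit on different clocks (media timeline, wall clock, sender's RTP clock), which is the crux of the cross-spec questions above.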
Resolution: “file specific issues on specific specs”
Related: Issue 601: Expose in VideoFrame
Use case: https://www.w3.org/TR/webrtc-nv-use-cases/#auction
Timing-related issues:
- #80 Expectations/requirements for VideoFrame and AudioData timestamps
- #86 Playback and synchronization of MediaStreamTracks created by VideoTrackGenerator
- #87 What is the timestamp value of the VideoFrame and AudioFrame from a remote webrtc stream track?
- #96 What is the impact of timestamp for video frames enqueued in VideoTrackGenerator?
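On the last point (#96): if the sink does not pace frames by timestamp itself, the code writing to a VideoTrackGenerator could. A pure sketch of that pacing computation, with a hypothetical helper name and the assumption that frame timestamps are microseconds while wall-clock times are milliseconds:

```javascript
// How long (ms) to wait before enqueuing a frame so that frames are
// spaced according to their timestamps relative to the first frame.
// frameTsMicros/firstTsMicros: frame timestamps in microseconds.
// startWallMillis/nowWallMillis: wall-clock times in milliseconds.
function enqueueDelayMillis(frameTsMicros, firstTsMicros, startWallMillis, nowWallMillis) {
  const targetWallMillis = startWallMillis + (frameTsMicros - firstTsMicros) / 1000;
  return Math.max(0, targetWallMillis - nowWallMillis); // never negative
}
```

Whether such pacing is the writer's job or the generator's is exactly what #96 asks; this sketch only shows what the writer-side arithmetic would look like.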