webrtc-stats
Interframe delay stat for video receive stream.
This could be a useful metric for detecting video freezes, lost frames, or network problems.
Definition: it is the difference between the times at which two consecutive received frames are decoded. For the first frame it should be set to 0.
Per the guidelines, it should be reported as the sum of interframe delays over all received frames. The reported value is effectively the delay between the decode time of the last received frame and the decode time of the first received frame.
The average interframe delay can then be calculated by dividing the reported value by the number of received frames minus one.
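As a rough sketch of that calculation (the function name is invented here; the inputs mirror the proposed cumulative sum and a decoded-frame counter):

```javascript
// Average interframe delay (in seconds) from a cumulative sum of
// interframe delays and the number of decoded frames.
// Illustrative helper, not part of any API.
function averageInterFrameDelay(totalInterFrameDelay, framesDecoded) {
  if (framesDecoded < 2) return 0; // no consecutive frame pairs yet
  return totalInterFrameDelay / (framesDecoded - 1);
}

// 99 decoded frames with 3.234 s of accumulated delay gives roughly
// 33 ms per frame, i.e. ~30 fps.
console.log(averageInterFrameDelay(3.234, 99)); // ≈ 0.033
```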
From https://codereview.webrtc.org/2965033002/ about the use case for this stat:
We need it to measure lags on screenshare. In some sense, this metric is better than e2e delay - e2e delay we can calculate only for the frames, which passed through to the other side. Interframe delay will account for lots of lost frames due to some death feedback loops, like we had for screenshare.
Lost frames due to network problems might be covered by existing metrics?
There's RTCMediaStreamTrackStats.partialFramesLost and fullFramesLost. There are also packet loss counters and RTCInboundRTPStreamStats.lastPacketReceivedTimestamp.
Not sure about detecting freezes in video, but I believe that should be covered by RTCMediaStreamTrackStats.estimatedPlayoutTimestamp if it is updated to stop increasing when video is frozen (#222). That's only if you poll it frequently, though. This proposed metric would steadily increase for jittery video with repeated small lags that might not be detected if you only poll for the last played-out sample.
I should add that the main reason is actually to have this metric in the stats saved by a downstream project, so we can analyze it later and debug any problems that are found.
Does this need to be the absolute value of the difference from the expected interframe delay, or should it be a sum of delays? Just adding up the interframe delays will give you a number equal to the time since the stream started, which is not terribly useful. One possibility for freeze detection would be a value that sums the difference between the expected (per frameRate) interframe delay and the actual interframe delay, but only when positive (larger than the expected delay).
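One way to read that proposal, as a sketch (names are invented here; frameRate is the nominal sender rate): accumulate only the positive deviation of each observed delay from the expected frame spacing.

```javascript
// Accumulate only "excess" delay: the part of each interframe delay
// that exceeds the nominal spacing of 1 / frameRate seconds.
// Illustrative sketch of the proposal above, not a spec'd stat.
function excessInterFrameDelay(interFrameDelays, frameRate) {
  const expected = 1 / frameRate; // nominal spacing in seconds
  return interFrameDelays.reduce(
      (sum, d) => sum + Math.max(0, d - expected), 0);
}

// A 240 ms gap in a nominally 25 fps (40 ms) stream contributes
// ~200 ms; on-time frames contribute nothing.
console.log(excessInterFrameDelay([0.04, 0.24, 0.04], 25)); // ≈ 0.2
```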
Sorry, I forgot to post here, but the metric was updated a little. Now we report the maximum interframe delay over a 1-second window. If polled once per second, it gives quite a good measurement of received video stream quality.
@ilyanikolaevskiy you didn't answer my previous question (whether it's a stat for deviation from nominal frame spacing or whether it's an absolute value). This also matters when frame rate changes within the call.
@alvestrand, I thought it was clear from the comment. The new stat is the actual absolute inter-frame delay (not the deviation from nominal).
It would be interesting to see the sender-side interframe delay and the corresponding interframe delay at the receiver; then maybe we would be able to compare the two.
My main concern is: what happens when the sender frame rate changes? If the sender uses temporal scalability then, depending on the frame structure, a higher interframe delay might not matter at the receiver.
@ilyanikolaevskiy: I see that you mention a window of 1 second in the above comments. I am trying to evaluate whether WebRTC can be used for live video streaming from drones, with video-freeze detection. I have a 30 fps camera, so I was wondering if WebRTC can detect a delay of no more than 50 ms. That is, if the first frame arrives at time t0 and the second at t1, and t1 - t0 > 50 ms, I need to provide a detection signal to my application. The ideal case would be a notification every 33 ms. I could not find any statistics on the minimum interframe delay WebRTC can detect. Other scenarios requiring notification are packet losses and corrupted received frames.
My video server is an embedded system device running NXP iMX6 (which makes portability of WebRTC challenging), and the receiver/client is a Ubuntu PC. The communication link between server/client is WiFi.
Could you provide some pointers for the same?
@venkatesh-kuppan WebRTC can detect any delay; it just returns the maximum interframe delay observed over the last 1 second. The intended use is to call getStats() every second to check whether any freezes occurred since the last call.
You can call getStats() as often as you want, so you will detect freezes of any size. The downside is that if a freeze happened, it affects the results for the whole next second, so you can't distinguish between a single freeze and many freezes within that second.
This is because it's hard to get a per-frame callback in JS.
If you are not using WebRTC within a browser, I would suggest calculating the inter-frame delay yourself for each received frame.
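That once-per-second polling pattern can be sketched by diffing two consecutive samples (field names here follow the later standardized stats, totalInterFrameDelay and framesDecoded; the 50 ms threshold is an arbitrary example):

```javascript
// Compare two inbound-rtp video samples taken ~1 s apart and flag the
// window as a possible freeze. Sketch only; threshold is made up.
function freezeInWindow(prev, curr, thresholdSeconds = 0.05) {
  const frames = curr.framesDecoded - prev.framesDecoded;
  if (frames <= 0) return true; // nothing was decoded in the window
  const avgDelay =
      (curr.totalInterFrameDelay - prev.totalInterFrameDelay) / frames;
  return avgDelay > thresholdSeconds;
}

// ~30 fps in the window: 30 new frames, ~0.99 s of added delay.
console.log(freezeInWindow(
    { framesDecoded: 100, totalInterFrameDelay: 3.30 },
    { framesDecoded: 130, totalInterFrameDelay: 4.29 })); // false
```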
@ilyanikolaevskiy Thanks for your input. Do you have any suggestions on how to calculate this delay? I need a common reference at Tx and Rx to calculate it, and I have no knowledge of such a reference. I tried accessing PTS/DTS, but this serves no purpose since I do not know when the frames were sent at the Tx. Here is a post that describes what I am trying to do. Do you have any suggestions for this use case?
Regards
Do you use WebRTC in-chrome, or as a standalone library? In the latter case I think you can already access each frame's NTP timestamp, which is the capture time of the frame in the receiver's clock. Thus, you can detect jumps between two consecutive frame timestamps and see the difference. This timestamp is estimated only after we have an RTT estimate, so you either need the receiver to send some video back, or enable RRTR for receiver-only RTT estimation, which can be done with a hack in the SDP string.
Alternatively, you can access RTP timestamps; these are generated by WebRTC by taking the local capture time in milliseconds and multiplying it by 90. So if frames are exactly 33 ms apart, you will see differences of 33*90 = 2970 between RTP timestamps.
Here: https://cs.chromium.org/chromium/src/third_party/webrtc/api/video/video_frame.h?sq=package:chromium&dr=CSs&l=88
timestamp() is the RTP timestamp.
ntp_timestamps() is the capture time. It will be -1 if the RTT estimate is unavailable or not enough RTCP SR packets have been received yet.
If you are working in-chrome, then I am not really sure.
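The 90 kHz arithmetic above can be wrapped in a tiny helper (a sketch; the wrap-around handling assumes 32-bit RTP timestamps per RFC 3550):

```javascript
// Gap between two video frames in milliseconds, from their RTP
// timestamps on the 90 kHz clock. `>>> 0` reduces the difference
// modulo 2^32 to handle RTP timestamp wrap-around.
function rtpGapMs(prevRtpTimestamp, currRtpTimestamp) {
  return ((currRtpTimestamp - prevRtpTimestamp) >>> 0) / 90;
}

console.log(rtpGapMs(0, 2970));       // 33 ms, i.e. ~30 fps
console.log(rtpGapMs(4294964326, 0)); // 33 ms across the 32-bit wrap
```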
@ilyanikolaevskiy So far, I haven't used WebRTC in my application; I was looking into WebRTC to analyse whether it can be used here. My application uses the GStreamer framework, which provides RTP plugins and timestamps, video rendering plugins, encode/decode plugins, etc. So no browser is involved as of now; it's just plain video rendering using the GStreamer plugins.
Here are the pipelines:
Tx(iMX6 device):
v4l2src fps-n=30 -> h264encode -> rtph264pay -> rtpbin -> udpsink(port=5000) -> rtpbin.send_rtcp(port=5001) -> rtpbin.recv_rtcp(port=5002)
Rx(Ubuntu PC)
udpsrc(port=5000) -> rtpbin -> rtph264depay -> avdec_h264 -> rtpbin.recv_rtcp(port=5000) -> rtpbin.send_rtcp(port=5000) -> custom IMU frame insertion plugin -> videosink
Here, I have explained the issue in a more detailed way.
Please let me know if you need more information.
Regards
As I said, you can simply look at the difference between RTP timestamps to see whether there are extra delays between frames at the receive side. You can use the NTP timestamp to see the E2E delay for each individual frame in WebRTC. You can use the standalone WebRTC C++ library for these purposes.
@henbos
[a] google-chrome --version yielded Google Chrome 73.0.3683.103.
[b] lastPacketReceivedTimestamp doesn't seem to be present in RTCInboundRTPStreamStats. We badly need it in our application.
[c] I read in the spec that 'inbound-rtp: Statistics for an inbound RTP stream that is currently received with this RTCPeerConnection object. It is accessed by the RTCInboundRtpStreamStats.'
[d] Alas, lastPacketReceivedTimestamp does not seem to be mandatory, as the spec says 'An implementation MUST support generating statistics for the type RTCInboundRtpStreamStats, with attributes packetsReceived, bytesReceived, packetsLost, jitter, and fractionLost.'
[e] I tried to get lastPacketReceivedTimestamp through the following code and inspected the console in the browser; this did not yield the lastPacketReceivedTimestamp I was looking for.
var receivers = window.curr_pc.getReceivers();
myconsole.log('[getRtpTimestamp2]:numReceivers=' + receivers.length);
receivers.forEach((receiver, rindex) => {
  myconsole.log(`[getRtpTimestamp2]:rindex=${rindex},receiver=`, receiver);
  receiver.getStats().then(function(stats) {
    stats.forEach((stat, sindex) => {
      myconsole.log(`[getRtpTimestamp2]:rindex=${rindex},sindex=${sindex},stat=`, stat);
    });
  });
});
Result (for stat with type:inbound-rtp)
bytesReceived: 696475
codecId: "RTCCodec_video_Inbound_96"
firCount: 0
fractionLost: 0
framesDecoded: 95
id: "RTCInboundRTPVideoStream_2902881011"
isRemote: false
kind: "video"
mediaType: "video"
nackCount: 0
packetsLost: 0
packetsReceived: 638
pliCount: 0
qpSum: 843
ssrc: 2902881011
timestamp: 1566836637484.665
trackId: "RTCMediaStreamTrack_receiver_2"
transportId: "RTCTransport_audio_1"
type: "inbound-rtp"
__proto__: Object
Note that lastPacketReceivedTimestamp is absent. Any help rendered to get lastPacketReceivedTimestamp would be greatly appreciated.
Regards
I am not sure you would be able to poll getStats() fast enough to capture lastPacketReceivedTimestamp (LPRTS). AFAICT, browsers may return cached getStats() results rather than generating new ones.
If you really need an inter-frame delay, the best you can do is measure totalInterFrameDelay and framesReceived; dividing the two gives you the average over a certain period, which you can compare with the capture frame rate or similar.
Related to debugging frozen frames: lastPacketReceivedTimestamp may not be sufficient, because if there is a decoding error (a missing IDR, for example), LPRTS will not help: framesReceived may keep increasing, so you need to check whether framesDecoded is updating relative to framesSent and framesReceived.
About the implementation of LPRTS: while lastPacketReceivedTimestamp is not MTI (mandatory to implement), you can report an issue (if one is not already created) against the respective browsers. AFAICT Chrome has an issue tracking each webrtc-stat that is not implemented.
I will let @henbos @jan-ivar comment on the bug reporting/implementation status bits.
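A rough way to make that network-vs-decoder distinction in code (the stat field names follow the spec; the classification logic itself is only an illustration):

```javascript
// Diff two inbound-rtp samples to guess where a stall comes from:
// no new frames arriving points at the network; frames arriving but
// not decoding points at the decoder (e.g. a missing IDR).
function classifyStall(prev, curr) {
  if (curr.framesReceived === prev.framesReceived) return 'network';
  if (curr.framesDecoded === prev.framesDecoded) return 'decoder';
  return 'ok';
}

console.log(classifyStall(
    { framesReceived: 200, framesDecoded: 180 },
    { framesReceived: 230, framesDecoded: 180 })); // 'decoder'
```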
@ksridhar lastPacketReceivedTimestamp was added in Chrome 75.0.3767.0. It is comparable to timestamp; you can construct Date objects from both, and they should be easily comparable to the current time.
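Assuming both values are on the same clock (as the spec intends), the time since the last packet falls out by simple subtraction:

```javascript
// Milliseconds between the last received packet and the moment the
// stats report was generated. Both fields are DOMHighResTimeStamps,
// so plain subtraction works; the sample values below are made up.
function msSinceLastPacket(inboundRtpStats) {
  return inboundRtpStats.timestamp -
         inboundRtpStats.lastPacketReceivedTimestamp;
}

console.log(msSinceLastPacket({
  timestamp: 1566836637484.5,
  lastPacketReceivedTimestamp: 1566836637400,
})); // 84.5
```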
@vr000m @henbos Thank you all for the responses. Our use case requires us to capture the RTP timestamp when, say, a button is clicked. We are looking to use lastPacketReceivedTimestamp to perform some work back at the streaming-server end by transmitting this metric back to the server. We are thus not aiming to poll getStats() rapidly, and we are OK with some error margin in capturing this metric. The timestamp in version 73.0.3683.103 was the epoch time at which the metric was generated, not the one in the RTP header. As suggested, I will try it on Chrome 75.0.3767.0 and get back on this. Thanking you once again.
Sorry, I just inspected lastPacketReceivedTimestamp on tip-of-tree, and while it does increase correctly (goes up by one every second as long as packets are being received), it has a different offset than RTCStats::timestamp. So new Date(stats.lastPacketReceivedTimestamp) results in a time that is not comparable to new Date()... This is a bug (not sure whether it regressed recently or the offset was always wrong; I seem to recall it was working when it was added, but I'm not sure now). I filed https://crbug.com/webrtc/10919.
@henbos
Thank you once again. Pardon me for seeking some clarification.
The spec says:
lastPacketReceivedTimestamp of type DOMHighResTimeStamp
Represents the timestamp at which the last packet was received for this SSRC. This differs from timestamp, which represents the time at which the statistics were generated by the local endpoint.
What I understand from this is:
[a] lastPacketReceivedTimestamp is the time at which the RTP packet was received.
[b] timestamp is the time at which the stats were generated (which would be >= lastPacketReceivedTimestamp).
Is it possible that lastPacketReceivedTimestamp is the timestamp in the RTP packet itself? If so, this is luckily what I want, but not what was intended per the spec (as you have rightly pointed out).
Finally, this leaves me with the question: is there a way, through statistics or by other means, to get the timestamp present in the RTP packet itself? Maybe a timestamp range for the RTP packets in the decode buffer (sorry for being wishful).
RTCRtpReceiver.getSynchronizationSources() exposes not only timestamp but also rtpTimestamp. Unlike getStats(), this API can be polled frequently without performance concerns. rtpTimestamp is a recent addition; it first appeared in Chrome 76.0.3806.0.
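For example, one could poll getSynchronizationSources() frequently and flag a freeze whenever rtpTimestamp stops advancing. The detection over recorded samples might look like this (the function name and the 150 ms threshold are illustrative, not from any spec):

```javascript
// Given samples polled from getSynchronizationSources(), each with a
// local `timestamp` (ms) and an `rtpTimestamp`, report the times at
// which the RTP timestamp has not advanced for longer than the
// threshold. Sketch only.
function detectFreezes(samples, thresholdMs = 150) {
  const freezes = [];
  let lastChange = samples[0];
  for (let i = 1; i < samples.length; i++) {
    if (samples[i].rtpTimestamp !== lastChange.rtpTimestamp) {
      lastChange = samples[i]; // a new frame was played out
    } else if (samples[i].timestamp - lastChange.timestamp > thresholdMs) {
      freezes.push(samples[i].timestamp); // frozen for too long
      lastChange = samples[i];            // report each freeze once
    }
  }
  return freezes;
}

// Polled every 50 ms; playout stalls at rtpTimestamp 9000 from t=100 on,
// so a freeze is reported once the stall exceeds 150 ms (at t=300).
console.log(detectFreezes([
  { timestamp: 0,   rtpTimestamp: 0 },
  { timestamp: 50,  rtpTimestamp: 4500 },
  { timestamp: 100, rtpTimestamp: 9000 },
  { timestamp: 150, rtpTimestamp: 9000 },
  { timestamp: 200, rtpTimestamp: 9000 },
  { timestamp: 250, rtpTimestamp: 9000 },
  { timestamp: 300, rtpTimestamp: 9000 },
])); // [ 300 ]
```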
Oh! Thanks a lot. I was missing this completely and was searching for it in getStats(). In this context, I think the bug you raised holds water. This is a lesson for me to read the spec thoroughly.
While going through the spec for rtpTimestamp, I noticed the following:
rtpTimestamp of type unsigned long, required
The last RTP timestamp, as defined in [RFC3559] Section 5.1, of the media played out at timestamp.
I think the reference should point to RFC 3550, which defines RTP, rather than RFC 3559, which relates to SNMP! Hope I am not making a fool of myself here.
Thanks I filed an issue!
Very old issue... We have had totalInterFrameDelay and totalSquaredInterFrameDelay for a long time now; I suppose we can close this?