We recently added a metrics service that lets us track things like RTT and jitter from the mediasoup server on incoming producers. But we immediately saw some very large and strange jitter values that don’t seem to match actual observations.
To test this I set up a local client to log its producer’s remote-inbound-rtp stats every second, and had the server log its inbound-rtp stats every second as well. The results were something like this:
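The client-side polling that produced these numbers was roughly the following (a sketch, not the exact code; the `extractRemoteInboundJitter` helper is mine, and `producer.getStats()` is the mediasoup-client producer stats call):

```javascript
// Minimal sketch: pull jitter out of an RTCStatsReport-like iterable.
// `remote-inbound-rtp` entries carry the jitter that the remote peer
// (here, the mediasoup server) computed and reported back via RTCP
// receiver reports. Per the W3C webrtc-stats spec, `jitter` is in seconds.
function extractRemoteInboundJitter(statsEntries) {
  const out = [];
  for (const stat of statsEntries) {
    if (stat.type === 'remote-inbound-rtp') {
      out.push({ ssrc: stat.ssrc, jitterSeconds: stat.jitter });
    }
  }
  return out;
}

// In the browser this was driven by something like:
//   setInterval(async () => {
//     const report = await producer.getStats(); // mediasoup-client producer
//     console.log(extractRemoteInboundJitter(report.values()));
//   }, 1000);
```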
Obviously it’s hard to draw direct correlations: the client only receives reports at RTCP intervals, not every second, and the timestamp formats don’t match up exactly. We also have to account for the client reporting jitter in seconds per the spec, while the server appears to report in milliseconds. But even with those caveats, the question remains why the discrepancy is so large from the server’s point of view. Keep in mind the client and server are running on the same host, so jitter should generally be in the tens of milliseconds or lower.
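For context on the units: RFC 3550 defines interarrival jitter in RTP timestamp units, smoothed with a 1/16 gain, and the browser divides by the codec clock rate to report seconds. So a server reporting raw timestamp units (or converting with the wrong clock rate) would look orders of magnitude larger for the same underlying value. A sketch of the estimator (the 90 kHz clock rate is just the common video value, used for illustration):

```javascript
// RFC 3550 (section 6.4.1) interarrival jitter estimator, sketched.
// d is the difference in relative transit time between two packets,
// measured in RTP timestamp units; the jitter J is smoothed with gain 1/16:
//   J(i) = J(i-1) + (|d| - J(i-1)) / 16
function updateJitter(prevJitter, arrivalTs, sentTs, prevArrivalTs, prevSentTs) {
  const d = (arrivalTs - prevArrivalTs) - (sentTs - prevSentTs);
  return prevJitter + (Math.abs(d) - prevJitter) / 16;
}

// Converting timestamp units to wall-clock time for a given RTP clock rate:
const clockRate = 90000; // typical video clock rate, for illustration
let jitter = 0;
// e.g. one packet arriving 9 timestamp units "late" relative to the previous
jitter = updateJitter(jitter, 1009, 1000, 0, 0);
const jitterSeconds = jitter / clockRate;     // what the browser reports
const jitterMs = jitterSeconds * 1000;        // a millisecond reading
```

Comparing the two sides only makes sense once both are normalized to the same unit via the stream’s clock rate.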
My primary question is whether anyone else has seen this sort of discrepancy, or whether there is a simple explanation for it. If not, does anyone have suggestions for where to start debugging this in the mediasoup code? I’m just looking for a starting point, and hoping not to need to delve into libwebrtc.
edit: it looks like this is calculated directly in mediasoup by RtpStreamRecv, so I will dig in there and see if anything jumps out as a potential cause.