Strangely offset start_time_realtime values

Hello!

When reading start_time_realtime from RTP streams coming from Mediasoup with libavformat, I am getting strangely offset values. For example, just a few minutes ago I received 2086112954464000, which (interpreted as microseconds since the Unix epoch) corresponds to a date somewhere in 2036. The units do appear to be correct: if I initiate a stream 10 minutes later, I get a value about 600000000 higher. What can be the reason? Is there a way I can get a “base” value to correct for this?
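Just to show the arithmetic behind the “2036” claim, here is a quick standalone check (nothing mediasoup- or FFmpeg-specific; it assumes a 64-bit time_t):

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* value reported via start_time_realtime, in microseconds */
    int64_t reported_us = 2086112954464000LL;

    time_t secs = (time_t)(reported_us / 1000000); /* drop the sub-second part */
    struct tm *utc = gmtime(&secs);

    char buf[64];
    strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S UTC", utc);
    printf("%s\n", buf); /* prints a date in February 2036, not today */
    return 0;
}
```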

Thanks!

After a server reboot, the same time was rebased to be about 1.5 days EARLIER. The values now, after about 40 minutes, are around 2085978588041000, and they still grow with time at the correct rate of about 1 million per second: it was 2085978588041000 around 05:14:26.164678 UTC and then 2085978776021000 around 05:17:33.754839 UTC. Any idea?
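The rate claim checks out against the wallclock times I logged; a trivial check using just the two samples above:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* the two start_time_realtime samples quoted above, in microseconds */
    int64_t v1 = 2085978588041000LL; /* seen around 05:14:26.164678 UTC */
    int64_t v2 = 2085978776021000LL; /* seen around 05:17:33.754839 UTC */

    /* the corresponding wallclock instants, as seconds past 05:00:00 UTC */
    double t1 = 14 * 60 + 26.164678;
    double t2 = 17 * 60 + 33.754839;

    printf("delta value = %lld us\n", (long long)(v2 - v1)); /* 187980000 */
    printf("delta clock = %.6f s\n", t2 - t1);               /* ~187.59  */
    printf("rate        = %.0f per second\n",                /* ~1.0e6   */
           (double)(v2 - v1) / (t2 - t1));
    return 0;
}
```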

Another reboot, and it rebased a little bit forward. So the time seems to be consistent between stream restarts, but not between server restarts…

My mediasoup version is the most recent.

A clock test (https://github.com/btorpey/clocks/blob/master/ClockTest.sh) shows nothing suspicious:


```
clocks.c
                    clock             res (ns)                 secs                nsecs
             gettimeofday                1,000        1,592,374,104          900,182,000
           CLOCK_REALTIME                    1        1,592,374,104          900,188,713
    CLOCK_REALTIME_COARSE            4,000,000        1,592,374,104          898,249,772
          CLOCK_MONOTONIC                    1                  938          896,005,238
      CLOCK_MONOTONIC_RAW                    1                  938          461,692,747
   CLOCK_MONOTONIC_COARSE            4,000,000                  938          894,051,756

ClockBench.cpp
                   Method      samples          min          max          avg       median        stdev
           CLOCK_REALTIME      1023        27.00        28.00        27.41        27.50         0.49
    CLOCK_REALTIME_COARSE      1023         0.00         0.00         0.00         0.00         0.00
          CLOCK_MONOTONIC      1023        26.00        29.00        27.53        27.50         0.60
      CLOCK_MONOTONIC_RAW      1023        27.00        29.00        27.78        28.00         0.53
   CLOCK_MONOTONIC_COARSE      1023         0.00         0.00         0.00         0.00         0.00
              cpuid+rdtsc      1023      1936.00      18212.00      2005.29      10074.00       564.58
                   rdtscp      1023        34.00        36.00        34.07        35.00         0.38
                    rdtsc      1023        20.00        24.00        21.48        22.00         1.09
Using CPU frequency = 1.000000

Set JAVA_HOME to run Java benchmark
```

Here’s what IETF RFC 3550 for RTP says:

The initial value of the timestamp SHOULD be random, as for the sequence number. Several consecutive RTP packets will have equal timestamps if they are (logically) generated at once, e.g., belong to the same video frame. Consecutive RTP packets MAY contain timestamps that are not monotonic if the data is not transmitted in the order it was sampled, as in the case of MPEG interpolated video frames. (The sequence numbers of the packets as transmitted will still be monotonic.)
RTP timestamps from different media streams may advance at different rates and usually have independent, random offsets. Therefore, although these timestamps are sufficient to reconstruct the timing of a single stream, directly comparing RTP timestamps from different media is not effective for synchronization. Instead, for each medium the RTP timestamp is related to the sampling instant by pairing it with a timestamp from a reference clock (wallclock) that represents the time when the data corresponding to the RTP timestamp was sampled. The reference clock is shared by all media to be synchronized. The timestamp pairs are not transmitted in every data packet, but at a lower rate in RTCP SR packets as described in Section 6.4.

So I think it is supposed to be that way.

The timestamps of packets are completely fine, as are the timestamps of frames. There is no problem with them: they begin at 0 and increase monotonically in increments equal to the packet duration, except during connection disruptions.

I am speaking of the stream start time, start_time_realtime, described here: https://ffmpeg.org/doxygen/3.0/structAVFormatContext.html#aa5ddb5cee1df28f21739133f2e37f1c5 . It is initially AV_NOPTS_VALUE, but after a few processed packets it is set to some fixed value, which differs from the real Unix time in microseconds by an offset that is, in my current case, around +16 years. That offset differs slightly between server restarts, but not between stream restarts or restarts of my receiving application.
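For reference, this is roughly how I observe it (a minimal sketch, assuming the stream is described by an SDP file; the file name and the protocol_whitelist value are just what I happen to use, and error handling is trimmed):

```c
#include <stdio.h>
#include <inttypes.h>
#include <libavformat/avformat.h>

int main(void)
{
    avformat_network_init();

    /* needed for SDP-described RTP input in recent FFmpeg versions */
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "protocol_whitelist", "file,udp,rtp", 0);

    AVFormatContext *ctx = NULL;
    if (avformat_open_input(&ctx, "stream.sdp", NULL, &opts) < 0)
        return 1;
    av_dict_free(&opts);
    if (avformat_find_stream_info(ctx, NULL) < 0)
        return 1;

    AVPacket *pkt = av_packet_alloc();
    while (av_read_frame(ctx, pkt) >= 0) {
        /* AV_NOPTS_VALUE for the first few packets, then a fixed value */
        if (ctx->start_time_realtime != AV_NOPTS_VALUE) {
            printf("start_time_realtime = %" PRId64 " us\n",
                   ctx->start_time_realtime);
            av_packet_unref(pkt);
            break;
        }
        av_packet_unref(pkt);
    }

    av_packet_free(&pkt);
    avformat_close_input(&ctx);
    avformat_network_deinit();
    return 0;
}
```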

I know this is true because the moment in time when the first packet of a stream is received, minus this value, is almost identical from stream to stream, every time, varying by only a few tens of milliseconds, which corresponds to the random time it takes to start pulling a stream. And it stays at plus 16 years or so, give or take a few days, from restart to restart of the server machine… I can give some examples, but I believe this is a bug in mediasoup.
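To put a rough number on “+16 years”, pairing the start_time_realtime value from my first post with the CLOCK_REALTIME reading from the clock test above (an approximation, since the two readings were not taken at exactly the same moment):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* start_time_realtime from the first post, in microseconds */
    int64_t reported_us = 2086112954464000LL;
    /* CLOCK_REALTIME from the clock test output above, in seconds */
    int64_t wallclock_s = 1592374104LL;

    double offset_s = reported_us / 1e6 - (double)wallclock_s;
    printf("offset = %.0f s = %.2f years\n",
           offset_s, offset_s / (365.25 * 86400)); /* ~4.9e8 s, ~15.6 years */
    return 0;
}
```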

This is the problem I am trying to resolve.

No matter what that link says, there is no “stream start time” information in an RTP packet nor in the RTP protocol. There are other timestamps (such as the NTP timestamp in the RTCP Sender Report), but all of them have a random base value. None of them means “stream start time”.
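For completeness: the NTP timestamp carried in an RTCP Sender Report is 64 bits, with the upper 32 bits counting seconds since 1900-01-01 and the lower 32 bits a binary fraction of a second. A receiver that wants to express it relative to the Unix epoch subtracts the fixed 2208988800-second difference between the two epochs, which is presumably how libavformat fills start_time_realtime; if the sender's wallclock has an arbitrary base, the converted value is offset by exactly that amount. A minimal sketch of the conversion (the example value is made up):

```c
#include <stdio.h>
#include <stdint.h>

/* Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01). */
#define NTP_UNIX_OFFSET 2208988800ULL

/* Convert a 64-bit NTP timestamp (upper 32 bits: seconds since 1900,
 * lower 32 bits: fraction of a second) into microseconds since the Unix epoch. */
static int64_t ntp_to_unix_us(uint64_t ntp)
{
    uint64_t secs = ntp >> 32;
    uint64_t frac = ntp & 0xFFFFFFFFULL;
    uint64_t usec = (frac * 1000000ULL) >> 32; /* fraction -> microseconds */
    return (int64_t)((secs - NTP_UNIX_OFFSET) * 1000000ULL + usec);
}

int main(void)
{
    /* hypothetical SR wallclock; a sender with an arbitrary clock base
     * produces a value offset from real time by exactly that base error */
    uint64_t example_ntp = 0xE28F3D3A80000000ULL;
    printf("%lld us since the Unix epoch\n",
           (long long)ntp_to_unix_us(example_ntp));
    return 0;
}
```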

Not sure how this question is related to mediasoup libraries BTW.