Asking for clarification on Direct/Plain transports and jitter buffer

Hello all.

I have some doubts about a few things, and I'd really appreciate any clarification. I'm not asking for support, only clarification.

mediasoup acts as a WebRTC endpoint, like a peer, so do the browser and mediasoup handle everything between them, such as the jitter buffer, etc.?

When I use a DirectTransport to consume RTP packets from producers (browser), do these packets pass through any mediasoup jitter buffer, or are they consumed transparently by the app in the DirectTransport ( directTransport.on('rtp', ...) )?
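For reference, this is roughly how I am consuming the raw packets (a simplified sketch; the transport/producer setup is omitted and the helper names are my own):

```javascript
// Assumed setup (hypothetical names): a mediasoup DirectTransport and a
// Producer already exist. Raw RTP arrives as a Buffer on the Consumer's
// 'rtp' event:
//
//   const consumer = await directTransport.consume({ producerId, rtpCapabilities });
//   consumer.on('rtp', (rtpPacket) => handlePacket(rtpPacket));

// The RTP sequence number lives in bytes 2-3 of the header, big-endian.
function readRtpSeq(packet) {
  return packet.readUInt16BE(2);
}

function handlePacket(packet) {
  // Here I just log the sequence number before depayloading/decoding.
  console.log('seq', readRtpSeq(packet));
}
```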

Does the same happen with PlainTransport? (over UDP, of course)

Let me detail my issue. I am using mediasoup for audio only, between a browser (the client) and a server which hosts mediasoup and Opus encoder/decoder to pipe to a local audio system:

browser mic ---> internet ---> server ---> mediasoup direct/plainTransport ---> udp/rtp depayload ---> opus decoder ---> local pcm audio out

browser spk <--- internet <--- server <--- mediasoup direct/plainTransport <--- udp/rtp payload <--- opus encoder <--- local pcm audio in
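The depayload step above is essentially stripping the RTP header to get at the Opus payload. A minimal sketch of what I do (assuming no header extension and no padding; real packets may carry both, so the X and P flag bits in byte 0 should be checked):

```javascript
// Minimal RTP depayload sketch: strip the fixed 12-byte header plus any
// CSRC entries, returning the codec payload. Ignores header extensions
// and padding for simplicity.
function depayload(packet) {
  const csrcCount = packet[0] & 0x0f;   // CC field: number of CSRC entries
  const headerLen = 12 + 4 * csrcCount; // fixed header + 4 bytes per CSRC
  return packet.subarray(headerLen);    // Opus payload for the decoder
}
```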

Everything runs like a charm, but… after a few minutes I experience increasing delay from the browser mic over time. I looked at webrtc-internals in Chrome and saw that totalRoundTripTime is high. I am not experiencing delays in the reverse direction, from server to browser.

So, the last question is: do I need to implement a jitter buffer for RTP packets received in the transport to mitigate this issue?

Thank you

PS: If the developers think this post should go in off-topic, please move it.

mediasoup is an SFU rather than an RTP endpoint. It relays packets as fast as possible. Jitter buffering and audio+video synchronization are done by the receiving RTP endpoints.
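Since the jitter buffer has to live in your receiving app, a minimal reorder-buffer sketch (my own illustration, not mediasoup API; a real jitter buffer is time-based and handles sequence-number wraparound at 65535, which this omits):

```javascript
// Minimal reorder buffer: hold up to `depth` out-of-order packets and
// release them in sequence-number order. Only illustrates the idea.
class ReorderBuffer {
  constructor(depth = 4) {
    this.depth = depth;
    this.packets = new Map(); // seq -> packet
    this.nextSeq = null;      // next sequence number we expect to release
  }

  // Insert a packet; returns the packets now ready, in order.
  push(seq, packet) {
    if (this.nextSeq === null) this.nextSeq = seq;
    this.packets.set(seq, packet);
    const ready = [];
    // Flush the contiguous run starting at nextSeq.
    while (this.packets.has(this.nextSeq)) {
      ready.push(this.packets.get(this.nextSeq));
      this.packets.delete(this.nextSeq);
      this.nextSeq++;
    }
    // On overflow, skip ahead to the oldest held seq (treat gap as loss).
    if (this.packets.size > this.depth) {
      this.nextSeq = Math.min(...this.packets.keys());
    }
    return ready;
  }
}
```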


This answers all my questions!

Thank you.

For muxing audio, most people use FFmpeg or GStreamer hooked up to a PlainTransport. These tools will handle things like dropped or out-of-order RTP packets.
