With a low-quality publisher connection, the recording is much worse than the consumed stream in the browser. The recording freezes for several seconds, or even dies with a connection timeout, while the stream in the browser freezes only for a few frames.
We tried different recording demos:
Recording video streams requires a proper jitter buffer, since RTP packets from the server may be lost or received out of order. mediasoup will request retransmission of lost packets but will not reorder them (it’s an SFU, not a video decoder).
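For illustration only (this is not mediasoup code), the core job of a jitter buffer — holding packets briefly so they can be released in sequence order — can be sketched like this. The class name and depth parameter are made up for the example, and real jitter buffers also handle timing, loss, and 16-bit sequence-number wrap-around, which this sketch omits:

```javascript
// Minimal sketch: reorder RTP packets by sequence number before releasing
// them. Illustrative only — no timing, no loss concealment, no seq wrap.
class ReorderBuffer {
  constructor(depth = 8) {
    this.depth = depth; // how many packets to hold back before releasing
    this.packets = [];  // pending packets, kept sorted by seq
  }

  // Insert a packet ({ seq, payload }) and return any packets that are
  // now safe to release, in sequence order.
  push(packet) {
    this.packets.push(packet);
    this.packets.sort((a, b) => a.seq - b.seq);
    const released = [];
    while (this.packets.length > this.depth) {
      released.push(this.packets.shift());
    }
    return released;
  }

  // Drain whatever is left (e.g. at end of stream).
  flush() {
    const rest = this.packets;
    this.packets = [];
    return rest;
  }
}
```

The depth trades latency for reordering tolerance, which is exactly the trade-off discussed below: a deeper buffer smooths out more disorder but delays every frame.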
I don’t know how those projects you mention deal with the jitter buffer, but obviously this is not a “mediasoup libraries” topic, so I’m changing the topic category.
We tried to modify the gstreamer command, but it always fails when the stream resolution changes.
We modified the ffmpeg command by adding -max_delay 500000, and freezes became less frequent. However, the picture still freezes forever until consumer.requestKeyframe() is called. So our current solution is to request keyframes at an interval, or on a specific message in ffmpeg’s stderr such as Missed a picture, sequence broken.
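A sketch of that workaround on the Node.js side. `consumer` (a mediasoup Consumer) and the exact ffmpeg invocation are assumed from the description above; the function name and wiring are illustrative:

```javascript
// Sketch of the workaround described above: watch ffmpeg's stderr and ask
// the mediasoup Consumer for a keyframe when the decoder reports a broken
// picture sequence.
const BROKEN_PICTURE = /Missed a picture, sequence broken/;

// Returns true (and requests a keyframe) when the stderr line signals a
// broken sequence; otherwise returns false.
function maybeRequestKeyframe(line, consumer) {
  if (BROKEN_PICTURE.test(line)) {
    consumer.requestKeyframe();
    return true;
  }
  return false;
}

// Wiring sketch (assumes `recordingArgs` and `consumer` exist):
// const { spawn } = require('child_process');
// const ffmpeg = spawn('ffmpeg', [...recordingArgs, '-max_delay', '500000']);
// ffmpeg.stderr.on('data', (chunk) => {
//   for (const line of chunk.toString().split('\n')) {
//     maybeRequestKeyframe(line, consumer);
//   }
// });
// // Fallback, as described above: also request keyframes at an interval.
// setInterval(() => consumer.requestKeyframe(), 10000);
```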
You should ensure that you run gstreamer with a proper jitter buffer and that gstreamer requests keyframes (via PLI or FIR, depending on what it supports) when it needs them (rather than requesting keyframes from mediasoup at intervals).
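As a sketch of the first part of that advice, GStreamer ships an `rtpjitterbuffer` element that can be placed in a receiving pipeline. The port, payload type, latency value, and output file below are illustrative assumptions, not taken from the thread, and would need to match your own transport setup:

```shell
# Sketch: receive VP8 over RTP with a 500 ms jitter buffer and record to
# WebM. Port, caps and latency are illustrative — adapt to your transport.
gst-launch-1.0 -e \
  udpsrc port=5004 \
    caps="application/x-rtp,media=video,encoding-name=VP8,clock-rate=90000,payload=101" \
  ! rtpjitterbuffer latency=500 do-lost=true \
  ! rtpvp8depay \
  ! webmmux \
  ! filesink location=recording.webm
```

A larger `latency` tolerates more reordering at the cost of delay; keyframe requests via PLI/FIR additionally require an RTCP path back to the sender, which a bare `udpsrc` pipeline like this does not provide.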
Is there any way we can configure the jitter buffer on the server side? I’ve been reading about jitter buffers: a large jitter buffer introduces latency, but it makes the stream much better. I ask because, if that is possible, then we could probably just use MediaRecorder on the client side in the browser for recording.
I’ve searched over and over, and there seems to be no way to configure the jitter buffer locally in WebRTC.
You can do it in the browser. I just meant that, if you want to do recording in an RTP endpoint (gstreamer, ffmpeg), such an endpoint must run a proper jitter buffer to be ready for unordered video frames.
This is not something mediasoup can or should do; it’s an SFU, not an RTP endpoint.