How to forward a stream from one mediasoup v3 server to another?

Thank you. From your practical experience, what is the typical size ratio between a VP8 keyframe and an interframe? Somewhere around 100:1, or something like that?

Hello!

Trying this out now. Can you please explain to me what a valid test procedure would be? I set the keyFrameWaitTime to 10000ms, just to try it out. What should my steps be to see the behavior of a delayed keyframe, i.e. video waiting for up to 10 seconds to play? I just want to make sure the parameter works; essentially my plan is to switch it between the default (1000) and 10x as much, and to verify that "something changes". What should my steps be? Right now I connect to the edge server twice in a short interval (the second 1-2 seconds after the first) and both video streams still take around 1 second to start playing. My expectation would be that the 2nd one will be stuck for anywhere from 0 to 10 seconds, or am I wrong?

Thanks!

Alexander

There is an optimization when Consumers connect, which happens just once. To test this you should force the generation of a PLI in 2 consumers at the same time. You can call consumer.pause() and then consumer.resume() on both.
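For instance, a minimal sketch of that test (assuming consumerA and consumerB are two mediasoup Consumers of the same video Producer; pause() and resume() return Promises):

// Pause both consumers, then resume both at (nearly) the same time.
// Each resume() makes mediasoup request a keyframe (PLI) from the producer.
async function forceSimultaneousKeyFrameRequests(consumerA, consumerB) {
  await Promise.all([consumerA.pause(), consumerB.pause()]);
  await Promise.all([consumerA.resume(), consumerB.resume()]);
}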

And the idea is that, this happening, one of the 2 consumers will get its video stuck for a time anywhere from 0 to keyFrameWaitTime, i.e. with keyFrameWaitTime being high enough, I should see at least 1 of the 2 videos stuck for a few seconds?

You can enable the "trace" event in the Producer and the Consumers to log keyframes in all of them and see how it behaves.
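Something like this, as a sketch (the producer and consumer variables are placeholders for your own objects):

// Ask mediasoup to emit 'trace' events for keyframes only.
await producer.enableTraceEvent(['keyframe']);
producer.on('trace', (trace) => {
  console.log('producer trace', trace.type, trace.direction, trace.timestamp);
});

await consumer.enableTraceEvent(['keyframe']);
consumer.on('trace', (trace) => {
  console.log('consumer trace', trace.type, trace.direction, trace.timestamp);
});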

The delay works; it is well tested. I just added a setting to make it configurable. I cannot help much more with demonstrating that it works.

Doesn't seem to work. Trying to pause/unpause a consumer, I get this in the log:


The keyFrameWaitTime is set to 10000…
What can be wrong?

Another test. I opened 2 client windows pulling the stream from the edge server, clicked pause/resume (a single button) on both of them as quickly one after the other as possible, then waited some seconds before clicking again. Result:

As you can see, a couple of times I managed to get keyframes even less than 1000ms apart… What can be wrong?

The option is clearly set correctly:

console.log('produce', keyFrameWaitTime);

[[10:51:57.743]] [LOG] produce 10000
[[10:51:54.370]] [LOG] produce 10000

[[14:01:50.462]] [LOG] trace { direction: 'in',
  info: …,
  type: 'keyframe' }
[[14:01:50.854]] [LOG] trace { direction: 'in',
  info: …,
  type: 'keyframe' }

Actually, once even more frequently than the minimum possible 500ms spacing. Because the ssrc of all streams is the same, it is clearly one stream. Apparently the limitation doesn't work at all :frowning:

I kept playing with it. Sometimes (once in maybe 20 tries) it works, and I get the expected behavior on pause/unpause: it stalls for ~0.5s, then audio continues but the video stays frozen, then after a few seconds the video unfreezes and the keyFramesDecoded metric in chrome://webrtc-internals increments. I managed to reproduce that 2x out of about 40 tries. In all other cases there is no delay, keyFramesDecoded increments instantly and the video continues to play… So apparently there's some bug in this limitation code which makes it work only under some narrow conditions… By constantly clicking on pause/unpause, without restarting the producer, I am able to get about one keyframe every 2 seconds on average with keyFrameWaitTime being 10000…

And the problem has nothing to do with restreaming; it happens on the origin server, too.

On the good side of things, there seem to be no other bugs as far as I can see. Just this problem prevents me from moving on with the project… Please help @ibc :slight_smile:

There is an explanation for that behavior which, indeed, is not the desired one. Let me work on it.

Thanks! If you need any help reproducing it, want to see my code, access my servers, or anything else, please ping me. Also, maybe there is a workaround?

No workarounds here. I was just fixing it.

Please try the "v3" git branch (no release yet). This is the commit. Note that the option has been renamed to keyFrameRequestDelay.
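For reference, the option is passed in ProducerOptions when calling transport.produce(). A sketch (the transport and rtpParameters here are placeholders):

const producer = await transport.produce({
  kind: 'video',
  rtpParameters,
  keyFrameRequestDelay: 10000 // ms; forward at most one keyframe request per 10s
});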

It works fine for me, BTW. It behaves completely as expected.

Thanks. A first, very crude smoke test worked out fine. We are continuing to test today and tomorrow; I will reply with a final conclusion within 16 hours.

Verified. Works as expected. Thank you!


@ibc I have been using mediasoup for the past week or so to support n-way audio (a multiparty conference with audio only) for about 20 participants in a room. I need to figure out how to scale it to 60-100 participants in a given room, assuming everyone has sufficient bandwidth and resources.
Also, is there an example or library to help me horizontally scale it for multiple rooms as well?
Every bit of guidance will be really helpful. Thanks in advance, sir.

PS: everything is already in production; I need help just scaling it up. More than happy to pay for a consultancy or make a donation. Thank you so much for the wonderful ecosystem you have created.

Please avoid direct naming; I'm not the only one who can reply in this forum.

Here: mediasoup :: Scalability

Hi, sure, my bad

I went through the docs. Anyway, is there any way I can have more than 50 participants with audio-only producers? I did go through other articles where you mentioned an assumed maximum of 50 producers. But can a router (worker) support more for an audio-only conference?

I definitely cannot say numbers (or I don't want to say numbers). Yes, audio is less expensive than video. Limits must be explored by each application and its specific usage.


At 50-100 participants you might also want to consider encoding all audio streams into a single one (MCU). If you use the standard SFU approach, each client has to receive all other audio streams. Now, audio is not as resource intensive as video, but if each stream only uses 40 kbps and you have 100 participants, then that's still about 4 Mbps for each of the participants and around 400 Mbps on the server. The server can handle this with decent hardware; mediasoup can do it. But can each of the 100 participants handle it?
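A back-of-the-envelope check of those numbers, with the assumed values above (100 participants, 40 kbps per audio stream, plain SFU fan-out):

const participants = 100;
const kbpsPerStream = 40;

// Each client receives every other participant's stream.
const perClientKbps = (participants - 1) * kbpsPerStream; // 3960 kbps ≈ 4 Mbps

// The server forwards (N - 1) streams to each of the N participants.
const serverEgressKbps = participants * (participants - 1) * kbpsPerStream; // 396000 kbps ≈ 400 Mbps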

If you instead encode all audio into one stream, you reduce your users' bandwidth requirement by 99%.

An alternative approach could be to pause (auto-mute) participants and require them to press an unmute button in order to speak, so not all streams have to be active at all times (see the sketch below). This might be a good idea not just because of scalability but because, with so many people, you will have too much noise all the time.
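A hedged sketch of that push-to-talk idea (producersByPeer and the signaling handlers are assumptions, not mediasoup API; only producer.pause()/resume() are):

// Keep every audio producer paused server-side; resume it only while
// its owner holds the unmute button.
async function setMuted(producer, muted) {
  if (muted) await producer.pause();
  else await producer.resume();
}

// e.g. in your own signaling layer:
// socket.on('unmute', () => setMuted(producersByPeer.get(peerId), false));
// socket.on('mute', () => setMuted(producersByPeer.get(peerId), true));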

Just food for thought.
