About single Router consumers limit and "pipeToRouter"

I have a question about the single-Router consumer limit. I'm sorry if this has been asked before; I went through almost every topic about pipeToRouter and didn't find a clear answer. Here is my question:
We know that a Router can support about 500 consumers. For example, I have a Router1 that already contains 100 producers and 500 consumers. Now I pipe the 100 producers in Router1 to Router2 using pipeToRouter.
question 1: Is the number of producers in Router1 also limited to 500? In my example, Router1 contains 100 producers and 500 consumers. Is that feasible? What about 500 producers + 500 consumers in a single Router?
question 2: How many consumers are there in Router1 now? 501 or 600?

A Router has no consumer limit; it is the Worker that has the limitation. The figure of ~500 is for a typical CPU, and it differs between machines, so on a heavier machine you can support more than 500 per worker.

Let's assume it is 500.

The other thing is that the 500 is the sum of producers + consumers (I'm not 100% sure about that), but a producer takes around the same CPU as a consumer does.
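To make the accounting above concrete, here is a minimal sketch of per-worker capacity bookkeeping, assuming (as the post suggests) that the practical limit applies to the sum of producers and consumers. The ~500 figure and the helper names are illustrative assumptions, not mediasoup API:

```javascript
// Assumed practical limit per worker; measure on your own hardware.
const WORKER_LIMIT = 500;

// Load model from the discussion above: producers + consumers together.
function workerLoad(stats) {
  return stats.producers + stats.consumers;
}

// Can we add `toAdd` more producers/consumers without exceeding the limit?
function hasCapacity(stats, toAdd = 1, limit = WORKER_LIMIT) {
  return workerLoad(stats) + toAdd <= limit;
}

// Example: 100 producers + 500 consumers is already over a 500 budget.
console.log(workerLoad({ producers: 100, consumers: 500 })); // 600
console.log(hasCapacity({ producers: 100, consumers: 500 })); // false
```

In practice you would replace the hard-coded limit with a number obtained by benchmarking one worker on the target machine.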

As for question 2, on Router1 you will have 501 consumers after piping.

Thank you so much for your quick reply. Yes, I know that "Depending on the host CPU capabilities, a mediasoup C++ subprocess can typically handle over ~500 consumers in total." Thank you for the reminder.

“As for question 2 on router 1 you will have 501 consumers after piping.”
This is exactly the definitive answer I desperately needed, thank you very much.

Sorry, it will be 600: you are piping 100 producers, so 500 + 100 = 600. I was assuming that only 1 producer was being piped.
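The arithmetic behind that correction can be shown with a toy model (this is not the mediasoup API, just bookkeeping): for each producer piped via `router.pipeToRouter()`, the source router gains one pipe Consumer and the destination router gains one pipe Producer.

```javascript
// Toy model of what piping adds on each side. Field names are invented
// for illustration; mediasoup's real objects look different.
function pipeProducers(src, dst, producerIds) {
  for (const id of producerIds) {
    src.consumers.push(`pipeConsumer:${id}`); // consumes on the source router
    dst.producers.push(`pipeProducer:${id}`); // re-produces on the destination
  }
}

// Router1 starts with 100 producers and 500 consumers, as in the question.
const router1 = {
  producers: Array.from({ length: 100 }, (_, i) => `p${i}`),
  consumers: Array.from({ length: 500 }, (_, i) => `c${i}`),
};
const router2 = { producers: [], consumers: [] };

pipeProducers(router1, router2, router1.producers);
console.log(router1.consumers.length); // 600
console.log(router2.producers.length); // 100
```

So piping all 100 producers raises Router1's consumer count from 500 to 600, matching the 500 + 100 = 600 answer above.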

You must be disappointed after hearing this :slight_smile:

Yes, haha, this makes it a lot harder to scale mediasoup, because a producer on a Worker that has already reached its processing limit may be consumed at any time. Is there any good solution?

You will have to take some measurements beforehand to tackle that. You must not use a worker to its full capacity; leave some headroom for sudden spikes.

There can be multiple solutions to this, depending on your use case.

Ok, thank you very much for your reply. I now have a clear direction and know what to do. Thank you again, and I wish you a happy life.

No problem. There are some resources on this platform about scaling; you can search and read through them. I can share the ones I followed when I get some time.

You must read and understand them before starting to work on scalability, otherwise you may end up at a dead end.

Ok, thanks for the reminder. I'm reading some topics related to scaling, and there is indeed a lot of work to be done, requiring careful planning. :innocent:


Here is the list of topics I followed; they cover scalability deeply enough to get started:

Thank you very much, these materials are like treasures to me; I will read and understand them carefully. I am lucky to have joined this warm community. I am new to this great product, mediasoup, but I will study hard, grow, and help others in the future, like you! :yum:


A worker does not need to use an entire core of power; in production I would run many workers, about 4-8 per core, with different weight factors. For example, with a server running 32 workers handling broadcasts at 15 slots per worker, I would allow that while the server is mostly idle, but if overall usage grows to say 80-90%, I would restart workers and re-route users to a new server immediately.

Overall, all servers will sit around 50-80% usage, depending on session size.

I'll never know whether a room will have 10 viewers one minute and 50 the next; I just prepare or find space for the users and restart workers if necessary to expand stream range.
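The placement-with-headroom idea described above can be sketched as a small selection policy. The threshold values and the `utilization` field are illustrative assumptions (a real system would derive utilization from its own measurements, e.g. worker CPU usage), not mediasoup API:

```javascript
// Stop placing new users on a worker above this utilization fraction,
// leaving headroom for sudden spikes (value is an assumed example).
const SOFT_LIMIT = 0.8;
// Above this, restart the worker and re-route its users elsewhere.
const DRAIN_LIMIT = 0.9;

// Pick the least-loaded worker that still has headroom.
// Returning null signals "no capacity here, scale out to another server".
function pickWorker(workers) {
  const candidates = workers.filter((w) => w.utilization < SOFT_LIMIT);
  if (candidates.length === 0) return null;
  return candidates.reduce((a, b) => (a.utilization <= b.utilization ? a : b));
}

function shouldDrain(worker) {
  return worker.utilization >= DRAIN_LIMIT;
}
```

With this split, placement decisions and drain decisions use different thresholds, so a worker that can no longer accept new users is not necessarily restarted right away.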

Zaid shared some good topics; really follow those and whatever else you find of use. There is no single way of doing this.


Thank you so much, this helped me a lot, kudos!
“restart workers and re-route users to a new server immediately.”
This is a very bold idea. I had the same idea, but I gave up on it for the moment and didn't try it, because I don't know how much impact it would have on users. What does the user experience when a worker restarts? Is it momentary (basically imperceptible), or will all videos stop for a while?

Videos will stop for a while, maybe 2-3 seconds; it depends. You can show a loader in the video player or the user's profile picture, just like Zoom, to improve the UX. For video it doesn't matter much, but for audio, if it takes more than 2-3 seconds the user will surely notice. In some scenarios this is totally inevitable, so it is all OK in the end. The better your scalability calculations are, the smaller the impact of this video/audio start-stop problem.

I understand now.
When a Worker is suddenly about to exceed its expected load, this solution can serve as the last line of defense among our various scaling strategies, ensuring that a Worker about to exceed its load does not stop serving. Of course, the premise is that we have planned resources for the various situations.
Thank you all.

It has an amazing impact: not only can you correct loads when a room becomes massive or returns to a small size, you can actively stop network attacks and, in theory, never be taken offline. Users won't notice much of this beyond a 1-3 second pause for whoever was on the selected worker while another is found.

If users are experiencing network lag, they're happier to see that I can move them to different servers and still scale.

That's right, thanks for the pointer; I'll apply it to my extension model. Once I've finished the preliminary model, I'll come back to share and discuss it.

I still have a doubt here: I don't know what the reason for the worker restriction is.
mediasoup official said: “A Worker represents a mediasoup C++ subprocess that runs in a single CPU core.”
and: "Depending on the needed capability, the server side application using mediasoup should launch as many workers as required (no more than the number of CPU cores in the host) and distribute 'rooms' (mediasoup routers) across them."

Does it mean that if my server has 4 vCPUs and I create only one worker, then even if this worker exceeds its load (~500), the other 3 vCPUs stay idle?

Yes, one worker can't go beyond one core. To fully utilise the CPU you must create multiple workers. The recommendation is not to open more workers than the number of CPU cores; you can open more than that, but it is not recommended, because if not managed properly it causes contention between the worker processes and leads to problems.

So when you see a worker getting close to its limit, open another worker and distribute the rooms between them, as described in the docs.

I understand now, thank you! :slight_smile:
