Is the maximum number of consumers per worker ~500 with respect to CPU?

It is mentioned in the official docs here: mediasoup :: Scalability, and in a Discourse topic as well here: A room with 100 participants

Does this mean that there is a hard limit of 500 for a worker, or that if I use a high-end server the same worker can handle more than 500 consumers?

I am using an AWS g4dn.8xlarge instance, which is a high-end server with 32 cores.

I am using 1 worker per call, so each call has its own worker. I show at most 20 streams to each participant, so a worker will handle at most 25 participants, since 20 × 25 = 500 consumers. Is this right?

Can you please confirm whether, even with this kind of high-end server, I still have the limitation of 500 consumers per worker?

This is the CPU utilization with 15 participants, which equals 210 consumers: the worker's CPU core is at around 20%. By that measure, the core would reach 100% with around 1050 consumers. So a worker on this server can handle 1050 consumers, is this right?
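The linear extrapolation above (210 consumers at ~20% of one core → ~1050 at 100%) can be sketched as a small helper. `estimateMaxConsumers` is a hypothetical function, and the linearity assumption is only an approximation:

```javascript
// Linearly extrapolate a worker's consumer capacity from one observed sample.
// Hypothetical helper; assumes CPU cost grows roughly linearly with the
// consumer count, which is only an approximation in practice.
function estimateMaxConsumers(observedConsumers, observedCpuFraction, targetCpuFraction = 1.0) {
  if (observedConsumers <= 0 || observedCpuFraction <= 0) {
    throw new Error('need a positive sample');
  }
  return Math.floor(observedConsumers * (targetCpuFraction / observedCpuFraction));
}

// 210 consumers at ~20% of one core extrapolates to ~1050 consumers at 100%:
console.log(estimateMaxConsumers(210, 0.20));       // 1050
// A safer budget targets e.g. 95% rather than a fully saturated core:
console.log(estimateMaxConsumers(210, 0.20, 0.95)); // 997
```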

~500 consumers is subject to your architecture/design/hardware.

Your usage seems correct, but there are some factors to consider: the CPU cost of piping routers, and the additional producing/consuming needed to spread the load.

Consider the rooms: one may have a 3x6 session and another a 6x12 session. Logically we’ll want to put these two rooms on a single worker, but what if that worker gets overloaded because one of the sessions suddenly gets huge? The load balancing will be very hard to do while still ensuring you can use 100% of the CPU core.

Just my two cents.

This makes sense, thanks for guiding.

Instead of using 1 worker for multiple rooms, what if I open 1 worker per room, and if a specific room wants more participants, I open another worker for that room? This keeps things simpler: every room knows the worker it is using is its own and is not shared with anyone, and it can destroy that worker at any time without worrying. What do you think about this?

The only issue I see is that the number of workers can exceed the number of cores available, but I guess the OS will handle that efficiently. What do you think?
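A minimal sketch of that per-room bookkeeping, assuming a worker factory is passed in (with mediasoup that would be `mediasoup.createWorker()`; `RoomWorkers` itself is hypothetical glue, not a mediasoup API):

```javascript
// Per-room worker bookkeeping: each worker belongs to exactly one room, so a
// room can close its workers at any time without affecting other rooms.
// `createWorker` is injected; with mediasoup it would be mediasoup.createWorker().
class RoomWorkers {
  constructor(createWorker) {
    this.createWorker = createWorker;
    this.rooms = new Map(); // roomId -> array of workers
  }

  // Add one more worker to a room (e.g. when it needs more participants).
  async addWorker(roomId) {
    const worker = await this.createWorker();
    if (!this.rooms.has(roomId)) this.rooms.set(roomId, []);
    this.rooms.get(roomId).push(worker);
    return worker;
  }

  // Close every worker owned by the room; safe because workers are not shared.
  closeRoom(roomId) {
    for (const worker of this.rooms.get(roomId) || []) worker.close();
    this.rooms.delete(roomId);
  }
}
```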

You can definitely use more workers than your logical core count, but workers may then compete for cores. Your OS will balance the threads as best as possible, but you can potentially starve resources if two workers max out a single core.

You’ll feel like you’re scaling but aren’t really. Just program your signalling server to strategically keep track of the users on different machines/workers.

When a user broadcasts, your signalling server should know which server is producing them, whether there’s space to consume, or whether the stream should be piped to another server and consumed further. In that case you could have Server A accept the broadcast but be full; Server B re-produces this broadcast after pipeToRouter/PipeTransport is connected, so when a new user joins, they’ll know Server B is potentially available for consuming the broadcast. Sometimes you could have Servers A/B/C/D/E/F/G all involved in a single room, so having the signalling server aware of which server to tell to close transports, etc., will let you easily build an algorithm that best matches your needs.
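A sketch of that signalling-side bookkeeping: `BroadcastRoutes` is a hypothetical helper, but the idea is that after `router.pipeToRouter()` (a real mediasoup API) resolves, the signalling server records that a piped copy exists, so new joiners can be pointed at a router with spare capacity:

```javascript
// Signalling-side routing table: for each broadcast (producer), track which
// routers/servers can currently serve it. Hypothetical helper class.
class BroadcastRoutes {
  constructor() {
    this.routes = new Map(); // producerId -> Set of routerIds
  }

  // Record where the broadcast is originally produced (e.g. Server A).
  addOrigin(producerId, routerId) {
    this.routes.set(producerId, new Set([routerId]));
  }

  // Record a piped copy, e.g. after router.pipeToRouter({ producerId, router })
  // has connected, so Server B is now available for consuming this broadcast.
  addPipedCopy(producerId, routerId) {
    const set = this.routes.get(producerId);
    if (!set) throw new Error(`unknown producer ${producerId}`);
    set.add(routerId);
  }

  // Candidate routers a newly joined user could consume this broadcast from.
  routersFor(producerId) {
    return [...(this.routes.get(producerId) || [])];
  }
}
```

On top of this table, a placement algorithm can pick the least-loaded candidate, or trigger another pipe when every candidate is full.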

Just my two cents.

Ok, this looks good, thanks for the detailed answer.

Just for confirmation: are my calculations above correct?

Seems correct; you just need to be sure about CPU usage. 20% could rise to 30% for several seconds; knowing a min/median/max is a good start for establishing a good number/range.

Yes, I noticed that: it goes up 5-10% for a few seconds and then goes back to normal. Thank you for your guidance.

@BronzedBroth, is there a way in mediasoup to decide whether a new worker is needed or not? The docs mention that a worker can handle around 500 consumers with respect to CPU; for my server it is around 1050 consumers per worker, as mentioned above.

That is the manual way. Is there any other way, like an API, to automate this process, by which I can check when a worker is going to be full and a new worker is needed to support more consumers?

That’s all up to you. I have made a small post on this, but the idea and how to deploy it are up to you.
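One hedged starting point for automating this: mediasoup’s `worker.getResourceUsage()` reports cumulative CPU time for the worker process, so sampling it twice gives a CPU fraction. The helper below is pure so it is easy to test; the field names follow mediasoup’s `ru_utime`/`ru_stime`, and it assumes the CPU times and the timestamps are in the same unit (verify the units for your mediasoup version):

```javascript
// Estimate a worker's CPU fraction (0..1) from two resource-usage samples.
// With mediasoup, each sample would be built from `await worker.getResourceUsage()`
// plus Date.now(); assumes ru_utime/ru_stime and timestamp share the same unit.
function cpuFraction(prev, curr) {
  const cpuTime =
    (curr.ru_utime + curr.ru_stime) - (prev.ru_utime + prev.ru_stime);
  const wallTime = curr.timestamp - prev.timestamp;
  return wallTime > 0 ? cpuTime / wallTime : 0;
}

// e.g. 300 units of CPU time spent over 1000 units of wall time -> 0.3 (30%):
console.log(cpuFraction(
  { ru_utime: 100, ru_stime: 50, timestamp: 0 },
  { ru_utime: 300, ru_stime: 150, timestamp: 1000 }
)); // 0.3
```

Sampling this on an interval and comparing against a threshold is one way to decide when to spin up the next worker.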

This is right, but it will not be a perfect way to do it.

If I don’t know exactly when a worker is going to be full, then the rest is just estimation. Maybe my server core can handle 1500 consumers; if I falsely estimate it at 800, that can cause issues: I may open more workers than needed, and the number of workers can exceed the CPU’s core count and cause contention.

You cannot guess; you must know what won’t push the core past 95% CPU. So your server may not handle 13 producers, 13 re-produced streams, and 1-6 pipe transports all at once. You’ll have to make your calculations there, see what it can handle, and set limits.

If tests showed a PipeTransport costs 10% CPU (it doesn’t really), then we could only ever create it 10 times; but with that in mind, would we create it ten times, or just a few times?

So definitely make your assumptions based on what any user can potentially achieve, whether they actually do or not. That is the idea: some cores may never exceed 50% usage, but the day that worker gets a massive room on it, it’ll load right up!

I agree with this approach, and I have actually implemented it this way, but it is still a manual approach. I was wondering why a mediasoup worker can’t tell us that the core it is running on is going to be full soon. Maybe something to add to mediasoup in the future, but this is definitely needed.

Because that’s not up to the workers; that’s up to your signalling/routing server to decide. A worker should never tell management what to do: if you allowed that, you’d have every worker screaming at you, with no logical reason, about whether they are busy or not…

IMO, boss your workers around.

Makes sense, thanks for your time.

@BronzedBroth, is the piping cost usually around 10% CPU, or was that just a random number?

I think it should be a small number, as it is producing on one end and consuming on the other, so it should put only 1 consumer’s load on the CPU? Is this right?

It’s definitely less; that was just an example. With your routing, keep CPU usage in mind so that you don’t, for instance, over-consume/produce on a server.

Ok, am I right that a pipe transport should put only 1 consumer’s load on the server?

No… A PipeTransport is the connection between workers, remote or local. Each produced stream needs to be consumed into the PipeTransport, then produced and consumed out to all viewers.

So in other words, several users could share the same PipeTransport, but that pipe has many produced/consumed items to fan it out further.
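Following that description, the consumer load of piping one router’s producers to another can be counted roughly like this (a hypothetical back-of-the-envelope helper using the accounting from this thread: one pipe consumer per piped producer on the origin, plus one consumer per viewer of each re-produced stream on the destination):

```javascript
// Rough consumer accounting when `producers` streams are piped from an origin
// router to a remote router where `viewers` users watch each stream.
// Hypothetical helper reflecting the discussion, not a mediasoup API.
function pipeFanoutLoad(producers, viewers) {
  return {
    originConsumers: producers,           // one pipe consumer per piped producer
    remoteConsumers: producers * viewers  // each viewer consumes each re-produced stream
  };
}

console.log(pipeFanoutLoad(13, 5)); // { originConsumers: 13, remoteConsumers: 65 }
```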

Yes, this is right, but I was actually talking about the load the PipeTransport itself puts on the CPU, not the load of users consuming the media on the piped router. So the PipeTransport itself has a load of n consumers, where n is the number of producers piped from router 1 to router 2. This seems right, doesn’t it?

I was checking out the pidusage library here:

Do you think it would be far better to check the worker’s CPU usage instead of manually tracking it against a max consumer count?
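As a sketch of that direction: the `pidusage` package resolves per-process stats (including `cpu` as a percentage) for a PID such as mediasoup’s `worker.pid`. The poller below takes an injected `getStats` function, which with the real package would be `() => pidusage(worker.pid)`; the threshold and interval are arbitrary choices:

```javascript
// Poll a worker's CPU usage and flag it when it crosses a threshold.
// `getStats` is injected; with pidusage it would be () => pidusage(worker.pid),
// which resolves an object whose `cpu` field is a percentage.
function watchWorker(getStats, { threshold = 80, intervalMs = 5000, onBusy }) {
  const timer = setInterval(async () => {
    const stats = await getStats();
    if (stats.cpu >= threshold) onBusy(stats.cpu);
  }, intervalMs);
  return () => clearInterval(timer); // call the returned function to stop polling
}
```

The signalling server would react to `onBusy` by marking the worker full and routing new producers/consumers elsewhere.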