Is the maximum number of consumers per worker 500 with respect to CPU?

Not at all. Too many issues with that.

I can imagine a broadcast going from 1 viewer to 100 viewers, but if we unthinkingly put many broadcasts on the same server and they all start going from 1 viewer to 100 viewers, that’s an overloaded server and the CPU check you made never helped…

It’s best to have a formula to avoid that entirely.

Do you mean that the calls to check CPU usage with that library will be frequent, and since they use some CPU themselves they could cause bottlenecks?

I guess the library executes some shell commands and captures their output, so it shouldn’t use many resources.

Sure…

I’m just hinting that it’s not used for load balancing media servers at all; it conflicts more than it helps, but I’m not really trying to detail all the problems. The pros are minimal in your case, and I’d only consider CPU monitoring useful for perhaps opening more workers if necessary and closing the smallest one if overloaded… (even then I wouldn’t really count on it at all… you’ll find yourself in scenarios with many rooms and varied usage, and it all crashing on you due to weird load offsetting).

Beyond that you’re on your own to trial it and see. Work out a formula for distributing users in code, not by CPU usage; know from stress testing how high to set your limits.
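
Rough sketch of what I mean by a formula, in code rather than CPU readings (the WorkerSlot shape and the 400 cap are placeholders; the real cap has to come from your own stress tests on your own hardware):

```ts
// Count-based routing sketch: pick the least-loaded worker by a consumer count
// that you track yourself, and refuse the worker once it hits a stress-tested cap.
interface WorkerSlot {
  worker: unknown;       // your mediasoup Worker instance
  consumerCount: number; // increment when a consumer is created, decrement on close
}

const CONSUMERS_PER_WORKER_LIMIT = 400; // hypothetical value, stress-test your own

function pickWorker(slots: WorkerSlot[]): WorkerSlot | null {
  if (slots.length === 0) return null;
  // Least-loaded worker by tracked count, ignoring momentary CPU readings.
  const candidate = slots.reduce((a, b) => (a.consumerCount <= b.consumerCount ? a : b));
  return candidate.consumerCount < CONSUMERS_PER_WORKER_LIMIT ? candidate : null;
}
```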

Can you please explain this one? How will this happen?

Whenever a new consumer is needed, I will check the worker to see whether it is OK to consume from there or whether it would be better to create a new worker.

This is one of those “I told you already” situations.

Unfortunately, without routing logic (a formula) you will in fact overload a worker. This means that if several rooms were using the same worker and consumption exceeded its limits, you now have several rooms stuck or lagging hard.

You can put the rooms on different workers, but a single room can exceed the limits of a worker (even an individual broadcast viewed by many), so that alone is not a good idea.

The only time making more workers is beneficial is when your workers cannot milk an entire core. So perhaps going from 4 → 5 workers on a 4 vCore machine could be optimal in some cases, but that assumes you’re scaling well and have run a medium/max-use test.
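
For example, sizing the pool from the core count looks something like this (a sketch assuming mediasoup v3; the +1 is only worth trying after a proper load test shows one worker can’t saturate a core):

```ts
// Size the worker pool from the machine's core count, not from live CPU readings.
import * as os from 'os';
import * as mediasoup from 'mediasoup';

async function createWorkerPool(extraWorkers = 0): Promise<mediasoup.types.Worker[]> {
  // extraWorkers = 1 gives the 4 -> 5 case on a 4 vCore machine.
  const count = os.cpus().length + extraWorkers;
  const workers: mediasoup.types.Worker[] = [];
  for (let i = 0; i < count; i++) {
    workers.push(await mediasoup.createWorker());
  }
  return workers;
}
```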

I already have a formula to distribute rooms between workers. I come up with a number based on my CPU capability and distribute the load between workers accordingly. I was looking for an automated way to distribute load between workers, and for that I was planning to use the pidusage library to see how much load (in %) a worker is putting on one core, but you suggested it would be better to stick to manual calculations.
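
For context, this is roughly the check I was planning to run (a sketch assuming mediasoup v3 and pidusage’s promise API; the workerCpuPercent helper is just illustrative):

```ts
// Sample a worker process's CPU%. A mediasoup worker is single-threaded, so the
// reading roughly maps to one core, but it is an instantaneous value and lags
// behind sudden viewer spikes.
import pidusage from 'pidusage';
import type { types as mediasoupTypes } from 'mediasoup';

async function workerCpuPercent(worker: mediasoupTypes.Worker): Promise<number> {
  const stats = await pidusage(worker.pid); // { cpu, memory, elapsed, ... }
  return stats.cpu;
}
```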

I still can’t understand how checking the load of a worker process on one core can cause the problems you mentioned. Maybe I am missing something, but I will give it a try and see how it goes.

Thank you so much for your time, brother.