CPU/Bandwidth results for many-to-many rooms in production.

Funnily enough, be careful of user expansion. A few nights ago I had hundreds of users spawn up out of nowhere, and truthfully this was tough: the network was telling people NO!!! We're full. I had enough servers, but a few sessions were quite large while other sessions were small and failing to find a place, even though the servers still had resources. That's one reason I spawn extra workers; my software will destroy them when it's time for a full expansion.

So with this design, it's best to have someone, or an automated mechanism, rebalancing servers periodically, or immediately when a boost in viewers or broadcasts occurs.
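A minimal sketch of that kind of trigger, assuming a hypothetical `rebalanceServers()` and a `'spike'` event emitted by whatever monitoring you run; neither name comes from the thread:

```ts
import { EventEmitter } from 'events';

// Hypothetical rebalancer: move small sessions off crowded servers so
// large sessions have room to grow. Actual migration logic is app-specific.
async function rebalanceServers(): Promise<void> {
  // ...
}

const monitor = new EventEmitter();

// Periodic pass.
setInterval(() => { rebalanceServers().catch(console.error); }, 30_000);

// Immediate pass when monitoring reports a boost in viewers or broadcasts.
monitor.on('spike', () => { rebalanceServers().catch(console.error); });
```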

That makes sense, that's definitely on the benefits side. Thanks for the deep insights.

The one issue with separating producer servers from consumer servers is that we will have to use piping every time: since producers and consumers live on separate servers, we have to pipe a producer over whenever it needs to be consumed.
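For reference, a rough sketch of how that looks with mediasoup v3's PipeTransport API across two hosts; the `signalPipeInfo` helper that exchanges ip/port tuples between the servers is hypothetical, as is the exact wiring:

```ts
import { types as ms } from 'mediasoup';

// Hypothetical signaling helper: sends our tuple to the other server
// and returns that server's PipeTransport ip/port.
declare function signalPipeInfo(
  tuple: ms.TransportTuple
): Promise<{ ip: string; port: number }>;

// Producer-side server: expose the producer over a PipeTransport.
async function pipeOut(router: ms.Router, producer: ms.Producer) {
  const transport = await router.createPipeTransport({ listenIp: '0.0.0.0' });
  const remote = await signalPipeInfo(transport.tuple);
  await transport.connect({ ip: remote.ip, port: remote.port });
  // The pipe consumer's kind/rtpParameters get sent to the consumer server.
  return transport.consume({ producerId: producer.id });
}

// Consumer-side server: re-create the producer locally so regular
// WebRTC consumers on this router can consume it.
async function pipeIn(
  router: ms.Router,
  remote: { ip: string; port: number },
  info: { producerId: string; kind: ms.MediaKind; rtpParameters: ms.RtpParameters }
) {
  const transport = await router.createPipeTransport({ listenIp: '0.0.0.0' });
  await transport.connect({ ip: remote.ip, port: remote.port });
  return transport.produce({
    id: info.producerId, // keep the original producer id
    kind: info.kind,
    rtpParameters: info.rtpParameters
  });
}
```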

@BronzedBroth when you say ‘room taking up this worker’ do you mean the worker is fully loaded? And when you say ‘using 60% of its cpu’, do you mean 60% of one core, or 60% of the overall CPU across all cores?

That's speaking of a single CPU core; a worker can't exceed one core.
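That's why the usual pattern is one worker per core; a minimal mediasoup sketch, where the per-worker RTC port ranges are just an assumption:

```ts
import * as os from 'os';
import * as mediasoup from 'mediasoup';

// One mediasoup worker per CPU core: each worker is a single-threaded
// subprocess, so this is how you actually use the whole machine.
async function createWorkers(): Promise<mediasoup.types.Worker[]> {
  const workers: mediasoup.types.Worker[] = [];
  for (let i = 0; i < os.cpus().length; i++) {
    const worker = await mediasoup.createWorker({
      rtcMinPort: 40000 + i * 1000, // assumed per-worker port range
      rtcMaxPort: 40999 + i * 1000
    });
    worker.on('died', () => process.exit(1)); // let a supervisor restart us
    workers.push(worker);
  }
  return workers;
}
```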

But to follow up, I've since changed my mechanisms. If you broadcast you're now weighted: if you use a single transport, you're using a single weight. If your viewers grow, your weight grows, which might reduce a producer server from 30 broadcasts (audio/video) to 15 broadcasts once, say, 4 or more pipe transports are required.

Now the mechanism is a bit fancier than explained, but a tip: I used to close workers when they got too hot and let them rebalance, and I was able to code this to happen automatically and be super fast.
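One way to read that weighting as code; the capacity number, the 4-pipe threshold doubling the weight, and all names here are my assumptions, not the actual mechanism:

```ts
// A guess at the shape of the weighting described above.
interface Broadcast {
  transports: number;     // transports serving it on this server
  pipeTransports: number; // pipes fanning it out to consumer servers
}

const SERVER_CAPACITY = 30; // weight units per producer server (assumed)

// A plain broadcast costs one weight; once it needs 4+ pipe transports
// it counts double, halving how many broadcasts fit (30 -> 15).
function weightOf(b: Broadcast): number {
  return b.pipeTransports >= 4 ? 2 : 1;
}

function hasRoom(current: Broadcast[], incoming: Broadcast): boolean {
  const used = current.reduce((sum, b) => sum + weightOf(b), 0);
  return used + weightOf(incoming) <= SERVER_CAPACITY;
}
```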

Thanks, one more thing regarding this topic:

How do you manage AWS load balancing? I mean, a new instance takes a few minutes to load up. Do you make users wait for that period, or do you pre-load a new instance, say when CPU reaches 30%?

Even with this 30% formula, we can still have situations where users have to wait for a new instance to load up, like thousands of users joining the meeting at once.

How do you tackle this type of scenario?

I run more servers than I require; this lets me handle outages and user growth. It also buys me time if I do need to deploy servers and it's taking a while.


In your case, since these are meetings, they're generally dedicated sessions, so it'd be more than acceptable to deploy when a meeting is about to start. You can show users that the meeting is being set up. If you want immediate deploys, though, you'll need the servers ready ahead of time, which can get costly, and at that point, is it worth it?

In my case the platform will grow steadily enough to predict. If I get hit with a mass of users, it's as easy as deploying more servers; I can announce to the entire website that I need to open more servers and they'll be okay with that, especially if the site is online 24/7 with little downtime. :slight_smile:

Thanks. In my case it's a public platform and the number of users can grow really fast, so I think these are the things I can do:

  • Open a new instance when a meeting starts, sized by its expected user count. But starting an instance takes time and users won't wait that long, so I'll have to ignore this option.
  • Keep 2-3 servers free at all times, so if users grow really fast those servers can handle it while I programmatically start 2-3 new instances in the meantime. This seems to be the good option (sketched below).
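A minimal sketch of that second option, assuming a hypothetical `Pool` interface over the cloud provider's API:

```ts
// Hypothetical warm pool: always keep FREE_TARGET idle servers, and
// start replacements the moment the pool drains below that.
const FREE_TARGET = 3; // the "2-3 servers free" from above

interface Pool {
  idleServers(): Promise<number>; // servers up but not yet assigned
  launchServer(): Promise<void>;  // cloud API call; takes minutes
}

declare const myPool: Pool; // your cloud-provider wrapper

async function keepWarm(pool: Pool): Promise<void> {
  const idle = await pool.idleServers();
  for (let i = idle; i < FREE_TARGET; i++) {
    // Fire and forget: fast-growing rooms land on today's idle servers
    // while these replacements boot in the background.
    pool.launchServer().catch(console.error);
  }
}

setInterval(() => { keepWarm(myPool).catch(console.error); }, 15_000);
```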

What are your thoughts about it?

Yup. The more free servers the better, but keep it cost-effective. That's where the first option, waiting till servers spin up, can spare you money, especially at those sizes, but it has its cons; you'll have to decide.

Thanks. I think you should change the title of this post to something load-balancing related so others can find it, and move the initial topic to a new one.

I work too hard; I don't have the time to help users build their companies in the direction most suitable for them. I make zero dollars from this and develop for free most of the time. I'd rather donations go to IBC.

Now with that said, I may be slowing down my help for users, and if I obtain funds I can send off, I'm helping @ibc.

You guys need to figure things out more on your own; I am sharing too much, to be honest. Again, I'm not paid enough for this.

You are right about this.

@BronzedBroth within the same machine, do you use Node clustering to divide the socket connections over all the cores? I am trying to do this but am facing the issue mentioned here:

Any suggestions on how to handle this better?

I was thinking of not using node-cluster and instead just sticking with Node on one core. Is this okay? How do you manage this?

I’ll respond to your thread.


I fork my media servers as many times as needed; when they come online they connect to a broker server and stay in sync that way. In other words, a single server knows the state of every media server in its holding.
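A minimal sketch of that setup, where the broker is simply the parent process and the forked media servers report state over IPC; the file name and message shape are assumptions, and the post's broker may well be a separate networked server instead:

```ts
import { fork, ChildProcess } from 'child_process';
import * as os from 'os';

// State each forked media server reports back (shape assumed).
interface MediaState { pid: number; rooms: number; cpuLoad: number }

const states = new Map<number, MediaState>();

for (let i = 0; i < os.cpus().length; i++) {
  const child: ChildProcess = fork('./media-server.js'); // assumed entry point
  child.on('message', (msg) => {
    const state = msg as MediaState;
    states.set(state.pid, state); // broker always knows every server's load
  });
  child.on('exit', () => {
    if (child.pid !== undefined) states.delete(child.pid);
  });
}

// Each child would periodically run something like:
//   process.send({ pid: process.pid, rooms: roomCount, cpuLoad: load });
```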