I've been at this for months, trialing different configurations. It wasn't easy, but I think I can explain it nicely for you guys. Here are my numbers first; just scale them up!
Bitrate: 10,000,000 (I/O)
Cores: 2 (1 for producer, 1 for consumer)
1 core = 12 viewers (6 broadcasts)
4 cores = 24 viewers (12 broadcasts)
User weight per viewer count (bitrate affected these results more than CPU did):
12 viewers = 1 weight
24 viewers = 2 weight
36 viewers = 3 weight
48 viewers = 4 weight
60 viewers = 5 weight
72 viewers = 6 weight
Here's how the weight system works: each consumer-server core is rated at 6 slots total. If a producer's stream needs to be sent out to 24 people, it costs the consumer server 2 slots; if 12, then 1 slot.
So a single consumer core here will let 12 viewers watch 6 cameras/audio streams without issue, with some adjustments however to ensure no buffering problems, etc.
The producer server gets its first broadcast and waits until a user comes to view. On that first viewing attempt, a PipeTransport is created with the consumer server; then, if no stream exists there yet, a request is made to have it re-produced for the user, and it's remembered for the next connections.
If all of the viewers leave, the consumer server resets the pipe for that producer, unless there's an active broadcast elsewhere (like a different room).
So if you're a user hanging alone on camera, it shouldn't cost the viewers' server anything.
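That lazy pipe lifecycle can be sketched as a refcount per piped producer. This is my own simplified model, not real mediasoup calls; the comments mark where the actual PipeTransport work would happen, and all names here are hypothetical:

```typescript
// Sketch of the lazy pipe + re-produce lifecycle described above.
// No real mediasoup API is used; comments mark where it would go.
class PipedStream {
  viewers = 0;
  constructor(public readonly producerId: string) {}
}

class ConsumerSide {
  private piped = new Map<string, PipedStream>();

  // First viewer triggers the pipe + re-produce; later viewers reuse it.
  addViewer(producerId: string): PipedStream {
    let stream = this.piped.get(producerId);
    if (!stream) {
      // here: create the PipeTransport and request the re-produce
      stream = new PipedStream(producerId);
      this.piped.set(producerId, stream); // remembered for next connections
    }
    stream.viewers++;
    return stream;
  }

  // When the last viewer leaves, reset the pipe, unless the broadcast
  // is still active elsewhere (e.g. a different room).
  removeViewer(producerId: string, activeElsewhere = false): void {
    const stream = this.piped.get(producerId);
    if (!stream) return;
    stream.viewers--;
    if (stream.viewers <= 0 && !activeElsewhere) {
      this.piped.delete(producerId); // here: tear down the PipeTransport
    }
  }

  hasPipe(producerId: string): boolean {
    return this.piped.has(producerId);
  }
}
```

The key point is that a broadcast with zero viewers holds no pipe, which is why a lone user on camera costs the consumer server nothing.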
The numbers are not perfect, but it might interest you to know that during my tests, CPU usage would not maintain a consistent percentage relative to user count; there were at least a few factors to consider, and if I sort those out, these numbers will be much better.
Now, if I were to add more power to this, it'd be to let a producer use 2 or more consumer servers, or to allow more than 6 weight per core for the big rooms.
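Spreading one producer across several consumer servers could be as simple as routing each broadcast to whichever server still has enough free weight. A hypothetical sketch of that routing decision (all names are mine):

```typescript
// Sketch: pick a consumer server for a broadcast of a given weight.
// freeSlots[i] is how many of server i's slots (out of 6 per core) are free.
// Returns the index of the first server with room, or -1 if all are full.
function pickConsumerServer(freeSlots: number[], weightNeeded: number): number {
  return freeSlots.findIndex((free) => free >= weightNeeded);
}
```

A fuller version might prefer the least-loaded server instead of the first fit, but the idea is the same: the weight system already gives you the admission metric.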
Enjoy, guys!