1-N (8000-10000 viewers) Broadcasting [Urgent app for production]

I am developing a 1-N broadcasting architecture with mediasoup. Here are my questions:

  1. If 1 router can handle at most ~500 consumers, can I have 16 routers within the same worker to handle my 8000+ consumers?

  2. The article on scalability suggests I open 1 worker & 1 router per CPU core and pipe streams to the routers in their respective workers, meaning I will have 17 workers & 17 routers: 500 consumers go to each router, and the producer takes its own router as well (see the sketch below my questions).

Please, I need advice on what to do based on my thought process (1 & 2) above.
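To make (2) concrete, this is roughly the piping call I have in mind for getting the broadcaster's stream into a viewer router that lives in another worker (an untested sketch on my side; the function and variable names are mine, not from the docs):

```ts
import * as mediasoup from "mediasoup";

// Assumes producerRouter (where the broadcaster publishes) and viewerRouter
// (living in a different worker) already exist, and `producer` is the
// broadcaster's audio or video Producer on producerRouter.
async function pipeBroadcastToViewerRouter(
  producerRouter: mediasoup.types.Router,
  viewerRouter: mediasoup.types.Router,
  producer: mediasoup.types.Producer
): Promise<void> {
  // pipeToRouter() internally sets up a pair of PipeTransports and
  // re-creates the producer on the destination router, so WebRTC
  // consumers can then be created on viewerRouter as if it were local.
  await producerRouter.pipeToRouter({
    producerId: producer.id,
    router: viewerRouter,
  });
}
```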

I wouldn’t think so. Once you exceed a core's capacity with the broadcasts, you will need to pipe them further to other servers/cores. So to sum up: you won't get a reliable 500 per core; you'll probably have to assume less and assign weights so you don't overwhelm things.

A broadcaster is expected to be piped to more than one server for fanning out to viewers; it all depends on your load, but you need to sort out that logic.

I have about 20 cores, more than 64 GB of RAM and over 500 GB of SSD.

So let's say 450 viewers per router, one worker per core, and I spawn across 16 cores, each with its own worker and router (roughly as in the sketch below).
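For reference, this is how I picture spawning that layout (again just an untested sketch from my side; the codec list and port ranges are placeholders):

```ts
import * as mediasoup from "mediasoup";

// Illustrative codec set; the real app negotiates whatever it needs.
const mediaCodecs: mediasoup.types.RtpCodecCapability[] = [
  { kind: "audio", mimeType: "audio/opus", clockRate: 48000, channels: 2 },
  { kind: "video", mimeType: "video/VP8", clockRate: 90000 },
];

// One worker (and one router) per core, 16 of them for viewers; each
// router would then receive ~450 viewer consumers.
async function createViewerRouters(numCores = 16) {
  const routers: mediasoup.types.Router[] = [];
  for (let i = 0; i < numCores; i++) {
    const worker = await mediasoup.createWorker({
      rtcMinPort: 40000 + i * 1000,       // placeholder port ranges
      rtcMaxPort: 40000 + i * 1000 + 999,
    });
    routers.push(await worker.createRouter({ mediaCodecs }));
  }
  return routers;
}
```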

Do you think I will achieve my goal with this design?

And what do you suggest, please?

Let’s talk about requirements. I run many 4 vCore @ 8 GB RAM machines, and all I can say is that a machine at 60% overall usage isn't using more than 2 GB of RAM. Requirements can be quite low; as for storage, I use just the OS plus the service requirements, which is a few GBs.

Most processors run the same instructions/speeds/etc. these days, so there is little difference. But to sum this up fast: a single broadcast on a single worker (core) could be over-consumed, take more than that worker can handle, and end up adding latency and other defects to the real-time situation.

You would need to make sure you understand what the usage may be, weight it, and set limits, so that if you know you need to fan out to other servers for viewers, those servers can serve them properly.


To keep it simple for you, let's say you set a single core to 15 broadcasts and you assume each will be piped 2-10 times; the core/server that consumes them will transmit to viewers, and you make sure not too many broadcasts are overusing any one viewer server.
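As a rough illustration of the kind of weighting/limit logic I mean (the numbers and bookkeeping here are made up for the example; none of this is a mediasoup feature, it's all app-side):

```ts
// App-side bookkeeping: cap broadcasts per core and consumers per viewer
// router, and always pick the least-loaded router that is under its limits.
interface RouterLoad {
  routerId: string;
  broadcasts: number; // producers piped into this router
  consumers: number;  // viewer consumers created on this router
}

const MAX_BROADCASTS_PER_CORE = 15;   // example figure from above
const MAX_CONSUMERS_PER_ROUTER = 450; // example figure from above

function pickViewerRouter(loads: RouterLoad[]): RouterLoad | undefined {
  return loads
    .filter(
      (l) =>
        l.broadcasts < MAX_BROADCASTS_PER_CORE &&
        l.consumers < MAX_CONSUMERS_PER_ROUTER
    )
    .sort((a, b) => a.consumers - b.consumers)[0]; // least loaded first
}
```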

YOUR GOAL IS VERY POSSIBLE. I would say, though, have at least 2 Gbps network up/down; possibly better.


Do take my above specs as a real-time usage case: I don't perform HLS or any recording, it's live only.


I think, on top of the piping effort, you should implement HLS and some CDN. If you want real-time, it'll cost CPU ten-fold when serving hundreds to thousands of viewers. If you can sacrifice a few seconds of latency for viewers, you can use HLS; I would suggest HLS for large-audience settings, mixed with some real-time. With that said, you should weigh your usage and make sure you can do what you need; you may have to pipe a stream to many servers to be made into an HLS format for 10-50-100K viewers. You design how crazy it gets; just do good testing.

But if you need everything real-time at sub-second latency, you will want to mix my above comment in: give some sessions real transports to consume, and otherwise have all viewers watch an HLS playlist file/etc.
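If you do go the HLS route, the mediasoup-side shape is usually something like this (a sketch only; the IP/ports are placeholders, and the ffmpeg/SDP half that actually writes the HLS playlist is left out and entirely your own setup):

```ts
import * as mediasoup from "mediasoup";

// Hand a (possibly piped) broadcast to an HLS packager such as ffmpeg by
// sending its RTP to a PlainTransport. The packager would read an SDP
// generated from the consumer's rtpParameters and write HLS segments.
async function forwardToHlsPackager(
  router: mediasoup.types.Router,
  producer: mediasoup.types.Producer
) {
  const transport = await router.createPlainTransport({
    listenIp: "127.0.0.1", // packager runs on the same host in this sketch
    rtcpMux: false,
    comedia: false,
  });

  // Where the packager listens for RTP/RTCP (placeholder ports).
  await transport.connect({ ip: "127.0.0.1", port: 5004, rtcpPort: 5005 });

  // Consume the broadcast onto the plain transport; these rtpParameters
  // are what the SDP handed to the packager must describe.
  const consumer = await transport.consume({
    producerId: producer.id,
    rtpCapabilities: router.rtpCapabilities,
    paused: false,
  });

  return consumer;
}
```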

I am building for real-time, @BronzedBroth.

Yeah, so get good at setting limits and auditing a server, so that you can let a user broadcast video and/or audio that gets piped to many servers/cores where other users can view it.


Let’s get extreme with it now: say 1 broadcast used up an entire core just fanning out to viewer servers; we would then need additional servers/cores to serve that stream out to the viewers. It gets complex, but if you can figure all of that out you should be fine. 1,000 viewers may be tough, depending!
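Concretely, once same-host pipeToRouter() isn't enough, the cross-server version of this is built on PipeTransport; the sending side looks roughly like the following (a sketch, with all of the signaling between the two servers left up to you):

```ts
import * as mediasoup from "mediasoup";

// Pipe a broadcast to ANOTHER SERVER (pipeToRouter only covers routers on
// the same host). The remote ip/port and the exchange of kind/rtpParameters
// happen over your own signaling channel, which is only hinted at here.
async function pipeToRemoteServer(
  localRouter: mediasoup.types.Router,
  producer: mediasoup.types.Producer,
  remote: { ip: string; port: number } // obtained from the remote server
) {
  const pipeTransport = await localRouter.createPipeTransport({
    listenIp: "0.0.0.0", // placeholder; use the server's real interface
  });

  // Connect our PipeTransport to the remote one (the remote server does the
  // mirror of this using the local ip/port from our pipeTransport's tuple).
  await pipeTransport.connect({ ip: remote.ip, port: remote.port });

  // Consume the broadcast over the pipe...
  const pipeConsumer = await pipeTransport.consume({ producerId: producer.id });

  // ...and send kind/rtpParameters to the remote server so it can call
  // produce() on its own PipeTransport and fan out to its viewers there.
  return { kind: pipeConsumer.kind, rtpParameters: pipeConsumer.rtpParameters };
}
```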

–

It’s quite a bit of work: something doable, but not easy.