Broadcasting a stream from one to many using multiple mediasoup server instances

Dear Team,

Firstly, congratulations on developing such an outstanding library.

I have a requirement where I want to publish a video stream and have around 30K+ users subscribe to that live video stream. As I read in a document, one mediasoup instance can handle up to 450-500 users, but in my case the subscriber count is much larger.

So, Using mediasoup V3,

  1. I am planning to have 100 physical VMs, with a mediasoup server running on each VM.

  2. Would I be able to send the published stream from 1 instance to all other 99 physical instances? If yes, how would I do it using mediasoup v3?

  3. Is it possible for those 30K+ subscribers, connected to 100 different instances, to view the published stream?

It would be helpful if I could have a working demo example for this, so I can kick-start my POC.

Awaiting your response.

Thanks & Regards,
Amit

There is documentation about this:

It would be helpful if I could have a working demo example for this, so I can kick-start my POC.

I’m afraid there is no “working demo example” for that use case, but there is documentation. Obviously we are not gonna develop demo examples for all use cases. We may offer consultancy services in some cases.

Dear IBC,

I had gone through the documentation on scalability.

I have the following questions on it.

  1. If a stream can be routed from mediasoup instance 1 to mediasoup instance 2 (both on two separate physical machines), how is this going to happen? Do we need to specify somewhere the IP address and port of the instance to which we intend to route the stream?
  2. Also, if I need to send a stream from one mediasoup instance to many (for example, 100 instances), is it possible to do so? If yes, what configuration needs to be made?

As the documentation says:

It’s also perfectly possible to inter-communicate mediasoup routers running in different physical hosts. However, since mediasoup does not provide any signaling protocol, it’s up to the application to implement the required information exchange to accomplish with that goal. As a good reference, in order to pipe a producer into a router in a different host, the application should implement something similar to what the router.pipeToRouter() method already does, but taking into account that in this case both routers are not co-located in the same host so network signaling is needed.

So you have to check the source code of router.pipeToRouter() and do something similar but creating PipeTransports in different hosts. It’s up to you (the application) how to communicate your different hosts to signal the corresponding PipeTransports’ IPs and ports and so on.

Hi IBC,

Thanks for your last response.

With a quad-core machine, I am able to route producers to different routers that are on other workers. In my case there was a single producer and more than 10 consuming users subscribed to the producer’s stream. All 10 subscribers were on different routers (the routers are also associated with different workers on the same host). I used router.pipeToRouter() to send this producer to the other routers on the same host, and a subscriber on another router was able to see the producer’s stream.
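For reference, what I did on the single host can be sketched roughly like this (untested sketch; `sourceRouter`, `targetRouters` and `producer` stand for my own app objects):

```javascript
// Sketch: piping a single producer into several routers on the same host.
// pipeToRouter() creates the PipeTransport pair internally and returns
// the pipe Consumer (source side) and pipe Producer (target side).
async function pipeToLocalRouters(sourceRouter, targetRouters, producer) {
  const pipes = [];
  for (const targetRouter of targetRouters) {
    const { pipeConsumer, pipeProducer } = await sourceRouter.pipeToRouter({
      producerId: producer.id,
      router: targetRouter
    });
    pipes.push({ pipeConsumer, pipeProducer });
  }
  return pipes;
}
```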

Now I am more curious about sending this producer to a different host. mediasoup already has an implementation of router.createPipeTransport().

While using this, when do I have to provide my private IP or public IP in the listenIp field?

After using the above PipeTransport mechanism, is there a hook or listener on the other host for listening to this router?

It would be really helpful if you could suggest an idea for using it.

Thanks in Advance,
Amit M

Use whichever IP is appropriate to communicate both servers using UDP in both directions.

Can you detail it a bit more? I do not understand.

Let me take an example here. Let’s assume we have a user who only wants to consume streams (a subscriber), and a user who is only producing both audio and video streams (a publisher). The publisher is connected to, say, server x.x.x.1. On x.x.x.1 we have four workers created. The subscriber is connected to server x.x.x.2, which also has four workers. Now I want to send the publisher’s streams from server x.x.x.1 to the subscriber’s server x.x.x.2. To achieve this, router.createPipeTransport() will be used on x.x.x.1. Now, how would server x.x.x.2 know that someone is sending a media stream to it?

Is there a transport listener on x.x.x.2 that is used to detect incoming transports or media streams?

Because you must call transport2.connect({ ip: "x.x.x.1", port: PORT_1 }) and transport1.connect({ ip: "x.x.x.2", port: PORT_2 }), where PORT_1 and PORT_2 are retrieved via transportX.tuple.localPort.

And of course you must communicate those ports to each server, by using HTTP, WebSocket, any Pub/Sub system, or any protocol you may wish to use to communicate between servers.
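The symmetry is easier to see in code. This is just a sketch of the two connect() calls (in a real deployment each transport lives in a different process, so you signal the tuples instead of holding both objects):

```javascript
// Sketch: cross-connecting two PipeTransports that live on different hosts.
// Each side connects to the *other* side's listening IP and port, which
// mediasoup exposes via transport.tuple.
async function crossConnect(transport1, transport2) {
  await transport1.connect({
    ip: transport2.tuple.localIp,
    port: transport2.tuple.localPort
  });
  await transport2.connect({
    ip: transport1.tuple.localIp,
    port: transport1.tuple.localPort
  });
}
```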

No. Just check the code of the pipeToRouter() method. That’s exactly what you must implement, so you must signal stuff between both servers all the time.

Hi IBC,

I will go through the implementation of router.pipeToRouter() and see how I can use it for my use case.

And about the pub/sub mechanism, I will research which mechanism would be best for communicating between Node servers on different hosts.

You must do the same as pipeToRouter() but taking into account that you will create pipeTransport1 and pipeTransport2 in different servers, the pipe consumer in transport1 and the pipe producer in transport2. So you have to subscribe to events of those elements (in each respective server) and, when something happens, signal it to the other server.
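For the events part, something roughly like this (a sketch, not tested; `signal` stands for whatever function your app uses to deliver a message to the other server):

```javascript
// Sketch: server1 side. Mirror the original Producer's state onto the
// remote pipe Producer by signaling server2 when the Consumer events fire.
// 'producerclose'/'producerpause'/'producerresume' are real Consumer events.
function wirePipeConsumerEvents(pipeConsumer, signal) {
  pipeConsumer.on('producerclose', () => signal({ action: 'closePipeProducer' }));
  pipeConsumer.on('producerpause', () => signal({ action: 'pausePipeProducer' }));
  pipeConsumer.on('producerresume', () => signal({ action: 'resumePipeProducer' }));
}
```

On server2, the handler for those messages just calls close()/pause()/resume() on its local pipe Producer.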

Hi IBC,

As suggested by you, I tried to implement what pipeToRouter() does for my use case.
I used Hazelcast for the distributed cache, and for pub/sub I am using Redis.

I start the Hazelcast server and the Redis server, then I start the mediasoup server on machine x.x.x.1, create a router on this machine, and store the router in Hazelcast (routersMap.put(router.id, router)). Through Redis I send the router id from x.x.x.1 to x.x.x.2, so that when getting the router from routersMap, x.x.x.2 knows the id of the router created on the other machine.

When I then call createPipeTransport() on the router retrieved from Hazelcast (something like otherMachineRouter.createPipeTransport(transportOptions)), it gives me an error that createPipeTransport() is not a function.

Hi, please let’s avoid direct naming. I’m not the only one that can provide support here :slight_smile:

It’s unclear to me where you are calling that method. It’s a Router method, so router.createPipeTransport() does exist for sure.

It seems that you are using Hazelcast, which I assume is a kind of distributed memory. Well, you cannot store a JavaScript object and its prototype (methods, members, etc.) in shared/distributed memory. Distributed memory is just for storing/sharing data (let’s say plain JS objects or JSON objects or the like).

I cannot help much more with that. I can just say that you must call router.createPipeTransport() on a real mediasoup Router instance in both servers. I cannot help with distributed-info strategies.
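To illustrate the idea (just a sketch; `localRouters` and `reply` are placeholders for your own plumbing): the remote server receives a plain message, looks up its own local Router instance, runs the method there, and replies with serializable data only.

```javascript
// Sketch: server2 side. It receives a plain message (e.g. via Redis pub/sub)
// and calls createPipeTransport() on its own local Router instance, never
// on an object pulled out of shared memory.
async function handlePipeRequest(message, localRouters, reply) {
  const router = localRouters.get(message.routerId); // a real Router here
  const transport = await router.createPipeTransport({
    listenIp: message.listenIp                       // server2's own IP
  });
  // Send back only serializable data: the transport id and its tuple.
  reply({
    transportId: transport.id,
    ip: transport.tuple.localIp,
    port: transport.tuple.localPort
  });
}
```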

Hi Team,

As far as I can understand from the previous responses, if I want to achieve broadcasting with multiple physical hosts, I should implement something similar to what router.pipeToRouter() internally does (i.e., creating a PipeTransport pair). Based on this, I am trying to create a PipeTransport on a router which was created on another host.

Am I doing the right thing?

If yes, can you suggest a way to send router1 of host 1 to host 2 along with its JavaScript object and prototype?

Thanks in Advance,
Amit M

We cannot suggest architecture designs; that’s up to the application, and we are supposed to just provide free support about mediasoup itself. That said, you should somehow communicate between both servers (using whatever) and call routerX.createPipeTransport() and pipeTransportX.connect(...) in both servers. I insist: this is not about sharing “data” via distributed memory; you must somehow send a message from server1 to server2 and run router2.createPipeTransport() in server2 (and the same in server1).

Hi, I have implemented cascading with mediasoup. I use gRPC to communicate between sfu1 and sfu2.
The flow chart looks like this:

sfu1->>sfu2: Create remote pipe connection
sfu1->>sfu1: Create local pipe connection
sfu1->>sfu2: Connect remote connection
sfu1->>sfu1: Connect local connection
sfu1->>sfu2: AddSubscriber
sfu1->>sfu1: AddPublisher

Yeah, that makes sense. The important thing here is that you are using grpc to intercommunicate servers and tell other servers what they must run locally. So, for instance, you are running routerX.createPipeTransport() and routerX.connect() locally in each server, as it must be done :slight_smile:

Hi,

Thanks for your response on this thread.

sfu1->sfu2: Create remote pipe connection
Here, are we doing a router.createPipeTransport() from sfu1, providing the IP address of sfu2?

sfu1–>sfu1: Create local pipe connection
Here, are we doing a router.createPipeTransport() on sfu1, providing the IP address of sfu1?

sfu1->>sfu2: Connect remote connection
In steps 1 and 2 we get transports, let’s say transport1 and transport2. Now we do transport1.connect() with the sfu2 IP address?

sfu1->>sfu1: Connect local connection
transport2.connect() on sfu1 with the IP address of sfu1.

sfu1->>sfu2: AddSubscriber
sfu1->sfu1: AddPublisher
Here I am a bit unclear.

Also, it would be helpful if you could share a GitHub link to your implementation.

Thanks in Advance,
Amit M

Never stop looking at the router.pipeToRouter() method. And take into account that, with both routers (this and router) in different hosts, localPipeTransport, producer and pipeConsumer exist in host1, while remotePipeTransport and pipeProducer exist in host2.

  1. sfu1->>sfu2: invoke gRPC to call createPipeTransport(listenIp of sfu2) in sfu2; you need to return sfu2’s tuple.localIp and tuple.localPort to sfu1 via gRPC.
  2. sfu1->>sfu1: call createPipeTransport(listenIp of sfu1) in sfu1, and you will get tuple.localIp + tuple.localPort of sfu1.
  3. sfu1->>sfu2: invoke gRPC to call pipeTransport.connect(tuple.localIp and tuple.localPort of sfu1) in sfu2; this PipeTransport is the one you created in the first step.
  4. sfu1->>sfu1: call pipeTransport.connect(tuple.localIp and tuple.localPort of sfu2) in sfu1; this PipeTransport is the one you created in the second step.
  5. sfu1->>sfu2: invoke gRPC to call pipeTransport.consume() in sfu2.
  6. sfu1->>sfu1: call pipeTransport.produce() in sfu1.

After doing these steps, sfu1 will receive the media packets from sfu2.
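The six steps, seen from sfu1, look roughly like this (untested sketch; `rpc` stands for my gRPC client to sfu2 and resolves with plain data, `router1` is sfu1’s local Router, and the method-name strings are placeholders):

```javascript
// Sketch of the six-step flow from sfu1's point of view. The original
// Producer lives on sfu2, so sfu2 consumes and sfu1 produces the pipe.
async function pipeProducerFromSfu2(router1, rpc, producerId, sfu1Ip, sfu2Ip) {
  // 1. Ask sfu2 to create its PipeTransport and return its tuple.
  const remote = await rpc('createPipeTransport', { listenIp: sfu2Ip });
  // 2. Create the local PipeTransport on sfu1.
  const local = await router1.createPipeTransport({ listenIp: sfu1Ip });
  // 3. Tell sfu2 to connect its PipeTransport to sfu1's tuple.
  await rpc('connectPipeTransport', {
    transportId: remote.transportId,
    ip: local.tuple.localIp,
    port: local.tuple.localPort
  });
  // 4. Connect the local PipeTransport to sfu2's tuple.
  await local.connect({ ip: remote.ip, port: remote.port });
  // 5. Ask sfu2 (where the original Producer lives) to create the pipe
  //    Consumer; it returns the consumer's kind and rtpParameters.
  const info = await rpc('consumePipeTransport', {
    transportId: remote.transportId,
    producerId
  });
  // 6. Create the pipe Producer locally; sfu1 now receives media from sfu2.
  return local.produce({ kind: info.kind, rtpParameters: info.rtpParameters });
}
```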


Yeah, that’s the point :slight_smile:

And you should also inter-communicate certain pipeConsumer and pipeProducer events, like here.
