Pipe media to different processes on the same host

Hi mediasoup team!
I'm using Node's cluster module to use all the cores on the host for better performance, so I have several processes and want to share media between them.
Let's say I have 40 processes. If I use PipeTransport and, in the worst case, have to pipe every router in every process to every other one, I have to open 1560 ports for each of my rooms (40 * (40 - 1)).
That is too much. If I have 100 rooms at the same time I need 156,000 ports, which is not possible.
I read the docs and I know I can use pipeToRouter, but it needs the Router instance.
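For reference, this is roughly how pipeToRouter looks when both routers live in the same Node process (just a sketch; `routers` and `producer` stand for objects from my own code):

```js
// Sketch: pipeToRouter needs the actual Router instance of the destination,
// so it only works when both Routers are reachable from the same Node process.
const producerRouter = routers[0]; // router that owns the Producer
const consumerRouter = routers[1]; // router where we want to consume it

await producerRouter.pipeToRouter({
  producerId: producer.id,  // a Producer previously created on producerRouter
  router: consumerRouter    // <-- the Router object itself is required here
});

// After this, consumerRouter can create Consumers for producer.id
// as if the Producer had been created on it directly.
```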


Can you suggest what I can do?
Can I do an SFU between my processes?
Thanks for your support.

This has been covered many times in this forum, even recently. Please check other similar topics.

Hi.
I have been checking since yesterday and have read a lot of topics, but I didn't find anything that really helps me.
Can you please link me to some topics that could help? Or any small advice on how to forward media between the host's workers (Node workers) without opening more ports, or only a few.
Thanks a lot :rose: :rose:

Can DirectTransport help me?
If yes, how should I use it?

If you type pipeToRouter or PipeTransport into the search field on the forum you’ll find a bunch of relevant threads.
You can also go to the repo and check the tests to see how to use it and why pipeToRouter requires a Router.
It doesn’t make a lot of sense answering the same question multiple times.

Once you read those and have some more specific question, it would be possible to answer meaningfully.

For the remote case (summed up, I hope):

  1. Your signal server tells media-server-1 to
    let MyPipe = await Router.createPipeTransport({ "listenIp": "123.123.123.123", "enableRtx": true });
  2. You store a reference to this transport under a server ID, so you can clean it up later and also connect it.
  3. Send the IP/port back to the signal server along with the server ID for reference.
  4. The signal server sends this information over to media-server-2, which then checks whether it has already created its own pipe transport and, if not, creates one now (repeating step 1).
  5. Finish the connection with
    MyPipe.connect({ "ip": Received.ip, "port": Received.port });

Once they're connected, it's really about re-producing the broadcast on media-server-2 and allowing consumers to view the producer. :slight_smile:
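A rough sketch of those steps in code (IPs, ports, `router1`/`router2`, `producer` and the received signaling values are placeholders; in real code the signal server relays these values between the two media servers):

```js
// --- on media-server-1 (router1 already exists there) ---
const pipe1 = await router1.createPipeTransport({
  listenIp: '123.123.123.123',
  enableRtx: true
});
// Send pipe1.tuple.localIp / pipe1.tuple.localPort to the signal server,
// together with an ID identifying media-server-1.

// --- on media-server-2 (router2 already exists there) ---
const pipe2 = await router2.createPipeTransport({
  listenIp: '124.124.124.124',
  enableRtx: true
});
// Send pipe2.tuple.localIp / pipe2.tuple.localPort back the same way.

// --- finish the connection on both sides (values received via signaling) ---
await pipe1.connect({ ip: remote2.ip, port: remote2.port }); // on media-server-1
await pipe2.connect({ ip: remote1.ip, port: remote1.port }); // on media-server-2

// --- re-produce the broadcast on media-server-2 ---
// On media-server-1: consume the original Producer over the pipe.
const pipeConsumer = await pipe1.consume({ producerId: producer.id });
// On media-server-2: produce it again with the consumer's parameters
// (kind/rtpParameters have to be signaled from server 1 to server 2).
const pipeProducer = await pipe2.produce({
  id: producer.id,
  kind: pipeConsumer.kind,
  rtpParameters: pipeConsumer.rtpParameters
});
// Regular consumers on router2 can now consume pipeProducer.
```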

I posted this elsewhere, you may find it by searching, but it's relatively easy, and if you haven't clustered your media server with workers, re-fork the process!
(I forgot how to properly embed the link; there's lots more there too, and that way these topics will filter to the right places.)


OK, I will check again! Maybe I missed something or didn't explain my question well.
Anyway, thanks.

Thanks. I'm already using PipeTransports and they work well,
but the problem is that in the scenario below my port limits won't allow it.
Imagine I have 8 nodes and each node has 8 cores (64 cores in total).
If I connect all cores (each core being a worker on one of the 8 nodes) together with PipeTransports in a full mesh, I need 4032 ports (cores * (cores - 1)).
That is OK for now, but with, say, 800 cores or more I would need over 600,000 ports just to set up my workers to work together.
If I could just handle the workers (processes) within one node without a full mesh, my program would work at large scale.

Where does that number of ports come from? Each participant will need 2 ports when they both send and receive data. Each pipe transport needs one port on each side. Also, I don't understand why you are talking about a full mesh. The browser will run out of memory and crash long before you saturate even an 8-core server if you forward everything to everyone.


This is not about clients.
You are right that each client needs 2 ports for sending and receiving.
Let's look at this first:

I have 9 workers, OK?
The peers in my room are assigned by a round-robin algorithm to one of the 9 processes, and every peer can be both a producer and a consumer.

If the first peer is a producer and produces some media in router1 (in worker1 of the Node cluster),
the others need to consume it, so I have to create a PipeTransport between worker1 and every other worker
(1 to many):
worker1 createPipeTransport (1 port) → worker N createPipeTransport (1 port) → connect them

If a peer in worker2 wants to produce, this has to happen again:
worker2 createPipeTransport (1 port) → worker N createPipeTransport (1 port) → connect them
(except worker1, I know)

I cannot know in advance which peer will produce (anyone can), so I have to connect all workers together at server start.
This is what I mean by full mesh; a rough sketch of that setup is below:
1 → 2, 3, 4, 5, 6, 7, 8, 9
2 → 1, 3, 4, 5, 6, 7, 8, 9
…
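To make it concrete, here is roughly what that startup code does, pretending for the sketch that all routers were reachable from one place (in reality each call happens in its own process, coordinated over IPC); it shows where the cores * (cores - 1) port count comes from:

```js
// Full-mesh sketch: one PipeTransport (one UDP port) on each side of every
// worker pair, so N workers need N * (N - 1) ports, e.g. 40 * 39 = 1560.
async function connectAllRouters(routers /* one Router per worker/process */) {
  for (let i = 0; i < routers.length; i++) {
    for (let j = i + 1; j < routers.length; j++) {
      const pipeA = await routers[i].createPipeTransport({ listenIp: '127.0.0.1' });
      const pipeB = await routers[j].createPipeTransport({ listenIp: '127.0.0.1' });
      await pipeA.connect({ ip: pipeB.tuple.localIp, port: pipeB.tuple.localPort });
      await pipeB.connect({ ip: pipeA.tuple.localIp, port: pipeA.tuple.localPort });
      // store pipeA/pipeB so producers can later be piped over them
    }
  }
}
```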

I hope I explained that clearly.

In my case I don't know which scenario I can use to scale this up! And imagine I have some other nodes too:
N nodes * N cores (workers)

There is something wrong if you have a full mesh of so many workers. It is almost impossible that you need so many clients talking to everyone else at the same time.

You either need more, smaller groups of clients, in which case a single worker is probably enough, or you have a few "speakers" and a lot of "viewers", in which case you use cascading and avoid a full mesh too.

Either way, the problem you describe with many thousands of ports doesn't exist.


I got it.
Let's say my rooms are as you described, e.g. 50 peers: 1 speaker and 49 listeners.
As I said, I have N cores on my node, so I have N workers, each running with a different PID.
When a request comes to my server,
it goes to one of those workers.

How can I make a room sticky so that it always uses the same worker?

I know this is not about mediasoup and its support,
but I have no choice.

Can you please help me with this one? :rose:

Just create a new worker for every room. It wouldn't even use a full CPU core :man_shrugging:
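Something like this, as an untested sketch (mediasoup v3; `rooms`, `getOrCreateRoom` and `mediaCodecs` are just placeholder names):

```js
const mediasoup = require('mediasoup');

const rooms = new Map(); // roomId -> { worker, router }

// Create a dedicated Worker (and Router) per room, so all producers and
// consumers of a room live in the same Worker and no pipe transports
// between workers are needed.
async function getOrCreateRoom(roomId, mediaCodecs) {
  let room = rooms.get(roomId);
  if (room) return room;

  const worker = await mediasoup.createWorker();
  const router = await worker.createRouter({ mediaCodecs });

  room = { worker, router };
  rooms.set(roomId, room);
  return room;
}

// When the room closes, free the Worker (this terminates its mediasoup
// C++ subprocess and releases its ports).
function closeRoom(roomId) {
  const room = rooms.get(roomId);
  if (!room) return;
  room.worker.close();
  rooms.delete(roomId);
}
```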


I was just confused :no_mouth::expressionless:
I don't even need the Node cluster, because I can create N mediasoup workers in one process.
But my bottleneck will be the signaling server.
I have to do something about scaling the WebSockets without using multiple instances of my app.
:joy::man_facepalming:t2: Anyway, I learned a lot.
Thank you and the others :rose::rose:

If you have any advice for scaling the signaling server in this case, please tell me :sweat_smile::heart:

Nazar is great!

You should consider avoiding designs like this!

By that I mean the design where a room is open on two chat servers and the users are talking between the servers. This could be a light load, but assume both servers are lit up at 100%: the handler of the message queue would overload and the chat would lag.

You'd get much higher throughput by keeping the users belonging to a room on their own server.

Here's a fun example:
Discord uses a message queue, quite a large one, and they use it to synchronize data across all users. The pro here is that you can be on different servers and still connect to chats and see what's going on.

The con, however, is response time: it is very slow at times to fetch millions of queued-up messages, so when a busy server is spammed, users across many servers may not see messages for quite a while until the queue unloads. Not only that, these queues generally don't scale well in the sense of "double the power, double the throughput"!

I'd suggest doing some homework on Unicast, Sockets, MQTT, WebSocket, and loads more, and be careful of overloading the latter.
