Access worker, router, and transport objects from one Node process to another

Is there any way I can access worker, router, and transport objects from one Node process to another?
I was thinking that maybe I can construct the worker, router, and transport objects from their respective process IDs?

I am actually planning to run multiple processes of my app, i.e. to run on all cores of my machine. Now my worker, router, and transport instances are in one of the processes, and I need to access them in another process of my Node app. Is there any way I can do that?

EDIT:
You can use this approach:

But the better version of this is what @BronzedBroth described here, using a broker server which routes the requests to the relevant media server:

> Is there any way I can access worker, router, and transport objects from one Node process to another?

No, the processes are strictly locked to their own scope and heap in memory. This means that you can’t invoke a function in another process without inter-process communication (IPC).

> I was thinking that maybe I can construct the worker, router, and transport objects from their respective process IDs?

You can, as long as, when you go remote, the sending/receiving server has a unique ID on the network so the other servers/the broker know how to get in contact. I’d probably just stick with a UUID for workers, routers, transports, servers, etc…

> I am actually planning to run multiple processes of my app, i.e. to run on all cores of my machine. Now my worker, router, and transport instances are in one of the processes, and I need to access them in another process of my Node app. Is there any way I can do that?

As said, you’d need to commit to some kind of IPC/signaling. This can be done through WebSockets/HTTPS, for example, if connecting many remote servers, or through master/slave communication on a single machine.

Just know that the more processes you add and communicate between, the more complicated the routing becomes and the more performance gets pinched. So you won’t necessarily get linear scaling without carefully planning your app.
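For the single-machine case, a minimal sketch of that master/worker style using Node’s built-in cluster module (the `targetIndex`, `action`, and `callId` fields are invented for illustration, and `cluster.isPrimary` needs Node 16+):

```js
// Minimal sketch: relay messages between forked workers via the primary.
const cluster = require('cluster');
const os = require('os');

if (cluster.isPrimary) {
  // Fork one process per core and keep handles so we can relay messages.
  const workers = os.cpus().map(() => cluster.fork());

  for (const worker of workers) {
    worker.on('message', (msg) => {
      // Forward the message to the worker the sender addressed.
      const target = workers[msg.targetIndex];
      if (target) target.send(msg);
    });
  }
} else {
  process.on('message', (msg) => {
    // Here you would look up the local mediasoup worker/router and act on it.
    console.log(`pid ${process.pid} received`, msg);
  });

  // Example: ask the primary to forward a request to the first worker.
  process.send({ targetIndex: 0, action: 'createTransport', callId: 'call_1' });
}
```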

Ok thanks. One way I am thinking of is that I will use Redis pub/sub to do inter-process communication between the Node processes. Let’s say the mediasoup worker I am trying to access is inside process_2 but I am in process_1; then what I will do is publish an event to Redis, and process_2 will pick this event up and do the necessary operations on the mediasoup worker.
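Roughly what I have in mind, as a sketch (assuming ioredis; the channel names and payload fields are just for illustration):

```js
// Sketch of the Redis pub/sub approach: one channel per process.
const Redis = require('ioredis');

const pub = new Redis();
const sub = new Redis(); // a connection in subscriber mode cannot also publish

const MY_ID = 'process_2'; // this process owns the mediasoup worker

// process_2 listens on its own channel for requests addressed to it.
sub.subscribe(MY_ID);
sub.on('message', (channel, raw) => {
  const msg = JSON.parse(raw);
  // Look up the local worker/router/transport for msg.callId and act on it.
  console.log(`${MY_ID} handling`, msg);
});

// Meanwhile, process_1 would publish its request like this:
pub.publish('process_2', JSON.stringify({
  action: 'connectTransport',
  callId: 'call_1',
}));
```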

You can probably imagine right away that process_1 could send many requests to process_2, and if process_2 is full and tries to warn process_1 of this, process_1 may still send more requests until the warning is seen. At that point you’d likely have lost sync with any kind of count.

For ease, I could suggest a simple WebSocket server that all media servers and chat servers connect to; this server’s job is to route users and keep track of where people are, along with the counts.

Valid point; inter-process communication via Redis pub/sub can cause the bottleneck issue you just mentioned.

Can you please tell me more about the other approach you mentioned? So apps will connect to the WebSocket server, and then the WebSocket server will send the socket event to the correct media server? How will this work? Is there a name for this approach that I can search for on the internet?

The broker server acts as a mediator for all the servers: you’d connect all media servers to this broker, as well as the chat servers. You’d keep track of servers, rooms, broadcasts, subscriptions, and more through this server.

It’s a bit tough to explain, but look at it this way: if we had one room spread across two servers, we’d need a mediator; we’d need to send the message to each server to fan it out.

So in this setup, the broker calls every shot and may perform tasks like CloseBroadcaster, CloseConsumers, and all sorts of others, and it will need to know which server consumed/produced/etc.
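As a rough sketch of that bookkeeping (every name here is invented; none of this is a mediasoup API):

```js
// Sketch: the broker maps each room to the media server that owns it.
const rooms = new Map(); // roomId -> { serverId, producers: Set<string> }

function registerProducer(roomId, serverId, producerId) {
  if (!rooms.has(roomId)) {
    rooms.set(roomId, { serverId, producers: new Set() });
  }
  rooms.get(roomId).producers.add(producerId);
}

// The broker knows which media server owns the room, so it can tell
// exactly that server to close the broadcaster.
function closeBroadcaster(roomId, producerId, sendToServer) {
  const room = rooms.get(roomId);
  if (!room) return;
  room.producers.delete(producerId);
  sendToServer(room.serverId, { action: 'closeBroadcaster', roomId, producerId });
}
```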


As far as I’ve seen, there’s no ready-made solution for this type of stuff, so it’s generally all going to be custom for your app.

Ok, so the signaling, i.e. the socket events from the client apps, will be received only by this broker server, which will then decide which media server to call for the action; the respective media server will perform the task and send the response back to the broker server, and the broker server will send the response back to the app via the socket, etc. Right?

If yes, then how will the broker server talk to the other media server processes? Via something like Redis pub/sub, or something native like Node’s net library?

It’s all pub/sub; there’s not much to it, except that with a broker server you can do processing and keep track of events easily enough. Without this processing, we may as well let all the servers scream at each other and see who listens. LOL

WebSocket is really fast. There’s always the fear that you’ll max out resources and your design will need to change slightly, but overall you could route tens of thousands of media servers before that becomes an issue; I could imagine 0.5 s to loop over that list of servers.

Now, that’s not counting the chat servers connected; depending on the routing for chats, you may want to modify your design to be efficient. By that I mean my chat servers connect to a ChatBroker; this is where I can isolate a room’s chats/actions without bugging the MediaBroker much.


Scaling will always remain a complicated process; you’ll definitely hit points where you overload and need to make adjustments.

> If yes, then how will the broker server talk to the other media server processes? Via something like Redis pub/sub, or something native like Node’s net library?

Over WebSocket. When the broker tells a media server to produce a stream for broadcasting, the broker will know which server, which chat, and which room it belongs to, and can use this information later for subscribers and all sorts of things.

The concept can lean towards advanced routing, so start small: try communicating between two chat servers with the broker routing one message from ChatServerA → Broker Server → ChatServerB (all for the same room).
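A minimal sketch of that exercise, assuming the ws package (the message shapes are invented):

```js
// Sketch: chat servers register their rooms; the broker fans a room
// message out to every other chat server hosting that room.
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });
const serversByRoom = new Map(); // roomId -> Set of chat-server sockets

wss.on('connection', (ws) => {
  ws.on('message', (raw) => {
    const msg = JSON.parse(raw);
    if (msg.type === 'register') {
      // A chat server announces the rooms it hosts.
      for (const roomId of msg.rooms) {
        if (!serversByRoom.has(roomId)) serversByRoom.set(roomId, new Set());
        serversByRoom.get(roomId).add(ws);
      }
    } else if (msg.type === 'chat') {
      // ChatServerA -> Broker -> ChatServerB, all for the same room.
      for (const peer of serversByRoom.get(msg.roomId) ?? new Set()) {
        if (peer !== ws) peer.send(raw.toString());
      }
    }
  });
  ws.on('close', () => {
    for (const set of serversByRoom.values()) set.delete(ws);
  });
});
```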

Ok thanks. The only difference between this approach and the other one (the Redis pub/sub one I mentioned above) is the division: chat server, broker, and media server are all separated. Is this right?

Can this communication over WebSocket between the chat, broker, and media servers be costly resource-wise? I think socket.io-type things have their limitations.

Wouldn’t it be better if I keep everything on one server and use inter-process communication between the processes? That communication should be fast compared to WebSockets.

Oops, I edited the post above by accident. But there’s not much of a speed difference; we’re talking microseconds. Limitations exist no matter what; good routing fixes that. But knowing the routing isn’t going to error is crucial.

Oops, I lost your important comment :smiley:

Ok nice explanation, I will give it a try, thanks for your time :+1:

Consider this: Discord’s gateway has ~1,000 voice servers; they have about 2.6+ million users in voice channels, with traffic at more than 250 Gbps across 30+ data centers.

A broker would represent a region/zone/center. We’d start with US East first and assume we have about 50-200K users to handle and about 30-40 voice servers to route. Doesn’t seem so bad now.

With that said, routing media is probably the lightest task; it’s not often spammed. Chat, however, would be spammed, and it’d be ideal to open a separate server (or servers) to handle this task. So you could have a MediaBroker and a ChatBroker handling different types of routing.

Nice. I am still a little confused, and I think that confusion can only be eliminated by getting my hands dirty with it, but for confirmation let me explain my scenario and how I understood this concept.

I don’t have a chat server; I just have a WebRTC media server that is dedicated to all this call stuff. So I have media servers only. There are n Node server processes running, where n is the number of cores in my CPU, which is, let’s say, 20.

All users get connected via socket.io on this Node server to proceed with the call. Whenever a call is initiated, the specific Node.js process creates the mediasoup workers, routers, transports, consumers, and producers.

This is how I proposed to do it:
Let’s say that call_1 is on process_1 and now user_2 wants to join it, so he comes to our app and the socket connects him to our server, but not on process_1: on process_2. So when he signals the server to let him join the call, process_2 checks the workers, routers, etc. of call_1 and finds out that they are not on process_2 but on process_1, so it routes the request to process_1, and then process_1 handles it and all works perfectly. But as you said, this will have some bottleneck issues.

This is how I understood what you described:
Instead of letting the users connect to the media server directly via socket, they will connect to the broker server via socket and request it to let them join the call; the broker will check the request, analyze it, and route it to the appropriate media server. So the above example with this approach will be like this:
We have call_1 on process_1 and now user_2 wants to join, so he connects to the broker server via socket; the broker server checks which media server(s) are handling call_1 and finds that process_1 (media server) is handling it, so the broker server will pass this request to process_1 and everything will work great with no issue at all.
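In code, the broker-side lookup I imagine would be something like this sketch (all names invented):

```js
// Sketch: the broker finds the media server that owns the call and
// forwards the join request, tagging it so the reply can be routed back.
const callOwners = new Map([['call_1', 'process_1']]); // callId -> owning server
const mediaServers = new Map(); // serverId -> broker's socket to that server
const userSockets = new Map();  // userId -> the user's socket on the broker

function handleJoin(userId, msg) {
  const ownerId = callOwners.get(msg.callId); // 'process_1' for call_1
  const media = mediaServers.get(ownerId);
  media.send(JSON.stringify({ ...msg, replyTo: userId }));
}

// The media server's answer comes back tagged, so the broker can relay it.
function handleMediaReply(reply) {
  userSockets.get(reply.replyTo)?.send(JSON.stringify(reply));
}
```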

Yes exactly.

However, you do have a much simpler setup; you can get away with doing much less, but the idea sticks.

You have a single process that handles I/O to determine user state, so if a user does broadcast, this process is aware of it, and when requests come in the broker can route them properly. Follow this idea and you’ll have all the workers communicating just nicely!

Having no chat server is fine; just route dataChannels if you intend to use those for chatting.

Oh thanks, I’ve got the full idea now. This broker can be a separate machine on the same host, or it can be a process on the same machine. But the better option would be a separate machine; if it is a process on the same machine, then it will choke at some point because of the load, since every user is connecting to the broker, right?

And by routing we mean the inter-process communication, right?

It could choke due to network limitations; it can always choke from using too many resources as well, but that’s why we design our broker to hand off many jobs. For instance, if all the users connecting to this server start lagging the broker with computations, we can do several things, such as:

  1. Connect chat servers to the broker to handle the many connections.
  2. Open a pub/sub server that can handle anywhere from 1-25 chat servers, and connect that to the broker instead.
  3. Create a second broker for user msg, pvtmsg, sysmsg, and other actions unrelated to broadcasting or consuming.

You should have no issues with a setup like this, which you can optimize to have a lot of users able to chat and use the platform.

A route is a way or course taken; IPC is a mechanism that allows processes to communicate with each other and synchronize their actions. The broker would be our IPC.

So with the chat servers in front of the broker server, we now have this structure:
chat servers <> broker server <> media servers

With this, our broker server will not get loaded with the user connections, as this is not its job; its job is routing, or IPC. Instead, the chat servers will bear this load, and if they want to communicate with the media servers, the broker will help them do so, right?

And now the broker server will let chat servers communicate with each other, and media servers with each other, as well as let chat servers and media servers communicate with each other, right?

And this is how users will connect, pass requests, and get responses:
All users will connect via socket to the chat servers; the chat servers will do what the user requested, and if they need to perform some operation on a media server, they route the request to the broker server. The broker server, which has kept track of all the media server state, will route this request to the relevant media server to do the job and pass the response back to the chat server, which then passes the data back to the user.
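As a sketch of that round trip from the chat server’s side (assuming the ws package; the broker address and message fields are invented), a correlation ID lets the chat server match the broker’s reply to the waiting user:

```js
// Sketch: relay a user's media request to the broker and route the
// broker's reply back to that user via a correlation ID.
const WebSocket = require('ws');
const { randomUUID } = require('crypto');

const broker = new WebSocket('ws://broker.internal:8080');
const pending = new Map(); // requestId -> callback for the waiting user

broker.on('message', (raw) => {
  const reply = JSON.parse(raw);
  const respond = pending.get(reply.requestId);
  if (respond) {
    pending.delete(reply.requestId);
    respond(reply); // pass the media server's answer back to the user
  }
});

// Called when a user's socket asks for a media-server operation.
function forwardToMedia(msg, respondToUser) {
  const requestId = randomUUID();
  pending.set(requestId, respondToUser);
  broker.send(JSON.stringify({ ...msg, requestId }));
}
```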

You have mentioned a pub/sub server; what will it do?

100%

Yup, and you can apply further optimizations over time to boost chat throughput.

Yes, exactly! Now you may want a simple mechanism to determine which chat server is to be used, but other than that it sounds like goals. :slight_smile:

If your chat servers start to overwhelm your broker, you can lessen the usage. Imagine we have 1,000 chat servers we want to signal really fast, because perhaps we have a system message going out globally. We can send this to the pub/sub servers, say 5-10 of them, and have them push the message to all the chat servers they manage. Substantially fewer connections, and a far easier route in some cases.
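A rough sketch of one such relay, assuming the ws package (the port and broker address are invented):

```js
// Sketch: a pub/sub relay. The broker sends one message to each relay;
// each relay pushes it to the chat servers it manages.
const WebSocket = require('ws');
const { WebSocketServer } = require('ws');

const managed = new Set(); // sockets of the 1-25 chat servers on this relay

new WebSocketServer({ port: 9090 }).on('connection', (ws) => {
  managed.add(ws);
  ws.on('close', () => managed.delete(ws));
});

// Upstream connection to the broker.
const broker = new WebSocket('ws://broker.internal:8080');
broker.on('message', (raw) => {
  // One message in from the broker, fanned out to every managed server.
  for (const ws of managed) ws.send(raw.toString());
});
```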

As your project becomes demanding, you’ll want to offload tasks/processes as much as possible. Eventually, though, there won’t be much more we can squeeze from the setup, and we’ll need to open another zone for handling, e.g. US-East-1, US-East-2.