Is it "best practice" to have one router per worker, or is it okay to have more than one router per worker?

Hello,

I have integrated the mediasoup API into my own project, and I was able to get multiple audio producers and consumers working across different devices. There were some hiccups, but it did work.

My current design includes a “Room Manager” that is in charge of orchestrating the peers inside a single “room”. When a router starts to get saturated, the Room Manager requests another router from whichever worker has the least load (my current design allows multiple routers within a single worker).

Currently, a “room” is made up of many routers that interact with each other, coordinated by the “Room Manager”.
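For concreteness, the least-loaded selection step could be sketched like this (plain JavaScript; `WorkerPool` and its method names are my own illustration for this post, not part of the mediasoup API):

```javascript
// Hypothetical sketch of the "Room Manager" picking the least-loaded worker
// when a room needs another router. Load is approximated here by the number
// of routers each worker currently hosts; a real app might weigh transports
// or consumers instead.
class WorkerPool {
  constructor(workers) {
    // Map each worker to the number of routers it currently hosts.
    this.load = new Map(workers.map((w) => [w, 0]));
  }

  // Return the worker with the fewest routers and bump its count.
  pickLeastLoaded() {
    let best = null;
    for (const [worker, count] of this.load) {
      if (best === null || count < this.load.get(best)) best = worker;
    }
    this.load.set(best, this.load.get(best) + 1);
    return best;
  }

  // Call this when a router on that worker is closed.
  release(worker) {
    this.load.set(worker, Math.max(0, this.load.get(worker) - 1));
  }
}
```

The Room Manager would then call `worker.createRouter(...)` on whichever worker `pickLeastLoaded()` returns.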

I was reading the source code for https://dogehouse.tv and their mediasoup implementation seemed so simple that it made me question whether I was overcomplicating and over-engineering my own solution.

So far I have arrived at the following five conclusions, and I would like feedback from someone who has already integrated mediasoup into their application.

  1. In general, one peer will have two transports: one dedicated to sending and one dedicated to receiving. The peer will have one producer (if it chooses to produce audio) and N consumers (assuming listening is the default state of the app).

  2. It is up to the application to keep track of the producers, consumers, transports, workers and routers, and to release any references to these objects once they are closed.

  3. It is better to have one router per worker than N routers per worker.

  4. Within these routers, you should designate one or two routers as “producer routers” and the rest as “consumer routers”, connected via pipeToRouter. This helps with scaling within a single machine.

  5. If you want to scale horizontally, meaning multiple SFUs instantiated on N machines, you will need a signaling server to facilitate that conversation. This is the part I am still confused about. Are we supposed to create another transport and start producing to the other voice server? Is it as simple as pipeToRouter with the listenIp changed to the remote IP? I don’t understand this part yet.
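A minimal sketch of point 4, assuming the routers expose mediasoup’s `router.pipeToRouter()` (the `fanOutProducer` helper is my own name for illustration, and error handling is omitted):

```javascript
// Fan a producer out from a "producer router" to several "consumer routers"
// on the same host. router.pipeToRouter() creates (or reuses) a PipeTransport
// pair between the two routers and re-produces the producer on the
// destination router; peers then consume from their assigned consumer router.
async function fanOutProducer(producerRouter, consumerRouters, producerId) {
  const piped = [];
  for (const consumerRouter of consumerRouters) {
    const { pipeProducer } = await producerRouter.pipeToRouter({
      producerId,
      router: consumerRouter,
    });
    piped.push(pipeProducer);
  }
  return piped; // one piped producer per consumer router
}
```

This is only a sketch of the wiring; the tradeoff between producer-heavy and consumer-heavy router layouts depends on the app, as discussed below.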

Thank you for your time and help to anyone who chooses to engage with this post.

– Juan

You should really read the documentation; it’s well written and explains most of this.

  1. Yes. There can be two or more transports for send/recv.
  2. Correct, that bookkeeping should be done by your app as well, so watch it.
  3. No difference.
  4. Can’t speak for others, but my tests would suggest a consumer-heavy approach if you want many-to-many.
  5. Create a pipe connection between the two servers and share the producer id so it can be re-consumed and sent back out again.
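To make answer 5 concrete, here is a hedged sketch of piping a producer to a second server with PipeTransports. The `signal()` callback stands in for whatever signaling channel your app uses (socket.io, WebSocket, HTTP, …) and is an assumption, not part of mediasoup; field names follow my reading of the mediasoup v3 API and should be double-checked against the docs.

```javascript
// Sketch: pipe a local producer to a remote mediasoup server.
// signal(action, data) is a hypothetical app-level helper that asks the
// remote server to do something and resolves with its reply.
async function pipeToRemoteServer(localRouter, producer, signal) {
  // 1. Create a PipeTransport listening on this server's media interface.
  const pipeTransport = await localRouter.createPipeTransport({
    listenIp: '0.0.0.0',
  });

  // 2. Ask the remote server (over signaling) to create its own
  //    PipeTransport and tell it where to reach ours.
  const remote = await signal('createPipeTransport', {
    ip: pipeTransport.tuple.localIp,
    port: pipeTransport.tuple.localPort,
  });
  await pipeTransport.connect({ ip: remote.ip, port: remote.port });

  // 3. Consume the producer locally over the pipe...
  const pipeConsumer = await pipeTransport.consume({
    producerId: producer.id,
  });

  // 4. ...and tell the remote server to re-produce it on its router so its
  //    own peers can consume it there.
  await signal('produce', {
    kind: pipeConsumer.kind,
    rtpParameters: pipeConsumer.rtpParameters,
  });
  return pipeConsumer;
}
```

Within a single mediasoup process, `router.pipeToRouter()` does all of this for you; the manual PipeTransport dance is only needed across hosts.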

Read the documentation. It’s all there! :stuck_out_tongue:


Trust me, I have been reading for several weeks. I appreciate the feedback :slight_smile:

I will continue to read! Thank you!

Lots of examples exist to help with further understanding. A few weeks isn’t bad; I’d estimate a few more to get comfortable. It’s not a small library, and it is quite powerful. Luckily it’s well documented, but yeah, take some time there.

One update on #5 though: if you’re asking about signaling servers, that part isn’t documented because it’s up to you, but some examples use socket.io etc. to make it happen. If you’re more comfortable with secure WebSocket, that’s doable too, to relay your messages server-to-server-to-peer.


This also matches my understanding so far: there is no reason to take on the complexity of multiple routers, and I don’t see any scalability advantage to having them. Honestly, I’m not entirely sure why the API even supports multiple routers per worker… it seems like it would be a lot simpler (and have no disadvantages) if the concepts of “worker” and “router” were simply merged.

It depends on the use case. You may want to run one big Router (many transports) plus N minor Routers (a few transports and producers/consumers) in the same Worker (so on the same CPU core) to distribute your custom app load. Let’s not assume that every Router is gonna handle a full “multiparty videoconference with many users”. That’s up to each application.


@ibc FWIW, I’m not assuming that… in fact, my pet use case involves Twitch-like streaming with no “multiparty videoconference”. I just don’t see any reason for the existence of an API that supports multiple routers… if you want to have three video calls of five people each and ten Twitch-like streams, it seems like if you allocate them all into one router there would be no performance difference over allocating them into fifteen routers, but it means you don’t have to think about the existence of multiple routers. Even features like the audio-level observer don’t work at the level of a router: they take lists of individual producers. What am I missing? You designed this API, and you seem to think people might sometimes want to have multiple routers, so if you are willing to enlighten us, I’m sure we would all appreciate it greatly ;P.

A Router means an isolated context of producers and consumers with a specific set of media codecs. You are missing that.
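As an illustration of that isolation, two routers in the same worker could be created with different codec sets via `worker.createRouter({ mediaCodecs })`; the codec entries below are abbreviated and the variable names are mine:

```javascript
// Illustrative codec sets only (parameters abbreviated). Each Router is
// created with its own mediaCodecs, e.g.:
//   const voiceRouter = await worker.createRouter({ mediaCodecs: audioOnlyCodecs });
// Producers and consumers in one router are then isolated from the other.
const opus = { kind: 'audio', mimeType: 'audio/opus', clockRate: 48000, channels: 2 };
const vp8 = { kind: 'video', mimeType: 'video/VP8', clockRate: 90000 };

const audioOnlyCodecs = [opus];        // e.g. a voice-only room
const audioVideoCodecs = [opus, vp8];  // e.g. a room that also carries video
```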

Just to make sure I understand this, if you plan to use the same set of media codecs everywhere, there’s no benefit (performance or otherwise) to using more than one router per worker?

Max

If you call router.close() you close all transports created in that router, and hence all producers and consumers. Other than that, there are no other benefits. We can imagine a router as a container of transports.

Awesome! Thanks for the clarification.