"Best practice" suggestion appreciated. Better to use one webRTCTransport/SendTransport for all or one for each producer?

Concentrating on the producer side only for now, I have seen some implementations that prepare this sequence once, at the time their JS client joins a room:

  • server-side “createWebRtcTransport” is called on behalf of the client
  • result is used for client-side createSendTransport
  • the “connected” event of the createSendTransport is used to trigger the server-side “connect” function
  • the “produce” callback waits for the client-side trigger (sendTransport.produce) and things go well
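
Roughly, the client side of that sequence looks like this. Just a sketch in TypeScript; `request()` is a placeholder for whatever signaling the app uses (not part of mediasoup), and error handling is omitted:

```ts
import { Device, types } from 'mediasoup-client';

// Hypothetical signaling helper: sends a message to the server and resolves with its reply.
declare function request(event: string, data?: unknown): Promise<any>;

async function setupSendTransport(): Promise<types.Transport> {
  const device = new Device();

  // Router RTP capabilities come from the server (router.rtpCapabilities).
  const routerRtpCapabilities = await request('getRouterRtpCapabilities');
  await device.load({ routerRtpCapabilities });

  // Server calls router.createWebRtcTransport() on our behalf and returns its parameters.
  const params = await request('createWebRtcTransport');
  const sendTransport = device.createSendTransport(params);

  // Fired on the first produce(): forward the DTLS parameters so the server
  // can call transport.connect({ dtlsParameters }).
  sendTransport.on('connect', ({ dtlsParameters }, callback, errback) => {
    request('connectWebRtcTransport', { dtlsParameters }).then(callback).catch(errback);
  });

  // Fired by sendTransport.produce(): ask the server to call transport.produce()
  // and hand the server-side producer id back to the client.
  sendTransport.on('produce', ({ kind, rtpParameters }, callback, errback) => {
    request('produce', { kind, rtpParameters })
      .then(({ id }) => callback({ id }))
      .catch(errback);
  });

  return sendTransport;
}
```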

I have also seen implementations that do all of the described setup with every single “produce” request from the client. From my own experience I can tell that this works too.

Now I’m in doubt about which would be the better approach:

  • Having the server-side WebRtcTransport and client-side sendTransport up and connected once during app initialization, and just re-using them as producers come and go?

  • Doing the whole sequence described above for every new producer?

As said, both approaches work for me. I just do not know which would be the recommended approach w.r.t. resource usage and maybe hidden WebRTC secrets.

TIA

This is recommended:

Mentioned by ibc here:

Which means: send all producers over one transport. The reason is that it is better for ‘bandwidth estimation’. Also, more transports means more calculations, which means more CPU consumption, but the CPU difference is not much and AFAIK you can ignore it.
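
In code that just means calling produce() repeatedly on the same transport. A rough sketch, where `sendTransport` is obtained as in your setup sequence and the track variables are assumptions:

```ts
// One send transport, many producers: one produce() call per track.
const videoProducer = await sendTransport.produce({ track: camVideoTrack });
const audioProducer = await sendTransport.produce({ track: micAudioTrack });

// Later, a screenshare can go over the very same transport as well.
const screenProducer = await sendTransport.produce({
  track: screenVideoTrack,
  appData: { source: 'screen' } // arbitrary per-producer metadata
});
```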

There is some nice discussion regarding this in my topic here:

After knowing all of this, I still use one transport per producer/consumer for better scaling on my server side :stuck_out_tongue:

On a side note: the title of your topic doesn’t include the ‘best practice’ context. If you can update it, it will be easier for other people to find when searching.


Perfect. This matches my expectation. I was also totally overwhelmed by webrtc-internals when I saw that there really were three different WebRTC connections (stats pages) for three producers.

Thanks for the quick answer. I’m falling more and more in love with Mediasoup.

Regards


Great. Yes, each transport is a peer connection and is shown separately in webrtc-internals.

This is meant to be :smiley: :

BTW: This is a very good point and literally the perfect explanation for the choice “one for all”.


Yes, the topic below enlightened me; you can read more about it there:

Thanks!!


In fact it is ‘connect’, an event triggered as a result of sendTransport.produce(), not of createSendTransport(). This is very often misunderstood.

Yes, I was a bit sloppy, sorry. You are right, of course.

A single transport is best if we’re speaking of efficiency: we can produce a stream to it and consume many streams from it, and lessen the calculations required per second by simplifying its loop.

Now realistically, you aren’t going to effectively get away with a single transport in production, as this assumes a single worker will be handling the load, which may in time reach 100% CPU. Additional workers and pipe-transporting will be required.

With this said, you’ll likely run into scenarios where a single room may require anywhere from one to potentially hundreds of workers. So you could find that a room has many transports open on the client/server side.

So overall, it’s based on your requirements but many times you’ll be pushed to many transports for optimal fail-over and scaling purposes.
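
For the pipe-transporting part, mediasoup offers router.pipeToRouter(). A rough server-side sketch of moving a producer over to a router on another, less loaded worker (assuming both routers already exist and share the same media codecs):

```ts
import * as mediasoup from 'mediasoup';

async function pipeProducerToOtherWorker(
  routerA: mediasoup.types.Router,   // router holding the original producer
  routerB: mediasoup.types.Router,   // router on the other (less loaded) worker
  producerId: string
): Promise<void> {
  // pipeToRouter() creates (or reuses) a pair of PipeTransports between the two
  // routers and a pipe consumer/producer pair for this producer id.
  await routerA.pipeToRouter({ producerId, router: routerB });

  // Consumers created on routerB's transports can now consume `producerId`
  // as if the producer lived on routerB.
}
```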

With this said, you’ll likely run into scenarios where a single room may require anywhere from one to potentially hundreds of workers. So you could find that a room has many transports open on the client/server side.

I’m not building a conference system; however, the concept of having more than one worker per room is not fully clear to me (not yet). I have also read about the possibility of having rooms spread over different physical machines…

That seems overkill for my use case, but you never know. Right now I intend to start by distributing workers (from the pool of max workers I create at server startup) round-robin between the rooms as they appear (also a pattern I found very often in the samples). Each client will create exactly two transports (send and receive) per peer (socket connection, usually one per client) and, on top of them, a theoretically (but not practically) unlimited number of producers/consumers.
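
A tiny sketch of that round-robin idea (server side; the function names and the mediaCodecs parameter are my own, not prescribed by mediasoup):

```ts
import * as mediasoup from 'mediasoup';

const workers: mediasoup.types.Worker[] = [];
let nextWorkerIdx = 0;

// Create the fixed worker pool once at server startup.
async function createWorkers(numWorkers: number): Promise<void> {
  for (let i = 0; i < numWorkers; i++) {
    workers.push(await mediasoup.createWorker({ logLevel: 'warn' }));
  }
}

// Hand out workers round-robin as new rooms appear.
function getNextWorker(): mediasoup.types.Worker {
  const worker = workers[nextWorkerIdx];
  nextWorkerIdx = (nextWorkerIdx + 1) % workers.length;
  return worker;
}

// Each new room gets a router on the next worker in the pool.
async function createRoomRouter(mediaCodecs: mediasoup.types.RtpCodecCapability[]) {
  return getNextWorker().createRouter({ mediaCodecs });
}
```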

Hoping it will work out

This is a good start, but as we move along we will see the scenarios @BronzedBroth is talking about.

Let’s take an example.

Let’s say you have 8 workers in a machine ready to serve different rooms, and each worker can serve a max of 500 producers/consumers. We are assuming that you will be using one transport for all producers and one transport for all consumers on the client side, as you are doing right now.

So far only worker_1 has been serving, and the rest of the workers are free.

Let’s say worker_1 is serving rooms 1-10 so far, and each room has 5 users, so 50 users in total. Assuming 2 transports for each user, that is 100 transports in total, 100 producers if both video and audio are on for all, and around 400-500 consumers, I guess.

Let’s say worker_1 has used around 90% of its capacity. So far so good.

Let’s assume that 10 users start sharing their screen, i.e. one user in each room. You will create 10 more producers in worker_1, as your transport is there, and around 150 more consumers, I guess. But as we said, worker_1 has already reached 90% of its capacity; how can it serve 10 more producers and 150 more consumers? It will cross its limits, causing lag in the video/audio of all users and many more issues.

You see, 2 transports per user is not going to serve us well.
Based upon the capacity of the workers, we will need more and more transports.

So in the end, what we need is this:

  • More than 2 transports per user
  • More than 1 worker per room. (This doesn’t mean we need to dedicate workers to a room; you can, it depends, but you will check whether a worker can serve more load before creating new producers/consumers on it, and move to another worker otherwise. See the sketch below.)
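
One simplified way to express that check is to track how many producers/consumers the app has created on each worker and pick a worker with spare capacity before creating new transports on it. The limit of 500 is just the assumption from the example above:

```ts
import * as mediasoup from 'mediasoup';

const MAX_STREAMS_PER_WORKER = 500; // assumed capacity from the example above

// App-level bookkeeping: we count the producers + consumers we create on each
// worker ourselves; mediasoup does not keep this count for us.
const streamCount = new Map<mediasoup.types.Worker, number>();

function pickWorkerFor(newStreams: number, workers: mediasoup.types.Worker[]) {
  for (const worker of workers) {
    const used = streamCount.get(worker) ?? 0;
    if (used + newStreams <= MAX_STREAMS_PER_WORKER) {
      streamCount.set(worker, used + newStreams);
      return worker; // this worker still has room for the new producers/consumers
    }
  }
  // Every worker is near its limit: the room has to spill onto another worker
  // (or machine), which is exactly why extra transports become necessary.
  throw new Error('No worker with spare capacity; add workers or pipe to another host');
}
```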

We have discussed things related to this in some of these topics:

Thank you very much for your valuable explanations.

But those numbers are far, far beyond anything I have in mind for this.

Thanks anyway for making me aware of this. Up to now these numbers were all just black boxes to me (I literally started dealing seriously with Mediasoup only last week).

Yes that totally depends upon the application.

It’d be best to just create a transport per user if they broadcast, as well as an additional one if they screenshare. The idea for a broadcaster is:

  1. Transport_1 will contain both or optionally video/audio of main desktop/device.
  2. Transport_2 if supported, will contain screenshare and its audio if applicable.

So you could have a 12-broadcaster x 12-viewer room using 12 transports each (client side), but if they all screenshare, 24 transports open on all clients (server side is a different story, way more…).

This is to ensure we can fan the stream out to many workers (cores) or remote workers, so users can collectively use different servers while in the same session and be fully scaled.

Whether this makes sense now or later, you’ll need it, as you will exceed server capacity in no time. There’s no other way to detail it. But get into the habit: even if for now each transport connects to the same server on different ports, it’ll save your developers a lot of work.
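
A rough client-side sketch of that layout, where the second transport is only created once a screenshare actually starts, and `createTransportViaSignaling` is a hypothetical helper wrapping the createWebRtcTransport/connect round-trip shown earlier:

```ts
import { types } from 'mediasoup-client';

// Hypothetical helper performing the server round-trip from the earlier sketch.
declare function createTransportViaSignaling(device: types.Device): Promise<types.Transport>;

async function startBroadcast(device: types.Device, cam: MediaStream) {
  // Transport_1: main device video/audio.
  const mainTransport = await createTransportViaSignaling(device);
  await mainTransport.produce({ track: cam.getVideoTracks()[0] });
  await mainTransport.produce({ track: cam.getAudioTracks()[0] });
  return mainTransport;
}

async function startScreenshare(device: types.Device, screen: MediaStream) {
  // Transport_2: created only if/when the user shares a screen, so it can be
  // routed to a different worker or server than Transport_1.
  const screenTransport = await createTransportViaSignaling(device);
  await screenTransport.produce({ track: screen.getVideoTracks()[0] });
  const audioTrack = screen.getAudioTracks()[0];
  if (audioTrack) await screenTransport.produce({ track: audioTrack });
  return screenTransport;
}
```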

Thank you for the information, although I’m not yet sure whether it clarifies or rather confuses. To be honest, I didn’t expect this level of complexity at this point. I can’t even gauge whether the discussion is now going too far for me.

Perhaps I should lay out my use case to better determine the optimal design.

My application fits under the umbrella term “drone fleet management.” Here, not everyone talks to everyone at will, and screen sharing is also rather the exception. The system can, in theory, be planned well in advance, because the usable rooms must be configured first and their maximum number is always known.

The standard use case is: one drone per room produces only video, while an unknown number of viewers observe it. In addition, there is at most one more “special” user who is able to control the drone via the data channel and receives telemetry. This role can be delegated if necessary, or the telemetry can be distributed to a few other users. Audio is used, but again only sporadically (e.g., for a drone operator’s communication with emergency responders). Of course, the number of viewers is unknown in advance, as is their distribution across the rooms (drones).

This is the scenario, and so far it has been covered quite well with Kurento.

My original question was whether it would be better to set up the server-side WebRTC transport, and its mapping in the client, on demand for each Producer, OR to set up two transports per client once at initialization time (send and receive) and then use them at runtime for Producers and Consumers, with Consumers being the main use for most clients.

My impression was that the recommendation was to do the latter. Anything more is a bit beyond my imagination right now, and I’m also not sure I’ll ever reach the level of complexity mentioned last. I currently can’t tell whether I’m already worrying unnecessarily about things that will never occur in my application anyway…

I also assume that there is no advance planning support, right? Is there any tool, or are there some rules of thumb, to help one program the optimal behaviour?

You’d be fine keeping it simple as long as you don’t exceed worker limitations. If you do, then the above scenario may apply.

Sorry for any confusion.

Thanks. We’ll see. :slight_smile: