WebRtcServer question about binding.

The documentation for listenInfos states:
"Listening protocol, IP and port objects in order of preference (first one is the preferred one)."

How exactly does this work if I've enabled both UDP and TCP plus preferTcp when creating the transport, but the webRtcServer has, in order:

{ protocol: "udp", ip: "0.0.0.0", announcedIp: IP.Public, port: 15000 },
{ protocol: "tcp", ip: "0.0.0.0", announcedIp: IP.Public, port: 15000 }

Will both protocol port bindings apply?
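For reference, the setup in question would be created roughly like this (a sketch only; worker, router, and IP.Public are assumed from context):

// A WebRtcServer with the UDP entry listed first...
const webRtcServer = await worker.createWebRtcServer({
  listenInfos: [
    { protocol: 'udp', ip: '0.0.0.0', announcedIp: IP.Public, port: 15000 },
    { protocol: 'tcp', ip: '0.0.0.0', announcedIp: IP.Public, port: 15000 },
  ],
});

// ...and a transport that enables both protocols but prefers TCP.
const transport = await router.createWebRtcTransport({
  webRtcServer,
  enableUdp: true,
  enableTcp: true,
  preferTcp: true,
});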

And what if there were multiple ports for one of the protocols, how would that work? For example:

{ protocol: "udp", ip: "0.0.0.0", announcedIp: IP.Public, port: 15000 },
{ protocol: "tcp", ip: "0.0.0.0", announcedIp: IP.Public, port: 15000 },
{ protocol: "tcp", ip: "0.0.0.0", announcedIp: IP.Public, port: 15001 }

Would the port 15001/tcp be functional/listening and used at all?

Some insight would be nice.


Not sure if we could get an rtcMinPort/rtcMaxPort option as well, for cases where ports are already in use when running many instances of mediasoup. When we create it, it'd look like this:

{ protocol: "udp", ip: "0.0.0.0", announcedIp: IP.Public, rtcMinPort: 15000, rtcMaxPort: 15032 },
{ protocol: "tcp", ip: "0.0.0.0", announcedIp: IP.Public, rtcMinPort: 15000, rtcMaxPort: 15032 }

The idea here is that once an available port is found, it gets bound.
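In the meantime, a hypothetical workaround sketch (findFreePort is a made-up helper, not mediasoup API): probe the range with Node's net module and feed the first free port into listenInfos. Note this only tests TCP bindability, and there is a small race window between probing and mediasoup actually binding:

const net = require('net');

// Made-up helper: resolves with the first port in [min, max] we can
// bind, by opening and immediately closing a throwaway TCP server.
function findFreePort(min, max, ip = '0.0.0.0') {
  return new Promise((resolve, reject) => {
    const tryPort = (port) => {
      if (port > max) return reject(new Error('no free port in range'));
      const server = net.createServer();
      server.once('error', () => tryPort(port + 1)); // in use, try next
      server.once('listening', () => server.close(() => resolve(port)));
      server.listen(port, ip);
    };
    tryPort(min);
  });
}

The resolved port would then go into both the udp and tcp entries above.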

Preferred items will get a higher ICE priority; otherwise the order of the items determines priority (the first one has the highest priority).
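One way to see the resulting order in practice (a sketch; transport is a WebRtcTransport created against that server) is to dump the ICE candidates mediasoup generates, each of which carries the computed priority:

// Higher priority = preferred by ICE; both the listenInfos order and
// the transport's prefer* flags feed into this number.
for (const candidate of transport.iceCandidates) {
  console.log(candidate.protocol, candidate.port, candidate.priority);
}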

So, for the last part: if I'm running many instances, would it be suitable to utilize the worker min/max ports for now?

WebRtcServer doesn’t use worker min/max ports.

That’s a bummer.

The worker min/max ports allow me to run many applications featuring mediasoup and never have a port conflict: if a port is in use, the next available one is selected.
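For reference, that behavior comes from the standard worker settings (a minimal sketch, inside an async function; the range values are arbitrary examples):

// Plain worker transports pick the next free port in this range,
// which is what lets many mediasoup apps coexist on one host.
const worker = await mediasoup.createWorker({
  rtcMinPort: 15000,
  rtcMaxPort: 15032,
});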

If WebRtcServer doesn’t support this dynamic behavior yet (or won’t), that’s completely fine; I just thought I’d ask.

I’ll stick with my workers to define ports for now. Awesome feature anyway!

If you run Apache on your server, do you expect it to find an available random listening port, or do you ensure that ports 80 and 443 are not used by any other process?

I mean, depending on the service (Apache included), I would dynamically update port bindings and report them back to a load balancer. So yes and no?

I see no issue with dynamically binding ports (finding the first available) in this case: there could be a lot of opening and closing of workers to fit resource requirements, so ports would get confusing. A provided range, however, would prevent confusion and let us handle instance creation/destruction dynamically.


Just a thought. I’m A-OK with the worker method and not fussed about using a single port; it just could be nice. :slight_smile:

It’s not difficult to assign a port range if you know the exact (or even potential) number of workers on a server. In my case I set aside up to 32 ports (max 32 workers == max 32 CPU cores) over which to spread the WebRtcServer ports (regardless of protocol). We find that the packets-per-second limitations of AWS become an issue well before CPU usage, so in reality we never run more than a 16 vCPU instance.

This load of course assumes you are scaling “rooms” across workers (or even instances) using router piping. This is an area where the docs are a little unclear, but in our case we then set aside another large N*N internal port range (where N = # CPU cores) which the routers can access via rtcMinPort/rtcMaxPort.
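For the piping part, that would be the standard router.pipeToRouter() call (a sketch only; the router and producer names here are assumptions), and it’s the pipe transports it creates that bind within the worker’s rtcMinPort/rtcMaxPort range:

// Sketch: spread one "room" across two workers by piping a producer
// from routerA (worker 1) into routerB (worker 2).
await routerA.pipeToRouter({
  producerId: someProducer.id, // a producer that lives on routerA
  router: routerB,             // a router on another worker
});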

So, for example, if we have a 32 vCPU instance we have two config sections:

workerSettings: {
        // These ports will be utilized when piping between routers (internal)
        rtcMinPort: RTCServerConsts.WorkerMinPort, // e.g. 41000
        rtcMaxPort: RTCServerConsts.WorkerMaxPort, // e.g. 42024 (41000 + 32 * 32)
},
webRtcServerOptions: Array(numWorkers) // numWorkers = 32
        .fill() // fill with undefined so .map() visits every index
        .map((_, workerIndex) => ({
          // IPs are determined by environment
          // e.g. EC2 -> listen =  internal_ip, announced = external_ip
          // e.g. LOCAL -> listen = 127.0.0.1, announced = undefined
          listenInfos: ips
            .map(({listen, announced}) => [
              {
                protocol: 'udp',
                ip: listen,
                announcedIp: announced,
                port: RTCServerConsts.WebRtcServerStartPort + workerIndex, // where WebRtcServerStartPort = 40000
              },
              {
                protocol: 'tcp',
                ip: listen,
                announcedIp: announced,
                port: RTCServerConsts.WebRtcServerStartPort + workerIndex,
              },
            ])
            .flat(),
        })),

And to create a worker something like:

static async _spawnWorker(workerIndex) {
    const worker = await mediasoup.createWorker({
      ...Config.workerSettings,
    });

    const webRtcServer = await worker.createWebRtcServer(
      Config.webRtcServerOptions[workerIndex]
    );

    worker.appData.webRtcServer = webRtcServer;

    // other stuff
  }

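Bootstrapping is then something like (an assumed usage sketch; the WorkerPool class name is hypothetical):

// Hypothetical bootstrap: one worker (and its WebRtcServer) per core.
for (let workerIndex = 0; workerIndex < numWorkers; workerIndex++) {
  await WorkerPool._spawnWorker(workerIndex);
}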
So this seems to work for us to support both port-per-worker WebRTC semantics and worker <-> worker piping. Obviously there is an unused gap of ports between 40032 and 41000 in this example but that’s not super important. We only expose 40000-40032 externally.

To be fair, the docs are a little thin on the interplay between the createWebRtcServer config and the underlying createWorker config settings. But this approach seemed like a safe way to conservatively cover all our bases, and it works fine. YMMV when piping between actual instances instead of just across the cores of one server.


That’s perfect. My only issue is that I run my workers separately, because they’re not all the same code, so I’d need a safety mechanism to check which ports are in use before I apply them. But that’s actually given me an idea.

Smart! I may take another whack at it soon. :slight_smile: