Advantages of WebRtcServer over the "old way"

Hi there, I was wondering if anyone can share real-life experiences of the improvements (if any) from using the new WebRtcServer, with a single port per worker, over the previous way? I was reading the threads in this group and some alluded to performance advantages of reading from a single port vs. multiple random ports. Are there any drawbacks to moving to WebRtcServer?
Thanks!

You need to open fewer public ports on your server if you use WebRtcServer. No drawbacks at all.
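A minimal sketch of what this looks like in practice (assuming the mediasoup v3 Node API; port 44444, the wildcard IP and the empty codec list are placeholders):

```ts
import * as mediasoup from 'mediasoup';

async function main() {
  const worker = await mediasoup.createWorker();

  // One WebRtcServer per worker, bound to a single well-known port that every
  // WebRtcTransport on this worker will share.
  const webRtcServer = await worker.createWebRtcServer({
    listenInfos: [
      { protocol: 'udp', ip: '0.0.0.0', port: 44444 },
      { protocol: 'tcp', ip: '0.0.0.0', port: 44444 },
    ],
  });

  const router = await worker.createRouter({ mediaCodecs: [] }); // add your codecs here

  // Instead of listenIps (which binds a new random port per transport),
  // pass the shared webRtcServer.
  const transport = await router.createWebRtcTransport({
    webRtcServer,
    enableUdp: true,
    enableTcp: true,
  });

  console.log('ICE candidates all point at the shared port:', transport.iceCandidates);
}

main().catch(console.error);
```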

That's it? No performance gain? What about local port usage? For example, we do a lot of recording and that takes a few ports; sometimes there is a local port collision. Would this help there? Thanks!

Transports are still handled the same way; only the port binding is shared. CPU usage will be the same.

This is mainly a convenience. On a multi-core server you'd still be using a range of ports to some extent (one per worker), so to avoid collisions you'll need to be a bit more careful in how you allocate them in your code.

And when running many instances, it would be that much easier for a single port to collide.

There is some performance gain because many UDP packets can be received together in a single libuv read call, but it's nothing huge.

Local port usage: this only applies to WebRTC connections; WebRtcServer doesn't work for plain RTP, since it relies on ICE to identify client transports.
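So for recording pipelines nothing changes: each PlainTransport still gets its own local port. A rough illustration (mediasoup v3 API assumed; the function name and loopback address are mine):

```ts
import { types as mediasoupTypes } from 'mediasoup';

// A PlainTransport (e.g. feeding FFmpeg/GStreamer for recording) still binds
// its own local port, taken from the worker's rtcMinPort..rtcMaxPort range,
// because without ICE there is nothing to demultiplex clients on a shared port.
async function createRecordingTransport(router: mediasoupTypes.Router) {
  const plainTransport = await router.createPlainTransport({
    listenIp: '127.0.0.1',
    rtcpMux: true, // one local port for both RTP and RTCP
  });

  console.log('recording transport bound to port', plainTransport.tuple.localPort);

  return plainTransport;
}
```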

Anyway: the main purpose of WebRtcServer is to minimize the number of ports that need to be publicly open, which is especially important when using AWS or Google Cloud (internal NAT with public port mapping, etc.).
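For instance, on a cloud VM behind 1:1 NAT the listen config might look roughly like this (addresses and port are placeholders; this continues the worker from the sketch above):

```ts
// Bind on the private interface, announce the public (Elastic) IP, and open
// only this one UDP + TCP port per worker in the firewall / security group.
const webRtcServer = await worker.createWebRtcServer({
  listenInfos: [
    { protocol: 'udp', ip: '10.0.0.12', announcedIp: '203.0.113.45', port: 44444 },
    { protocol: 'tcp', ip: '10.0.0.12', announcedIp: '203.0.113.45', port: 44444 },
  ],
});
```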

The biggest advantage for us was that it opened up the opportunity to use AWS Global Accelerator, since we have a much smaller port range to cover. Before this, the port range needed to support enough users was too large to feasibly put behind a single Global Accelerator listener with many backend servers.

A question on the side: when using createPipeTransport together with WebRtcServer, does that still require a certain number of ports to be open?

Thanks in advance!

@alexciarlillo If you have time, I would love to hear more about your experience with Global Accelerator. Specifically, how did you architect this? Do all producers in a room route through Global Accelerator to the same mediasoup instance? Did you find this improved latency/packet loss/jitter/etc.?