Heads up: "default" port ranges, webRtcServer, and EC2 ephemeral port range

Just dropping this here in case someone else hits this problem. We recently moved to using the new webRtcServer. We kept our webRtcServer ports in the low 40000 range, as the mediasoup demo is set up (and as we had been set up for some time using just the normal webRtcTransport range), and configured the workers to use a separate range. We started randomly getting errors like this:

uv_listen() failed [transport:tcp, ip:'10.x.x.x', port:40000]: address already in use [method:worker.createWebRtcServer]

We thought maybe we had a race in how we initialized workers, but we eliminated that. It turns out that port 40000 is in the ephemeral port range on these EC2 instances. They have a much larger ephemeral range than some other distributions, and larger than what a quick search suggests the usual range is:

[ec2-user@ip-10-x-x-x ~]$ cat /proc/sys/net/ipv4/ip_local_port_range
32768	60999

What this meant is that the OS would use ports in this range when establishing outgoing connections, e.g. to Redis:

lsof -i :40000
COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
node    6523 webapp 1033u  IPv4  46295      0t0  TCP ip-x-x-x-x.region.compute.internal:40000->ip-x-x-x-x.region.compute.internal:6379 (ESTABLISHED)

And this would only happen intermittently: when the OS happened to pick such a port for an outgoing TCP connection (to Redis, or whatever other external service needed TCP) before we had created the webRtcServer on that port.
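
For anyone wanting to catch this at startup rather than intermittently in production, here's a minimal guard sketch (Linux-only, Node.js; the port constants are hypothetical stand-ins for your own config):

```ts
import { readFileSync } from 'node:fs';

// Hypothetical names for your own configured webRtcServer port range.
const WEBRTC_MIN_PORT = 61000;
const WEBRTC_MAX_PORT = 61999;

function assertOutsideEphemeralRange(min: number, max: number): void {
  // Linux-only: the file contains "low<TAB>high", e.g. "32768\t60999".
  const raw = readFileSync('/proc/sys/net/ipv4/ip_local_port_range', 'utf8');
  const [low, high] = raw.trim().split(/\s+/).map(Number);
  if (max >= low && min <= high) {
    throw new Error(
      `webRtcServer ports ${min}-${max} overlap ephemeral range ${low}-${high}`,
    );
  }
}

assertOutsideEphemeralRange(WEBRTC_MIN_PORT, WEBRTC_MAX_PORT);
```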

So we have now moved our mediasoup webRtcServer ports outside this larger ephemeral range. The ephemeral range varies between OSes and distributions, so do your own due diligence here.
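
For illustration, roughly what our setup looks like now; a hedged sketch assuming mediasoup v3 (field names may differ slightly between minor versions), with placeholder ports (61000+) and announcedIp:

```ts
import * as mediasoup from 'mediasoup';

// One worker per CPU in reality; `index` keeps ports unique per worker.
async function startWorker(index: number) {
  const worker = await mediasoup.createWorker();
  // Above the 32768-60999 ephemeral range we saw on these EC2 hosts.
  const port = 61000 + index;
  const webRtcServer = await worker.createWebRtcServer({
    listenInfos: [
      { protocol: 'udp', ip: '0.0.0.0', announcedIp: '203.0.113.10', port },
      { protocol: 'tcp', ip: '0.0.0.0', announcedIp: '203.0.113.10', port },
    ],
  });
  return { worker, webRtcServer };
}
```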

Hope this saves someone some time and frustration in the future. This is obviously on us for just using the "typical" mediasoup ports; we just never hit this issue when using webRtcTransports without webRtcServer, since we weren't specifying the exact port we wanted.


Thank you for this. Have you found that moving to non-ephemeral ports has helped with connectivity at all?

Sometimes we have users who are able to connect to our socket.io signaling layer but, for whatever reason, can't get any UDP connectivity in the 40000-49999 range. Oftentimes these users are on VPNs. I haven't dug into this too much, but after reading your post I'm wondering if moving to non-ephemeral ports would improve their chances of connecting.

Psst. If the host blocks that traffic, those users shouldn't be on your platform in the first place. Forget them…

I don’t think this will help you. It sounds like you probably need a TURN server and/or to allow TCP connections along with UDP to support those users.
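
For the TCP part, here's a hedged server-side sketch (mediasoup v3 API; option names may vary by version, and `router` plus the announced IP come from your own setup):

```ts
// Allow ICE over TCP as well as UDP so clients behind
// UDP-blocking firewalls can still connect directly.
const transport = await router.createWebRtcTransport({
  listenIps: [{ ip: '0.0.0.0', announcedIp: '203.0.113.10' }],
  enableUdp: true,
  enableTcp: true,
  preferUdp: true, // try UDP first, fall back to TCP if it's blocked
});
```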

That’s interesting. Is there any reason a user might be able to connect to a TURN server but not directly to my mediasoup server? I figured that since my mediasoup was on a public IP, I didn’t need a TURN server.

I suppose this might help for countries that seem to have random IPs firewalled.

Firewalls that block all client traffic except TCP on port 443 (a.k.a. HTTPS).

That's true… I suppose I could run a TURN server on port 443 on the same host as mediasoup to minimize latency.
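
On the client that could look something like this, sketched under the assumption you're using mediasoup-client (the TURN hostname and credentials are placeholders; `turns:` is TURN over TLS, which is why it can ride on 443):

```ts
// `device` is a loaded mediasoup-client Device; `transportOptions`
// comes from your signaling layer.
const sendTransport = device.createSendTransport({
  ...transportOptions,
  iceServers: [
    {
      urls: 'turns:turn.example.com:443?transport=tcp',
      username: 'user',
      credential: 'secret',
    },
  ],
});
```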

Consider this: there's really no need for TURN for 90%+ of the internet. If your domain is blocked, it's blocked; a user would need a VPN anyway.

If you do host a TURN server, I wouldn't be scared to run one on each of your servers, or worry that it'd ruin performance, as it'd be lightly used.

To touch on this: an ISP couldn't care less about port usage; what matters is the packet headers etc. being transmitted. They could simply block RTP as a whole, and then a user would need a VPN to bypass it.
(https://example.com:4029 is still HTTPS, you dig?)


To sum it up, having used these types of services for many years: encourage the user to use a VPN. This is the best fix; it'll save them in the long run. You can only do what you can do, and be reasonable, right?


Side note: many countries have blocked some of my projects because I lacked a privacy assessment. If this is you, fix that when you can and hope for the best, but I find many new developments get slapped because of this…