RTCPeerConnection limit exceeded: can we use iframes somehow, safely?

I've noticed that with my many-to-many broadcasting setup, peers sometimes run out of RTCPeerConnections even though I close them properly. Sure enough, webrtc-internals does not garbage collect and keeps the traces until the page is refreshed.

This in turn throws errors after some time. If a user goes AFK, they are eventually met with the error even under the lightest conditions.

I'm not sure whether I should re-use RTCPeerConnections or refresh the page for the user on error, but I found a few options that may help. If the RTCPeerConnection is created inside an iframe, then closing the iframe clears the traces and it's as if you never had a limit, though my small implementation still errored somehow. So I ponder and inquire! :slight_smile:
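For what it's worth, here is a minimal sketch of that iframe approach, assuming a hidden same-origin (about:blank) iframe; `createIsolatedPeerConnection` and `dispose` are just illustrative names, not from any library:

```ts
// Sketch: create the RTCPeerConnection inside a hidden same-origin iframe so that
// removing the iframe discards the browsing context that owns the connection.
function createIsolatedPeerConnection(config?: RTCConfiguration) {
  const iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  document.body.appendChild(iframe); // about:blank, same-origin

  // Use the iframe's own constructor so the connection belongs to that realm.
  const FramePC = (iframe.contentWindow as any)
    .RTCPeerConnection as typeof RTCPeerConnection;
  const pc = new FramePC(config);

  return {
    pc,
    dispose() {
      pc.close();
      iframe.remove(); // dropping the iframe releases everything it owned
    },
  };
}
```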

What do you guys do to stay within the range of Chromium's 500-connection limit?

Just use 2 PeerConnections

Yeah, Chrome folks keep saying garbage collection works fine, but admittedly not always :upside_down_face:

Though you only really need 2 connections in the absolute majority of cases (as a bonus, you can actually use priorities for different transceivers that way, which is not really possible with multiple connections).
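For example, per-sender priorities on a single connection can be set with the standard `RTCRtpSender.setParameters()` API; a small sketch, not anything mediasoup-specific:

```ts
// Sketch: with everything on one connection, individual senders can be prioritized.
async function prioritizeSender(sender: RTCRtpSender, priority: RTCPriorityType) {
  const params = sender.getParameters();
  for (const encoding of params.encodings) {
    encoding.priority = priority; // e.g. 'high' for audio, 'low' for a screenshare
  }
  await sender.setParameters(params);
}
```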

I based opening many transports on other sites that did something similar, but knowing I can do this differently could solve lots of issues, or at least give users days before they run into problems.

But I'm curious: what if my viewer transports are at different IPs? I have producer/consumer servers, so the IP may vary at any time; a transport cannot connect to just one IP, and the potential for 48 different IPs is high when handling 48 broadcasters.

(I can't just re-ICE or perform any stunt other than connecting another transport, so my limit can unfortunately still be exceeded, because I run hundreds of 1-vCore servers. :x:)

I know that, at least until some time ago, Janus (another SFU implementation) worked by opening multiple connections, but that is a drawback in my opinion, not something you need to replicate. Just create one receiving and one sending transport and you should be good to go.
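If it helps, a minimal sketch of that pattern with mediasoup-client, assuming your own signaling layer (`signaling` here is purely illustrative) returns the transport parameters created on the server:

```ts
import { Device } from 'mediasoup-client';

// Sketch: one send transport and one recv transport per client, i.e. at most two
// underlying PeerConnections regardless of how many producers/consumers exist.
async function setupTransports(signaling: any) {
  const device = new Device();
  await device.load({
    routerRtpCapabilities: await signaling.getRouterRtpCapabilities(),
  });

  const sendTransport = device.createSendTransport(await signaling.createTransport('send'));
  const recvTransport = device.createRecvTransport(await signaling.createTransport('recv'));

  return { device, sendTransport, recvTransport };
}
```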

I'd see it as a drawback too if I could route over one transport, but do those handle multiple consumer servers?
I don't suspect that, when each viewer comes in at a different IP, I can simply re-use the transport like that. It tends to cause errors, so I just reset entirely, reload, and request re-consume/produce if necessary.

In other words, the 48 viewer sessions could connect to 48 different viewer servers, so I can't imagine all those different IPs under the hood of a single transport, but you guys study them better than I do.

Hm, if you have multiple servers then it is tougher, but as a workaround you can pipe all of the stuff to one “viewing” server and have just one transport from there.
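A rough sketch of what that piping could look like with mediasoup PipeTransports between two servers; the cross-server signaling (the `remote.*` calls) is made up here and would be your own RPC:

```ts
import * as mediasoup from 'mediasoup';

// Sketch: forward a producer from a producer server to the single "viewing"
// server so that viewers only ever connect to one place.
async function pipeProducerToViewingServer(
  router: mediasoup.types.Router,
  producer: mediasoup.types.Producer,
  remote: any // your own RPC client to the viewing server (illustrative)
) {
  const pipeTransport = await router.createPipeTransport({ listenIp: '0.0.0.0' });

  // Exchange ip/port with the viewing server, which creates its own PipeTransport.
  const remoteInfo = await remote.exchangePipeTransportInfo({
    ip: pipeTransport.tuple.localIp,
    port: pipeTransport.tuple.localPort,
  });
  await pipeTransport.connect({ ip: remoteInfo.ip, port: remoteInfo.port });

  // Consume locally over the pipe; the viewing server mirrors this with
  // pipeTransport.produce() using the rtpParameters we hand it.
  const pipeConsumer = await pipeTransport.consume({ producerId: producer.id });
  await remote.createPipedProducer({
    kind: pipeConsumer.kind,
    rtpParameters: pipeConsumer.rtpParameters,
  });
}
```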

It gets tough with CPU usage. Per core, I already have a producer server serving 13 broadcasts (both audio/video at up to 500 KB/s), and from there each is allowed to be re-consumed twice per piped connection made.

I weight usage so that my consumer servers allow 3 weight, and per weight they allow 12 viewers. A publisher on a producer server takes 1 of 13 slots, but each group of 1-12 viewers takes 1 weight of a consumer's total 3 weight. So in theory,

a 6x12 broadcast room would use 1 producer server (1 core) and 2 consumer servers (2 cores), a total of three IPs, but this scenario gets insane when we go to, say, 24x48… So many IPs that I'm left with only two solutions: isolate the RTCPeerConnection in an iframe and, upon closing the peer, close the iframe so everything is refreshed, OR, upon detecting the error, raise a FATAL ERROR that suggests the user reload to clear this history.
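Just to put those numbers in one place, a back-of-the-envelope version of that math (the round-up assumption is mine):

```ts
// Sketch of the capacity math described above, using the figures from this thread.
const BROADCASTS_PER_PRODUCER_CORE = 13;
const VIEWERS_PER_WEIGHT = 12;
const WEIGHT_PER_CONSUMER_CORE = 3;

function coresNeeded(broadcasters: number, viewersPerBroadcast: number) {
  const producerCores = Math.ceil(broadcasters / BROADCASTS_PER_PRODUCER_CORE);
  const viewerSlots = broadcasters * viewersPerBroadcast;
  const consumerCores = Math.ceil(
    viewerSlots / (VIEWERS_PER_WEIGHT * WEIGHT_PER_CONSUMER_CORE)
  );
  return { producerCores, consumerCores };
}

console.log(coresNeeded(6, 12));  // 1 producer core + 2 consumer cores (3 IPs)
console.log(coresNeeded(24, 48)); // 2 producer cores + 32 consumer cores
```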

I'm not sure a single viewing server could handle it. I'm trying to achieve near real-time latency (20-100 ms), so no transcoding takes place, but I'm all ears if there are some sneaky tricks with transport streams, etc.


I will add that I attempted a slight modification to mediasoup-client by pointing this._pc at an RTCPeerConnection belonging to an iframe, but some difficulties arose when stressing this configuration, possibly due to my lack of understanding of the client side and all the shims (derp).

It did clear 100% upon closing in the few trials I've done, but it shows up as about:blank in webrtc-internals (iframe stuff…).
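For anyone curious, the rough idea was something like the following: swap the global constructor for one from a hidden iframe so that the client's connections live in that iframe's realm. This is a hack, it assumes the handlers construct `RTCPeerConnection` from the global scope, and it is not a supported mediasoup-client API:

```ts
// Sketch: make subsequently created PeerConnections live in an iframe's realm so
// that removing the iframe frees them. Returns a cleanup function.
function useIframeScopedPeerConnections(): () => void {
  const iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  document.body.appendChild(iframe);

  // Anything that later does `new RTCPeerConnection(...)` now gets the iframe's
  // constructor (hence connections showing up as about:blank in webrtc-internals).
  (window as any).RTCPeerConnection = (iframe.contentWindow as any).RTCPeerConnection;

  return () => iframe.remove();
}
```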

People have been trying some weird workarounds to force garbage collection. It's a known bug in Chrome: Chromium issue 825576 (Monorail).


Yeah, I feel that; figured I'd ask. But yeah, I've only got the two options then:

an iframe shim, or fatal-erroring upon problems with the RTCPeerConnection. If anyone gets another idea though, do feel free to update this thread.
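The fatal-error route could be as simple as wrapping construction, assuming the limit surfaces as a thrown exception (the exact error is browser-specific, so treat this as a sketch):

```ts
// Sketch: if creating a connection fails (e.g. the per-page limit is hit),
// prompt for a reload instead of failing silently.
function createPeerConnectionOrPromptReload(config?: RTCConfiguration): RTCPeerConnection {
  try {
    return new RTCPeerConnection(config);
  } catch (err) {
    if (confirm('Connection limit reached. Reload the page to continue?')) {
      location.reload();
    }
    throw err;
  }
}
```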

You can also comment on and star the linked issue; I sometimes feel like there are just 20 of us in the world reporting issues back to the Chromium team.
