Transport connectionstate changes to disconnected

Hi Team,

The client transport's connection state goes to "disconnected" randomly after a few minutes. It works perfectly for about 30 minutes and then disconnects. This happens for most of the transports, and mostly when the transport tuple uses the TCP protocol. My client browser then stops receiving the remote streams.

In this case, do I have to start producing and consuming the streams from scratch? Or does just restarting ICE on the server-side transport fix the problem, with no impact on stream producing or consuming?

Thanks in Advance,

The PeerConnection should eventually reconnect from the "disconnected" state back to "connected". You can also restart ICE (you need to call the API methods on both client and server, as documented).
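A minimal sketch of that documented two-step restart, assuming a mediasoup v3 server-side `WebRtcTransport` and a mediasoup-client `Transport`; `signalingRequest` is a placeholder for your own socket.io request/response call, not a mediasoup API:

```javascript
// Server side: ask the transport for fresh ICE parameters and return
// them to the client over your signaling channel.
async function handleRestartIceRequest(serverTransport) {
  const iceParameters = await serverTransport.restartIce();
  return iceParameters;
}

// Client side: fetch the new ICE parameters via signaling and apply
// them to the local transport.
async function restartIceOnClient(clientTransport, signalingRequest) {
  const iceParameters = await signalingRequest('restartIce', {
    transportId: clientTransport.id
  });
  await clientTransport.restartIce({ iceParameters });
}
```

Both halves are needed: restarting ICE only on the server leaves the client still using the old ICE credentials.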

The thing here is: why do you get those disconnections?

I am not sure. What logs can I post here to help you understand why it happens? It happens randomly, but I see it many times. Also, does restarting ICE eventually change the connection state from "disconnected" back to "connected"? I do a few things on my client based on the "connected" state.

"30 minutes then disconnects" is not "randomly". I'd bet my bottom dollar that you have a proxy in between your clients and your server that is shutting down the ICE/DTLS connection, and that you will see the timing reliably in your client's `transport.on("connectionstatechange", handler)` events.

Figuring out the layer that is interfering isn’t something that folks without your production keys can do.

Good luck!

Hi Doug,

You are correct. A few of my clients are on a corporate network and all of their traffic goes through a proxy server. But this also happens when a client is on his own WiFi network and the WebRtcTransport tuple uses the UDP protocol. The 30-minute mark covers most cases, but in some cases the transport gets disconnected even a few minutes after stream consuming starts.

Is it possible your signalling service connection is being broken, and that you have timeouts or event handlers somewhere that are watching that and responsible for the connection state change, maybe from your own mediasoup server side?

This is hard to debug over the internet, but since it's happening both behind a corporate firewall and not, I'd hold off on suspecting dynamic port closure of the established ICE/DTLS connection. What are you using for signaling? How is your server deployed (e.g. AWS Elastic Beanstalk, bare metal, etc.)?


I am currently using socket.io for signalling between the server and the clients. I make the socket connection on port 443. We have our own data center, and the architecture we are using is shown in the image below.

All three mediasoup servers have the same announced IP, 45.114.x.x. Using Apache's proxy balancer, I direct the socket.io traffic to one of the mediasoup servers based on the load on each of them (I have logic in place to decide that). Each mediasoup server is also given its own rtcMinPort/rtcMaxPort range, and using firewall rules we direct the stream data to the specific mediasoup server on which the room was created (e.g. if traffic comes in on the 10001-12000 port range, it goes to mediasoup server 1). A client always gets the announced IP and a random port within the range of the specific mediasoup server its socket.io connection was established with (in the ICE parameters).

Note: we have kept that range of ports open on the firewall for both UDP and TCP, bidirectionally.
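A sketch of the port-range routing described above, in code. The 10001-12000 range for server 1 comes from the post; the ranges for servers 2 and 3, and the server names, are hypothetical placeholders:

```javascript
const PORT_RANGES = [
  { server: 'mediasoup-1', rtcMinPort: 10001, rtcMaxPort: 12000 },
  { server: 'mediasoup-2', rtcMinPort: 12001, rtcMaxPort: 14000 }, // assumed
  { server: 'mediasoup-3', rtcMinPort: 14001, rtcMaxPort: 16000 }  // assumed
];

// Each mediasoup worker would be created with its own range, e.g.
//   await mediasoup.createWorker({ rtcMinPort: 10001, rtcMaxPort: 12000 });

// The firewall rule expressed in software: pick the backend server
// that owns a given RTC port.
function serverForPort(port) {
  const entry = PORT_RANGES.find(
    (r) => port >= r.rtcMinPort && port <= r.rtcMaxPort
  );
  return entry ? entry.server : null;
}
```

Since all three servers share one announced IP, the port alone decides which backend receives the media, so the firewall rules and the per-worker `rtcMinPort`/`rtcMaxPort` settings must match exactly.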

Gotcha. I’ll repeat my initial question, and rephrase it.

Repeated: “Is it possible your signalling service connection is being broken, and that you have timeouts or event handlers somewhere that are watching that and responsible for the connection state change, maybe from your own mediasoup server side?”

Rephrased: Do you get server side disconnects (and are you trapping them and shutting down transports for those clients?) before your client transports change state?

Also, do any of your timeout troubles correspond to Apache's websocket proxy timeout default values (or values that you have specified)?

BTW, the ping/pong mechanism (with pings sent by clients instead of by the server) is painful. It's based on a JS timer, and browsers throttle those timers when the app goes to the background, etc.

I had to disable the ping/pong mechanism (in both client and server) due to that issue. And since there is no API to disable it, I had to clear the timeouts in internal objects of the Socket instance.

Hi Doug,

No such event handlers. We just have pingTimeout and pingInterval, and only when a pong never arrives for a ping sent by either the server or the client does the socket.io connection break itself. When that happens, we close the transport on both the server and the client.

That is never the case here: my socket.io connection breaks neither on the client nor on the server. We keep the connection active even when the transport gets disconnected, assuming it will eventually go back to the "connected" state.

No proxy timeout default values; if that were the case, I would definitely see the socket.io connection break. Socket.io keeps working after the transport disconnects.


But the ping intervals give me an added advantage: I can leverage the auto-reconnection mechanism of socket.io so that the client never has to repeat all the clicks to start producing or consuming as he did the first time.
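For reference, a minimal sketch of those client-side auto-reconnection settings, using standard socket.io-client (v2-era) options; the URL is a placeholder:

```javascript
const socket = io('https://example.com:443', {
  reconnection: true,          // enabled by default
  reconnectionAttempts: Infinity,
  reconnectionDelay: 1000,     // first retry after 1 s
  reconnectionDelayMax: 5000   // back off to at most 5 s between retries
});

// On reconnect, only the signaling layer is restored; the app still has
// to re-join the room and re-create transports/producers/consumers (or
// restart ICE) itself.
socket.on('reconnect', (attempt) => {
  console.log(`signaling reconnected after ${attempt} attempt(s)`);
});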

Using your favorite tools (browser developer tools, nc, wireshark, etc.), you see websocket requests and responses, but media packets stop flowing over the ICE/DTLS udp/tcp negotiated port? That smells like firewall trouble.

[edit] Does an ICE restart get media flowing again?

Hi Doug,

This is exactly what I wanted to know: if I restart ICE when I detect the transport connection state changing to "disconnected", does that make my media flow again without calling the produce and consume events again? I am about to test it; I thought it better to ask on the forum first. I will try it now and get back to you with an answer.

Yes, it will work. However, I recommend focusing on the problem (the strange disconnections) rather than on the workaround.


Yeah, I will surely do that. Whenever the transport disconnects, I will check the firewall logs to see whether the firewall port is somehow getting blocked after a few minutes of producing/consuming.

What @ibc said here. Doing ICE restarts (no need to produce/consume again) might get your media flowing again, but it ignores the underlying trouble and is playing a game of whack-a-mole with (presumably) your firewall.

I have the same issue. I assume some users have a bad WiFi setup (randomly intermittent). It would be great if we could keep mediasoup-client reconnecting in the background via a retry parameter.
My case is libmediasoupclient for Android.

Make your client-side app restart ICE when a disconnection is detected. No need to ask for any internal magic in libmediasoupclient.


If the peer connection (transport) is "disconnected", it has around 7 seconds to get back to the "connected" state automatically, without any effort on our side; alternatively, you can restart ICE if you want, as @ibc mentioned. But if the peer connection doesn't come back within those ~7 seconds, it goes to the "failed" state. At that point the peer connection is closed and will not reconnect automatically, and restarting ICE is not going to help either, so in that scenario you have to create the peer connection (i.e. the transport) again and produce and consume again, as per your app logic.
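That two-tier handling can be sketched as a handler on the client transport's `connectionstatechange` event. `restartIce` and `recreateTransport` here are hypothetical callbacks standing in for your own app logic, not mediasoup APIs:

```javascript
function watchTransport(transport, { restartIce, recreateTransport }) {
  transport.on('connectionstatechange', (state) => {
    switch (state) {
      case 'disconnected':
        // May still recover on its own within a few seconds; an ICE
        // restart can bring media back without re-producing/consuming.
        restartIce();
        break;
      case 'failed':
        // Too late for an ICE restart: the PeerConnection is closed.
        // Tear down and rebuild the transport, then produce/consume
        // again, as described above.
        recreateTransport();
        break;
      default:
        break;
    }
  });
}
```

The key design point is that only "disconnected" is worth an ICE restart; once the state hits "failed", the full transport must be rebuilt.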

Is this the current default behavior?