Peer A/V freezes for one user until client restartIce() is triggered

Greetings, team.

We have an interesting situation that is easy to reproduce for one individual, but only on Wi-Fi in a shared office space. I'm hoping it's something simple that we're missing.

Latest mediasoup server and client libraries.

Consistently, between 10 and 20 seconds after this person joins the call, the audio & video streams of the other connected peers freeze for him. Up until that point, he can see and hear everyone properly.

We can all see & hear him still.

Triggering restartIce() client side on the transports brings everyone back to life for the remainder of the call for him.

The application was based on the mediasoup demo during initial development and has since diverged mainly in device management & track association.

I have added client-side code that triggers restartIce() programmatically if packetsReceived in receiveTransport.getStats() does not increase for 1000 ms; once that fires, everything works properly for him.
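For reference, the stall watchdog above can be sketched roughly like this. The two callbacks are placeholders for what the real app would do: summing packetsReceived across the inbound-rtp reports from receiveTransport.getStats(), and a signaling round trip that calls the server-side webRtcTransport.restartIce() and feeds the fresh iceParameters into the client's transport.restartIce({ iceParameters }). `createStallWatchdog` itself is a made-up name, not a mediasoup API.

```javascript
// Hypothetical stall watchdog: fires onStall() when the packet counter
// stops moving between two polls. The callbacks are injected so the
// logic is testable without a live transport.
function createStallWatchdog({ getPacketsReceived, onStall, intervalMs = 1000 }) {
  let last = -1;
  let timer = null;

  async function tick() {
    const current = await getPacketsReceived();
    // Fire only when we have a previous sample and the counter did not move.
    if (last >= 0 && current === last) {
      await onStall();
    }
    last = current;
  }

  return {
    start() { timer = setInterval(tick, intervalMs); },
    stop() { clearInterval(timer); },
    tick, // exposed so the check can be driven manually in tests
  };
}
```

In the real app, `onStall` would request new iceParameters over the WebSocket and then call `receiveTransport.restartIce({ iceParameters })`.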

When he is on a lan connection in the same office or if he’s connected to NordVPN, it works perfectly.

iceTransportPolicy is set to 'all', with TURN servers introduced only when the transport connection state goes to 'failed' or has been stuck in 'connecting' for longer than 5 seconds.
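The TURN fallback logic described above looks roughly like this. `installTurnFallback` and `fetchIceParameters` are hypothetical names from my app; the mediasoup-client calls themselves (`transport.on('connectionstatechange')`, `transport.updateIceServers()`, `transport.restartIce({ iceParameters })`) are real, with the iceParameters coming from the server-side webRtcTransport.restartIce().

```javascript
// Sketch: swap in TURN servers and restart ICE when the transport either
// fails outright or sits in "connecting" for longer than stuckMs.
function installTurnFallback({ transport, fetchIceParameters, turnServers, stuckMs = 5000 }) {
  let timer = null;

  async function fallBackToTurn() {
    await transport.updateIceServers({ iceServers: turnServers });
    const iceParameters = await fetchIceParameters();
    await transport.restartIce({ iceParameters });
  }

  transport.on('connectionstatechange', (state) => {
    clearTimeout(timer);
    if (state === 'failed') {
      fallBackToTurn();
    } else if (state === 'connecting') {
      // Stuck in "connecting" for longer than stuckMs: fall back to TURN.
      timer = setTimeout(fallBackToTurn, stuckMs);
    }
  });
}
```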

We log front-end transport connection states over the WebSocket and can confirm that his ICE connection state immediately goes to 'connected' as expected.

When I join after him, I see nothing in the server logs. However, when he joins after me, I see the following.

0|sfu  | 2022-12-02T19:37:12.216Z mediasoup:WARN:Channel [pid:3259056] webrtc::ProbeController::Process() | kWaitingForProbingResult: timeout
0|sfu  | 2022-12-02T19:37:17.667Z mediasoup:WARN:Channel [pid:3259056] webrtc::ProbeController::Process() | kWaitingForProbingResult: timeout
0|sfu  | 2022-12-02T19:37:22.667Z mediasoup:WARN:Channel [pid:3259056] webrtc::ProbeController::Process() | kWaitingForProbingResult: timeout
0|sfu  | 2022-12-02T19:37:27.667Z mediasoup:WARN:Channel [pid:3259056] webrtc::ProbeController::Process() | kWaitingForProbingResult: timeout
0|sfu  | 2022-12-02T19:37:32.673Z mediasoup:WARN:Channel [pid:3259056] webrtc::ProbeController::Process() | kWaitingForProbingResult: timeout

After the last entry, his connection drops, at which point the client-side restartIce() fixes it.

It looks like something akin to a race condition, since the demo works perfectly for him; I'm just not sure where in the process it might be.

I have gone back to the demo defaults, as best I can tell, for constraints, encodings, codecs, etc., with no change.



That's normal behavior: if a user cannot sustain the upload or download, buffering grows to the point where they may lose the connection to the media server they are producing to or consuming from. The client will see this first as a disconnect, and you can restart ICE to re-establish the connection.

Truthfully, with these services requiring a good network, there isn't much you can do beyond lowering the bitrate. They have to accept that their hardware/network isn't adequate over wireless.
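For what it's worth, there are two obvious places to lower the bitrate, sketched below. The numbers are arbitrary examples, not recommendations; the helper names are made up, but setMaxIncomingBitrate() (server-side mediasoup WebRtcTransport) and the encodings option to produce() (mediasoup-client) are real.

```javascript
// Server side (mediasoup): cap what a client may send over this transport.
async function capIncoming(webRtcTransport, bps = 600000) {
  await webRtcTransport.setMaxIncomingBitrate(bps);
}

// Client side (mediasoup-client): cap outgoing encodings at produce() time.
async function produceCapped(sendTransport, track, bps = 600000) {
  return sendTransport.produce({
    track,
    encodings: [{ maxBitrate: bps }],
  });
}
```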

I guess I should clarify that it works perfectly for him for the duration of any call on Wi-Fi once restartIce() is triggered in his client.

So which protocol were they on before restarting ICE, and which after?

To me it sounds like they're negotiating a TCP connection; if you allow TCP candidates, that would make sense.
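One way to answer the protocol question is to pull the selected candidate pair out of getStats(); in mediasoup-client, transport.getStats() resolves to the RTCStatsReport of the underlying RTCPeerConnection. A sketch (`selectedCandidateInfo` is a made-up helper; note `report.selected` is Firefox-only, so `nominated` plus a succeeded state is checked as well):

```javascript
// Return the protocol and candidate types of the active candidate pair,
// or null if no pair has succeeded yet.
async function selectedCandidateInfo(transport) {
  const stats = await transport.getStats();
  for (const report of stats.values()) {
    if (report.type === 'candidate-pair' &&
        report.state === 'succeeded' &&
        (report.selected || report.nominated)) {
      const local = stats.get(report.localCandidateId);
      const remote = stats.get(report.remoteCandidateId);
      return {
        protocol: local && local.protocol,        // 'udp' or 'tcp'
        localType: local && local.candidateType,  // 'host' | 'srflx' | 'relay' ...
        remoteType: remote && remote.candidateType,
      };
    }
  }
  return null;
}
```

Logging this before and after restartIce() would show whether the restart is switching him from a UDP path to a TCP or relay path.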

Fortunately you're handling things the right way. Consider testing this yourself to rule out these cases; software like TMeter Administrative Console or an equivalent is great for simulating lag.

If that user can't afford a better setup, why let them waste your time? Truthfully I'd tell them to sort it out with their ISP… 🙂