I’m testing a small app I made with mediasoup, streaming from a laptop running Safari to a desktop running Chrome over a local Wi-Fi network. It works pretty well, though every now and then there are small stutters in the video, or even noticeable hiccups of 1-3 s.
I’ve noticed that when these happen, the simulcast layers drop from 2/2 (spatial/temporal) down to, usually, 1/0, then within a few milliseconds to 1/2, and after a good while back up to 2/2. Sometimes the drop goes all the way to 0/0, then almost immediately to 0/2, and then after a while climbs back to 2/2. The drops are always followed by a bump back up in the temporal layer within a few dozen ms.
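For anyone who wants to reproduce the observation, the transitions can be watched with something like this (a sketch, assuming a server-side mediasoup Consumer named `videoConsumer`; the `layerschange` event is mediasoup’s, the rest is just logging):

```js
// Log simulcast layer transitions for one consumer on the server.
// `videoConsumer` is a server-side mediasoup Consumer for the video stream.
videoConsumer.on('layerschange', (layers) => {
  if (layers) {
    console.log(`${Date.now()} layers -> ${layers.spatialLayer}/${layers.temporalLayer}`);
  } else {
    // null means no layer is currently being forwarded to this consumer
    console.log(`${Date.now()} layers -> none`);
  }
});
```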
First I thought it might have something to do with BWE, but the trace events show no detected drop in bandwidth: the available BW is always higher than the desired BW. I then looked at the consumer getStats() output and noticed a few dropped packets, yet ping probes from the laptop to the desktop and back show no packet loss whatsoever in either direction. I then tried broadcasting with Firefox, and there the hiccups and layer changes don’t seem to happen, though I noticed FF was not sending the full 720p that Safari did. In fact it changed the resolution by itself, without layer changes or visible stutter.
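Roughly the tracing setup I mean (a sketch; `sendTransport` is assumed to be the server-side WebRtcTransport of the sending peer):

```js
// Emit transport-wide BWE trace events on the server and log them.
// `sendTransport` is the server-side WebRtcTransport of the Safari sender.
await sendTransport.enableTraceEvent(['bwe']);

sendTransport.on('trace', (trace) => {
  if (trace.type === 'bwe') {
    // trace.info carries the estimator numbers, including the
    // desired and available bitrates mentioned above
    console.log('BWE trace:', trace.info);
  }
});
```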
So I’m not sure what to look at next. I didn’t spot any errors anywhere.
One other thing I noticed: after a good while the Safari client actually fired a transport connectionstatechange = disconnected event, yet the stream kept going just fine on the receiver side. The transport never recovered from this. Should I restart ICE when that happens?
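If an ICE restart is the right reaction, my understanding of the flow from the docs is that the server generates fresh ICE parameters and the client applies them (sketch; `webRtcTransport` is the server-side transport, `sendTransport` the mediasoup-client one):

```js
// Server side: generate fresh ICE parameters for the stuck transport.
const iceParameters = await webRtcTransport.restartIce();
// ...relay `iceParameters` to the client over the signaling channel...

// Client side (mediasoup-client): apply the new parameters.
await sendTransport.restartIce({ iceParameters });
```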
It usually happens that the sender’s WebRTC stack (i.e. libwebrtc in Chrome and Safari) decides to stop sending the highest simulcast stream due to CPU usage or a poor uplink.
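You can usually confirm which one it is from the sending page itself: the standard `outbound-rtp` stats carry a `qualityLimitationReason` of 'cpu', 'bandwidth', 'other' or 'none', though not every browser populates it. Rough sketch, assuming `videoProducer` is your mediasoup-client Producer:

```js
// Ask the sender why its encoder is scaling down.
// `videoProducer` is the mediasoup-client Producer for the camera track.
setInterval(async () => {
  const stats = await videoProducer.getStats();
  stats.forEach((s) => {
    if (s.type === 'outbound-rtp' && s.kind === 'video') {
      // 'cpu', 'bandwidth', 'other' or 'none'; support varies by browser
      console.log(s.rid ?? s.ssrc, '->', s.qualityLimitationReason);
    }
  });
}, 2000);
```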
Thanks for the hint regarding CPU usage. It turns out Safari pretty much maxes out a core of this MacBook Air (90%+) when doing 720p. I used “height: { exact: 480 }” in the getUserMedia() call, CPU usage dropped to under 50%, and the stutters and layer changes are pretty much gone; there was only one hiccup in several minutes of testing. The spatial layer now seems to be fixed at 1 (I guess it still considers 2 the theoretical top layer for this camera, even though the constraint excludes it).
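For reference, the call now looks like this (video only shown):

```js
// Cap the capture height so Safari encodes at most 480p.
const stream = await navigator.mediaDevices.getUserMedia({
  video: { height: { exact: 480 } }
});
```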
With the height constraint set to 576 there were still small stutters, but they were very brief and not as common, and layer changes were rare. So the resolution, and therefore the CPU usage, plays a big role.
Unfortunately the layer changes are very noticeable, so maybe it would be better to force a lower max resolution: better a lower-resolution but smooth and consistent video than a higher-quality one that sometimes stutters. Would you recommend monitoring packetsLost as an indicator, or just the frequency of layer changes?
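In case it’s useful for the answer, this is the kind of receiver-side monitoring I have in mind (a sketch; `recvConsumer` is assumed to be the mediasoup-client Consumer, and the interval and counters are made up):

```js
// Track packet loss per interval on the receiving side and count layer
// switches signaled from the server, to see which one correlates
// better with visible stutter.
let lastLost = 0;
let lastReceived = 0;
let layerChanges = 0; // bump this from the signaling handler that
                      // relays the server-side 'layerschange' events

setInterval(async () => {
  const stats = await recvConsumer.getStats();
  stats.forEach((s) => {
    if (s.type === 'inbound-rtp' && s.kind === 'video') {
      const lost = (s.packetsLost ?? 0) - lastLost;
      const received = s.packetsReceived - lastReceived;
      lastLost = s.packetsLost ?? 0;
      lastReceived = s.packetsReceived;
      const fraction = received > 0 ? lost / (lost + received) : 0;
      console.log(`loss ${(fraction * 100).toFixed(2)}%`,
                  `layer changes ${layerChanges}`);
      layerChanges = 0;
    }
  });
}, 5000);
```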
Firefox, on the other hand, always reports spatial layer 2 no matter the resolution. It started out at 480p, then went down to a height of 336 px, and eventually settled at 240p and never went back up. This is reproducible. I made sure no other apps that could impact CPU usage were open during each test, and verified via System Monitor that CPU usage is in fact not maxing out. I’m not sure if it’s possible to get FF to go back up to a higher resolution; the layers obviously won’t help if it’s already at 2. I also tried to specify a min: 360 constraint, but FF doesn’t seem to care?
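For the record, the constraint attempt looked something like this; the track accepts it, but what FF sends doesn’t change. My guess is the downscaling happens in the encoder rather than at capture, so a capture constraint can’t reach it (sketch):

```js
// Re-apply a minimum height on the live track to try to push FF back up.
// If the downscale is done by the encoder (not the camera), this
// won't change the sent resolution.
const [track] = stream.getVideoTracks();
await track.applyConstraints({ height: { min: 360 } });
console.log('track settings:', track.getSettings());
```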
It seems there are still big inconsistencies in how different browsers handle WebRTC. Is there anything that exposes the reason for resolution or layer changes to JS?