I’m trying to wrap my head around this. I’ve seen this behavior both in our app and in the mediasoup demo app, but only with Firefox (FF). The config uses 3 spatial layers for the webcam producer. These are just the default demo settings.
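For reference, the kind of simulcast encodings I mean look roughly like this (a sketch only: the concrete `scaleResolutionDownBy` / `maxBitrate` values are my assumption of the demo defaults, and `sendTransport` / `track` are placeholders, not copied from the demo source):

```javascript
// Hypothetical 3-spatial-layer simulcast encodings passed to transport.produce().
// The exact numeric values are assumptions, not verified against the demo repo.
const encodings = [
  { scaleResolutionDownBy: 4, maxBitrate: 500000 },  // spatial layer 0 (rid 0)
  { scaleResolutionDownBy: 2, maxBitrate: 1000000 }, // spatial layer 1 (rid 1)
  { scaleResolutionDownBy: 1, maxBitrate: 5000000 }, // spatial layer 2 (rid 2)
];

const webcamProducer = await sendTransport.produce({ track, encodings });
```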
Steps to reproduce (STR):
Open two FF (v84) sessions on different computers on the same network:
- Share the webcam
On the consumer side the currently used layers are S[2] T[2]
In the producer remote stats I see 3 RTP video streams (rIds: 0, 1, 2) with 3 layers each, and all the bitrates for the layers are set.
In the receiver remote stats I see a single video RTP stream for the highest layer (rId 2) with 3 temporal layers.
- Stop sharing the camera
- Share the camera again, and repeat this process
After repeating this a couple of times, the higher RTP streams on the producer end show temporal layers with 0 bitrate, and eventually those streams are not created at all, so the producer ends up sending a single RTP stream for the lowest spatial layer.
Notes:
- The downlink BWE estimation stays the same throughout the whole test
- I have tried forcing setMaxSendingSpatialLayer to 2 every time I create the webcam producer, but it doesn’t seem to have any effect.
- I have also tried calling setConsumerPreferredLayers with 2, 2 after the consumer is created, but that doesn’t seem to have any effect either.
- I have checked that every time the producer is created the right encodings are used
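Concretely, the two workarounds from the notes above look roughly like this (a sketch, assuming the underlying mediasoup-client `Producer.setMaxSpatialLayer()` and server-side mediasoup `Consumer.setPreferredLayers()` APIs; `webcamProducer` and `consumer` are placeholder names):

```javascript
// Client side: cap/force the highest sending spatial layer on the producer.
// (setMaxSendingSpatialLayer in the demo maps to this mediasoup-client call.)
await webcamProducer.setMaxSpatialLayer(2);

// Server side: ask the consumer to prefer the top spatial/temporal layers.
await consumer.setPreferredLayers({ spatialLayer: 2, temporalLayer: 2 });
```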
Maybe mediasoup is creating only a spatial layer 0 stream because it thinks the client doesn’t have enough bandwidth, or maybe because “the browser may stop sending spatial layer 1 due to CPU load, etc.”, as I read here?
It’s not happening with Chrome (although Chrome only creates 1 spatial and 2 temporal layers, but that is probably a limitation of Chrome’s implementation), so this might be some sort of FF bug. I just wanted to make sure that I’m not doing something obviously wrong or missing some parameter somewhere, and that I’m not the only one seeing this behavior.
Thanks!