Talking heads application, 352x288, strange scalable video coding

I have a talking-heads application … faces in small windows embedded in websites for personal finance, etc. I’m trying to figure out whether mediasoup is the way to go for the future.

To save client power and bandwidth, avoid spinning fans and all that, we’ve set this up to run at CIF resolution: 352x288 x 15fps (with an older way of delivering media). It simply doesn’t make sense to use higher resolutions just to scale them down to the small windows.

But, when I hack up the mediasoup_sandbox to set those constraints for .getUserMedia(), the mediasoup / SVC system chooses a very low bandwidth and the video comes out nasty-looking.

(I’m new to SVC. It’s possible I’m thinking about this wrong in the SVC world.)

Can somebody point me in the right direction to figure out how to resolve this?

And thanks for mediasoup, great stuff!

Please clarify: which video codec and Producer encodings are you sending?

Maybe CIF resolution is too small for a simulcast/SVC configuration. We need to know the codec and the exact simulcast encodings you used.

Please excuse the slow response to your questions. I want to make sure I have a clue what I’m talking about before answering.

My requirement is low-res talking heads, with manageable bandwidth for mobile. I have something working the way I want it to. But I don’t understand why it works. It is a kludge. I appreciate any wisdom you can share about this.

I use these three encodings:

  const encodings = [
    { maxBitrate: 128000, scaleResolutionDownBy: 4 },
    { maxBitrate: 384000, scaleResolutionDownBy: 2 },
    { maxBitrate: 512000, scaleResolutionDownBy: 1 },
  ];

And these constraints:

  const userMediaConstraints = {
    video: {
      width: { ideal: 704 },
      height: { ideal: 576 },
      frameRate: { min: 10, ideal: 15, max: 15 },
    },
    audio: true,
  };
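For context, here is a minimal sketch of how these two pieces fit together with mediasoup-client: the constraints go to getUserMedia(), and the resulting video track plus the encodings go to Transport.produce(). `buildProduceOptions` is an illustrative helper and `sendTransport` an assumed variable; neither name is from this thread.

```javascript
const encodings = [
  { maxBitrate: 128000, scaleResolutionDownBy: 4 },
  { maxBitrate: 384000, scaleResolutionDownBy: 2 },
  { maxBitrate: 512000, scaleResolutionDownBy: 1 },
];

// Illustrative helper (not from the post): assemble the options object
// that would be handed to sendTransport.produce() once getUserMedia()
// has resolved and the video track has been pulled off the stream.
function buildProduceOptions(track) {
  return { track, encodings };
}

// In the browser it would be used roughly like:
//   const stream = await navigator.mediaDevices.getUserMedia(userMediaConstraints);
//   const producer = await sendTransport.produce(
//     buildProduceOptions(stream.getVideoTracks()[0]));
```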

When I do this I get two spatial layers, based on the first two of my three encodings. For the higher-resolution spatial layer, the decoded/displayed videoWidth and videoHeight (on the <video> element) are 352x288, the scaleResolutionDownBy=2 value. This works this way on iOS Safari and on Windows Chrome, Edgium, and Firefox.

Why does this work this way? Why can I not use just two encodings and gUM constraints of 352x288? When I do that I only get one spatial layer. Again, I seek understanding.

(I only use one video codec, H.264 Constrained Baseline Profile, 42e01f. Firefox only accepts that profile.)

If you pass low-resolution constraints to gUM, then the libwebrtc encoder will just generate low-resolution simulcast streams. Anyway, this is how libwebrtc/Chrome/Firefox works; mediasoup does not and cannot change their behavior.
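To illustrate: the per-layer output resolutions implied by scaleResolutionDownBy follow directly from the capture size. (My reading, not stated in this thread, is that libwebrtc also caps the number of simulcast streams per input resolution via a table in media/engine/, which would explain why the smaller capture yields fewer layers.)

```javascript
// Compute the per-layer output resolutions implied by a set of
// scaleResolutionDownBy factors for a given capture size.
function layerSizes(width, height, factors) {
  return factors.map((f) => ({
    width: Math.round(width / f),
    height: Math.round(height / f),
  }));
}

// 704x576 capture: 176x144, 352x288 (CIF), 704x576.
console.log(layerSizes(704, 576, [4, 2, 1]));

// 352x288 capture: 88x72, 176x144, 352x288 — the smaller layers fall
// below what libwebrtc will encode as separate simulcast streams
// (my assumption based on its behavior, not confirmed in this thread).
console.log(layerSizes(352, 288, [4, 2, 1]));
```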


How do you know? Are you sure you actually get two streams, and not one or three?

I know I have more than one layer because I get layer-change events, and I sometimes see the low-res layer (it’s hard to miss, it’s so ugly). I don’t believe I have the high-resolution layer, because I never see it decoded.

This could help you: media/engine/ in the libwebrtc source (external/webrtc, Git at Google).
