Could the client documentation get a contentHint example to assist with screen sharing?

From reading the documentation and the wider WebRTC documentation, it appears this isn't fully clear, or perhaps not known to be a possibility by some (or maybe many).

I think it'd be great to document that users can set the track's contentHint during track selection, before producing.

Ex.

if (track.kind === "video") track.contentHint = "motion";
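
For context, a slightly fuller sketch of where this fits in a typical mediasoup-client flow (inside an async function). `sendTransport` is assumed to be an already-connected mediasoup-client send transport; everything else is standard browser API:

// Capture the screen, then hint the encoder before producing.
const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
const track = stream.getVideoTracks()[0];

// Prefer smooth motion over per-frame detail for this screen share.
if (track.kind === "video") track.contentHint = "motion";

// sendTransport is assumed to already exist and be connected.
const producer = await sendTransport.produce({ track });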


The reason for raising this is that, by default, many browsers use "detail" as the preferred hint for screen sharing, which limits the frame rate to under 5 FPS. With "motion" you can stream successfully at a few hundred KB/s, with some blur at larger resolutions.
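
Since contentHint support varies across browsers, it may also be worth feature-detecting before relying on it. This read-back check follows the pattern shown on MDN (browsers silently ignore values they don't accept), reusing `track` from the example above:

if ("contentHint" in track) {
  track.contentHint = "motion";
  // Unsupported values are silently ignored, so read it back to verify.
  if (track.contentHint !== "motion") {
    console.warn("contentHint 'motion' was not accepted by this browser");
  }
}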

Could this be worthwhile sharing with users reading the documentation, to improve their clients a bit?


Video content hints
"" No hint has been provided, the implementation should make its best-informed guess on how contained video content should be treated. This can for example be inferred from how the track was opened or by doing content analysis.
"motion" The track should be treated as if it contains video where motion is important. This is normally webcam video, movies or video games. Quantization artefacts and downscaling are acceptible in order to preserve motion as well as possible while still retaining target bitrates. During low bitrates when compromises have to be made, more effort is spent on preserving frame rate than edge quality and details.
"detail" The track should be treated as if video details are extra important. This is generally applicable to presentations or web pages with text content, painting or line art. This setting would normally optimize for detail in the resulting individual frames rather than smooth playback. Artefacts from quantization or downscaling that make small text or line art unintelligible should be avoided.
"text" The track should be treated as if video details are extra important, and that significant sharp edges and areas of consistent color can occur frequently. This is generally applicable to presentations or web pages with text content. This setting would normally optimize for detail in the resulting individual frames rather than smooth playback, and may take advantage of encoder tools that optimize for text rendering. Artefacts from quantization or downscaling that make small text or line art unintelligible should be avoided.


Audio content hints
"" No hint has been provided, the implementation should make its best-informed guess on how to handle contained audio data. This may be inferred from how the track was opened or by doing content analysis.
"speech" The track should be treated as if it contains speech data. Consuming this signal it may be appropriate to apply noise suppression or boost intelligibility of the incoming signal.
"speech-recognition" The track should be treated as if it contains data for the purpose of speech recognition by a machine. Consuming this signal it may be appropriate to boost intelligibility of the incoming signal for transcription and turn off audio-processing components that are used for human consumption.
"music" The track should be treated as if it contains music data. Generally this might imply tuning or turning off audio-processing components that are used to process speech data to prevent the audio from being distorted.


If we could somehow encourage users to explore these options once their streams are working and set up right, they'd perhaps love this additional information.

We are definitely not gonna document WebRTC or MediaCapture things in the mediasoup docs.

Worth a shot. I just noticed that the default settings in many browsers were really hurting screen-broadcast performance, which was my key focus, and I didn't find this until a year into research. I know there's a tricks section where it'd be appropriate, but it's your call, sir. If you reconsider, I think it'd help many, as it's helped me and my users appreciate the feature more, even outside broadcasting.