I can share the screen normally.
But how do I share the screen and capture audio from the shared screen?
For example, I share a YouTube tab playing a playlist, and I want users who enter the room to be able to hear the audio coming from that share.
I tried this approach below but to no avail. Do I need to define any other parameters?
It should work the same as plain video/audio. The first thing to get is the screen-share stream with audio, which I think you are already getting.
After that you will have two tracks: the screen-share video track and the screen-share audio track. You will have to produce both of these tracks instead of one (which is what happens in a normal screen share without audio).
After that, consume both tracks on the other side. Done.
Capturing audio with getDisplayMedia is only supported by some browsers, and what gets captured depends on the browser and OS. So forget it and just use getUserMedia to get an audio track, unless you have some very specific task in mind.
Good point. But with getUserMedia he will not be able to capture the audio of that specific screen, i.e. the YouTube tab, which he wants to capture using getDisplayMedia.
He will have to do this only on supported browser and leave those which don’t support it.
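One way to handle both cases in a single path is to try getDisplayMedia with audio first and fall back to a microphone track when the browser/OS returns no capture audio. A rough sketch (the `getShareTracks` helper and its `mediaDevices` parameter are my own naming, not from the thread; in the browser you would pass `navigator.mediaDevices`):

```javascript
// Hypothetical sketch: prefer tab/system audio from getDisplayMedia, but fall
// back to a microphone track from getUserMedia when the browser/OS did not
// provide capture audio. mediaDevices is injected so the logic is testable;
// in real use, call getShareTracks(navigator.mediaDevices).
const getShareTracks = async (mediaDevices) => {
  const stream = await mediaDevices.getDisplayMedia({ video: true, audio: true });
  const videoTrack = stream.getVideoTracks()[0];
  let audioTrack = stream.getAudioTracks()[0]; // may be undefined on unsupported browsers/OSes
  if (!audioTrack) {
    // No capture audio available; grab the microphone instead.
    const mic = await mediaDevices.getUserMedia({ audio: true });
    audioTrack = mic.getAudioTracks()[0];
  }
  return { videoTrack, audioTrack };
};
```

The trade-off stands, though: the microphone fallback gives you the presenter's voice, not the audio of the shared tab.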
Mobile browsers don’t really support screen share, so we can probably disable that for the most part there. Desktop browsers all vary in how they work; it’s preference-based, but they all work the same way for acquiring the video/audio tracks.
Essentially when you get the stream from a screen-share it will always contain video, audio is optional so we just perform a getAudioTracks check and if it’s valid we produce that as well.
Here’s an example, may not be correct…
const startScreenSharing = async () => {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
  const transport = await createTransport();
  const videoProducer = await transport.produce({ track: stream.getVideoTracks()[0] });

  // Check if an audio track is available before creating an audio producer
  const audioTrack = stream.getAudioTracks()[0];
  if (audioTrack) {
    const audioProducer = await transport.produce({ track: audioTrack });
    // Send producer information to the server
    socket.emit('newProducer', { videoProducerId: videoProducer.id, audioProducerId: audioProducer.id });
  } else {
    // Send producer information to the server without an audio producer
    socket.emit('newProducer', { videoProducerId: videoProducer.id });
  }
};
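On the receiving side, the 'newProducer' payload may or may not carry an audioProducerId, so the consumer should consume whichever ids are present. A rough sketch (the `producerIdsFrom` helper and `requestConsume` are assumed names, not real mediasoup API; the wiring in the comment mirrors the usual recv-transport flow):

```javascript
// Hypothetical sketch of the consuming side. The sender above emits
// 'newProducer' with videoProducerId and, only when audio capture worked,
// audioProducerId. This helper extracts whichever producer ids are present.
const producerIdsFrom = (payload) =>
  [payload.videoProducerId, payload.audioProducerId].filter(Boolean);

// In the browser, the handler would look roughly like this (not runnable here):
// socket.on('newProducer', async (payload) => {
//   const stream = new MediaStream();
//   for (const producerId of producerIdsFrom(payload)) {
//     // requestConsume: assumed helper that asks the server to create a
//     // consumer and calls recvTransport.consume() with the returned params.
//     const consumer = await requestConsume(producerId);
//     stream.addTrack(consumer.track);
//   }
//   remoteVideo.srcObject = stream; // attach to a <video autoplay> element
// });
```

This keeps one code path whether or not the share included audio.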
I’d avoid ternary conditions and simplify the functions until you sort this out fully.