Is it possible to exchange the media track of a producer or consumer on the server side?
According to the documentation, feeding a producer with a MediaStream/track seems to be possible only via the mediasoup-client library. The server-side mediasoup module keeps, among other things, the RTP parameters, but provides no access to the track itself. Is that correct?
So if I intend to manipulate a track on the server side and serve that to the peer consumers instead of the original one, I would have to build a separate client that consumes the track, manipulates it and returns it to the server through a new producer instance.
Could you specify that? I’m not sure where to use a PlainTransport.
Let’s assume I want to compose two media streams from two conference participants with ffmpeg.
Do I need to create a WebRtcTransport from each of the clients to get the producers on the server, then create a PlainTransport from the server to a third client (say a “track manipulation” client) on which, for instance, ffmpeg does the composition, and then route back to the server via another PlainTransport?
Or do I need to create a PlainTransport from the two source clients and do the rest on the server itself?
Do I actually have access to the tracks on the server with a PlainTransport, so that I could, for example, feed a canvas with these tracks?
This won’t work, because you can’t connect a PlainTransport to a libwebrtc-based client.
I’m not sure what you mean by “access to tracks”, but a PlainTransport will send RTP packets to an endpoint (like ffmpeg) where you can do whatever you want with them: decoding, manipulating, reencoding, etc. Keep in mind that all this will introduce latency and cost CPU cycles.
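To make the PlainTransport path concrete, here is a minimal sketch of piping an existing producer’s RTP out to an external endpoint such as ffmpeg, using the mediasoup v3 server API. The `router` and `videoProducer` objects, as well as the IPs and ports, are assumptions for illustration; how ffmpeg consumes the RTP (e.g. via an SDP file) is not shown.

```javascript
// Sketch, assuming `router` is an existing mediasoup Router and
// `videoProducer` an existing Producer. IPs/ports are example values.
async function pipeProducerToFfmpeg(router, videoProducer) {
  // PlainTransport sends plain RTP (no ICE/DTLS) to any RTP endpoint.
  const transport = await router.createPlainTransport({
    listenIp: '127.0.0.1',
    rtcpMux: false,
    comedia: false
  });

  // Tell mediasoup where the external process (ffmpeg) listens for RTP/RTCP.
  await transport.connect({ ip: '127.0.0.1', port: 5004, rtcpPort: 5005 });

  // Consume the producer over the plain transport; ffmpeg then receives
  // the RTP packets and can decode/compose/re-encode them.
  const consumer = await transport.consume({
    producerId: videoProducer.id,
    rtpCapabilities: router.rtpCapabilities
  });

  return consumer;
}
```

The composed result produced by ffmpeg can then be sent back into mediasoup through a second PlainTransport and `transport.produce()`.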
Another idea of mine was to use a canvas on the server side to dynamically position two or more videos next to each other, capture the canvas, and use the resulting MediaStream as a new producer. Kind of a “fake MCU”. Locally I use video elements (with MediaStreams as source) as sources for the canvas. But I’m not sure whether that’s a good idea anyway.
But I guess I can use RTP packets as a source too, right?
Either doing it with a browser (on the previously mentioned third client for composing) and using the HTML canvas, or, as you say, with a headless browser on the server itself (or on the third client), e.g. with node-canvas.
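The canvas approach described above can be sketched like this: a pure helper computes the tile positions, and a browser-side function draws the participants’ `<video>` elements onto one canvas and captures the result. All names and the 1280×360 layout are illustrative assumptions.

```javascript
// Pure helper: compute side-by-side tile positions for n participants.
function tileLayout(n, width, height) {
  const tileWidth = Math.floor(width / n);
  return Array.from({ length: n }, (_, i) => ({
    x: i * tileWidth, y: 0, w: tileWidth, h: height
  }));
}

// Browser-only part (illustrative): draw the given <video> elements onto
// one canvas and capture the result as a single MediaStream.
function startCompositing(videos, width = 1280, height = 360) {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  const tiles = tileLayout(videos.length, width, height);

  (function draw() {
    videos.forEach((v, i) => {
      const t = tiles[i];
      ctx.drawImage(v, t.x, t.y, t.w, t.h);
    });
    requestAnimationFrame(draw);
  })();

  // 30 fps stream whose video track can be fed to a new producer.
  return canvas.captureStream(30);
}
```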
But for both solutions I need to use a WebRtcTransport, if I understood you correctly?
I was mainly looking for a good way to compose videos, such that the composition can be consumed as one stream (i.e. two producers/consumers, audio & video).
The simplest solution seems to be to capture a canvas containing all WebRTC video streams of the conference locally on one of the peers (or a kind of broadcaster), and then create two new producers for audio and video. That way all peers would only have to consume this composition instead of the streams of each individual peer.
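Producing the composed canvas stream back to the server could look roughly like this with mediasoup-client. It assumes a `sendTransport` already created via `device.createSendTransport()`, a `canvas` holding the composition, and a pre-mixed audio track; all of these are assumptions, not shown here.

```javascript
// Sketch with mediasoup-client. Assumes `sendTransport` exists and the
// compositing canvas is already being drawn; `mixedAudioTrack` is a
// hypothetical pre-mixed audio MediaStreamTrack.
async function produceComposition(sendTransport, canvas, mixedAudioTrack) {
  const stream = canvas.captureStream(30);
  const [videoTrack] = stream.getVideoTracks();

  // One producer per kind: the composed video and the mixed audio.
  const videoProducer = await sendTransport.produce({ track: videoTrack });
  const audioProducer = await sendTransport.produce({ track: mixedAudioTrack });

  return { videoProducer, audioProducer };
}
```

Every other peer then consumes just these two producers instead of one pair per participant.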
And I was wondering whether it’s possible to create such a composition on the server side with ffmpeg or a canvas, to reduce the load on the peer that would otherwise have to render the canvas.
You can do it on one of the peers as well if you have a one-to-one call, or maybe 1–5 participants; that should work well. You can draw on a canvas, capture its stream and then produce it as one stream on the server. But this will not work well with more than 5 participants, as the client device has its own limits.
But the better way is to do it on the server, and the easiest way is a headless browser via Puppeteer, as described by @jbaudanza.
I have been using a headless browser for quite some time for an MCU implementation. I think you need just one canvas; you can draw your video streams on it and that should work fine.
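Launching such a compositing peer could be sketched as below. The room URL and the idea that the page auto-joins the conference are assumptions; the Chromium flags shown are the ones commonly needed so that media autoplays and `getUserMedia` prompts don’t block a headless session.

```javascript
// Sketch: launch a headless Chromium that joins the room like any other
// peer. `roomUrl` points at a hypothetical page that auto-joins the call,
// composes on a canvas, and produces the result via mediasoup-client.
async function launchCompositor(roomUrl) {
  const puppeteer = require('puppeteer'); // npm install puppeteer

  const browser = await puppeteer.launch({
    headless: true,
    args: [
      '--autoplay-policy=no-user-gesture-required',
      '--use-fake-ui-for-media-stream' // auto-accept media permission prompts
    ]
  });

  const page = await browser.newPage();
  await page.goto(roomUrl);
  return browser; // keep a handle so the compositor can be shut down later
}
```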
Sorry for the late reply.
Thank you for sharing your experience. This helps me a lot and sounds exactly like what I want to do. I will try to make use of Puppeteer. Which brings me back to my initial question:
How do I draw the stream onto a canvas (in the headless browser) on the server from the producer/consumer without access to the media track? I need to have the stream in a video element as a source for the canvas, and I don’t get a track out of the WebRtcTransport. What am I missing?
You will open a page in the headless browser via Puppeteer. That page will use mediasoup-client to connect to your mediasoup server, create all the needed transports, consumers, producers etc., receive the stream from the server, set it on a video tag, and draw it from there onto the canvas.
It will be the same as joining a call from a normal browser. Try opening your app in a headless browser and you will understand it better.
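The missing piece from the earlier question (no track on the server) is that the track exists inside the headless page: mediasoup-client’s `consumer.track` is a regular `MediaStreamTrack` there. A minimal sketch of attaching it to a video element, with illustrative names:

```javascript
// Inside the headless page (browser context). `consumer` is a
// mediasoup-client Consumer; its `track` is a normal MediaStreamTrack.
function attachConsumer(consumer) {
  const video = document.createElement('video');
  video.srcObject = new MediaStream([consumer.track]);
  video.autoplay = true;
  video.muted = true; // allows autoplay without a user gesture
  document.body.appendChild(video);
  return video; // this element can now be drawn onto the canvas
}
```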