Exchanging media tracks on the server

Is it possible to exchange the media track of a producer or consumer on the server side?

According to the documentation, it seems that feeding a producer with a MediaStream/track is only possible via the mediasoup-client library, while the server-side mediasoup module keeps, among other things, the RTP information but has no access to the track itself. Is that correct?

So if I intend to manipulate a track on the server side and use it instead of the original one for the peer's consumer, I would have to build a separate client that consumes the track, manipulates it and returns it to the server via a new producer instance.

Yes, this is usually done with a PlainTransport to ffmpeg/gstreamer/whatever.
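For context, a minimal server-side sketch of that pattern. The `router`, `videoProducer` and the 127.0.0.1:5004/5005 destination (where ffmpeg/gstreamer would listen) are assumptions, not anything prescribed by mediasoup:

```ts
import { types } from 'mediasoup';

// Sketch only: `router` and `videoProducer` are assumed to exist already,
// and 127.0.0.1:5004/5005 is wherever your ffmpeg/gstreamer process listens.
async function forwardProducerToFfmpeg(
  router: types.Router,
  videoProducer: types.Producer
): Promise<types.Consumer> {
  // A PlainTransport sends/receives plain RTP instead of WebRTC (ICE/DTLS).
  const plainTransport = await router.createPlainTransport({
    listenIp: { ip: '127.0.0.1' },
    rtcpMux: false,
    comedia: false
  });

  // Tell mediasoup where ffmpeg is listening for RTP and RTCP.
  await plainTransport.connect({ ip: '127.0.0.1', port: 5004, rtcpPort: 5005 });

  // Consume the producer over the plain transport; mediasoup will then push
  // the producer's RTP packets to ffmpeg.
  const rtpConsumer = await plainTransport.consume({
    producerId: videoProducer.id,
    rtpCapabilities: router.rtpCapabilities,
    paused: true
  });

  // Resume once ffmpeg is ready to receive.
  await rtpConsumer.resume();

  return rtpConsumer;
}
```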

Thanks for the fast reply @jbaudanza

Could you elaborate on that? I’m not sure where to use a PlainTransport.

Let’s assume I want to compose two media streams from two conference participants with ffmpeg.

Do I need to create a WebRtcTransport from each of the clients to get the producers on the server, then create a PlainTransport from the server to a third client (let’s say a “track manipulation” client) on which, for instance, ffmpeg does the composition, and then route the result back to the server via another PlainTransport?

Or do I need to create a plain transport from the two source clients and do the rest on the server itself?

Do I actually have access to the tracks on the server with a PlainTransport, so that I could, for example, feed a canvas with them?

Correct

This won’t work, because you can’t connect a PlainTransport to a libwebrtc-based client.

I’m not sure what you mean by “access to tracks”, but a PlainTransport will send RTP packets to an endpoint (like ffmpeg) where you can do whatever you want with them: decoding, manipulating, reencoding, etc. Keep in mind that all this will introduce latency and cost CPU cycles.
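To illustrate the receiving endpoint, here is a hedged sketch: it writes a minimal SDP describing the RTP that mediasoup pushes (reusing the `rtpConsumer` and the 5004/5005 ports from the earlier PlainTransport sketch, which are assumptions) and launches ffmpeg against it. The output/encoding arguments are placeholders for whatever manipulation you actually want to do:

```ts
import { writeFileSync } from 'fs';
import { spawn } from 'child_process';
import { types } from 'mediasoup';

// `rtpConsumer` is the consumer created on the PlainTransport in the sketch
// above; 5004/5005 must match the ports passed to plainTransport.connect().
function launchFfmpeg(rtpConsumer: types.Consumer): void {
  const codec = rtpConsumer.rtpParameters.codecs[0]; // e.g. video/VP8

  const sdp = [
    'v=0',
    'o=- 0 0 IN IP4 127.0.0.1',
    's=mediasoup',
    'c=IN IP4 127.0.0.1',
    't=0 0',
    `m=video 5004 RTP/AVP ${codec.payloadType}`,
    `a=rtpmap:${codec.payloadType} ${codec.mimeType.replace('video/', '')}/${codec.clockRate}`,
    'a=rtcp:5005'
  ].join('\n');

  writeFileSync('/tmp/mediasoup-stream.sdp', sdp);

  // Decode the incoming RTP and re-encode it; replace the output with the
  // composition/manipulation step you actually need.
  spawn('ffmpeg', [
    '-protocol_whitelist', 'file,udp,rtp',
    '-i', '/tmp/mediasoup-stream.sdp',
    '-c:v', 'libx264',
    '-f', 'mp4',
    '/tmp/output.mp4'
  ], { stdio: 'inherit' });
}
```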

Thank you so much for your explanation!

Another idea of mine was to use a canvas on the server side to dynamically position two or more videos next to each other, capture the canvas, and use the resulting MediaStream as a new producer. Kind of a “fake MCU”. Locally I use video elements (with MediaStreams as their source) as the source for the canvas. But I’m not sure if that’s a good idea anyway.

But I guess I can use RTP packets as a source too, right?

I’m not sure what “canvas server” is. Is that like running a headless Chrome/Firefox browser? If so, you’d use a WebRTCTransport.

I just meant implementing an HTML canvas in JavaScript code :slight_smile:
Either doing it in a browser (on the previously mentioned third client for composing) and using the HTML canvas there,
OR,
as you say, with a headless browser on the server itself (or the third client), with e.g. node-canvas.

But for both solutions I need to use a WebRTCTransport, if I got you right?

What is the functionality you want to achieve?

When I said headless browser, I meant using an actual browser like Chrome and running it in headless mode with something like Puppeteer. This would connect with a WebRTCTransport.

I don’t think you’re going to have much luck with node-canvas.
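For reference, a minimal sketch of that Puppeteer approach. The page URL is hypothetical; the page itself would run mediasoup-client and do the compositing:

```ts
import puppeteer from 'puppeteer';

// Hypothetical URL of a page that uses mediasoup-client to consume the
// conference streams, composes them on a canvas and produces the result back.
const COMPOSER_URL = 'https://example.com/composer';

async function startComposer(): Promise<void> {
  const browser = await puppeteer.launch({
    headless: true,
    args: [
      // Allow WebRTC/autoplay to work without user interaction or prompts.
      '--use-fake-ui-for-media-stream',
      '--autoplay-policy=no-user-gesture-required'
    ]
  });

  const page = await browser.newPage();
  await page.goto(COMPOSER_URL);
  // From here on, the page behaves like any other mediasoup-client peer and
  // connects to the server via WebRtcTransports.
}
```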

I was mainly looking for a good solution to compose videos, in a way that the composition can be used as one stream (= two producers/consumers, audio & video).

The simplest solution seems to be capturing a canvas with all WebRTC video streams of the conference locally on one of the peers (or some kind of broadcaster), and then creating two new producers for audio and video. That way all peers would only have to consume this composition instead of the streams of every individual peer.

And I was wondering if it’s possible to create such a composition on the server side with ffmpeg or a canvas, to reduce the load on the peer that would otherwise have to create the canvas.
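A rough sketch of the producing side of that idea, assuming a mediasoup-client send transport (`sendTransport`) that is already created and connected:

```ts
import { types } from 'mediasoup-client';

// Capture a canvas that already shows the composed conference video and
// publish it to the server as a single video producer.
async function produceComposedCanvas(
  sendTransport: types.Transport,
  canvas: HTMLCanvasElement
): Promise<types.Producer> {
  // 30 fps is an arbitrary choice for the composed output.
  const composedStream = canvas.captureStream(30);
  const [videoTrack] = composedStream.getVideoTracks();

  // One producer for the whole composition; audio would be mixed and
  // produced separately (e.g. via the Web Audio API).
  return sendTransport.produce({ track: videoTrack });
}
```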

Ok, I see.

So basically, wherever I want to create a canvas (of whatever kind), I need the output of a WebRTCTransport. Thanks!

You can do this on one of the peers as well if you have a one-to-one call or maybe 1-5 participants in the call; it should work well. You can draw on a canvas, then capture the stream and produce it as one stream on the server. But this will not work well if you have more than 5 participants, as the client device has its own limits.

But the better way is to do it on the server, and the easiest way is using a headless browser via Puppeteer as described by @jbaudanza.

I have been using a headless browser for quite some time for an MCU implementation. I think you need just one canvas: you can draw your video streams on it and that should work fine.
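A sketch of such a single-canvas draw loop, assuming two remote video elements whose srcObject is built from mediasoup-client consumer tracks (element names are illustrative):

```ts
// Continuously draw two remote video elements side by side onto one canvas.
function startCompositing(
  canvas: HTMLCanvasElement,
  videoA: HTMLVideoElement,
  videoB: HTMLVideoElement
): void {
  const ctx = canvas.getContext('2d');
  if (!ctx) throw new Error('2D canvas context not available');

  const draw = (): void => {
    // Left half: participant A, right half: participant B.
    ctx.drawImage(videoA, 0, 0, canvas.width / 2, canvas.height);
    ctx.drawImage(videoB, canvas.width / 2, 0, canvas.width / 2, canvas.height);
    requestAnimationFrame(draw);
  };

  requestAnimationFrame(draw);
}
```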

Sorry for the late reply.
Thank you for sharing your experience. This helps me a lot and sounds exactly like what I want to do. I will try to make use of Puppeteer. Which brings me back to my initial question:

How do I draw the stream onto a canvas (in the headless browser) on the server from the producer/consumer without access to the media track? I need to have the stream in a video element as a source for the canvas. I don’t get a track out of the WebRTCTransport. What am I missing?

You will open a page in the headless browser via Puppeteer. That page will use mediasoup-client to connect to your mediasoup server, create all the transports needed, the consumers/producers etc., and get the streams from the server. Those streams will be set on video tags and from there drawn onto the canvas.

It will be the same as the way you join a call from a normal browser. Try opening your app in a headless browser and you will understand it better.
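To make that concrete, a minimal sketch of the consuming side in that page. The consumer parameters are whatever your own signalling layer delivers from the server-side consume call; the function name is made up:

```ts
import { types } from 'mediasoup-client';

// `recvTransport` is a mediasoup-client receive transport; `params` are the
// consumer parameters (id, producerId, kind, rtpParameters) that your own
// signalling layer fetched from the server.
async function attachConsumerToVideo(
  recvTransport: types.Transport,
  params: {
    id: string;
    producerId: string;
    kind: types.MediaKind;
    rtpParameters: types.RtpParameters;
  },
  video: HTMLVideoElement
): Promise<void> {
  const consumer = await recvTransport.consume(params);

  // The actual MediaStreamTrack only exists here in the client, not on the
  // server; wrap it in a MediaStream so the <video> element can render it,
  // then draw that element onto the canvas.
  video.srcObject = new MediaStream([consumer.track]);
  await video.play();
}
```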

Ok, so I still need the mediasoup-client in the headless browser, I see.

I think I got the theory now and will try to make it work in my setup. And I’m sure it will work as intended.
I’ll give an update when it works or if I have any further question.

Thanks again for your help!

Yes, it will be a normal webpage which will use mediasoup-client, consume the streams and then draw them on a canvas or whatever.