Hi, I'm looking to implement conference recording in a web application that uses mediasoup. I have tried using Puppeteer to join the meeting as a ghost participant and then using the puppeteer-stream library to record it, but that uses too much CPU and the results (frame rates) are not very good. Can anyone suggest the best way to record a mediasoup conference? Any help will be much appreciated!
Recording is going to be CPU intensive because of the video/audio decoding and capturing. How much CPU is it taking on your side? You are already using a good approach, which uses the extension API to capture the page stream rather than the classic canvas-based solution.
There is another way: run Puppeteer without headless mode, use Xvfb as a virtual display on the server, and capture everything directly with getDisplayMedia. That way no MediaRecorder is needed and no transfer of data from the browser to Node.js is required, which can definitely reduce CPU usage.
Hey, thanks for your reply. I have used the new headless mode (getDisplayMedia works in that) but was unable to capture tab audio. Also, you said no MediaRecorder here, so how do I record it without MediaRecorder?
How did you use it?
The one I mentioned here is different: you don't use the puppeteer-stream library; instead you use plain Puppeteer, launch the browser in headful mode, and use Xvfb as a virtual display on the server. That lets you capture video and audio using getDisplayMedia right from the browser.
Yes, I had tried that but was unable to capture tab audio for some reason.
Yes, but that will also use MediaRecorder, right?
No, you can capture both video and audio directly from Xvfb via getDisplayMedia, without using MediaRecorder.
Using FFmpeg (x11grab and PulseAudio), right?
Not sure, but we used it a while ago and we were able to get both video and audio with Xvfb.
@HGB467 we use Xvfb and getDisplayMedia to capture both audio and video from the tab. Below are the options we use for getDisplayMedia:
const constraints = {
  video: true,
  audio: {
    channelCount: 1,
    sampleRate: 16000,
    sampleSize: 16,
    volume: 1,
    echoCancellation: false,
    noiseSuppression: false,
  },
  systemAudio: 'include',
  preferCurrentTab: true,
};
These two parameters are the ones that make getDisplayMedia capture the tab's audio:
systemAudio: 'include',
preferCurrentTab: true,
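In case it helps, the call on the page side is then just this (a minimal sketch; it has to run inside the page in an async context, and what you do with the returned stream afterwards is up to you):

// Run inside the recording tab, e.g. via page.evaluate(), in an async function.
const stream = await navigator.mediaDevices.getDisplayMedia(constraints);
// With preferCurrentTab + systemAudio the stream should contain one video track
// and one audio track for the current tab.
const [videoTrack] = stream.getVideoTracks();
const [audioTrack] = stream.getAudioTracks();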
These are the launch arguments we pass to Puppeteer:
const puppeteerArgs = [
  '--autoplay-policy=no-user-gesture-required',
  '--enable-usermedia-screen-capturing',
  '--allow-http-screen-capture',
  '--no-sandbox',
  '--auto-select-desktop-capture-source=Go-live',
  '--disable-setuid-sandbox',
  '--disable-web-security',
  '--use-gl=egl',
  '--disable-gpu',
  '--enable-webgl-image-chromium',
  '--start-maximized',
  '--start-fullscreen',
  '--enable-webgl-developer-extensions',
  '--enable-webgl-draft-extensions',
];
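To illustrate how that fits together (a sketch, not our exact code; it assumes Xvfb is already running on display :99 and the room URL is hypothetical):

const puppeteer = require('puppeteer');

(async () => {
  // Xvfb started beforehand, e.g.: Xvfb :99 -screen 0 1280x720x24 &
  const browser = await puppeteer.launch({
    headless: false,                          // headful, rendered onto the virtual display
    args: puppeteerArgs,                      // the argument list above
    env: { ...process.env, DISPLAY: ':99' },  // point Chromium at the Xvfb display
  });
  const page = await browser.newPage();
  await page.goto('https://example.com/room/123', { waitUntil: 'networkidle2' });
  // From here the page itself calls getDisplayMedia with the constraints above.
})();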
Might I suggest a different approach?
It seems that your workflow has overhead that may be unnecessary (hence your CPU issues). This is how I conceptualize what you're doing:
Step 1: (source client: raw stream -> encode -> producer) -> RTP stream ->
Step 2: (mediasoup server: producer -> router -> consumer) -> RTP stream ->
Step 3: (puppeteer client: consumer -> decode -> raw stream -> encode)
In Step 3 you're transcoding the same stream unnecessarily because you're using Puppeteer. I'm not sure if you can get around this with Puppeteer because it's built on libwebrtc, which I think doesn't give you access to the raw stream. You have other options:
If you're also developing the mediasoup server, you can use the server-side consumer.on('rtp') event to capture the raw RTP stream (reference: mediasoup :: API). You'll have to extract the encoded video from the RTP stream (no idea how to do that, but it's probably not that hard). FFmpeg can probably do that (StreamingGuide – FFmpeg); naturally VLC too. The point is you want to avoid decoding (and especially encoding) the video, since that's where the CPU gets taxed the most.
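For illustration only (a sketch, not a tested recipe): if you route the consumer out over a plain RTP socket, for example via mediasoup's PlainTransport, to a known IP/port, ffmpeg can usually write the stream to a file without re-encoding by reading an SDP description of it. Every value below (IP, port, payload type, codec) is a placeholder:

# input.sdp - describes where the RTP stream arrives and which codec it carries
v=0
o=- 0 0 IN IP4 127.0.0.1
s=mediasoup recording
c=IN IP4 127.0.0.1
t=0 0
m=video 5004 RTP/AVP 101
a=rtpmap:101 VP8/90000

# copy the encoded video straight into a container, no decode/encode step
ffmpeg -protocol_whitelist file,udp,rtp -i input.sdp -c copy recording.webm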
If you're not developing the mediasoup server, that's going to be a bit more challenging. The hard part is figuring out the RTP stream's port/IP and connecting it to something like VLC in headless mode to save the contents of the RTP stream.
Good luck!
This approach is even more performant, but involving ffmpeg or GStreamer still complicates things. I think people prefer the headless-browser approach because it gives them freedom over the streams, and especially over the design: you can easily change the appearance of the recorded session.
Hey, thank you for this solution. The problem here is how do we get that stream into Node.js land, or save it to a file? Do we use mediasoup to produce it and then get access to it through a direct consumer?
You will have this stream on the browser side, and these are some ways to use it:
- produce it to the mediasoup server
- or upload this stream to a Node.js server using some API
- or use MediaRecorder to upload the chunks to the Node.js side and save them to a file (see the sketch below)
- or there may be a way in Puppeteer to access the stream directly in Node.js, but that will require some R&D
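A minimal sketch of the MediaRecorder option, since that is the one most people reach for first (the /upload-chunk endpoint, port and file name are made up for illustration):

// Browser side: record the getDisplayMedia stream and POST each chunk.
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8,opus' });
recorder.ondataavailable = async (event) => {
  if (event.data.size > 0) {
    await fetch('/upload-chunk', { method: 'POST', body: event.data }); // hypothetical endpoint
  }
};
recorder.start(1000); // emit a chunk roughly every second

// Node.js side (Express sketch): append each chunk to a single webm file.
const fs = require('fs');
const express = require('express');
const app = express();
app.post('/upload-chunk', express.raw({ type: '*/*', limit: '50mb' }), (req, res) => {
  fs.appendFileSync('recording.webm', req.body);
  res.sendStatus(200);
});
app.listen(3000);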
Not sure if you were referring to my reply, but assuming you were: on the Node server, I believe you would use a DirectTransport (mediasoup :: API - the documentation explains the process). I think the workflow is that you would create a DirectTransport on the same Router (a) where your stream is being produced (source client -> producer -> WebRtcTransport -> Router [a] -> Node server producer). Then you DirectTransport.consume() the Node server producer. Finally, use DirectTransport.on('rtcp') or consumer.on('rtp') to capture the packets on the Node server.
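For what it's worth, a rough sketch of that workflow against the mediasoup v3 API might look like this (the router and producer are assumed to already exist; error handling and RTCP handling are omitted):

// Create a DirectTransport on the same router that holds the producer.
const directTransport = await router.createDirectTransport();

// Consume the existing producer over the DirectTransport.
const consumer = await directTransport.consume({
  producerId: producer.id,
  rtpCapabilities: router.rtpCapabilities,
});

// Raw RTP packets now arrive in Node as Buffers.
consumer.on('rtp', (rtpPacket) => {
  // Depayload the packet (or hand it to ffmpeg/GStreamer) to recover the encoded frames;
  // no decoding or re-encoding happens here.
});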
But with this approach you are recording individual streams, not the entire page with all the layout and mixed sound.
Agreed - wasn't sure what his use case and server setup were. He was saying that he was CPU-bound, so I figured he might need to work around the transcoding.
Thank you so much for these solutions. I had tried a few of them before as well. What I have come to know is that the best way to capture the display is by using FFmpeg's x11grab and PulseAudio (or ALSA). I am currently trying that and getting good results.
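For anyone who lands on this thread later, a typical command for that looks roughly like this (a sketch: the display number, resolution, frame rate, PulseAudio source and encoder settings all depend on your setup, and the audio source usually has to be the monitor of the sink the browser plays into):

# Capture the Xvfb display :99 plus a PulseAudio source into an mp4.
ffmpeg -f x11grab -framerate 25 -video_size 1280x720 -i :99 \
       -f pulse -i default \
       -c:v libx264 -preset veryfast -pix_fmt yuv420p \
       -c:a aac \
       recording.mp4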