I have been taking the streams of all peers in the room and showing them in a div container. I have tried many ways but am not able to record the conference call. Please help me with this!
Look for “recording” in this forum. There are already many entries that will help you.
Hey, first of all thanks for the reply. There are entries for ‘recording’, but they are about recording a single call, not a conference call. I have been trying to record a conference call for weeks. Please help with this!
Believe it or not, we are not obligated to help anyone. Read the docs and related external projects.
Are you looking to record all the streams to the same file?
If so, you will probably need to record the streams separately and then use ffmpeg to merge them all into one file, which can be a nightmare.
The easiest way would probably be to have one user screen-capture the conference and record that stream.
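For the merge step, here is a sketch of what building such an ffmpeg invocation could look like. The file names are hypothetical, and the `hstack`/`amix` filters are just one possible choice (note that `hstack` also requires all inputs to have the same height):

```javascript
// Build an ffmpeg command that merges N separately recorded per-peer files
// into one side-by-side video with mixed audio. This is a sketch, not a
// complete solution: it ignores differing start times and resolutions.
function buildMergeCommand(inputs, output) {
  const inputArgs = inputs.flatMap((f) => ['-i', f]);
  const videoPads = inputs.map((_, i) => `[${i}:v]`).join('');
  const audioPads = inputs.map((_, i) => `[${i}:a]`).join('');
  const filter =
    `${videoPads}hstack=inputs=${inputs.length}[v];` +
    `${audioPads}amix=inputs=${inputs.length}[a]`;
  return ['ffmpeg', ...inputArgs, '-filter_complex', filter,
          '-map', '[v]', '-map', '[a]', output];
}

// Example: two recorded peers side by side.
const cmd = buildMergeCommand(['peer1.webm', 'peer2.webm'], 'merged.webm');
console.log(cmd.join(' '));
```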
A production-grade recording and playback solution is the single biggest weakness of MediaSoup today, IMO. Once that is solved, it will be the best “complete” option by far.
You have to understand: as amazing as MediaSoup is, you will have to spend most of your effort and time on creating production-quality recording and playback solutions. I wish it were different, but it is still far better than having to build everything yourself.
Bottom line: there is no good solution yet, because it is so hard to do. We created a production-quality solution for audio only and are working on the video part next. Why don’t we share it? Well, we cannot support it in a public group, and it is part of how we pay for all the effort it took to get it working in the first place.
Best of luck,
mediasoup is a media router, not a media endpoint. Recording means decoding, integration with libav, etc. That’s not what mediasoup is intended for.
Completely agree… Let me rephrase: “A production-grade recording and playback solution is today the single biggest weakness of the MediaSoup ecosystem.”
In fairness, no: recording does not necessarily mean “decoding”. There are multiple ways to record streams, some of which do not involve decoding, only depacketization and reconstruction of the compressed audio or video.
These are both of the methods used by the gstreamer/ffmpeg code I’ve seen floating around; they work, although only marginally.
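To illustrate the depacketize-without-decoding approach: ffmpeg can consume RTP described by an SDP file and stream-copy (`-c copy`) the packets into a container, without ever decoding. A sketch of generating such an SDP, assuming RTP is forwarded to localhost (e.g. via a mediasoup PlainTransport); the ports and payload types below are illustrative only:

```javascript
// Generate a minimal SDP describing two RTP streams (Opus audio, VP8 video)
// that ffmpeg can read. Payload types 100/101 must match what the router
// actually sends; these values are assumptions for the sketch.
function buildSdp({ ip, audioPort, videoPort }) {
  return [
    'v=0',
    `o=- 0 0 IN IP4 ${ip}`,
    's=mediasoup-recording',
    `c=IN IP4 ${ip}`,
    't=0 0',
    `m=audio ${audioPort} RTP/AVP 100`,
    'a=rtpmap:100 opus/48000/2',
    `m=video ${videoPort} RTP/AVP 101`,
    'a=rtpmap:101 VP8/90000',
  ].join('\n');
}

const sdp = buildSdp({ ip: '127.0.0.1', audioPort: 5004, videoPort: 5006 });
// Written to input.sdp, this could then be recorded without decoding via:
//   ffmpeg -protocol_whitelist file,udp,rtp -i input.sdp -c copy out.webm
console.log(sdp);
```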
That’s why we may add an “encoded frame based API” in the future.
Another thing you could do (I haven’t tested it yet), if you just need the active speaker: let ffmpeg record over RTP and have it wait for a new connection, then simply switch the producer and reconnect to ffmpeg. ffmpeg has a parameter, -stimeout 1000, which lets you set the timeout before it closes the RTP stream.
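At its core, that switch-and-reconnect idea is bookkeeping about which producer was “live” when: each stretch between speaker changes becomes one recorded segment, which can later be concatenated. A minimal sketch of that bookkeeping, in plain JS with no mediasoup calls (the event shape is an assumption):

```javascript
// Given a time-sorted list of dominant-speaker change events, return one
// segment per stretch during which a speaker stayed dominant. Each segment
// would correspond to one ffmpeg recording between reconnects.
function speakerSegments(events, endTime) {
  const segments = [];
  for (let i = 0; i < events.length; i++) {
    const from = events[i].time;
    const to = i + 1 < events.length ? events[i + 1].time : endTime;
    segments.push({ speakerId: events[i].speakerId, from, to });
  }
  return segments;
}

// Example: alice speaks from t=0, bob takes over at t=5, call ends at t=9.
const segs = speakerSegments(
  [{ time: 0, speakerId: 'alice' }, { time: 5, speakerId: 'bob' }],
  9
);
console.log(segs);
```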
Then, just like @ethand91 said, you can combine all the files together. I’m currently working on a tool that uses ffmpeg. Its only inputs are the paths to the video files (of the separately recorded users) and their start times. It then outputs a single video, just like a video conference, where you see multiple people coming on and off.
The layout is also customizable.
Now I’m just adding some more support for video-only or audio-only streams mixed in, and also for putting the most active speaker in front (based on audio, a preset, or something similar).
Still work in progress but I’ll probably release it soon.
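For readers wondering how such compositing can work: one approach is to assign each recorded file a grid cell and delay its video with `setpts` so it only appears at its start time, chaining ffmpeg `overlay` filters over a base canvas. This is a hedged sketch of the idea, not the actual tool’s implementation; all names and the 2-column grid are assumptions:

```javascript
// Assign each file a grid cell. files: [{ path, start }] (start in seconds).
function gridLayout(files, cellW, cellH, cols) {
  return files.map((f, i) => ({
    path: f.path,
    x: (i % cols) * cellW,
    y: Math.floor(i / cols) * cellH,
    start: f.start,
  }));
}

// Turn the layout into an ffmpeg overlay filter chain over a base canvas
// (labelled [base]). Each input is shifted with setpts so it shows up at
// its start time.
function overlayFilter(layout) {
  let chain = '';
  let prev = 'base';
  layout.forEach((l, i) => {
    chain += `[${i}:v]setpts=PTS-STARTPTS+${l.start}/TB[d${i}];`;
    const out = i === layout.length - 1 ? 'out' : `tmp${i}`;
    chain += `[${prev}][d${i}]overlay=${l.x}:${l.y}[${out}];`;
    prev = out;
  });
  return chain.slice(0, -1); // drop trailing ';'
}

// Example: two users, the second joining 3 seconds in.
const layout = gridLayout(
  [{ path: 'a.webm', start: 0 }, { path: 'b.webm', start: 3 }],
  640, 360, 2
);
console.log(overlayFilter(layout));
```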
I published a first version of the package that combines separately recorded videos into a single file with different video-call layouts:
Feel free to open pull requests and create issues; there is still room for improvement.
Thank you so much for providing access to the project.
Can we use separate .webm files as well for the merged recording?
And what happens if we give 0 as the starttime value for all the individually recorded files in the new Media() object?
I haven’t tested with .webm files yet, but it should not cause any problems, as they will be re-encoded.
If the starttime is 0 for every file, then they will all show up on screen at the same time in the chosen layout.
I am getting an error when running the example; I raised a GitHub issue for it. Please help.