We'd like to share our implementation and ask some questions.
We are working on a solution where we have one producer (video/audio) and thousands of viewers.
We deployed on AWS EC2 instances (m5n.8xlarge). We use 32 workers (= number of vCPUs) with 1 router per worker. We use pipeToRouter() across all the workers (plus a pipeToExternalRoute we implemented ourselves for piping to other instances in the cluster).
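For context, the fan-out across workers is roughly the following sketch, using mediasoup's `router.pipeToRouter()` (function and variable names here are hypothetical, not our actual code):

```javascript
// Sketch: pipe one producer from its origin router to every other
// router on this instance, so consumers can be spread across workers.
// `originRouter`, `routers` and `producer` are assumed to exist already.
async function fanOutProducer(originRouter, routers, producer) {
  for (const router of routers) {
    if (router === originRouter) continue;
    // pipeToRouter() creates (or reuses) a PipeTransport pair between
    // the two routers and re-creates the producer on the target router.
    await originRouter.pipeToRouter({
      producerId: producer.id,
      router
    });
  }
}
```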
In production we have several instances/machines.
We do not use simulcast or SVC, as we got very low-quality video when using simulcast (not sure why yet).
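In case it helps anyone reproduce the issue: one common cause of poor simulcast quality is producing without explicit per-layer bitrates. A typical client-side configuration with mediasoup-client looks like the sketch below (bitrate values are illustrative, not something we have validated):

```javascript
// Sketch: producing video with three simulcast layers (mediasoup-client).
// `sendTransport` and `videoTrack` are assumed to exist already.
const producer = await sendTransport.produce({
  track: videoTrack,
  encodings: [
    { maxBitrate: 100000, scaleResolutionDownBy: 4 }, // low layer
    { maxBitrate: 300000, scaleResolutionDownBy: 2 }, // mid layer
    { maxBitrate: 900000, scaleResolutionDownBy: 1 }  // high layer
  ],
  codecOptions: { videoGoogleStartBitrate: 1000 }
});
```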
Video is capped at ~100 KB/s (VP8) and audio is at ~4 KB/s (Opus).
We ran synthetic tests and found that each AWS instance can handle 3000 users (3000 video consumers + 3000 audio consumers).
With 3000 users and on one ec2 instance (m5n.8xlarge), we are at:
Network Out: 315MB/s
Network In: 5.3MB/s
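As a sanity check, the observed egress matches the per-viewer bitrates above (a back-of-the-envelope calculation, assuming every viewer consumes both streams):

```javascript
// Expected egress in MB/s: viewers × (video + audio) KB/s, divided by 1000.
function expectedEgressMBps(viewers, videoKBps, audioKBps) {
  return (viewers * (videoKBps + audioKBps)) / 1000;
}

// 3000 viewers at ~100 KB/s video + ~4 KB/s audio:
console.log(expectedEgressMBps(3000, 100, 4)); // 312, close to the measured 315 MB/s
```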
We run load tests by launching a mass of servers across multiple AWS regions, each running multiple Puppeteer instances. Each instance opens the web page with our session and saves a screenshot to a central machine, so we can review image quality plus the additional data we render on screen (bandwidth, etc.).
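A single headless viewer in that setup looks roughly like the sketch below (URL and screenshot path are placeholders; the Chromium flags are the standard ones for running with fake media devices so getUserMedia needs no interaction):

```javascript
// Sketch of one headless viewer used in a load test.
const puppeteer = require('puppeteer');

async function runViewer(sessionUrl, screenshotPath) {
  const browser = await puppeteer.launch({
    args: [
      // Auto-accept media prompts and use fake capture devices,
      // so thousands of instances can run unattended.
      '--use-fake-ui-for-media-stream',
      '--use-fake-device-for-media-stream'
    ]
  });
  const page = await browser.newPage();
  await page.goto(sessionUrl, { waitUntil: 'networkidle2' });
  // Give the video a few seconds to start before capturing.
  await new Promise((resolve) => setTimeout(resolve, 5000));
  await page.screenshot({ path: screenshotPath });
  await browser.close();
}
```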
To lower the bandwidth required from the producer (a standard mobile device), we set keyFrameRequestDelay to 4 seconds (we haven't implemented a "re-encoder" yet) and we request an ideal video resolution of 640x480 in getUserMedia.
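Concretely, the two settings look roughly like this (note that mediasoup's `keyFrameRequestDelay` is expressed in milliseconds; `rtpParameters` and `transport` are assumed to exist):

```javascript
// Client side: ask for a modest capture resolution.
const stream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: { width: { ideal: 640 }, height: { ideal: 480 } }
});

// Server side (mediasoup): throttle keyframe requests to at most
// one every 4 seconds, so many joining viewers don't each trigger one.
const producer = await transport.produce({
  kind: 'video',
  rtpParameters,
  keyFrameRequestDelay: 4000 // milliseconds
});
```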
Still, we have a lot of open areas and questions:
- Did you implement a "re-encoder" as described in https://mediasoup.org/documentation/v3/scalability/ ?
- What is the bandwidth required by the producer? Does it change when there are multiple viewers?
- Can you share how you managed to test with multiple OSs?
- Do you know what is the video quality your viewers receive?
- Can you share your simulcast configuration?