$$$ for a snippet of code for horizontal scaling using pipeTransports

Hi All,
First of all, thanks to the mediasoup team for such an amazing project, it’s truly a pleasure to work with.
I am putting together a prototype for scaling mediasoup across physical servers.
I know how to connect the 2 servers using pipeTransports, and I am implementing a version of what “pipeToRouter” does but across the physical hosts.
Since I am fairly new to mediasoup, I would like to compare notes with somebody who has done this in a production setting, to make sure I build this the right way.
Willing to pay $$$. I need this ASAP, preferably today/this weekend.

The goal is to have the presenterRouter on host1 “replicated” on host2, so that viewerRouters on host2 can serve the producers of presenterRouter1 on host1 (few-to-many broadcast).

On host1 I have worker1 with presenterRouter1, and this presenterRouter is being “pipedToRouter” to all the other viewerRouters on the same host.
Additionally, on this host I am calling presenterRouter.createPipeTransport(…)
On host2 I have all workers and viewerRouters ready to accept connections.

Now, from the other physical server, I am getting the ip/port of presenterRouter1 on host1 and connecting to it.

Looking to get real production-ready code for what needs to happen “next” on host2 to achieve the goal above.
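For concreteness, a minimal sketch of the cross-host handshake being described, assuming the pipeTransports are already created on each host. The `PipeConnectInfo` shape and the helper are invented for illustration; the `tuple.localIp` / `tuple.localPort` fields come from mediasoup's PipeTransport API, but the argument is typed structurally so the sketch stands alone without mediasoup installed:

```typescript
// Minimal shape of what host1 must send to host2 (and vice versa)
// so each side can call pipeTransport.connect({ ip, port }).
interface PipeConnectInfo {
  ip: string;    // pipeTransport.tuple.localIp on the sending host
  port: number;  // pipeTransport.tuple.localPort on the sending host
}

// Build the signalling payload from a mediasoup PipeTransport-like
// object (typed structurally so this runs without mediasoup).
function pipeConnectInfo(transport: {
  tuple: { localIp: string; localPort: number };
}): PipeConnectInfo {
  return { ip: transport.tuple.localIp, port: transport.tuple.localPort };
}

// On host2, after receiving host1's info over your signalling channel:
//   await localPipeTransport.connect({ ip: info.ip, port: info.port });
```

Host2 would receive this payload over whatever signalling channel connects the two servers (the handshake itself is outside mediasoup) and call `connect()` on its own pipeTransport with it.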


I’ve been thinking of doing something similar, but haven’t had time yet to work on it. Subscribing to the issue to get notified in case there’s some interesting discussion here :slight_smile:

It sounds like you already have 95% of the work done. It would be helpful if you described specifically what your question is - what is the blank you want filled in? It sounds like this is more of an infrastructure question rather than how to use the code, since you’ve already figured that out. If so, would you describe your network topology/capability, number of servers, number of clients, and scaling needs?

A few ways to consider doing this at mass scale:

  • Chaining servers. Server A streams to B which passes the stream off to C and so on. Least strain on individual servers, but adds latency at each hop.

  • Variation on the first idea, except A streams to B and C, which each stream to two sub servers, and so on.

  • A streams to all servers B-Z, and they stream to the clients. High load on A but minimal latency.

  • Multicast for devices on the same subnet. I have no idea if this is possible with WebRTC/mediasoup, but worthwhile investigating since it could help reduce strain on that subnet’s networking equipment for massive deployments.
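To compare the tree-shaped options above, a quick back-of-the-envelope helper (purely illustrative, none of this is mediasoup API) showing how server count and hop count trade off when each server can feed a limited number of downstream peers:

```typescript
// Capacity planning for a fan-out relay tree (the options above).

// Leaf relay servers needed to reach `viewers` clients when each
// server can stream to at most `capacity` downstream peers.
function leafServers(viewers: number, capacity: number): number {
  return Math.ceil(viewers / capacity);
}

// Relay layers between the origin server and the leaf servers, when
// each relay fans out to `fanout` child servers (the A -> B,C -> ... option).
// Computed iteratively to avoid floating-point log() edge cases.
function relayDepth(leaves: number, fanout: number): number {
  let depth = 0;
  let reach = 1; // servers reachable at the current depth
  while (reach < leaves) {
    reach *= fanout;
    depth++;
  }
  return depth;
}
```

For example, 10,000 viewers at 100 streams per server need 100 leaf servers; with a fan-out of 2 that is 7 relay layers of added latency, versus a single hop (but 100 outgoing pipes from A) in the flat "A streams to all servers" option.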

Hi dimochka, sorry if I was not clear. The part that I am missing is the “pipeToRouter” implementation, but for sending a specific producer on router1 on host1 through the pipeTransport to viewerRouter2 on host2.

Hi @BronzedBroth ,
the only part that I am missing is explained above

You’re trying to connect two different IPs, utilizing the PipeTransport method, for a producer-connected-to-consumers-only scenario?

  1. When the consumer side comes online, it sends a createPipeTransport message (via your signalling method); this shares its listenIp, and the producer side creates its own pipe and connects to that IP on receipt. If the producer side comes online first, it tells the consumer side to create the pipeTransport first and then send the listen IP to connect to.

  2. Once connected, you produce a stream over the pipe; when someone wants to consume it, take that producer ID, re-broadcast it from the producer server over to the consumer server, re-consume it there, and send it out to anyone who reaches that server and sees it’s already available.

  3. Not really a third, it’s two giant steps. :slight_smile:
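The ordering in the two steps above can be written down as a tiny signalling plan. The message names here are invented for illustration (they are not mediasoup API); whichever side comes online second (the "newcomer") kicks things off:

```typescript
type Side = "producerHost" | "consumerHost";

// Ordered handshake steps for setting up a pipe between two hosts,
// following step 1 above: the newcomer creates its pipeTransport and
// shares its listen IP; the peer creates its own pipe and connects,
// then shares its listen IP back so the newcomer can connect too.
function handshakePlan(newcomer: Side): { from: Side; action: string }[] {
  const peer: Side =
    newcomer === "producerHost" ? "consumerHost" : "producerHost";
  return [
    { from: newcomer, action: "createPipeTransport" },
    { from: newcomer, action: "sendListenInfo" }, // listenIp + port
    { from: peer, action: "createPipeTransport" },
    { from: peer, action: "connect" },            // connect({ ip, port })
    { from: peer, action: "sendListenInfo" },
    { from: newcomer, action: "connect" },
  ];
}
```

Both sides end up connected to each other, since a mediasoup PipeTransport needs `connect()` called on each end.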

Thanks for clarifying. So it sounds like you have pipeTransports already set up. You are trying to implement a method that takes a producer on mediasoup server A, consumes it on A to send it to mediasoup server B, and then have it appear as a producer on B. The same way that pipeToRouter transforms a consumer on router A into a producer on router B.

I’m not exactly sure how best to do this, but the good news is the code already exists in pipeToRouter and is super readable. Check out the method in Router.ts in the mediasoup server code - it should be easy to adapt to your use case:

				pipeConsumer = await localPipeTransport!.consume(
					{
						producerId : producerId!
					});

				pipeProducer = await remotePipeTransport!.produce(
					{
						id            : producer.id,
						kind          : pipeConsumer!.kind,
						rtpParameters : pipeConsumer!.rtpParameters,
						paused        : pipeConsumer!.producerPaused,
						appData       : producer.appData
					});
So you’d want to use the consume() call on transport A, transmit the rtpParameters over the network separately (e.g. via websocket), then pass those to the produce() call on transport B.
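To make that hand-off explicit, here is the field mapping from the pipeToRouter snippet factored into a plain function: this is what server A would serialize over the websocket and server B would feed to `produce()`. The mediasoup calls themselves are only shown in comments, so the sketch runs without mediasoup installed:

```typescript
// What server A sends over the websocket after
//   const pipeConsumer = await pipeTransportA.consume({ producerId });
// and what server B passes to
//   await pipeTransportB.produce(options);
interface PipeProduceOptions {
  id: string;
  kind: "audio" | "video";
  rtpParameters: unknown;  // mediasoup RtpParameters, serialized as JSON
  paused: boolean;
  appData: Record<string, unknown>;
}

// Mirrors the field mapping in mediasoup's Router.pipeToRouter(),
// with the producer and pipeConsumer typed structurally.
function pipeProduceOptions(
  producer: { id: string; appData: Record<string, unknown> },
  pipeConsumer: {
    kind: "audio" | "video";
    rtpParameters: unknown;
    producerPaused: boolean;
  }
): PipeProduceOptions {
  return {
    id            : producer.id, // reuse the original producer id on B
    kind          : pipeConsumer.kind,
    rtpParameters : pipeConsumer.rtpParameters,
    paused        : pipeConsumer.producerPaused,
    appData       : producer.appData,
  };
}
```

Reusing `producer.id` on server B means clients on either host can refer to the same stream by the same ID.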

This is my best guess. I’ve never actually done this :joy:

Thank you all. I’ll play with it this weekend! Thanks.

1 Like

Over the last few weeks I’ve been working on Mafalda SFU, an implementation of a massively parallel SFU based on Mediasoup.

So far I have implemented vertical scaling (using multiple CPUs in a single machine). I’ve designed it to be as simple as possible to use: just create an instance of MafaldaRouter (with an API heavily influenced by the Mediasoup Router API), and Mafalda will manage the scaling itself, totally transparently to the user. Its main features (in addition to the simplicity of usage) are that it’s designed for performance and minimal use of resources, and that the code has strict 100% test coverage in lines, functions, branches and statements.

I’m currently working on implementing horizontal scaling (multiple machines), and after I have a simple implementation, my plan is to implement a federated discovery mechanism to allow decentralized scaling across the network of Mafalda instances, inspired by P2P networks.

At this moment Mafalda is closed source, but I’m open to talking about selling usage and maintenance licenses, doing integrations or customizations, or any other kind of offer or collaboration you are interested in. You can contact me faster by sending an email to jesus.leganes.combarro@gmail.com.