DataChannel endpoint: are DirectTransports really needed?

I’m trying to connect two Routers on two different machines using a PipeTransport, and my plan is to use a DataChannel as the signaling channel to send the Consumer and Producer events between them, so there’s no need for an external signaling server (except for the initial handshake).
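
For context, this is roughly the setup I mean between the two machines (a sketch assuming mediasoup v3; the IP, port and variable names are placeholders):

```js
// On each machine, create a PipeTransport with SCTP enabled
// (listenIp is a placeholder; use the interface you pipe over).
const pipeTransport = await router.createPipeTransport({
  listenIp: '0.0.0.0',
  enableSctp: true
});

// Exchange ip/port out of band (the initial handshake), then:
await pipeTransport.connect({ ip: remoteIp, port: remotePort });
```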

I’ve read the docs and the forums, and there seems to be little info and few examples about DataChannels / SCTP with mediasoup… According to mediasoup :: Communication Between Client and Server, it seems it’s possible to consume the DataChannel messages on the server side by adding a DirectTransport, creating a DataConsumer there, and connecting it to the DataProducer in the PipeTransport. That’s easy, but I’m a bit concerned about the extra DirectTransport. Maybe a stupid question, but can’t the messages be consumed directly from the PipeTransport DataProducer? Or do they need to be managed by the Router and later “sent” to another Router using a Transport, the only difference being that DirectTransport Consumers and DataConsumers don’t send data anywhere, they just stay inside the Node.js process? That is, are DirectTransport Consumers and DataConsumers in that sense comparable to GStreamer FakeSink, where you can get dumps of the data and manage it yourself, or to node-webrtc RtcVideoSink, where you can get raw video frames?
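
For reference, the pattern I understood from that docs page looks roughly like this (a sketch, assuming mediasoup v3):

```js
// Consume the piped DataProducer inside the Node.js process
// via an extra DirectTransport.
const directTransport = await router.createDirectTransport();

const dataConsumer = await directTransport.consumeData({
  dataProducerId: dataProducer.id
});

// Messages arrive in the Node.js process instead of being sent anywhere.
dataConsumer.on('message', (message, ppid) => {
  console.log('received SCTP message:', message.toString());
});
```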

In addition to that, it seems like DataConsumer.send() is only available on DataConsumers NOT created by a DirectTransport, since those are going to send the messages somewhere else (not to the local Node.js process), so this .send() method allows injecting messages to be sent elsewhere, right? That is, in that case, creating a DataConsumer in the PipeTransport and calling send() on it seems equivalent to creating a DataProducer in a DirectTransport and consuming it from the PipeTransport DataConsumer, doesn’t it? Why should I choose one over the other?

I missed another question: in terms of mediasoup memory consumption and CPU processing, is it OK to create a DataProducer / DataConsumer pair for each stream whose events I want to propagate, so the matching with their Producers and Consumers is implicit and direct, or is it better to have a single signaling DataChannel connection in the PipeTransport and send the Producer and Consumer events with their IDs, doing the matching of the events by hand?
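
To illustrate the second option, I mean something like this (a sketch; the DataProducer would live on a DirectTransport and be piped to the other server, and the event name and payload shape are made up):

```js
// One signaling DataChannel for everything; events carry IDs
// so the other side can do the matching by hand.
const signalDataProducer = await directTransport.produceData({
  label: 'signaling'
});

function notifyNewConsumer(consumer) {
  signalDataProducer.send(JSON.stringify({
    event: 'newConsumer',        // made-up event name
    consumerId: consumer.id,
    producerId: consumer.producerId
  }));
}
```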

You can definitely use DataChannels for signaling; it’s probably the more advanced approach and could potentially unlock some scaling power.

DataChannels should work locally; across remote instances, however (and I could be wrong), a PipeTransport between two servers won’t expose the DataChannels for some reason, as if they were locked to the Node.js process and their peers.

I may have this wrong, though, or it may not be implemented yet.

CPU consumption will be far greater than memory consumption; you can get away with 256 MB to 1 GB of RAM per core depending on your operations. Code it and see!

As for your signaling question: why not both? Keeping an ACK(nowledgement) with each end gives you a good keep-alive and crash handling (if the connection is lost).

I feel honored by a message like this, thanks :smiley:

What I’m talking about here is creating an SCTP (DataChannel) connection between the two servers and managing it from Node.js; it would not be exposed to clients, it’s just an implementation detail. Are you saying it might not work? According to the documentation, PipeTransport supports SCTP messages…

So, it’s better to have a single DataConsumer / DataProducer pair and do the mapping of the events to the Producers / Consumers by hand, isn’t it? I’m asking because the WebSocket spec says that multiple WebSocket instances must share a single TCP socket and multiplex messages, so mapping comes for free, and the DataChannel API and spec are based on the WebSocket ones, so maybe the same thing happens here and we get multiplexing for free too… maybe?

I was talking about Producer and Consumer IDs: with an SCTP connection for each pair, I would not need them. But you are right that ACK messages are a good idea; I wanted to send just the event names, since reliable messages would already provide some automated control, but I would probably need to use JSON-RPC anyway to get some crash handling.

It’s a maybe on DataChannels working over remotely connected PipeTransports; that’s what I had run a small test with.

As for the questions, there’s a lot going on to go into detail on each one!

Hopefully I understand.
DataChannels would in theory work the same, but they are much more raw; you’d need to make them behave the way you want (though the same goes for WSS). All Transports come with a UUID to uniquely identify them, so not much mapping is needed if those keys are known. For processing large events, like global messaging to an audience, you may want to map that entire list of users instead, to avoid having to loop through your Router.

It’s always ideal to keep a form of ACK to make sure delivery is successful; otherwise you may have lost an entire section of users without compensating for it. lol

You can consume a DataProducer with a normal DataConsumer created in a PipeTransport in the same Router as the DataProducer. No need for a DirectTransport here.
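
In other words, something like this (a minimal sketch):

```js
// Consume the DataProducer directly from the PipeTransport;
// the messages are then piped to the remote Router.
const dataConsumer = await pipeTransport.consumeData({
  dataProducerId: dataProducer.id
});
```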

This is a hack due to hot changes. All this will be refactored in v4.

Let’s hope DataChannels / SCTP work with PipeTransports between different servers… :-/ From reading the pipeToRouter() code, there shouldn’t be any problems.

I know; my question was about doing the mapping with ad-hoc anonymous functions or by iterating over a list, but now that I think about it… the first option could be more performant, but the second one saves memory, so it seems better to iterate and do the mapping by hand.

Yeah, not only to know that the connection was broken, but also which messages were in-flight :slight_smile:

Yes, I understood this part about routing the messages. My question was about consuming and processing them server-side, not just routing them.

Oh, great :smiley: Is there any actual development / estimated date / roadmap we can take a look at? Can we help you with it in some way? :slight_smile:

They do

Of course it consumes more, since the data goes into the Node.js event loop and back again in between, but that’s the way to go.

TIP: if Node’s single event loop (and single CPU core usage) is a problem, you can create Node Workers and have each of them manage a mediasoup Worker.
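
A rough sketch of that tip, using Node’s worker_threads (file names are placeholders):

```js
// --- main.js: spawn one Node Worker (thread) per CPU core ---
const { Worker } = require('worker_threads');
const os = require('os');

for (let i = 0; i < os.cpus().length; i++) {
  new Worker('./media-worker.js'); // placeholder file name
}

// --- media-worker.js: each Node Worker owns one mediasoup Worker ---
const mediasoup = require('mediasoup');

(async () => {
  const msWorker = await mediasoup.createWorker();
  const router = await msWorker.createRouter({ mediaCodecs: [] });
  // ...create transports, producers and consumers here.
})();
```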

Great :smiley:

I don’t think it will be an issue, but having a Node.js Worker for each mediasoup Worker is an interesting approach; I’ll take it into account :slight_smile: