I run into an issue where, if I produce a stream to the media server and then consume it over a PipeTransport, the signaling to create/connect the PipeTransport may delay my ability to produce and consume (and throw errors). The issue is that the connection is not yet ready and things occur out of order.
I solve this with queues/callbacks over the network, but this blocks code.
If I call produce on a PipeTransport and it's not ready yet, that's fine. Users will consume this producer, and the expectation is that if it doesn't exist or isn't ready yet, it will be in time, or my signaling server will destroy it.
Obviously some users may not favor this, so potentially a toggle to allow forced transmission.
Forced could be an option applied to all producible/consumable items.
If there’s questions or concerns please share.
This is a semi run-down of how I scale: connectPipeTransports → ReProduceStream → CreateConsumerTransport → ConsumeStream.
This is the hierarchy; many times we may find ourselves skipping steps:
- connectPipeTransport → ReProduceStream → CreateConsumerTransport → ConsumeStream
- ReProduceStream → CreateConsumerTransport → ConsumeStream (the produce could be audio/video, so we may see this happen one or more times, but we don't need to re-connect the pipe here in my case)
- CreateConsumerTransport → ConsumeStream
- ConsumeStream (Transport would exist already)
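The step-skipping above can be sketched as a small helper that, given which pieces already exist between two servers, returns only the remaining steps (the state flags and function name here are hypothetical, not mediasoup API):

```javascript
// Hypothetical helper: given the current state between two servers, return
// only the remaining steps of the full pipeline:
// connectPipeTransport -> ReProduceStream -> CreateConsumerTransport -> ConsumeStream
function remainingSteps(state) {
  const steps = [];
  if (!state.pipeConnected) steps.push('connectPipeTransport');
  if (!state.producerPiped) steps.push('ReProduceStream');
  if (!state.consumerTransportExists) steps.push('CreateConsumerTransport');
  steps.push('ConsumeStream'); // always needed for a new viewer
  return steps;
}

// First viewer on a fresh server pair: all four steps.
console.log(remainingSteps({}));
// Pipe already connected and producer already piped over: only the tail steps.
console.log(remainingSteps({ pipeConnected: true, producerPiped: true }));
```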
Given so many stages to transmit remotely, you may see where there could be issues when several or more servers are involved. This may also affect local piping, but my tests do not truly cover that.
Overall I'm looking to increase my parallel requests, with no more callbacks/message queues.
PipeTransports utilize CPU when idle; not much, but enough if, say, I wanted to pre-connect thousands across cores. If the idle state stayed at 0% CPU and I could pre-connect during startup, I could skip the communication step of creating/connecting pipe transports, since it would already be done. With the PipeTransport connect function not having a callback for when it's connecting/connected, I'm unsure what it's actually doing or why it needs to consume CPU every few seconds.
Otherwise I open pipes when they are required and share them when there is space. Just some ideas for the pot, if anyone wants to add.
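One way to frame the pre-connect idea is a pool populated at startup, so acquiring a pipe never waits on signaling. This is just a sketch with plain placeholder objects standing in for real, already-connected PipeTransports:

```javascript
// Sketch of a pre-connected pipe pool. In reality the entries would be
// connected mediasoup PipeTransports created during startup; here they are
// plain placeholder objects.
class PipePool {
  constructor() { this.idle = []; }
  // Called at startup, once each pipe has been created and connected.
  add(pipe) { this.idle.push(pipe); }
  // Hand out an already-connected pipe, or null if the pool is empty
  // (in which case the caller falls back to creating one on demand).
  acquire() { return this.idle.pop() ?? null; }
  // Return a pipe once its last producer/consumer is closed.
  release(pipe) { this.idle.push(pipe); }
  get size() { return this.idle.length; }
}

const pool = new PipePool();
pool.add({ id: 'pipe-1' });
pool.add({ id: 'pipe-2' });
const pipe = pool.acquire(); // no signaling round trip needed here
console.log(pipe.id, pool.size);
```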
It’s not true that pipeTransports need to be connected before you can produce on them.
The other topic you mention is about having a reliable communication channel between servers that guarantees message delivery order, same as we do internally between mediasoup Node and C++ processes.
And replying to your first question (if I understood properly): no, it's not possible to consume from a not-yet-created producer, for obvious reasons. Consume what? Whatever? The Consumer is created based on the Producer data plus the RTP capabilities given by the consuming side.
Yeah, the issue is mostly the first question. If the PipeTransport is not connected, the producer isn't found and consuming cannot occur.
This is not overly a problem for me yet, but I'm trying to be prepared for intense scenarios, as I'm constantly seeing anywhere from 20 to 1,000 users on the network.
The goal is to improve my I/O performance by dropping my message queue, callbacks over socket, and promises. This, however, proves a bit difficult when signaling the subscription process for consumers.
So, if there are ideas to ensure the pipe transport has the producer ready for consumption that are reliable and fast, I'm all ears; I'm doing a bit of a reconstruction of my current routing.
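One pattern that avoids both blocking and stale queues is a registry where a consume request for a producer that hasn't been piped over yet simply parks a callback, which fires the moment the pipe-produce completes. This is a hypothetical sketch, not a mediasoup API:

```javascript
// Hypothetical registry: consume requests for not-yet-piped producers are
// parked and replayed as soon as the producer shows up on this server.
class ProducerRegistry {
  constructor() {
    this.producers = new Map(); // producerId -> producer
    this.waiters = new Map();   // producerId -> [callback, ...]
  }
  // Called when the pipe-produce on this server completes.
  announce(id, producer) {
    this.producers.set(id, producer);
    for (const cb of this.waiters.get(id) ?? []) cb(producer);
    this.waiters.delete(id);
  }
  // Called by the consume path: cb runs now if the producer exists,
  // or later when announce() is called.
  whenReady(id, cb) {
    const producer = this.producers.get(id);
    if (producer) return cb(producer);
    const list = this.waiters.get(id) ?? [];
    list.push(cb);
    this.waiters.set(id, list);
  }
}
```

A real version would also expire parked waiters on a timeout, so requests that are no longer valid don't pile up, which is exactly the inefficiency the queue approach runs into.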
I don’t really understand what you mean here.
Imagine there are two media servers on different IPs/machines.
We can't pipe locally and must pipe remotely.
I produce a stream on server(a), and when I get a viewer request I create/connect PipeTransports on server(a) and server(b), consume the stream into the PipeTransport on server(a), and immediately try to produce it from the pipe on server(b) to serve all viewers.
The issue is that server(b) may not yet be connected to server(a), so there's no evidence of this track on server(b) yet.
Most of this has to do with the number of trips it takes to signal and create/connect the pipe transports: it's 5-6 steps going back and forth between the signaling server, server(a), and server(b). During these steps it's easy to fire off a produce/consume call on server(b) before it has fully finished connecting.
Definitely the solution is not about "consuming what doesn't exist" but about designing a reliable comm channel between servers. Not something that mediasoup itself can help with.
Even with reliable communication channels between servers, I must block to ensure these steps finish in order, and even then the connected state isn't guaranteed.
If we assume they're connected at this point, we'll run into problems. The only real solution is to create queues of requests that may or may not still be valid, which is inefficient.
I'm not really trying to nit-pick this process; I may need to improve my handling of pipes, but I figured I'd share that assuming states has been a slight problem.
I just wish I could produce over remote pipe transports forcefully even if the stream has yet to reach the endpoint, but if that's not possible I'll try to improve my handling. Heck, it would be nice if we could at least know whether the pipe is connected.
We use awaitqueue (npm) for these kinds of things. There is little else I can help with regarding this topic. Communication between servers is not mediasoup's business, and this is definitely not going to be solved by having `transport.consume()` allow not passing a `producerId`, or passing a `producerId` that doesn't yet exist.
Okay, and yeah, similar setup here and it's working, just with the odd errors as described. Welp, thought I'd bring it up.
I only check the DTLS state, but there didn't appear to be one for pipe transports, and for the most part I don't touch any of the other listeners. Can I reliably listen for producer/consumer events on a pipe transport and incorporate that into the signaling process?