Is there a way to have two producers' WebRTC audio streams played back by a single consumer at the same time?
Not without transcoding, assuming you want two produced audio streams to arrive as a single consumed stream.
The transport itself, however, can carry one or more audio streams.
You'd want some type of HLS setup and would suffer latency of more than 3 s, versus WebRTC's sub-500 ms.
You say that the transport can handle more than one audio stream. If I have two producers sending audio, how do I send both streams to one consumer?
The transport can handle one or more audio streams in the sense that you consume multiple times on the same transport and tell the audio streams apart by each consumer's ID.
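A rough sketch of that bookkeeping (hypothetical names throughout; `consumeStub` stands in for a real mediasoup `transport.consume()` call, which returns a Consumer object with its own `id`):

```typescript
// One consumer per producer on the same transport; keep a map from
// consumer ID to producer ID so the receiving side can tell the
// audio streams apart.
type Consumer = { id: string; producerId: string };

// Stand-in for transport.consume({ producerId, rtpCapabilities }).
function consumeStub(producerId: string, n: number): Consumer {
  return { id: `consumer-${n}`, producerId };
}

function consumeAll(producerIds: string[]): Map<string, string> {
  const byConsumerId = new Map<string, string>();
  producerIds.forEach((pid, i) => {
    const consumer = consumeStub(pid, i);
    byConsumerId.set(consumer.id, consumer.producerId);
  });
  return byConsumerId;
}
```

So with two producers you end up with two consumers on one transport, each identified by its own consumer ID, rather than one consumer carrying both streams.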
If you want many audio streams but only a single consume on the transport, you need a middleman like FFmpeg/GStreamer to transcode (decode and re-encode) the streams. In practice that could be a Puppeteer bot or a separate recording server that consumes all of those audio endpoints and converts them into a single stream to forward.
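What that middleman does conceptually: decode each stream to raw PCM, sum the samples (with clamping), and re-encode the result as one stream. A minimal sketch of the mixing step, assuming 16-bit signed PCM:

```typescript
// Mix two decoded 16-bit PCM buffers sample-by-sample, clamping to the
// signed 16-bit range to avoid wrap-around distortion. A real transcoder
// (FFmpeg's amix filter, GStreamer's audiomixer) does this plus
// resampling, buffering, and re-encoding.
function mixPcm(a: Int16Array, b: Int16Array): Int16Array {
  const len = Math.min(a.length, b.length);
  const out = new Int16Array(len);
  for (let i = 0; i < len; i++) {
    const s = a[i] + b[i];
    out[i] = Math.max(-32768, Math.min(32767, s));
  }
  return out;
}
```

The decode/mix/re-encode round trip is where the extra latency discussed below comes from.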
The issue is that this can add several seconds of delay, which would ruin conferencing or anything else that needs sub-500 ms latency. For one-to-many streaming it's more than fine, perfect actually, but for many-to-many, consider that delay!
It's not possible to deliver many produced video/audio items as a single consumed item without help from a transcoder, and even then the transcoder will first need to consume every produced item individually.
Read the documentation. A Consumer is, by definition, the receiver of a stream generated by a Producer. And you can create as many Producers and Consumers as you need in the same transport.
I'm attempting to mix them this way from two producers, but it's not working. Any hints?
Just don't mix them, ever. Play them in two separate audio (or video) elements.