Hi all,
I’m working on an app to receive multiple audio streams using WebRTC. I’ve got it working in basic form using PeerJS, with one peer as the main receiver and multiple other peers sending audio to it. I then send the audio out to different local audio devices via media elements and setSinkId() etc.
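For context, this is roughly the browser-side routing I’m doing now (a sketch; `remoteStream` comes from a PeerJS call event and `deviceId` from `navigator.mediaDevices.enumerateDevices()`):

```js
// Sketch of the current browser approach: play a remote WebRTC audio
// stream on a chosen local output device via setSinkId().
const audioEl = new Audio();
audioEl.srcObject = remoteStream;   // MediaStream from the PeerJS call
await audioEl.setSinkId(deviceId);  // route to a specific output device
await audioEl.play();
```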
But I wanted to move it to a Node app, which removes that ability.
After going around the houses, I’m now trying mediasoup. I was hoping I could easily modify one of the examples (like the recording example) so that instead of sending the audio to FFmpeg or GStreamer for recording, it was played out to a sound card (this is on a Mac).
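For reference, the bit of the recording example I’m trying to adapt looks roughly like this (a sketch assuming mediasoup v3; `router` and `audioProducer` already exist in my app, and the ports are just placeholders):

```js
// Sketch (mediasoup v3): forward a producer's audio as plain RTP to
// localhost, where an external process (FFmpeg/GStreamer) can receive it.
const rtpTransport = await router.createPlainTransport({
  listenIp: '127.0.0.1',
  rtcpMux: false,
  comedia: false,
});
// Point mediasoup at the address/ports the external process listens on.
await rtpTransport.connect({ ip: '127.0.0.1', port: 5004, rtcpPort: 5005 });
const rtpConsumer = await rtpTransport.consume({
  producerId: audioProducer.id,
  rtpCapabilities: router.rtpCapabilities,
});
```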
There must be some way of getting FFmpeg or GStreamer to route the output to an audio output device, but I can’t work it out, and I’ve already spent a long time learning new things to get this far… FFmpeg seems like a whole lifetime of experience.
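From what I can gather, GStreamer’s `osxaudiosink` (CoreAudio output) ought to handle the last hop, so I’m imagining something like this spawned from Node (untested sketch; assumes Opus RTP on UDP port 5004, and the `payload` value has to match the mediasoup consumer’s rtpParameters):

```js
// Sketch: spawn a GStreamer pipeline that receives the plain RTP from
// mediasoup and plays it out through CoreAudio via osxaudiosink.
const { spawn } = require('child_process');

const gst = spawn('gst-launch-1.0', [
  'udpsrc', 'port=5004',
  'caps=application/x-rtp,media=audio,clock-rate=48000,encoding-name=OPUS,payload=100',
  '!', 'rtpopusdepay',
  '!', 'opusdec',
  '!', 'audioconvert',
  '!', 'audioresample',
  '!', 'osxaudiosink', // 'device=<id>' should select a specific output
]);
gst.stderr.on('data', (d) => console.error(d.toString()));
```

But I don’t know if that’s the right shape at all, or how to pick a specific output device reliably.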
I could use Chromium in the Node app to replicate my browser app, but that seems like overkill.
Is there a way to take the audio stream and send it to an audio output device? If anyone could help me with that, I’d be grateful.
I appreciate this isn’t core mediasoup, and I’ve noticed a few complaints in replies to other people asking similar things, so please: if you can help in any way, that’s great. But if not, please don’t start a row. I’d rather have no replies than a row. We’ve all got more (and too many) things to do.
Thanks.