Hi, I am working on a project with a Node.js mediasoup server, standard mediasoup-client browser clients, and one special browser client; let's call this second kind of client a "bot".
The bot connects to the server, consumes all streams, does some audio processing with the Web Audio API, and then produces the processed audio back to the server. The bot runs as a headless browser driven by Selenium.
Now, I know this is not the best setup, but I am stuck with it for now.
However, I want to try rewriting this as a Node.js client. The documentation says this should be possible if I implement the Handler interface, but that looks like a very tedious and long task to do myself.
So is there an easier way to do it? Optionally, if I could write a Python client, that would also be OK.
I do not want to use a C++ client because I do not have enough experience with the language.
We've recently tried to do this too, for backend recording. Getting a working handler is not as hard as you might think: you can just copy any recent handler file from mediasoup-client. Of course, it will not work out of the box, since it depends on classes provided by the browser WebRTC API, but to address this you only need to replace the missing classes with their respective Node implementations.
In our project we used node-webrtc and it worked. Unfortunately, that package is no longer actively maintained and has a few bugs that might break your use case; it did for us. There is another interesting package called 'werift', but as far as I know it does not expose any way to get a working MediaStreamTrack out of a PeerConnection. Still, have a look around; it might fit what you are looking for.
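To make the "replace the missing classes" step concrete, here is a minimal, self-contained sketch of the polyfill pattern. `FakeRTCPeerConnection` is a stand-in written just for this example; with node-webrtc you would assign the real class from the `wrtc` package instead, as noted in the comment. The point is only the mechanism: install the browser globals before loading any module (such as a copied mediasoup-client handler) that expects them.

```javascript
// Stand-in class for this sketch only. With node-webrtc you would do:
//   globalThis.RTCPeerConnection = require('wrtc').RTCPeerConnection;
// and likewise for RTCSessionDescription, MediaStream, etc.
class FakeRTCPeerConnection {
  constructor() {
    this.localDescription = null;
  }
  // Mimics the async signature of the browser API.
  async createOffer() {
    return { type: 'offer', sdp: 'v=0\r\n' };
  }
  async setLocalDescription(description) {
    this.localDescription = description;
  }
}

// Install the polyfill BEFORE loading any code that expects browser globals.
globalThis.RTCPeerConnection = FakeRTCPeerConnection;

// Any module loaded after this point sees the global, exactly as in a browser:
async function demo() {
  const pc = new globalThis.RTCPeerConnection();
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  console.log(pc.localDescription.type); // prints "offer"
}
demo();
```

The copied handler code itself stays untouched; only its environment changes.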
Node.js lacks native support for WebRTC, so this is not really going to happen outside of a headless browser or mediasoup itself.
You could just broadcast the user and, while fanning the stream out to viewers, also fan it out to an audio server where you can do that kind of processing. This would be my preferred choice in your scenario if a headless browser is not good enough for you.