Hi MediaSoup community,
I have been trying to integrate MediaSoup with React Native, but I'm stuck and need specific, practical guidance.
My use case:
- Building voice rooms (Clubhouse-style, with 5–20 speakers and listeners).
- Building virtual events (host + multiple speakers + large audience, audio-only or with optional video for speakers).
What I’ve tried:
- Using `react-native-webrtc` for media capture and consumption.
- Tried `registerGlobals()` and polyfills for `mediasoup-client` on React Native (rough sketch of this setup below).
- Attempted to handle signaling via Socket.IO with my Node.js + MediaSoup server.
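
For reference, this is roughly what my React Native attempt looks like. It's only a minimal sketch: the `ReactNative` handler name, the `getRouterRtpCapabilities` event name, and the server URL are my own placeholders/assumptions, not something I found in official docs.

```ts
// Rough sketch of my current attempt (not working reliably).
// registerGlobals() and mediaDevices come from react-native-webrtc;
// the handlerName value and the signaling event name are my own guesses.
import { registerGlobals, mediaDevices } from 'react-native-webrtc';
import { Device, types } from 'mediasoup-client';
import { io } from 'socket.io-client';

registerGlobals(); // exposes RTCPeerConnection, MediaStream, etc. as globals

const socket = io('https://my-signaling-server.example.com'); // placeholder URL

export async function joinRoom() {
  // Ask my Node.js + MediaSoup server for the router's RTP capabilities
  const routerRtpCapabilities = await new Promise<types.RtpCapabilities>((resolve) =>
    socket.emit('getRouterRtpCapabilities', resolve)
  );

  const device = new Device({ handlerName: 'ReactNative' }); // unsure if this is the right handler
  await device.load({ routerRtpCapabilities });

  // Capture local audio via react-native-webrtc
  const stream = await mediaDevices.getUserMedia({ audio: true });
  const track = stream.getAudioTracks()[0];

  // ...this is where I'm stuck: creating the send transport and producing the track
  return { device, track };
}
```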
The challenges:
- I could not get `mediasoup-client` to work reliably on React Native despite polyfills.
- Unclear exact steps to handle SDP, ICE, and DTLS transport with React Native while using MediaSoup as the SFU.
- Unclear how to properly consume and produce tracks on React Native clients with MediaSoup’s expectations.
What I am looking for:
Clear, practical guidance (or a minimal working example) on:
- How to integrate React Native + react-native-webrtc with a MediaSoup server for both publishing and consuming.
- Whether I need to skip `mediasoup-client` entirely and use my own signaling/SDP exchange.
- How to map MediaStreamTracks to MediaSoup transports in React Native (see the rough sketch after this list for the shape I have in mind).
- Best practices from anyone who has successfully used MediaSoup with React Native for production voice rooms or virtual event features.
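
To make the ask concrete, below is the rough shape of the publish path I imagine. The signaling event names (`createWebRtcTransport`, `connectTransport`, `produce`) and their payloads are my own assumptions; only the `mediasoup-client` calls are taken from its docs. Confirmation or corrections on whether this is how it's meant to be wired up on React Native would already help a lot.

```ts
// Hypothetical publish path: event names and payloads are my assumptions,
// not part of any MediaSoup API.
import { Device, types } from 'mediasoup-client';
import type { Socket } from 'socket.io-client';

export async function publishAudio(device: Device, socket: Socket, track: MediaStreamTrack) {
  // 1. Ask the server to create a WebRTC transport and return its parameters
  const transportOptions = await new Promise<types.TransportOptions>((resolve) =>
    socket.emit('createWebRtcTransport', { direction: 'send' }, resolve)
  );

  // 2. Create the client-side send transport from those parameters
  const sendTransport = device.createSendTransport(transportOptions);

  // 3. Relay DTLS connect and produce requests back through my signaling
  sendTransport.on('connect', ({ dtlsParameters }, callback) => {
    socket.emit('connectTransport', { transportId: sendTransport.id, dtlsParameters }, callback);
  });

  sendTransport.on('produce', ({ kind, rtpParameters }, callback) => {
    socket.emit(
      'produce',
      { transportId: sendTransport.id, kind, rtpParameters },
      ({ id }: { id: string }) => callback({ id })
    );
  });

  // 4. Produce the react-native-webrtc audio track over the transport
  return sendTransport.produce({ track });
}
```

An equivalent sketch for the consuming side (`createRecvTransport()` / `consume()`) would be just as valuable, since that's the part I understand even less.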
I would really appreciate any step-by-step pointers, sample repos, or architecture insights from those who have done this, so I can avoid building a custom SFU and launch my product faster.
Thank you in advance!