I am trying to set up a VoIP application. The endpoints are Mediasoup, which runs on an AWS EC2 server with a public IP and all ports open to connections, and FFmpeg, which runs on a device behind NAT. I have the first leg working: FFmpeg sends a stream to a Mediasoup plain transport, and a Mediasoup client consumes that track and plays it in the browser.
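For context, the working ingest leg looks roughly like this on the server. This is only a sketch against the mediasoup v3 API; the announced IP, payload type, and SSRC are placeholders from my setup:

```javascript
// Sketch: FFmpeg -> mediasoup ingest. `router` is an existing mediasoup
// Router; the IP, payload type, and SSRC below are placeholders.
async function createFfmpegIngest(router) {
  const plainTransport = await router.createPlainTransport({
    listenIp: { ip: '0.0.0.0', announcedIp: 'MEDIASOUP_SERVER_IP' },
    rtcpMux: false, // FFmpeg sends RTCP on its own port
    comedia: true   // learn FFmpeg's source ip:port from the first packet
  });

  // FFmpeg targets plainTransport.tuple.localPort
  // (and plainTransport.rtcpTuple.localPort for RTCP).
  const producer = await plainTransport.produce({
    kind: 'audio',
    rtpParameters: {
      codecs: [{ mimeType: 'audio/PCMU', payloadType: 0, clockRate: 8000, channels: 1 }],
      encodings: [{ ssrc: 11111111 }]
    }
  });

  return { plainTransport, producer };
}
```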
The next step is to take the audio stream from the browser and send it back to FFmpeg. My understanding is that, as long as a connection has been made from my device to the server from some IP:port pair, say X:y, Mediasoup can send a track back to that same X:y. If that is correct, I need to know which local port the original connection went out from, and then listen on that port for the incoming stream. This is where I get confused and ask for your valuable opinions:
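As far as I can tell, with comedia enabled mediasoup already records that X:y pair once the first RTP packet arrives, and also emits a `'tuple'` event. A small sketch of what I mean (`plainTransport` being the ingest transport; event and field names per the mediasoup v3 docs):

```javascript
// Sketch: once comedia learns the remote address, the transport's tuple
// holds FFmpeg's NAT-mapped ip:port -- exactly the pair a return stream
// would have to target.
function logLearnedTuple(plainTransport) {
  plainTransport.on('tuple', (tuple) => {
    console.log('learned remote tuple:', tuple.remoteIp, tuple.remotePort);
  });
}
```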
- I am pinning FFmpeg's outbound port with the RTP URL option “localrtpport”. Can I listen on this port while the stream is going out through it to the Mediasoup server? If so, how? I cannot get this to work and always end up with a bind error, “Address already in use”. This is my FFmpeg command:
# On my Mac, with an en0 IP of 192.168.1.64
ffmpeg \
  -fflags +genpts \
  -f lavfi -i "aevalsrc=sin(400*2*PI*t)" \
  -protocol_whitelist udp,rtp \
  -i rtp://192.168.1.64:16386 \
  -map 0:a -ar 8000 -acodec pcm_mulaw \
  -f rtp "rtp://MEDIASOUP_SERVER_IP:PORT?reuse=1&localrtpport=16386&localrtcpport=16387"
- If that approach is wrong, is the following viable: I send a dummy audio sample from FFmpeg to the Mediasoup server just to punch a UDP hole, then stop the stream. On the server side, I use a webRtcTransport to produce the audio coming from the browser, and a consumer on a plain transport to transmit that audio to FFmpeg, targeting the same IP:port pair that the punch hole just created.
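The return path I have in mind would look something like this. Again only a sketch against the mediasoup v3 API; all the names are mine, and the assumption that RTCP sits on the next port up is just the usual RTP convention, not something I have verified here:

```javascript
// Sketch of the proposed return path: the browser produces audio over a
// WebRtcTransport elsewhere; this plain transport consumes that producer
// and pushes RTP toward the punched ip:port (`ffmpegTuple` is the tuple
// learned from the punch-hole packets).
async function sendBrowserAudioToFfmpeg(router, browserAudioProducer, ffmpegTuple) {
  const rtpOut = await router.createPlainTransport({
    listenIp: { ip: '0.0.0.0', announcedIp: 'MEDIASOUP_SERVER_IP' },
    rtcpMux: false,
    comedia: false // dial out explicitly instead of waiting for a packet
  });

  // Aim at the address the punch-hole packets came from; RTCP on the
  // next port up is an assumption.
  await rtpOut.connect({
    ip: ffmpegTuple.remoteIp,
    port: ffmpegTuple.remotePort,
    rtcpPort: ffmpegTuple.remotePort + 1
  });

  const consumer = await rtpOut.consume({
    producerId: browserAudioProducer.id,
    rtpCapabilities: router.rtpCapabilities
  });

  return { rtpOut, consumer };
}
```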
Is this how I should proceed? How can I make sure that I am sending to the same IP:port pair? Can you provide a sample of sending a stream to FFmpeg? I also could not figure out how symmetric RTP helps with all of this.