Once done, other endpoints (WebRTC endpoints or any others) can receive both the FFmpeg audio and video tracks by using the transport.consume() API as usual.
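As a rough illustration, here is a minimal mediasoup-client sketch of a single receive transport consuming both producers. The `signalingRequest` helper and the exact shape of the parameters returned by the server are assumptions for the example, not part of the library; the consume response is assumed to contain `{ id, producerId, kind, rtpParameters }` as usual.

```ts
import { Device } from 'mediasoup-client';

// Hypothetical signaling helper; how you exchange messages with the server
// (WebSocket, HTTP, etc.) is up to your application.
declare function signalingRequest(method: string, data?: unknown): Promise<any>;

async function consumeFfmpegTracks(videoEl: HTMLVideoElement) {
  // Load the device with the router's RTP capabilities.
  const device = new Device();
  const routerRtpCapabilities = await signalingRequest('getRouterRtpCapabilities');
  await device.load({ routerRtpCapabilities });

  // One receive transport is enough for both consumers.
  const transportParams = await signalingRequest('createRecvTransport');
  const recvTransport = device.createRecvTransport(transportParams);
  recvTransport.on('connect', ({ dtlsParameters }, callback, errback) => {
    signalingRequest('connectRecvTransport', { dtlsParameters }).then(callback).catch(errback);
  });

  // Ask the server to consume the FFmpeg audio and video producers, then
  // create a client-side consumer for each over the same transport.
  const audioParams = await signalingRequest('consume', { kind: 'audio', rtpCapabilities: device.rtpCapabilities });
  const videoParams = await signalingRequest('consume', { kind: 'video', rtpCapabilities: device.rtpCapabilities });
  const audioConsumer = await recvTransport.consume(audioParams);
  const videoConsumer = await recvTransport.consume(videoParams);

  // Feed both tracks into a single <video> element.
  videoEl.srcObject = new MediaStream([audioConsumer.track, videoConsumer.track]);
  await videoEl.play();
}
```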
If I want to consume audio and video, do I need to create two transports on the client side and two consumers (one audio consumer on the audio transport, one video consumer on the video transport), get the tracks from each, and feed them to a video element? Or can I consume video and audio over the same RTP transport?
I am using RTP only to send traffic to mediasoup and a browser WebRTC client to consume it.
The way data is sent from FFmpeg is one port for audio and another for video, creating 2 producers (1 audio, 1 video) on the server side. Can I have 1 transport here for both audio and video?
About consuming those, you are saying that on the browser side I can create 1 transport and then consume both the audio and video producers over the same transport?
Technically nothing prevents you from sending multiple audio and video tracks in the same RTP connection in either direction (and WebRTC uses RTP internally BTW). So yeah, you absolutely can do that.
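For what it's worth, here is a minimal server-side sketch (mediasoup v3) of the setup described above: one PlainTransport per FFmpeg RTP port, one producer per track, and a single WebRTC transport consuming both for the browser. The IPs, ports, payload types, SSRCs and the `clientRtpCapabilities` variable are placeholders you would replace with your own values, and the codecs must match both your router's mediaCodecs and what FFmpeg actually sends.

```ts
import * as mediasoup from 'mediasoup';

async function ingestAndConsume(
  router: mediasoup.types.Router,
  clientRtpCapabilities: mediasoup.types.RtpCapabilities
) {
  // One PlainTransport per FFmpeg RTP port (audio and video arrive on separate ports).
  const audioTransport = await router.createPlainTransport({
    listenIp: '127.0.0.1', rtcpMux: false, comedia: true
  });
  const videoTransport = await router.createPlainTransport({
    listenIp: '127.0.0.1', rtcpMux: false, comedia: true
  });
  // FFmpeg must send RTP to audioTransport.tuple.localPort and videoTransport.tuple.localPort.

  // One producer per track; payload types and SSRCs must match the FFmpeg command line / SDP.
  const audioProducer = await audioTransport.produce({
    kind: 'audio',
    rtpParameters: {
      codecs: [{ mimeType: 'audio/opus', payloadType: 101, clockRate: 48000, channels: 2 }],
      encodings: [{ ssrc: 11111111 }]
    }
  });
  const videoProducer = await videoTransport.produce({
    kind: 'video',
    rtpParameters: {
      codecs: [{ mimeType: 'video/VP8', payloadType: 102, clockRate: 90000 }],
      encodings: [{ ssrc: 22222222 }]
    }
  });

  // A single WebRTC transport can consume both producers for the browser.
  const webRtcTransport = await router.createWebRtcTransport({
    listenIps: [{ ip: '0.0.0.0', announcedIp: '203.0.113.1' }]
  });
  const audioConsumer = await webRtcTransport.consume({
    producerId: audioProducer.id, rtpCapabilities: clientRtpCapabilities, paused: true
  });
  const videoConsumer = await webRtcTransport.consume({
    producerId: videoProducer.id, rtpCapabilities: clientRtpCapabilities, paused: true
  });
  // Send each consumer's id/kind/rtpParameters to the browser, then resume the consumers.
}
```

The sketch uses one PlainTransport per FFmpeg port simply because that matches the setup described above (one port for audio, one for video); the browser side still only needs the single WebRTC transport to consume both producers.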