simulcast with PlainRtpTransport and ffmpeg

Is it possible to create a stream with simulcast using PlainRtpTransport and ffmpeg?
As I understand it, there will be several encodings with different SSRCs inside transport.produce()'s rtpParameters, which will be used in ffmpeg's tee output. But what about ports? Can I use the same RTP and RTCP ports for the different qualities with different SSRCs?

You can and you must. But I have no idea how to do that with ffmpeg.
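To illustrate "you can and you must": all simulcast streams go to a single plain transport, so a single RTP port and a single RTCP port, and mediasoup separates the incoming streams by SSRC. A minimal sketch of the transport options, assuming the mediasoup v3 server API (where PlainRtpTransport was later renamed PlainTransport); the IP and flags here are just illustrative:

```javascript
// Options for one plain transport that receives every simulcast stream on
// a single RTP/RTCP port pair. mediasoup demultiplexes by SSRC, not by port.
const plainTransportOptions = {
  listenIp: { ip: '127.0.0.1' },
  rtcpMux: false, // ffmpeg sends RTCP on its own port (rtcpport=... in the URL)
  comedia: true   // learn ffmpeg's source address from the first received packet
};

// Usage (server side):
//   const transport = await router.createPlainTransport(plainTransportOptions);
//   console.log(transport.tuple.localPort, transport.rtcpTuple.localPort);
```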

What are the other options? Basically, our goal is to play a video file through mediasoup on the server so that it comes out in 2-3 resolutions via simulcast. I am fine with having to rescale and encode 2-3 times. But what tools should we use if not ffmpeg? Can you give an example of how it's done?

Thanks!

And what encodings should we pass to transport.produce() in rtpParameters? With simply [{ssrc: 22222222}, {ssrc: 33333333}] there is no simulcast. Trying different scalabilityMode values either has no effect or produces the error "cannot use both simulcast and encodings with multiple SVC spatial layers".
P.S. When subscribing, consumer.type is "simulcast", but setPreferredLayers() has no effect, whether we set temporalLayer, spatialLayer, or both.
Here is an example of my ffmpeg command:

ffmpeg -re -i ./input.mp4 -map 0:a:0 -c:a libopus -b:a 128k -ac 2 -ar 48000 \
-c:v libvpx -deadline realtime -cpu-used 4 \
-filter_complex "[0:v]split=2[s0][s1];[s0]scale=-2:240[v0];[s1]scale=-2:480[v1]" \
-map "[v0]" -map "[v1]" -f tee \
"[select=a:f=rtp:ssrc=11111111:payload_type=101]rtp://127.0.0.1:10053?rtcpport=11680|\
[select='v\:0':f=rtp:ssrc=22222222:payload_type=102]rtp://127.0.0.1:10831?rtcpport=11521|\
[select='v\:1':f=rtp:ssrc=33333333:payload_type=102]rtp://127.0.0.1:10831?rtcpport=11521"

I got only 240p and no 480p.

What does "cannot use both simulcast and encodings with multiple SVC spatial layers" mean? We get it whenever we use S or L > 1 in the scalability mode; T is always 1. That is, S1T1 works but that is just one layer, while S2T1 always results in this error. What could we be doing wrong?

mediasoup does not do magic. The parameters given on the server side to transport.produce() are supposed to represent and announce what the media sender endpoint (ffmpeg in your case) will send, not the other way around (ffmpeg will not magically send whatever you announce to mediasoup in ProducerOptions).

Does ffmpeg support encoding with temporal layers? No. Why, then, are you assuming there will be temporal layers?

And somehow this topic has become a pure ffmpeg question, which is not the purpose of this forum. Just a tip: better to use gstreamer, whose more flexible pipelines can simulate simulcast by sending 2-3 stream encodings from the same source port for RTP and RTCP.
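For the mediasoup side, a sketch (my assumption about your codec, not a verified config) of rtpParameters that announce plain simulcast, i.e. one independent stream per encoding with no SVC layers. The VP8 payload type and the SSRCs match the ffmpeg command above; each encoding uses "S1T1" (one spatial, one temporal layer), which is all an ffmpeg sender actually produces, while setting S or L > 1 on an encoding is exactly what triggers the "cannot use both simulcast and encodings with multiple SVC spatial layers" error:

```javascript
// rtpParameters announcing two plain simulcast streams to transport.produce().
// Each encoding carries its own SSRC; scalabilityMode stays at a single
// spatial and temporal layer because the sender encodes no SVC layers.
const rtpParameters = {
  codecs: [{
    mimeType: 'video/VP8',
    payloadType: 102,
    clockRate: 90000,
    parameters: {},
    rtcpFeedback: []
  }],
  encodings: [
    { ssrc: 22222222, scalabilityMode: 'S1T1' }, // 240p stream
    { ssrc: 33333333, scalabilityMode: 'S1T1' }  // 480p stream
  ]
};

// Usage (server side):
//   await transport.produce({ kind: 'video', rtpParameters });
```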

We don’t need temporal layers; spatial is enough. We tried increasing the T number only for diagnostic reasons, because setting S or L > 1 produces the mediasoup error:
cannot use both simulcast and encodings with multiple SVC spatial layers.
Basically we don’t need SVC at all. We need simulcast.

I also commented on other concerns in my previous response. Last time you replied to only some of my concerns, and I spent hours until I figured out that you were using a custom, old, and unsupported fork of react-native-webrtc. So I won’t do it again.