This library, which is under development, is written entirely in TypeScript and has no dependency on native modules or other languages.
Not all features of the browser version of mediasoup-client are available yet, but it already supports producing and consuming both media channels and data channels.
If you are interested, please try it. If you find any bugs or issues, please let me know; I'll do my best to fix them.
How would it produce and consume media if there are no dependencies like wrtc or native modules?
Even for the data channel you need an SCTP stack to send/receive data.
Currently it can only detect that a PLI request has been received, like this:
producer.rtpSender.onRtcp.subscribe((rtcp) => {
  // Payload-specific feedback packet (RFC 4585)
  if (rtcp.type === RtcpPayloadSpecificFeedback.type) {
    const { feedback } = rtcp as RtcpPayloadSpecificFeedback;
    // Picture Loss Indication (FMT = 1)
    if (feedback.count === PictureLossIndication.count) {
      console.log(rtcp);
    }
  }
});
If you want a keyframe, you need the RTP generator (GStreamer, FFmpeg, etc.) to send a keyframe to the client in some way when the client receives the PLI request.
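One way to wire that up can be sketched like this. Note this is only a sketch, not werift's API: the RTCP constants are the standard values from RFC 4585, and `requestKeyframe` is a hypothetical hook you would connect to your encoder, e.g. a websocket message to an ffmpeg/GStreamer wrapper.

```typescript
// Standard RTCP values from RFC 4585 (assumptions, not library exports):
const PSFB_PACKET_TYPE = 206; // payload-specific feedback packet type
const PLI_FMT = 1;            // Picture Loss Indication feedback type

// Minimal shape of a parsed RTCP packet for this check.
interface RtcpLike {
  type: number;  // RTCP packet type
  count: number; // feedback message type (the FMT field)
}

function isPli(rtcp: RtcpLike): boolean {
  return rtcp.type === PSFB_PACKET_TYPE && rtcp.count === PLI_FMT;
}

// On PLI, ask the upstream encoder for a keyframe however your
// pipeline allows (websocket message, GStreamer force-key-unit, etc.).
function handleRtcp(rtcp: RtcpLike, requestKeyframe: () => void): void {
  if (isPli(rtcp)) {
    requestKeyframe();
  }
}
```

You would call `handleRtcp` from the `onRtcp` subscription shown above instead of just logging the packet.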
I’m currently using the ffmpeg command line as a producer to push a stream to the server, but the headache is that, following the official documentation, I have to manually put 4 RTP port args into the command-line string.
I’m thinking of writing a Node.js client that wraps ffmpeg and uses a websocket to initiate the RTP stream receiver and get the 4 port args (audio/video RTP/RTCP), but mediasoup-client can only run in the browser and doesn’t support the Node.js command-line environment~
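For what it's worth, the wrapper idea can be sketched without mediasoup-client: once the 4 ports arrive over the websocket, build the ffmpeg argument list programmatically and spawn the process. This is a minimal sketch under assumptions (Opus audio and VP8 video pushed as plain RTP to localhost, using ffmpeg's `rtcpport` URL option); the port interface and function names are made up for illustration.

```typescript
import { spawn, ChildProcess } from "child_process";

// Ports received from the server over the websocket (hypothetical shape).
interface RtpPorts {
  audioRtp: number;
  audioRtcp: number;
  videoRtp: number;
  videoRtcp: number;
}

// Build the 4-port ffmpeg command instead of editing the string by hand.
// Assumes Opus audio and VP8 video sent as plain RTP to localhost.
function buildFfmpegArgs(input: string, p: RtpPorts): string[] {
  return [
    "-re", "-i", input,
    "-map", "0:a:0", "-c:a", "libopus", "-f", "rtp",
    `rtp://127.0.0.1:${p.audioRtp}?rtcpport=${p.audioRtcp}`,
    "-map", "0:v:0", "-c:v", "libvpx", "-f", "rtp",
    `rtp://127.0.0.1:${p.videoRtp}?rtcpport=${p.videoRtcp}`,
  ];
}

// Spawn ffmpeg with the negotiated ports once signaling completes.
function startProducer(input: string, ports: RtpPorts): ChildProcess {
  return spawn("ffmpeg", buildFfmpegArgs(input, ports), { stdio: "inherit" });
}
```

The point is that the ports become plain data flowing through your signaling channel, so nothing in the wrapper needs the browser.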
Using ffmpeg is only for testing, to clarify the WebRTC streaming details; the real product need is cloud gaming / desktop streaming.
Ideally, the WebRTC server should run on the cloud machine (an Android OS in a virtual container/box with accelerated encoding), so the ‘producer’ is built into the WebRTC server, and the “stream push” would ideally be zero-copy…
I’m also thinking about how to integrate a video-transform framework like GStreamer into mediasoup (the C++ part?)… there is an urgent AI + render + streaming need in the 5G edge-computing domain…