Using libmediasoupclient with mediasoup v2 server side

First off, thank you for your amazing work on mediasoup. We have been using it heavily in our app and have found it to be a solid foundation to build on. We have a mediasoup v2 server-side implementation that we have been using with a web-based client side, and that has been working well. At this point we are experimenting with your C++ library, hoping to adapt it to connect to our mediasoup v2 server. It hasn’t been too hard to wire up, with some minimal changes to the library. Both sides seem to be happy with the arrangement, but I haven’t been able to get a producer to send an audio track. Here is my setup code:

            auto result = sendRequest("{\"method\":\"queryRoom\",\"target\":\"room\"}");

            json message = {
                { "method", "join" },
                { "target", "room" },
                { "peerName", m_peerName.toStdString() },
                { "rtpCapabilities", m_device->GetRtpCapabilities() },
                { "spy", false }
            };
            auto peers = sendRequest(message.dump());

            auto sendTransportId = rtc::CreateRandomId();
            auto rtp = m_device->getNativeRtpCapabilities();
            auto fingerprint = rtp["fingerprint"];

            json message2 = {
                { "method", "createTransport" },
                { "target", "peer" },
                { "id", sendTransportId },
                { "direction", "send" },
                { "options", {} },
                { "dtlsParameters", {
                    { "role", "auto" },
                    { "fingerprints", json::array({
                        {
                            { "algorithm", fingerprint.at("type").get<std::string>() },
                            { "value", fingerprint.at("hash").get<std::string>() }
                        }
                    }) }
                } }
            };

            auto sendTransport = sendRequest(message2.dump());
            json dtlsParams = sendTransport["dtlsParameters"];
            json iceCandidates = sendTransport["iceCandidates"];
            json iceParameters = sendTransport["iceParameters"];
            m_sendTransport = m_device->CreateSendTransport(
                &m_broadcaster,
                std::to_string(sendTransportId),
                iceParameters,
                iceCandidates,
                dtlsParams);

I had to add codec names to the rtpCapabilities so mediasoup v2 was happy. Also, createTransport requires dtlsParameters, so I had to extract that and add it to nativeRtpCapabilities as you can see above. I’m happy to share the minimal patch I’m using to get to this point.

To set up the producer I do this:

            cricket::AudioOptions options;
            m_audioSource = peerConnectionFactory->CreateAudioSource(options);
            m_audioTrack = peerConnectionFactory->CreateAudioTrack("234324321", m_audioSource);

            json codecOptions = {
                { "opusStereo", true },
                { "opusDtx",    true }
            };

            m_audioProducer = m_sendTransport->Produce(&m_broadcaster, m_audioTrack, nullptr, &codecOptions);

My listener looks like this:

std::future<void> MediaBroadcaster::OnConnect(mediasoupclient::Transport *transport, const nlohmann::json &transportLocalParameters) {
    std::promise<void> promise;
    promise.set_value();
    return promise.get_future();
}

void MediaBroadcaster::OnConnectionStateChange(mediasoupclient::Transport *transport, const std::string &connectionState) {
    std::cout << "Connection state: " << connectionState << std::endl;
}

std::future<std::string> MediaBroadcaster::OnProduce(
    mediasoupclient::SendTransport *transport,
    const std::string &kind,
    nlohmann::json rtpParameters,
    const nlohmann::json &appData
) {
    std::promise<std::string> promise;

    unsigned int id = rtc::CreateRandomId();

    nlohmann::json message = {
        { "method", "createProducer" },
        { "target", "peer" },
        { "id", id },
        { "kind", kind },
        { "transportId", std::stoul(transport->GetId()) },
        { "rtpParameters", rtpParameters },
        { "paused", false },
        { "appData", appData }
    };
    sendRequest(message.dump());

    std::cout << "Created producer with id: " << id << std::endl;

    promise.set_value(std::to_string(id));
    return promise.get_future();
}

I have to send the dtlsParameters to create the local transport, so I can’t utilize the OnConnect mechanism, unless I misunderstand something. Everything looks good and the listener connection state goes from connected to completed, but as soon as the producer is created the connection state goes to disconnected. Here is a raw dump of the final moments:

Connection state: checking
[TRACE] PeerConnection::OnIceGatheringChange()
[DEBUG] PeerConnection::OnIceGatheringChange() | new IceGatheringState:[complete]
Connection state: connected
Connection state: completed
Received: "{\"type\":\"data\",\"id\":\"9\",\"payload\":{\"data\":{\"sendMediaRequest\":\"{}\"}}}"
Created producer with id: 4024606819
[TRACE] Producer::Producer()
Producer created
Connection state: disconnected

I appreciate any suggestions anyone might have. I’m sure I’m missing something simple. Let me know if you need anything additional. Thank you!

Hi James,

Thanks for the comments.

I feel obliged to encourage you to upgrade to mediasoup v3 :slight_smile:

That being said: in the Transport::Listener::OnConnect event handler you should send mediasoup a peer message, an updateTransport request, carrying the dtlsParameters this callback is called with. This is the way to provide the transport in mediasoup with your local DTLS parameters.

Hey José,

Thanks for the reply! I finally got a chance to try out your suggestion. I’m still getting an immediate disconnect when the producer is created. The server side responded with success to updating the transport params. Any idea what might be causing that? I’m not getting any notifications back from the server side.

Best Regards,


PS. We would love to update to Mediasoup v3. We have a plan in place to do it, but it will be a while before we can make the switch. The new ms3 model is perfect for us, and honestly answered a lot of our concerns about ms in general.

Hi James,

While it’s possible to build the “mediasoup protocol” message exchange of mediasoup v2 on top of libmediasoupclient (which targets v3 and therefore does not define any “mediasoup protocol”), many things have changed, including some naming and parameters.

In v2, for instance, each entry in the encodings array of the createProducer request must contain a profile value (“default” if there is a single entry in encodings, or “low”, “medium”, “high” for defining simulcast layers). Otherwise the server (mediasoup v2) will complain.
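A minimal illustration of what such rtpParameters.encodings could look like for simulcast in a v2 createProducer request (the ssrc values here are made up):

```json
{
  "encodings": [
    { "ssrc": 111110, "profile": "low" },
    { "ssrc": 111111, "profile": "medium" },
    { "ssrc": 111112, "profile": "high" }
  ]
}
```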

Anyway, there may be other things to “adapt” in libmediasoupclient to make it work with a v2 server. I strongly encourage you to move to mediasoup v3; mediasoup v2 is not receiving any new features.

Thanks for the extra details there. It sounds like it may be as much work to connect to v2 as it would be to just upgrade our server side. With all the new scalability enhancements in v3, we were waiting to make that move when we had time to “do it right”™. I guess this is just another reason to do it sooner. Again, thank you for such an amazing tool!

it may be as much work to connect to v2 as it would be to just upgrade our server side

That’s my feeling :slight_smile:

I took another quick stab at it this morning and managed to find my issue. In case it’s interesting to someone else: after some quick reading of the code, I found that I don’t need to send dtlsParameters to ms2 on createTransport; that’s optional. Somehow I had missed that point. Anyway, sending audio is working. Receiving audio seems to be working as well; at least I get an enabled track in the local consumer. Do I need to do anything with the track to hear it? On the web, I either route it through WebAudio or directly to an <audio> tag. It appears the WebRTC C++ libraries automatically attach consumer tracks to the default audio output sink. Is that really true? I’d actually like to get the stream and do some processing on the audio.

And yes, we still plan to update to ms3 when we get some time. :slight_smile: This was a quick test to convince the team that C++ could actually work.

Thanks again for your assistance!

On each platform (Android, iOS, Linux, etc.) libwebrtc uses default settings for routing remote audio tracks. You can override them by passing a custom PeerConnection factory to device->CreateSendTransport() and device->CreateRecvTransport(). libwebrtc comes with a “fake” audio module for the PC factory and things like that (look for “fake” within the libmediasoupclient test/ folder).

However, note that this is libwebrtc specific. We do not provide support for it; we just expose an interface to all the capabilities of libwebrtc.

Perfect. I’ll spend some time and figure out that interface as I need it. Figuring out some of these interfaces is a bit tedious with the limited documentation that is available…

It sounds like the default settings should cause WebRTC to automatically play out any remote track created by transport->Consume(…). Is that correct? I need to know this so I can tell whether my current issue is in my use of libmediasoupclient or in my use of WebRTC. I’m building a macOS app using C++ and Qt. I really appreciate your help! The learning curve here is a bit steep!

Yes, that is.