Unity integration for the client

Hello,

We would like to use GitHub - versatica/libmediasoupclient: mediasoup client side C++ library inside a Unity app. Is this the right way to proceed? Is it supposed to work against a mediasoup Node server? Has anyone else done this before? Any advice?

Thanks in advance,

Beka

I tried the same thing a few months ago.

At first, we were looking into adding libmediasoupclient as a library to Unity, but after reading this post I didn’t like that approach anymore (I am mostly familiar with Node-land, and my C# colleague didn’t seem to think he’d be able to do it).

If that post doesn’t scare you, go for it. If it does, there’s always the possibility of using “vanilla” WebRTC (e.g., MixedReality-WebRTC) and bridging it to be Mediasoup-compliant. This is what we went for, with some success: we got the connection to the Mediasoup server working, and video was transferred perfectly. However, we hit a problem regarding audio transmission that we were not able to debug. I’m not saying it’s impossible, but unfortunately this was a show-stopper for us, considering that the MrWebRTC project has been put on ice and is not maintained anymore.

A third alternative, which we never went for, is to port SipSorcery to work with Unity and bridge it to Mediasoup (same as MrWebRTC). But this also seemed like too big of a time investment.

In the end, we scrapped the project because of these problems.

If you manage to get it working, though, I’d be very happy if you could share your path. :pray:

Edit: I forgot to mention the Unity WebRTC implementation, which is probably the easiest way to go (again with bridging SDP to Mediasoup). But for our use-case it didn’t work because we needed to build for UWP, which this library did not yet support. There may have been some advancements since then; you should check it out.

Thanks for your answer and for sharing your experience. In our case, the Unity app is already built with other streaming integrations, so I think we need to estimate the time investment. We could try the Unity native WebRTC implementation, which we have already used for another use-case and architecture…but I suppose the scalability we would get with libmediasoupclient is better than what Unity WebRTC would give us. And we are not sure about interoperability between the two.

So we will see, and we will share our experience of course. Any other experience with best practices/approaches will be appreciated. We need audio, video and datachannel transmission.

Edit: GitHub - versatica/mediasoup-sdp-bridge: Node.js library to allow integration of SDP based clients with mediasoup. I suppose this would be the way to handle the interoperability.

@beka
Considering you have a working WebRTC environment already, I definitely think bridging is what you are looking for. I cannot speak for datachannels since they were not part of our agenda, but in general I see no blocker there. The link you posted is only an API proposal; it was never followed up on (afaik). This one is the OpenVidu fork where they actually implemented the API. Double-check the README, where they clearly outline the pros and cons of using their code.

You’d need to invest a bit of time to get this working for your specific use-case, but in the end it’s just “an” implementation. It’s no sorcery; if you bring enough time, you might be able to write your own. Basically, what the code does is translate incoming SDP messages to ORTC and use that to interact with the Mediasoup API. On the way back you have to do the same thing in reverse: translate to SDP and send it out to the clients.
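Very roughly, the offer direction looks something like this (an untested TypeScript sketch against the mediasoup v3 server API, not the bridge project’s actual code; the two `extract*` helpers are placeholders for the SDP-to-ORTC translation you would have to write yourself or borrow from mediasoup-client’s sdp/ortc utilities):

```ts
import * as sdpTransform from "sdp-transform";
import { types as mediasoupTypes } from "mediasoup";

// Placeholder declarations: the real SDP-to-ORTC translation (codecs, header
// extensions, ssrcs, DTLS fingerprints) has to live here; mediasoup-client
// implements the equivalent logic internally.
declare function extractDtlsParameters(sdp: any): mediasoupTypes.DtlsParameters;
declare function extractRtpParameters(sdp: any, media: any): mediasoupTypes.RtpParameters;

// Untested sketch: bridge an SDP offer from a plain WebRTC endpoint onto an
// existing mediasoup WebRtcTransport, producing its audio/video into the router.
async function bridgeSdpOffer(
  transport: mediasoupTypes.WebRtcTransport,
  sdpOffer: string
): Promise<mediasoupTypes.Producer[]> {
  const sdpObject = sdpTransform.parse(sdpOffer);

  // 1. DTLS: take the endpoint's fingerprint/role from the SDP and connect the transport.
  await transport.connect({ dtlsParameters: extractDtlsParameters(sdpObject) });

  // 2. RTP: for each audio/video m= section, build mediasoup rtpParameters
  //    and create a Producer on the transport.
  const producers: mediasoupTypes.Producer[] = [];
  for (const media of sdpObject.media) {
    if (media.type !== "audio" && media.type !== "video") continue;
    producers.push(
      await transport.produce({
        kind: media.type as mediasoupTypes.MediaKind,
        rtpParameters: extractRtpParameters(sdpObject, media),
      })
    );
  }

  // 3. The SDP answer for the endpoint is built the other way around, from
  //    transport.iceParameters, transport.iceCandidates, transport.dtlsParameters
  //    and the negotiated codecs.
  return producers;
}
```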

Edit: It helps a lot to check the mediasoup-client source code, where this exact translation is implemented.

Thanks! I only asked about limitations because I was reading the OpenVidu fork README quickly and it was not clear to me whether the limitations were solved by the fork or not. Many thanks for the pointers and the analysis.


I am successfully using libmediasoupclient in multiple projects. The library allows a lot of flexibility, but you need to get some experience with it in order to be comfortable integrating it.

Is this the right way to proceed?

Depends on the experience of the people involved in the project (among other things). How are your C++ and libwebrtc skills?

Is it supposed to work against a mediasoup Node server?

That’s the intended use case; see the canonical example.
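For reference, the server side that a libmediasoupclient Device loads against is just a regular mediasoup v3 setup, something along these lines (a minimal sketch; the codec list and IPs are placeholders):

```ts
import * as mediasoup from "mediasoup";

// Minimal mediasoup v3 server-side sketch that a libmediasoupclient Device
// can be loaded against. The codec list and IPs are placeholders.
async function createRouterAndTransport() {
  const worker = await mediasoup.createWorker();

  const router = await worker.createRouter({
    mediaCodecs: [
      { kind: "audio", mimeType: "audio/opus", clockRate: 48000, channels: 2 },
      { kind: "video", mimeType: "video/VP8", clockRate: 90000 },
    ],
  });

  // Signal router.rtpCapabilities to the client so it can call device.Load().
  const transport = await router.createWebRtcTransport({
    listenIps: [{ ip: "0.0.0.0", announcedIp: "203.0.113.1" }], // placeholder addresses
    enableUdp: true,
    enableTcp: true,
    preferUdp: true,
  });

  // Signal these to the client so it can create its send/recv transport:
  // transport.id, transport.iceParameters, transport.iceCandidates, transport.dtlsParameters.
  return { router, transport };
}
```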

Any advice?

It depends a lot on what you are expecting the integration to achieve. Grab a Unity canvas (or whatever the rendering surface is called in Unity) and stream video out of it? I’d say yes, that is possible, but not trivial.

Thanks for the information. We have a Unity scene with point clouds or RGBD video, and we stream it via socket.io acting as an SFU (encoders, decoders and other streaming samples are already done). Point clouds are also streamed via WebRTC using the Unity native plugin, and via libdatachannel C++ in another architecture (an experimental MCU one); the RGBD pipeline is not implemented yet in that architecture. The milestone now is migrating the SFU to WebRTC, ideally using mediasoup. Reading the documentation and your answers, I think implementing libmediasoupclient is feasible for us and is probably the best option to scale, but it takes time (it will probably be our plan B, or the plan A for the next phase). The SDP bridge option would be faster but not really scalable, it seems to me. A GStreamer pipeline implementation inside Unity (integrating it) is the other possibility I can see. Any other advice or suggestion is welcome. I hope this is useful for more people as well.

Thanks,
Beka


I think implementing libmediasoupclient is feasible for us and is probably the best option to scale, but it takes time

Implementing any solid solution takes time; there’s just no way around it. If you want to stream through WebRTC, I think libmediasoupclient can give you that confidence, especially if you want to stay as much as possible within the mediasoup ecosystem.

A GStreamer pipeline implementation inside Unity (integrating it) is the other possibility I can see.

Yes, I think that this will also work. But it will also require a decent time investment to get it right.


Thanks for the advice. Maybe GStreamer will be the first try, because we can split the team (I will work on the GStreamer part while another person works on the Unity integration) and try to move faster.

I will keep you updated about success or failure :smile:


Hey @beka.

I see you have found a path already; please keep us posted :slight_smile:

I would like to ask, if you don’t mind: what specifically are your concerns regarding scalability? Scalability should be 100% server-related and have nothing to do with the client implementation. I’m a bit puzzled. Maybe there’s something I’m not seeing?!
What bridging means in this context is basically putting an interpreter between SDP endpoints and Mediasoup Transports. You’d have to implement some features yourself, depending on how much of the client API you’d like to use, but at the core the SFU would work as intended and it is up to you to scale efficiently in the backend (see the sketch below).
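To make that concrete: whether the clients speak mediasoup-client, libmediasoupclient or bridged SDP, spreading the load happens entirely on the server, e.g. one worker per CPU core plus router.pipeToRouter() to make a producer on one router consumable on another. A rough TypeScript sketch (not production code, just the shape of it):

```ts
import * as mediasoup from "mediasoup";
import { types as mediasoupTypes } from "mediasoup";

// Rough sketch: scaling lives entirely on the server. Typically one worker per
// CPU core, each with its own router sharing the same codec capabilities.
async function createRouters(
  count: number,
  mediaCodecs: mediasoupTypes.RtpCodecCapability[]
): Promise<mediasoupTypes.Router[]> {
  const routers: mediasoupTypes.Router[] = [];
  for (let i = 0; i < count; i++) {
    const worker = await mediasoup.createWorker();
    routers.push(await worker.createRouter({ mediaCodecs }));
  }
  return routers;
}

// Make a producer that lives on routerA consumable by clients attached to routerB,
// regardless of what client library those clients use.
async function spreadProducer(
  routerA: mediasoupTypes.Router,
  routerB: mediasoupTypes.Router,
  producer: mediasoupTypes.Producer
): Promise<void> {
  await routerA.pipeToRouter({ producerId: producer.id, router: routerB });
}
```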

That said, good luck with your project. Eager to hear updates if you’re willing to share.

Hello,

My concern is that worse interoperability usually produces more bugs and a worse experience. Our project is experimental enough without adding possibly unstable layers if we can choose another option. I don’t completely discard bridging, but it is not my first option. libmediasoupclient and GStreamer are widely maintained and SDP bridging for mediasoup is not; that is another criterion for choosing a different starting point.

Thank you!