I mean I now see a latency of about 500-1000ms every time I watch my stream from a new device or browser tab. So what if I define multiple transports/consumers ahead of time to avoid this problem? What is the generally accepted way to solve such problems?
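To clarify what I mean by defining things ahead of time, here is a rough sketch using mediasoup-client (the `signal()` helper, its event names, and the `watch()` flow are placeholders from my own app, not real API):

```ts
// Sketch only: pre-create the receive transport once at page load, so that
// opening the stream later only has to call consume()/resume().
import { Device, types } from 'mediasoup-client';

let device: Device;
let recvTransport: types.Transport;

// `signal` is a placeholder for my own socket request/response helper.
export async function prepareReceiver(signal: (event: string, data?: unknown) => Promise<any>) {
  device = new Device();
  await device.load({ routerRtpCapabilities: await signal('getRouterRtpCapabilities') });

  // Ask the server to create the WebRTC transport ahead of time.
  const params = await signal('createWebRtcTransport');
  recvTransport = device.createRecvTransport(params);

  recvTransport.on('connect', ({ dtlsParameters }, callback, errback) => {
    signal('connectWebRtcTransport', { dtlsParameters }).then(callback).catch(errback);
  });
}

// Later, when the user actually opens the stream:
export async function watch(signal: (event: string, data?: unknown) => Promise<any>, producerId: string) {
  // The server is expected to reply with { id, producerId, kind, rtpParameters }.
  const data = await signal('consume', { producerId, rtpCapabilities: device.rtpCapabilities });
  const consumer = await recvTransport.consume(data);
  await signal('resumeConsumer', { consumerId: consumer.id });
  return new MediaStream([consumer.track]);
}
```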
That is unreasonably large latency; I've never seen anything like that with mediasoup, even remotely. There must be something wrong with your app or networking, but it's impossible to tell from such a description.
Please let me know what information you would find useful.
For now I can say that I am not doing anything unusual: I have a Meteor application on the client and a Node.js backend. Both run in separate Docker containers. Communication between them follows the instructions from here: mediasoup :: Communication Between Client and Server
I assume the delay occurs in mediasoup or in the network, because the black screen only disappears while connecting to the broadcast, after the WebRTC transport connect() call and the consumer.resume() call.
How do you measure delay? Did you already check about:webrtc (Firefox) or chrome://webrtc-internals (Chromium)?
By latency I mean the time from when the socket connection with the server is established to when the broadcast starts. At the moment I measure this with console timing calls inside my Node.js application.
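Roughly like this, if measured on the client side (the URL, element id, and timer label are placeholders, not from my real code):

```ts
// Rough illustration: a console timer from socket connect to playback start.
const socket = new WebSocket('wss://example.com/ws'); // placeholder URL

socket.addEventListener('open', () => {
  console.time('broadcast-setup'); // socket connection established
});

const video = document.getElementById('remote-video') as HTMLVideoElement;
video.addEventListener('playing', () => {
  console.timeEnd('broadcast-setup'); // frames are actually rendering
});
```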
This is not the kind of latency people usually talk about when they talk about latency in RTC.
I guess you’ll have to profile what is happening and where it spends most of the time.
If I say that the “delay” for me also manifests itself in how long the poster from the “poster” attribute is shown before the broadcast starts, will that change your opinion?
You are not talking about latency but about setup time, which depends on many factors (including your app code/flow/logic).
Please don't close the issue; I'll profile my app with N|Solid and then come back with a documented solution or more questions.
Also, can I ask what a "normal" setup time is? 100ms? 200ms?
Hi! Can I ask what a normal latency value is? 100ms? 200ms?
Latency from the starting point (where you are producing the stream) to the end point (where you are consuming it) is normally around 500ms. (WebRTC is an ultra-low-latency protocol, roughly 500 milliseconds.)
If you are talking about stream initialization time, then it depends on many factors.
Like the web camera's response time when you ask for the video stream, plus creating the connection with the mediasoup server (signaling server), creating transports, adding tracks…
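For example, a quick way to see which of those factors dominates (a sketch only; the `connectSignaling`/`createTransports`/`startConsuming` helpers are placeholders for your own code):

```ts
// Time each setup stage separately to see where the time actually goes.
async function timedStep<T>(label: string, step: () => Promise<T>): Promise<T> {
  const t0 = performance.now();
  const result = await step();
  console.log(`${label}: ${(performance.now() - t0).toFixed(0)} ms`);
  return result;
}

// Usage (helpers below are hypothetical):
// const stream    = await timedStep('getUserMedia', () => navigator.mediaDevices.getUserMedia({ video: true }));
// const socket    = await timedStep('signaling',    () => connectSignaling());
// const transport = await timedStep('transports',   () => createTransports(socket));
// const consumer  = await timedStep('consume',      () => startConsuming(transport));
```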
You tell me?
My chat servers send a message via LAN to a broker server, which then instructs media servers around the world. Based on the ping to the server, I'd say anywhere from 20ms to 250ms for the initial boot-up.
It's just that the guy who created this repository, GitHub - michaelfig/mediasoup-broadcast-example: Mediasoup WebRTC vanilla JS broadcast example, doesn't have this delay. I can start streaming from my webcam, then open 10 tabs as a subscriber, and when I hit the subscribe button I don't see a millisecond of delay before I see the broadcast. But in my application, unfortunately, this is not the case, and I cannot understand what the problem is, since profiling gave no results. Also, you are right that I mean stream initialization time, but I don't understand why this setup delay is not present in the example I linked above.
I didn’t quite understand what you mean
Can you share your application url?
Are the broadcast demo and your application using the same kind of server (configuration, server location, inbound and outbound network)?
Can I write to you in pm?
Yes, you can send…
That means if your connection is grade A, you're left with it being a technical issue in your signalling/media servers. This could be as simple as over-looping, being stuck in a loop, or nesting statements when not necessary; all sorts of things can slow this down.
Mediasoup is a bit complicated, so I imagine it's somewhere in your routing of messages. Maybe start adding temporary ("friendly") timers you can remove later, to get an idea of where you spend the most time.
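Something along these lines for the timers (a throwaway sketch for the Node.js side; `handleRequest()` is a placeholder for however your app dispatches signaling messages):

```ts
// Wrap each signaling handler so the log shows which message is slow to route.
import { performance } from 'node:perf_hooks';

async function timedHandler(method: string, handler: () => Promise<unknown>) {
  const start = performance.now();
  try {
    return await handler();
  } finally {
    console.log(`[timer] ${method} took ${(performance.now() - start).toFixed(1)} ms`);
  }
}

// Example (hypothetical dispatch):
// await timedHandler('createWebRtcTransport', () => handleRequest(msg));
```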