Is it the same structure as the one described in section 5.3.1 of https://aomediacodec.github.io/av1-spec/av1-spec.pdf?
I'm not sure aiming at what Chrome is doing today is the right approach. I do not believe the implementation in Chrome is complete, and it is also client-side only.
W3C Insertable Streams is just an API to access (encoded) media frames. It does not apply any transform by itself. Are you referring to media encryption, a.k.a. SFrame?
Whether encrypted or not, using the DD instead of the payload itself has several advantages in systems using SFUs, including but not limited to the ability to decide, immediately upon packet reception, whether to keep or drop a packet in order to send a specific layer, given previous packet loss.
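For instance (a hypothetical sketch with illustrative names, not mediasoup's actual API): once a packet's spatial/temporal layer is known from its Dependency Descriptor, the SFU can make the keep/drop decision without ever touching the (possibly encrypted) payload.

```cpp
// Layer information already extracted from the Dependency Descriptor
// header extension of an incoming packet (illustrative names).
struct LayerInfo {
  int spatialId;
  int temporalId;
};

// Forward only packets belonging to the target layers or below.
// The payload itself is never inspected.
bool ShouldForward(const LayerInfo& pkt, int targetSpatial, int targetTemporal) {
  return pkt.spatialId <= targetSpatial && pkt.temporalId <= targetTemporal;
}
```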
Yes.
You linked to the AV1 codec bitstream specification, while I am speaking about the AV1 RTP payload specification.
These are not the same structure, at all.
Look at the appendix of this (and note the "rtp" string in the name):
https://aomediacodec.github.io/av1-rtp-spec/
So, if I have understood correctly, the information required by the SFU (keyframe, scalability layers, etc.) is also contained in the RTP header extension; if this is the case, parsing the OBU elements is not necessary.
Edit: I have identified the header extension, so all the required information is contained in the Dependency Descriptor extension. Here is what Chromium does:
https://chromium.googlesource.com/external/webrtc/+/master/video/rtp_video_stream_receiver.cc#397
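For reference, the mandatory part of the Dependency Descriptor is only 3 bytes: start_of_frame (1 bit), end_of_frame (1 bit), frame_dependency_template_id (6 bits) and frame_number (16 bits), per the av1-rtp-spec. A minimal, hypothetical parser sketch (not mediasoup or libwebrtc code):

```cpp
#include <cstddef>
#include <cstdint>

// Mandatory fields of the AV1 Dependency Descriptor header extension
// (field names follow the av1-rtp-spec).
struct DependencyDescriptorMandatory {
  bool startOfFrame;     // first packet of the frame
  bool endOfFrame;       // last packet of the frame
  uint8_t templateId;    // frame_dependency_template_id (6 bits)
  uint16_t frameNumber;  // wraps at 2^16
};

// Parse the 3-byte mandatory part of the extension payload.
// Returns false if the payload is too short.
bool ParseDdMandatory(const uint8_t* data, size_t len,
                      DependencyDescriptorMandatory* out) {
  if (len < 3) return false;
  out->startOfFrame = (data[0] & 0x80) != 0;
  out->endOfFrame   = (data[0] & 0x40) != 0;
  out->templateId   = data[0] & 0x3F;
  out->frameNumber  = static_cast<uint16_t>((data[1] << 8) | data[2]);
  return true;
}
```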
One thing that is not clear is where I can find the extension ID value. In the AV1 RTP spec there is an SDP example with
a=extmap:4 https://aomediacodec.github.io/av1-rtp-spec/#dependency-descriptor-rtp-header-extension
But this extension is not present in the generated offer.
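In case it helps, finding the negotiated ID for the DD URI boils down to scanning the a=extmap lines of the SDP. A rough sketch (hypothetical helper, not mediasoup code):

```cpp
#include <cctype>
#include <sstream>
#include <string>

// Scan an SDP blob for "a=extmap:<id> <uri>" lines and return the
// negotiated extension id for the given URI, or -1 if absent.
int FindExtmapId(const std::string& sdp, const std::string& uri) {
  std::istringstream lines(sdp);
  std::string line;
  const std::string prefix = "a=extmap:";
  while (std::getline(lines, line)) {
    auto pos = line.find(prefix);
    if (pos == std::string::npos) continue;
    // Read the numeric id right after "a=extmap:".
    size_t i = pos + prefix.size();
    int id = 0;
    while (i < line.size() && std::isdigit(static_cast<unsigned char>(line[i])))
      id = id * 10 + (line[i++] - '0');
    // Match the URI in the remainder of the line.
    if (line.find(uri, i) != std::string::npos) return id;
  }
  return -1;  // the extension is not present in this offer
}
```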
I told you NOT to follow what Chrome does today…
This should help you understand the structure and the server side implementation.
Thanks for the link. One of the main changes required to use Dependency Descriptors will be implementing a receive queue. At the moment, from what I can tell from the mediasoup source code, the received packets are simply re-distributed to the consumers as they arrive (mediasoup/Router.cpp at v3 · versatica/mediasoup · GitHub).
Right. AFAIU such a queue should be implemented in the SvcConsumer class.
When the consumer has rtx enabled, packets are put in a buffer in RtpStreamSend with a maximum size of 600u, so you are buffering up to 600 RTP packets to be used in case of retransmissions (mediasoup/SvcConsumer.cpp at v3 · versatica/mediasoup · GitHub).
We could use this buffer to compute the Dependency Descriptors, but then we would duplicate the evaluation for each consumer, which is not efficient.
Why not use a buffer on the Producer side instead? That way, we could use the Dependency Descriptors to decide whether a retransmission from the upstream Producer is required.
Yes, it makes sense to do the needed buffering on the Producer side. But I wouldn't like to buffer full RTP packets in the Producer, just the minimal info about the Dependency Descriptor.
Yes, we need to keep only the header extensions.
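Something like the following could work: a fixed-size ring keyed by RTP sequence number that stores only the parsed DD info instead of whole packets. This is a hypothetical sketch; the field names, and the 600-slot size mirroring the RtpStreamSend buffer mentioned above, are illustrative assumptions.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

// Minimal per-packet Dependency Descriptor info (illustrative fields).
struct DdInfo {
  uint16_t frameNumber;
  uint8_t templateId;
  bool startOfFrame;
};

// Ring buffer keyed by RTP sequence number; old entries are silently
// overwritten when the sequence number wraps around the ring.
class DdHistory {
 public:
  static constexpr size_t kSize = 600;  // mirrors RtpStreamSend's 600u

  void Insert(uint16_t seq, const DdInfo& info) {
    slots_[seq % kSize] = Entry{seq, info};
  }

  // Returns the stored info only if the slot still holds this exact seq.
  std::optional<DdInfo> Get(uint16_t seq) const {
    const auto& e = slots_[seq % kSize];
    if (e && e->seq == seq) return e->info;
    return std::nullopt;
  }

 private:
  struct Entry {
    uint16_t seq;
    DdInfo info;
  };
  std::array<std::optional<Entry>, kSize> slots_;
};
```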
One thing that I have missed is whether libwebrtc currently inserts the Dependency Descriptor header extension. From the article posted by @DrAlex it is not clear whether this patch (https://chromium-review.googlesource.com/c/chromium/src/+/2623011) is required to activate the feature, or whether they are using a customized libwebrtc build.
I'm not up to date with AV1 stuff (I love to see how others do it), but AFAIR such a header extension is not yet implemented.
Updates:
- Running Chromium with --force-fieldtrials="WebRTC-DependencyDescriptorAdvertised/Enabled" enables the creation of DependencyDescriptor extension headers.
- Maybe mangling the SDP description could be another way to enable the DD extension.
- In my branch I've updated supportedRtpCapabilities to include the new header extension, and I've just added some initial header parsing.
Thanks, info added to Implement AV1 codec · Issue #512 · versatica/mediasoup · GitHub
Do you have any updates on AV1 support?
I just read a new blog post by Lorenzo about AV1 in Janus: https://www.meetecho.com/blog/av1-svc/
It seems like it would make sense to first have at least basic support for AV1: forward everything and ignore the SVC portion for now. It can later be upgraded with proper SVC support, as that seems to be quite a bit of work to pull off.
Hello guys, any updates on how to activate AV1 in mediasoup?
Is there any code sample on how to do it, especially since the v3 source code contains VP8 and other codecs by default?
Thanks in advance.
What does "activate" mean? These are the supported codecs: mediasoup :: RTP Parameters and Capabilities
TL;DR: someone would need to put in the effort to implement AV1 support. No one has volunteered yet.