Let me see if I get this right:
the headless Chromium redirects all consumers (audio & video) to virtual A/V devices, and then, in turn, I record those?
Just trying to understand - how does this avoid having to post-process the individual A/V streams?
I was wondering if we could get our headless Chromium to output the (screen-share) UI to something like Xvfb (an X display server that renders into virtual memory) and then record that output with ffmpeg?
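
Roughly what I have in mind, as a sketch only - the display number, window size, target URL, and the PulseAudio audio source are all placeholders, and it assumes Xvfb, Chromium, and ffmpeg are installed:

```python
# Sketch: render Chromium into an Xvfb virtual display and capture it with ffmpeg.
import os
import subprocess
import time

DISPLAY = ":99"        # placeholder display number
SIZE = "1280x720"      # placeholder resolution
URL = "https://example.com/room"  # placeholder URL of the page to record

# 1. Start a virtual framebuffer X server for Chromium to render into.
xvfb = subprocess.Popen(["Xvfb", DISPLAY, "-screen", "0", f"{SIZE}x24"])
time.sleep(1)  # give Xvfb a moment to come up

# 2. Launch Chromium pointed at the virtual display (not headless mode,
#    since we want it to actually paint the UI onto that display).
chromium = subprocess.Popen(
    ["chromium", "--no-sandbox", f"--window-size={SIZE.replace('x', ',')}", URL],
    env={**os.environ, "DISPLAY": DISPLAY},
)

# 3. Capture the virtual display with ffmpeg's x11grab, plus audio from a
#    PulseAudio source (the audio routing is an assumption here - audio would
#    still need to be directed to that source somehow).
ffmpeg = subprocess.Popen([
    "ffmpeg", "-y",
    "-f", "x11grab", "-video_size", SIZE, "-i", DISPLAY,
    "-f", "pulse", "-i", "default",
    "-c:v", "libx264", "-preset", "veryfast", "-c:a", "aac",
    "output.mp4",
])

ffmpeg.wait()
```

That would give us a single composited recording of whatever the page shows, instead of separate per-participant streams - assuming the audio side can be solved the same way.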