I know that mediasoup is based on the SFU architecture. Is it possible to do face processing (for example, emotion recognition) on the server side?
Say 5 people are connected in one “room” and they send their streams to a Node.js server (which uses the mediasoup API). That server then gets access to each of the streams, recognizes emotions (using some external library), and applies some filters to those streams. Finally, the server sends the modified streams back to all participants in the room. Is this possible and efficient with mediasoup?
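To make the question concrete, here is roughly the per-participant pipeline I have in mind. This is only a sketch based on my reading of mediasoup v3's PlainTransport API, not working code: the port numbers, the external “emotion filter” process, and the `rtpParameters` are placeholders.

```js
// Sketch only. Assumes a mediasoup v3 `router` and an existing
// `participantVideoProducer`; ports and the external process are hypothetical.

// 1. Pipe the participant's video out of mediasoup over plain RTP:
const outTransport = await router.createPlainTransport({
  listenIp: '127.0.0.1',
  rtcpMux: false,
  comedia: false
});
await outTransport.connect({ ip: '127.0.0.1', port: 5004, rtcpPort: 5005 });

const rtpConsumer = await outTransport.consume({
  producerId: participantVideoProducer.id,
  rtpCapabilities: router.rtpCapabilities
});

// 2. An external process (e.g. GStreamer/FFmpeg plus an ML library) listens
//    on port 5004, decodes, runs emotion recognition, draws the filter,
//    re-encodes, and sends RTP back into mediasoup:
const inTransport = await router.createPlainTransport({
  listenIp: '127.0.0.1',
  comedia: true // mediasoup learns the remote address from incoming RTP
});

const processedProducer = await inTransport.produce({
  kind: 'video',
  rtpParameters: { /* must match what the external encoder sends */ }
});

// 3. Everyone in the room then consumes `processedProducer` as usual.
```

So effectively the server would decode, process, and re-encode every stream itself, which is what makes me doubt the efficiency.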
I know that this scenario is more MCU than SFU, so is it possible to build a hybrid solution using mediasoup?