The next generation of real-time streaming applications, such as video-conferencing apps, will augment video streams with content generated and computed from the live feed. Understanding the computational overheads involved in enabling such advanced features can inform scheduling strategies in edge computing environments. This research project explores the possibilities of distributing the components of WebRTC, the backbone technology for real-time video conferencing, across edge devices. We specifically focus on analyzing the computational overheads associated with applying augmented reality and FFmpeg filters to video streams.
The above figure demonstrates the workflow for setting up a peer-to-peer WebRTC connection.
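For a concrete picture of that offer/answer exchange, the following is a minimal sketch written with the Python aiortc library, used here purely for illustration; the project's own client (./bin/main) is built separately and does not use this code.

```python
# A minimal sketch of WebRTC's offer/answer signaling using aiortc.
# Illustrative only; not the client implemented in this repository.
import asyncio
from aiortc import RTCPeerConnection

async def connect():
    caller, callee = RTCPeerConnection(), RTCPeerConnection()

    # The caller opens a data channel so the offer has something to negotiate.
    channel = caller.createDataChannel("demo")

    # Caller creates an SDP offer and applies it locally (in aiortc this
    # also gathers ICE candidates).
    await caller.setLocalDescription(await caller.createOffer())
    # Signaling step: the offer travels to the callee. Here both peers
    # live in one process, so it is handed over directly.
    await callee.setRemoteDescription(caller.localDescription)

    # Callee answers, and the answer travels back to the caller.
    await callee.setLocalDescription(await callee.createAnswer())
    await caller.setRemoteDescription(callee.localDescription)

    print("signaling states:", caller.signalingState, callee.signalingState)
    await caller.close()
    await callee.close()

asyncio.run(connect())
```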
The above flowchart describes the steps involved in overlaying augmented-reality filters on a video stream by processing it frame by frame.
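As an illustration of such a per-frame loop, below is a minimal sketch using OpenCV; the project's actual filters live in ar-filters/app.py and may differ, and the face detector and rectangle overlay used here are placeholder assumptions.

```python
# A minimal sketch of frame-by-frame overlay processing with OpenCV.
# Illustrative only: the rectangle drawn here is a stand-in for a real
# AR filter.
import cv2

cap = cv2.VideoCapture(0)  # capture from the default webcam
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

while True:
    ok, frame = cap.read()           # 1. pull the next frame off the stream
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # 2. detect the region of interest and draw the overlay onto the frame
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("ar-overlay", frame)  # 3. hand the augmented frame onward
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```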
In the current directory, execute the commands below to build the project and start the server:
make clean
make
./bin/main --server
Next, start a client in each of two separate terminals:
./bin/main --client
Launch the augmented-reality filter application:
python3 ar-filters/app.py
Finally, use FFmpeg to capture the webcam feed from the V4L2 device and stream it as MPEG-TS over UDP multicast:
ffmpeg -f v4l2 -i /dev/video0 -f mpegts udp://224.0.0.251:5353
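To verify that the stream is arriving, it can be played back with FFmpeg's ffplay tool (an optional check, not part of the original workflow):
ffplay udp://224.0.0.251:5353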
For more details about the methodology and results, please refer to the project report.