Augmented Reality Video Streaming over WebRTC

The next generation of real-time streaming applications, such as video-conferencing apps, will augment video streams with content generated and computed from the camera feed. Understanding the computational overheads involved in enabling such advanced features can inform scheduling strategies in edge computing environments. This research project explores the possibilities of distributing the components of WebRTC, the backbone technology for real-time video conferencing, across edge devices. We focus specifically on analysing the computational overheads of applying augmented-reality and FFmpeg filters to video streams.

The basics

*Figure: WebRTC workflow.* The workflow to set up a peer-to-peer WebRTC connection.
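The connection setup in the workflow can be sketched as an offer/answer exchange relayed through the signalling server. The `SignalingServer` class below is a simplified stand-in of our own (not the repository's implementation), and the SDP and ICE payloads are placeholder strings; a real client would generate them via the WebRTC API.

```python
class SignalingServer:
    """Relays signalling messages between registered peers (simplified model)."""
    def __init__(self):
        self.inboxes = {}

    def register(self, peer_id):
        self.inboxes[peer_id] = []

    def send(self, to_peer, message):
        self.inboxes[to_peer].append(message)

    def receive(self, peer_id):
        return self.inboxes[peer_id].pop(0)


def establish_session(server, caller, callee):
    """Walk through the offer -> answer -> ICE candidate exchange."""
    # 1. The caller creates an SDP offer and relays it via the server.
    server.send(callee, {"type": "offer", "sdp": f"offer-from-{caller}"})
    offer = server.receive(callee)

    # 2. The callee replies with an SDP answer.
    server.send(caller, {"type": "answer", "sdp": f"answer-from-{callee}"})
    answer = server.receive(caller)

    # 3. Peers exchange ICE candidates until connectivity checks succeed.
    server.send(callee, {"type": "candidate", "candidate": "udp 192.0.2.1 3478"})
    candidate = server.receive(callee)

    return offer, answer, candidate
```

Once the exchange completes, media flows directly between the two peers; the server is only needed during setup.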

*Figure: Video streaming setup.*

*Figure: Frame processing flowchart.* The steps involved in overlaying augmented-reality filters by processing a video stream frame by frame.
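The per-frame loop in the flowchart can be sketched in Python. A production pipeline would decode frames with OpenCV or FFmpeg and apply real AR models; here a frame is deliberately simplified to a list of rows of RGB tuples, and the "filter" is a flat colour blend standing in for a genuine overlay.

```python
def apply_filter(frame, overlay=(255, 200, 0), alpha=0.3):
    """Blend a flat colour onto every pixel of one frame (stand-in AR effect)."""
    return [
        [
            tuple(int((1 - alpha) * c + alpha * o) for c, o in zip(pixel, overlay))
            for pixel in row
        ]
        for row in frame
    ]

def process_stream(frames):
    """Apply the filter frame by frame, mirroring the flowchart's loop."""
    for frame in frames:
        yield apply_filter(frame)
```

Because each frame is processed independently, the same loop structure applies whether the filter runs on the streaming client or is offloaded to an edge device.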

Experimental Setup

Step 1: Compile the source code

In the current directory, execute the following commands:

make clean
make

Step 2: Start the signalling server

./bin/main --server

Step 3: Bring two clients into the system

Start a client in each of two separate terminals:

./bin/main --client

Step 4: Start the AR-processing script

python3 ar-filters/app.py

Step 5: Start the video streaming source

ffmpeg -f v4l2 -i /dev/video0 -f mpegts udp://224.0.0.251:5353
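For scripted experiments, the streaming command above can be launched from Python. The `build_ffmpeg_cmd` helper below is our own addition (not part of the repository) and simply mirrors the command shown: capture from the V4L2 webcam device and stream MPEG-TS over UDP multicast.

```python
import subprocess

def build_ffmpeg_cmd(device="/dev/video0", group="224.0.0.251", port=5353):
    """Assemble the ffmpeg invocation used as the video streaming source."""
    return [
        "ffmpeg",
        "-f", "v4l2", "-i", device,               # capture from the webcam
        "-f", "mpegts", f"udp://{group}:{port}",  # stream MPEG-TS over UDP multicast
    ]

def start_stream():
    """Launch ffmpeg as a background process (requires ffmpeg and a webcam)."""
    return subprocess.Popen(build_ffmpeg_cmd())
```

Wrapping the command this way makes it easy to vary the capture device or multicast endpoint between experiment runs.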

For more details about the methodology and results, please refer to the project report.
