This repository contains the implementation code for detecting manipulated facial images using the FaceForensics++ dataset.
For detailed documentation, please refer to the accompanying Medium article.
Active research on image and video generation has produced many new ways to manipulate original source material. At the same time, this leads to a loss of trust in digital content and can cause further harm by spreading false information and fake news.
Our goal is to build a model that recognizes whether a given video (or image sequence) is real or fake.
The FaceForensics++ dataset was collected by the Visual Computing Group (http://niessnerlab.org/) and can be obtained after accepting their terms and conditions.
- A total of 1,000 videos (mostly of news anchors reading the news) were downloaded from YouTube.
- Manipulated videos were generated by applying three automated state-of-the-art face manipulation methods (DeepFakes, FaceSwap, and Face2Face) to these 1,000 pristine videos.
- Images were then gathered from both the pristine (real) videos and the manipulated (fake) videos.
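Because the classifier operates on individual frames rather than whole clips, frames are usually sampled at regular intervals from each video before face extraction. The helper below is a minimal sketch of that sampling step under assumed conventions (the function name and evenly-spaced strategy are illustrative, not taken from the notebooks); the actual frame decoding would typically use a library such as OpenCV (`cv2.VideoCapture`).

```python
def sample_frame_indices(total_frames: int, num_samples: int) -> list[int]:
    """Return up to `num_samples` evenly spaced frame indices in [0, total_frames).

    Illustrative sketch: the real notebooks may sample frames differently.
    """
    if total_frames <= 0 or num_samples <= 0:
        return []
    # Space the samples as evenly as the video length allows.
    step = max(total_frames // num_samples, 1)
    return list(range(0, total_frames, step))[:num_samples]


# Example: pick 10 frames from a 300-frame clip.
indices = sample_frame_indices(300, 10)
print(indices)  # every 30th frame: [0, 30, 60, ..., 270]
```

The chosen indices would then be seeked to and decoded one by one, and a face detector applied to each decoded frame.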
For a deeper understanding, please see the paper at https://arxiv.org/abs/1901.08971,
and to download and extract the data, please visit https://github.com/ondyari/FaceForensics
- Data_Analysis_Extraction.ipynb ----> understanding, extracting, and preprocessing the data
- modeling.ipynb -----> all of the modeling work
- final.ipynb -----> the complete pipeline for video classification
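A common way for such a pipeline to turn per-frame predictions into a video-level verdict is to average the frame-level fake probabilities and apply a threshold. The sketch below illustrates that aggregation step; the function name, default threshold, and labels are assumptions for illustration, not taken from final.ipynb.

```python
def classify_video(frame_probs: list[float], threshold: float = 0.5) -> tuple[str, float]:
    """Aggregate per-frame 'fake' probabilities into one video-level label.

    Illustrative sketch: averages frame scores and thresholds the mean.
    """
    if not frame_probs:
        raise ValueError("no frame predictions to aggregate")
    mean_prob = sum(frame_probs) / len(frame_probs)
    label = "FAKE" if mean_prob >= threshold else "REAL"
    return label, mean_prob


# Example: three frames scored by a (hypothetical) frame-level model.
label, score = classify_video([0.9, 0.8, 0.7])
print(label, round(score, 2))  # FAKE 0.8
```

Averaging is robust to a few misclassified frames; a majority vote over thresholded frame labels is an equally simple alternative.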