Defake

George Aboudiwan | 2019

Oh wow, look at this video! It’s Obama… well, I think it’s Obama 🤔 I mean it looks and sounds like him but I’m not sure… Wait, why is Obama saying those things...


With election season approaching, misinformation has increasingly become a weapon used to discredit public figures and polarize voters. DeepFakes are the new kids on the block, with enormous potential to spread misinformation. DeepFakes are nearly impossible to detect, which makes them an even more attractive weapon for political actors to employ.

DeepFakes are a class of counterfeit videos that superimpose face images and even vocals, allowing an attacker to realistically impersonate anyone they choose. DeepFake technology has progressed to the point where it is increasingly hard for a human to recognize these forgeries, much less a computer.

My hackathon team set out to combat this problem. For our first approach, we attempted to use regular face detection software, but we were not seeing any results. After talking with the AWS rep at the hackathon, we realized the shortcoming of face identification software: it looks for similarities between faces and ignores the minute differences. With DeepFake videos, however, it is exactly those minute differences that distinguish an impostor from the actual subject.
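To give a sense of why similarity-based matching falls short, here is a rough illustration using AWS Rekognition's CompareFaces call (the specific service, threshold, and file names are just placeholders, not necessarily what we ran). A well-made DeepFake of Obama still registers as "the same face," so a high similarity score tells you nothing about forgery:

```python
# Illustration of similarity-based face comparison (sketched with AWS
# Rekognition's CompareFaces via boto3; file names and threshold are
# placeholder assumptions). A convincing DeepFake still scores as a match.
import boto3

rekognition = boto3.client("rekognition")

with open("real_obama_frame.jpg", "rb") as src, open("suspect_frame.jpg", "rb") as tgt:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=80,
    )

for match in response["FaceMatches"]:
    # DeepFakes routinely pass this check with very high similarity scores.
    print(f"Similarity: {match['Similarity']:.1f}%")
```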

Our hypothesis was that, through the environment someone is brought up and lives in, they develop their own speaking habits, habits that are expressed through their lips and eyes. DeepFakes require the attacker’s face to be slightly superimposed onto the victim’s in order to control features like the lips and eyes, and doing so alters the victim’s speaking habits. By building a model of Obama, for example, we can train a computer to recognize Obama’s (or anyone’s) natural speaking habits, and the program can then flag a DeepFake whenever that pattern is inconsistent.
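Here is a minimal sketch of that idea, not our exact model: extract lip/eye landmarks per frame, learn a per-person "speaking habit" profile from genuine footage, and flag suspect videos whose landmark dynamics fall outside it. It assumes the open-source face_recognition library for landmarks and a one-class SVM as the profile; the video file names are placeholders.

```python
# Sketch: learn a subject's lip/eye "speaking habit" profile and flag
# videos that are inconsistent with it. Library choices and file names
# are illustrative assumptions, not our exact pipeline.
import cv2
import numpy as np
import face_recognition
from sklearn.svm import OneClassSVM

def frame_features(rgb_frame):
    """Return a small lip/eye feature vector for one frame, or None if no face."""
    landmarks = face_recognition.face_landmarks(rgb_frame)
    if not landmarks:
        return None
    lm = landmarks[0]
    top_lip, bottom_lip = np.array(lm["top_lip"]), np.array(lm["bottom_lip"])
    left_eye, right_eye = np.array(lm["left_eye"]), np.array(lm["right_eye"])
    mouth_open = abs(top_lip[:, 1].mean() - bottom_lip[:, 1].mean())
    mouth_width = top_lip[:, 0].max() - top_lip[:, 0].min()
    eye_open = (np.ptp(left_eye[:, 1]) + np.ptp(right_eye[:, 1])) / 2.0
    return [mouth_open / max(mouth_width, 1), eye_open / max(mouth_width, 1)]

def video_features(path, step=5):
    """Sample every `step`-th frame of a video and collect feature vectors."""
    cap = cv2.VideoCapture(path)
    feats, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            f = frame_features(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if f is not None:
                feats.append(f)
        i += 1
    cap.release()
    return np.array(feats)

# Train on genuine footage of the subject (placeholder file names)...
real = np.vstack([video_features(p) for p in ["obama_speech1.mp4", "obama_speech2.mp4"]])
profile = OneClassSVM(nu=0.05, gamma="scale").fit(real)

# ...then score a suspect clip: many "outlier" frames suggests a DeepFake.
suspect = video_features("suspect_clip.mp4")
outlier_fraction = (profile.predict(suspect) == -1).mean()
print(f"{outlier_fraction:.0%} of frames inconsistent with the subject's habits")
```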

Surprisingly, our model worked!!

To make the model accessible to users, we attached it to a Chrome extension and a web application where users can enter a video URL and test the video's legitimacy. The Chrome extension, when enabled, automatically scans the webpage for videos and runs them through the model. The moment a video is flagged as a DeepFake, it alerts the user in the browser. This feature is important for stopping the spread of misinformation because it informs social media users of counterfeit videos before they are able to share, like, or comment.
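For a rough picture of the server side, here is a minimal sketch of a URL-checking endpoint of the kind the web app and extension could call. It uses Flask purely as an example (not necessarily our stack), reuses the `video_features` and `profile` names from the sketch above, and the `/check` route and 0.5 cutoff are assumptions:

```python
# Minimal sketch of a video-URL checking endpoint. Flask, the /check route,
# and the 0.5 cutoff are illustrative assumptions; video_features and profile
# refer to the landmark-profile sketch shown earlier.
import tempfile
import urllib.request
from flask import Flask, jsonify, request

app = Flask(__name__)
THRESHOLD = 0.5  # assumed fraction of inconsistent frames before flagging

@app.route("/check", methods=["POST"])
def check_video():
    url = request.get_json()["url"]
    # Download the video to a temp file and score it against the subject's profile.
    with tempfile.NamedTemporaryFile(suffix=".mp4") as tmp:
        urllib.request.urlretrieve(url, tmp.name)
        feats = video_features(tmp.name)
        score = float((profile.predict(feats) == -1).mean())
    return jsonify({"url": url, "deepfake": score > THRESHOLD, "score": score})

if __name__ == "__main__":
    app.run(port=5000)
```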

You can check out the DevPost here!