DeepFake Detector AIs Are Good Too!

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We talked about a technique by the name of Face2Face back in 2016, approximately 300 videos ago. It was able to take a video of us and transfer our gestures to a target subject. With techniques like this, it is now easier and cheaper than ever to create deepfake videos of a target subject, provided that we have enough training data, which is almost certainly the case for the people who are the most high-value targets for these kinds of operations.

Look here. Some of these videos are real, and some are fake. What do you think, which is which? Well, here are the results: this one contains artifacts and is hence easy to spot, but the rest… it's tough. And it's getting tougher by the day. How many did you get right? Make sure to leave a comment below. However, don't despair, it's not all doom
and gloom. Approximately a year ago, in came FaceForensics, a paper that contains a large dataset of original and manipulated video pairs. As this offered a ton of training data for real and forged videos, it became possible to train a deepfake detector. You can see it here in action as these green-to-red colors showcase regions that the AI correctly thinks were tampered with.

However, the follow-up paper by the name of FaceForensics++ contains not only an improved dataset, but also many more valuable insights to help us detect these deepfake videos, and even more. Let's dive in.

Key insight number one. As you have seen a minute ago, many of these deepfake AIs introduce imperfections, or in other words, artifacts, to the video. However, most videos that we watch on the internet are compressed, and the compression procedure, you have guessed right, also introduces artifacts to the video. From this, it follows that hiding these deepfake artifacts behind compression artifacts sounds like a good strategy to fool humans and detector neural networks alike, and not only that, but the paper also shows us by how much exactly. Here you see a table where each row shows
the detection accuracy of previous techniques and of the newly proposed one, and the most interesting part is how this accuracy drops when we go from HQ to LQ, or in other words, from a high-quality video to a lower-quality one with more compression artifacts. Overall, we can get an 80-95% success rate, which is absolutely amazing. But, of course, you ask: amazing compared to what?

Onwards to insight number two. This chart shows how humans fared at deepfake detection; as you can see, not too well. Don't forget, the 50% line means that the human guesses were as good as a coin flip, which means that they were not doing well at all. Face2Face hovers around this ratio, and if you look at NeuralTextures, you see that this is a technique that is extremely effective at fooling humans. And wait… what's that? For all the other techniques, we see that the grey bars are shorter, meaning that it is more difficult to find out whether a video is a deepfake, because its own artifacts are hidden behind the compression artifacts. But the opposite is the case for NeuralTextures, perhaps because of its small footprint on the videos. Note that a state-of-the-art detector AI,
for instance, the one proposed in this paper, does way better than these 204 human participants.

This work not only introduces a dataset and these cool insights, but also a detector neural network. Now, hold on to your papers, because this detection pipeline is not only so powerful that it practically eats compressed deepfakes for breakfast, but it even tells us with remarkable accuracy which method was used to tamper with the input footage. Bravo!

Now, it is of utmost importance that we let people know about the existence of these techniques; this is what I am trying to accomplish with this video. But that's not enough, so I also went to this year's biggest NATO conference and made sure that political and military decision makers are also informed about this topic. Last year, I went to the European Political Strategy Center with a similar goal. I was so nervous before both of these talks and spent a long time rehearsing them, which delayed a few videos here on the channel.

However, because of your support on Patreon, I am in a fortunate situation where I can focus on doing what is right and what is best for all of us, and not worry about the financials all the time. I am really grateful for that; it truly is a privilege. Thank you. If you wish to support us, make sure to click the Patreon link in the video description. Thanks for watching and for your generous support, and I'll see you next time!
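The HQ-to-LQ accuracy drop described in the video can be illustrated with a toy sketch. This is not the paper's pipeline (FaceForensics++ fine-tunes a deep convolutional network on face crops); it is a from-scratch logistic regression on a single made-up "artifact energy" feature, where heavier compression adds extra noise that buries the forgery signal. The feature and all numbers are assumptions for illustration only.

```python
# Toy illustration of the HQ-vs-LQ detection gap. NOT the paper's method:
# FaceForensics++ fine-tunes a deep CNN on face crops; this is a from-scratch
# logistic regression on one hypothetical "artifact energy" feature.
import math
import random

def train_logreg(xs, ys, lr=0.1, epochs=300):
    """One-feature logistic regression trained with plain SGD."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x   # gradient of the log-loss
            b -= lr * (p - y)
    return w, b

def accuracy(w, b, xs, ys):
    """Fraction of samples classified correctly at the 0.5 threshold."""
    hits = sum((1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5) == bool(y)
               for x, y in zip(xs, ys))
    return hits / len(xs)

def make_dataset(n, compression_noise, rng):
    """Forged frames carry a higher artifact-energy score than real ones,
    but heavier compression buries both classes in extra noise."""
    xs, ys = [], []
    for _ in range(n):
        forged = rng.random() < 0.5
        base = 0.5 if forged else 0.2
        xs.append(base + rng.gauss(0.0, 0.05) + rng.gauss(0.0, compression_noise))
        ys.append(int(forged))
    return xs, ys

rng = random.Random(0)
accs = {}
for name, noise in [("HQ", 0.02), ("LQ", 0.30)]:
    xs, ys = make_dataset(500, noise, rng)
    w, b = train_logreg(xs[:250], ys[:250])
    accs[name] = accuracy(w, b, xs[250:], ys[250:])
    print(name, round(accs[name], 2))  # high for HQ, notably lower for LQ
```

With the low-quality setting, the two classes overlap heavily and the accuracy drops toward chance, mirroring the HQ-to-LQ trend in the paper's table.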

100 thoughts on “DeepFake Detector AIs Are Good Too!”

  1. The 2nd one was very hard, the rest were easy.
    Observation: current AIs often mess up the opening and closing of the mouth, and especially the visibility of teeth.

  2. Deepstate has been using Deepfakes and VICSIMS for many years. High profile shooters or deepfake patsies? (Run the software on Sandyhook, 911, Vegas, ETC…)

  3. People living in some small town in Africa or the slums of Mumbai don't even know about deepfakes, let alone deepfake detectors. They will just see a forwarded WhatsApp video on their Chinese smartphones and believe it. This can have devastating effects.

  4. Ultimately, in the GAN battle between fakers and detectors, the detectors will have to lose.
    A movie is basically just many numbers representing pixels that change in time.
    Right now it looks like an even battle, but each time the fakers improve, the error used by the detector decreases, and because this number is finite, the error dropping to 0 is possible.

  5. You can probably prove something is a deep fake, but you can never prove something is real, as the deep fake may simply be more advanced than the detector, or it may simply fall in the one out of a thousand (or even a million) that the generally superior detector sometimes gets wrong.

  6. The truth these days is predicated on whoever has the biggest platform for information dissemination and can reach the largest audience the quickest. Whether the information is factual or not is of little importance when facts and empirical data are tossed out for emotional arguments of little to no value.

  7. Absolute madman for actually informing the wider audience with your academic knowledge. I would be happy to give you a buck or two on Patreon if I just had the money for that as a student 😀

  8. So this is what the government is going to use to make anyone guilty of anything they want by staging false crimes with you as the criminal.

  9. The limitation then is the mechanism. Video as a source of information is becoming obsolete. We need a new way to transmit information.

  10. I love this, the best thing I've heard all day. The people who achieved this are guardians of truth and integrity.

  11. My Paranoia thanks you and yours… I will Never Sleep Sound AGAIN.
    Talk about Plausible Deniability… Now with computer assistance.

  12. Deep fakes are being pushed in the public domain because political elite have, in reality, done and said unmentionable things while on videotape. This is a blackmail scheme in exchange for money and power. They will try to blame deep fakes when they come out. Good thing we can prove their validity

  13. All this is designed to do is foment and exacerbate distrust and subsequent paranoia in us all. We are increasingly being programmed to question everything in the conscious world, to the point which mankind becomes frozen by inaction. All this as side effect of the inability to discern the reality from the manipulated. A piece of the larger weapon that will spell the end of mankind as we know it.

  14. Bottom left one was fake af, but it was the only one I really looked at – I didn't manage to look at the rest of them in time. People's lips and eyebrows still look janky in some of these. Like the source actor moves their mouth in a way that the normal person never does so you don't have the source material for a realistic way of faking it.

  15. in the 6 videos where some were real and some were fake, the real videos were obviously real but not all of the fake ones were obviously fake.

  16. The first deepfakes were easy because the ones with hand gestures were obviously real, so until people find a way to fake those, we know they are legit because they match the expressions. But then again, I guess you could 3D-model them.

  17. Next step: Rate deepfakes fitness using deepfake-detectors, i.e. next-gen deepfakes are bred to fool the AI detectors, which are in turn bred to detect the next-gen deepfakes, and so on. Eventually, the fakes will be undetectable.

  18. What worries me is that the media will use deepfakes to attack a political person, and the average viewer will believe it even though it was proven fake online, because the average viewer doesn't care to check whether it's real, especially when the targeted person is someone they hate.

  19. I think in the next step, an adversarial method that minimizes all these artifacts will be presented. And after multiple steps, it will converge until deepfakes and real videos are indistinguishable.

  20. "How many did you guess correctly?"
    All of them. Because for whatever reason, deepfake always uses formal videos. I'm so smart. Can I go drive spaceships now?

  21. I advise only trusting information that verifiably comes from its original source. This is the strategy necessary to avoid political or military disinformation.
    I also hope you consider the ethical quandary that by providing any information to the military leaders at NATO, you are contributing to the development of military weapons, which is in my opinion a deeply immoral activity for any scientist to knowingly participate in. We have a duty to avoid or sabotage all weapons of war.

  22. What are the implications in war and similar problematic topics?
    How far could one go to manipulate people by using this technology. Especially if one can't or hardly can detect those. It's interesting that there are ways to detect false videos by using AI. However, this also should be possible for AI that fabricate or are used to fabricate such videos

  23. Presumably, 0% accuracy would be a 50% success rate, and hence 50% accuracy would mean a 75% success rate, not a coin flip.

    Else anything below 50% accuracy, like the NeuralTextures part, wouldn't make sense.

  24. I really do hope lie detectors will also be a huge factor, as they will be heavily needed when someone is being framed by a rogue government, because this is literally a DIY way to start a war on steroids lmao

  25. Is this technology even needed?

    Making fakes to sell security?

    It's a very evil technology.

    It's like releasing animals to sell guns.

  26. The era of "Minority Report" has begun. I can see how dangerous this technology can be. Imagine a fake Trump declaring war on Iran with a nuclear strike🤒

  27. Yes, we understand how hard the Pedos are scrambling to promote this "deepfake" bullshit. Cast doubt on the blackmail tapes if they ever see the light of day in public. Your mask is slipping…

  28. Rule of thumb:
    If you see Trump doing or saying something that no sane person would, it's probably not fake.
    If you see him being human, it's probably fake.
    And if you see any trace of humanity in Putin's eyes, it's fake.
    You might as well deep-fake a South Park character as most world leaders or celebrities.
    Reality has made this technology pointless in 2019.
    I mean, really, if you saw one of them eat their own poop on TV tomorrow, would it shock you?

  29. 0:45 From the two seconds I got, I guessed 3 of the 4 fake ones. They look terrible – like someone is painting them in real time with watercolors.

  30. Someone needs to package these deepfake detectors into an easily used (and ideally free) end product that anyone can use to screen a video they are suspicious of.
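Several commenters above (for example 4, 17, and 19) describe an arms race in which generators and detectors keep adapting to each other until fakes become undetectable. Here is a toy, hypothetical sketch of that dynamic; it is not a real GAN, just alternating updates on a single made-up "artifact score" feature:

```python
# Toy sketch of the generator-vs-detector "arms race" the commenters describe.
# NOT a real GAN: both sides are one-dimensional caricatures in which the
# detector picks the best threshold and the generator shifts its output
# distribution toward the real one.
import random

rng = random.Random(1)

def best_threshold(real, fake):
    """A simple adapting detector: threshold at the midpoint of the class means."""
    return (sum(real) / len(real) + sum(fake) / len(fake)) / 2

def detection_rate(fake, threshold):
    """Fraction of fake samples the threshold detector flags."""
    return sum(x > threshold for x in fake) / len(fake)

real_mean, fake_mean = 0.0, 1.0   # made-up "artifact score" distributions
rates = []
for _ in range(20):
    real = [rng.gauss(real_mean, 0.1) for _ in range(200)]
    fake = [rng.gauss(fake_mean, 0.1) for _ in range(200)]
    rates.append(detection_rate(fake, best_threshold(real, fake)))
    fake_mean -= 0.5 * (fake_mean - real_mean)   # generator adapts to the detector

print(round(rates[0], 2), round(rates[-1], 2))  # starts near 1.0, ends near chance
```

After a few rounds the fake distribution sits on top of the real one, and the best the adapting detector can do is roughly a 50% detection rate, i.e., a coin flip, which is the convergence the commenters predict.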
