Can We Detect Neural Image Generators?

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we have an abundance of neural network-based image generation techniques. Every image that you see here and throughout this video is generated by one of these learning-based methods. These can offer high-fidelity synthesis, and not only that, but we can often even exert artistic control over the outputs. We can truly do so much with these. And if you are wondering, there is a reason why we will be talking about this exact set of techniques, and you will see that in a moment.

So the first one is a very capable technique by the name of CycleGAN! This was great at image translation, or in other words, transforming apples into oranges, zebras into horses, and more. It was called CycleGAN because it introduced a cycle consistency loss function. This means that if we convert a summer image to a winter image, and then back to a summer image, we should get the same input image back. If our learning system obeys this principle, the output quality of the translation is going to be significantly better.
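
To make this concrete, here is a minimal sketch of such a cycle consistency loss in PyTorch. The generator names and the loss weight are illustrative assumptions, not taken verbatim from the CycleGAN codebase:

    # Minimal cycle consistency loss sketch. G_AB maps domain A (e.g., summer)
    # to domain B (e.g., winter), and G_BA maps back. Names are illustrative.
    import torch.nn.functional as F

    def cycle_consistency_loss(G_AB, G_BA, real_a, real_b, weight=10.0):
        # A round trip through both generators should reconstruct the input.
        reconstructed_a = G_BA(G_AB(real_a))
        reconstructed_b = G_AB(G_BA(real_b))
        # CycleGAN penalizes the reconstruction error with an L1 norm.
        loss = F.l1_loss(reconstructed_a, real_a) + F.l1_loss(reconstructed_b, real_b)
        return weight * loss

This term is added to the usual adversarial losses of the two generators during training.
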
Later, a technique by the name of BigGAN appeared, which was able to create reasonably high-quality images, and not only that, but it also gave us a little artistic control over the outputs. After that, StyleGAN and even its second version appeared, which, among many other crazy good features, opened up the possibility to lock in several aspects of these images, for instance, age, pose, some facial features, and more, and then we could mix them with other images to our liking, while retaining these locked-in aspects. And of course, DeepFake creation provides fertile ground for research work, so much so that at this point, it seems to be a subfield of its own where the rate of progress is just stunning.

Now that we can generate arbitrarily many beautiful images with these learning algorithms, they will inevitably appear in many corners of the internet, so an important new question arises – can we detect if an image was made by these methods? This new paper argues that the answer is a resounding yes. You see a bunch of synthetic images above and real images below, and if you look carefully at the labels, you’ll see many names that ring a bell to our Scholarly minds. CycleGAN, BigGAN, StyleGAN… nice! And now you know that this is exactly why we briefly went through what these techniques do at the start of the video. So, all of these can be detected by this new method.

And now, hold on to your papers, because I kind of expected that, but what I didn’t expect is that this detector was trained on only one of these techniques, and leaning on that knowledge, it was able to catch all the others! Now that’s incredible. This means that there are foundational elements that bind all of these techniques together. Our seasoned Fellow Scholars know that this similarity is none other than the fact that they are all built on convolutional neural networks. They are vastly different, but they use very similar building blocks. Imagine the convolutional layers as Lego pieces, and think of the techniques themselves as the objects that we build using them. We can build anything, but what binds them all together is that they are all but a collection of Lego pieces.
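
In spirit, such a detector is just a binary image classifier. Here is a minimal sketch assuming a ResNet-50 backbone with a single "fake" logit; the paper’s exact architecture, augmentations, and training recipe are richer than this:

    # Minimal real-vs-fake detector sketch: a standard CNN classifier with one
    # output logit. Backbone and training details here are illustrative only.
    import torch.nn as nn
    from torchvision import models

    def build_detector():
        model = models.resnet50(pretrained=True)       # ImageNet-pretrained backbone
        model.fc = nn.Linear(model.fc.in_features, 1)  # one logit: fake (1) vs real (0)
        return model

    def training_step(model, images, labels, optimizer):
        # labels: 1.0 for generated images, 0.0 for real photographs
        logits = model(images).squeeze(1)
        loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
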
So, this detector was only trained on real images and synthetic ones created by the ProGAN technique, and you see with the blue bars that the detection ratio is quite close to perfect for a number of techniques, save for these two. The AP label means average precision.
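
Average precision summarizes how well the detector’s scores rank fakes above real images across all possible thresholds. As a quick illustration with scikit-learn, using made-up numbers:

    # Computing average precision (AP) from detector scores with scikit-learn.
    # The labels and scores below are made-up, purely for illustration.
    from sklearn.metrics import average_precision_score

    y_true  = [0, 0, 1, 1, 1, 0, 1]                  # 1 = generated, 0 = real
    y_score = [0.1, 0.75, 0.95, 0.8, 0.7, 0.2, 0.9]  # detector's "fake" scores

    print(average_precision_score(y_true, y_score))  # 0.95 here; 1.0 is a perfect ranking
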
If you look at the paper in the description, you will get a lot more insights as to how robust it is against compression artifacts, a little frequency analysis of the different synthesis techniques, and more. Let’s send a huge thank you to the authors of the paper, who also provide the source code and training data for this technique. For now, we can all breathe a sigh of relief that there are proper detection tools that we can train ourselves at home. In fact, you will see such an example in a second. What a time to be alive!

Good news! We now have an unofficial Discord server where all of you Fellow Scholars are welcome to discuss ideas and learn together in a kind and respectful environment. Look, some connections and discussions are already being made – thank you so much to our volunteering Fellow Scholars for making this happen! The link is available in the video description, it is completely free, and if you have joined, make sure to leave a short introduction!

Meanwhile, what you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects.
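
As a quick taste, tracking a run with their library takes only a few lines. This is a minimal sketch with a hypothetical project name and stand-in metrics, not the instrumentation shown in the video:

    # Minimal experiment-tracking sketch with the wandb library.
    # Project name and logged metrics are hypothetical stand-ins.
    import wandb

    wandb.init(project="gan-detector", config={"lr": 1e-4, "batch_size": 64})

    for epoch in range(10):
        train_loss = 1.0 / (epoch + 1)          # stand-in for a real training loop
        val_ap = min(0.99, 0.7 + 0.03 * epoch)  # stand-in for a real evaluation
        wandb.log({"epoch": epoch, "train_loss": train_loss, "val_ap": val_ap})

    wandb.finish()
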
Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I’ll see you next time!

96 thoughts on “Can We Detect Neural Image Generators?”

  1. This is a very impressive paper, thank you for covering it! I am very impressed by how well it does with all of the different GANs. I expected that it would have some trouble with some of the other ones, but this is very cool!

  2. Using an AI to detect AI-generated images, and afterwards we will create an AI that detects if an AI can detect an AI-generated image…

  3. Something like this should be added to search engines and social media to prevent the spread of misinformation.

  4. How long will it hold up, though? Can't you put the fake spotter and the generator against each other and let them train each other? Come to think of it… there's a neural network sitting in this lump of skin and bone on my shoulders. I saw one of your videos about a GAN used to put noise into a photo to make another net mis-recognise it badly. How long until that's being done on people… training a network to create images/sounds to manipulate humans? I'm going to stop thinking about this!

  5. I think the advancements in detection software will keep up with advancements in artificially produced media indefinitely, so that no matter how indistinguishable from reality this media becomes, we won't really have to worry about it.

  6. If you generated a fake image, then displayed it on a monitor and took a picture of the monitor displaying the image, and you gave this algorithm the image from the camera, would this technique be able to spot that it was a picture of a fake generated image in that case? I doubt it.

  7. If the detection method has a score that is differentiable, you can tweak your generator to make sure it fools it. I expect this will happen in a matter of weeks.

  8. Once we have the images perfected, add physical law, then sell codes to break those laws, and finally, achieve the dream we always wanted: to feel superior.

  9. One question I would have is: if you generate a picture with one of these methods, and afterward add noise, filters or distortions, will it still be able to recognise that it was fabricated from the start? I also recall this method which consisted of adding careful noise in order to fool an image recognition tool into mislabelling what it sees. Could this be possible in this case – that by adding some careful noise, we could trick the AI into thinking the images are authentic?

  10. I'm glad you put in an icon that appears when it's time to hold on to our papers so we know when to hold on to our papers.

  11. Nice, now we can use that detector as an adversarial network to create even better images with the generative one.

  12. Considering that GANs already include a real/fake discriminator and are trained to fool it, shouldn't any "fake detection" be considered a stepping stone to an even better GAN?

  13. Yes – as others have said – what about using a GAN with this detector? Surely it will be possible to create generated images that are extremely difficult to distinguish from real images.

  14. Got this from the link you left below. Thanks for the video.

    This XML file does not appear to have any style information associated with it. The document tree is shown below.
    <Error>
    <Code>AccessDenied</Code>
    <Message>Access denied.</Message>
    <Details>
    Anonymous caller does not have storage.objects.get access to discord/invite/ss3xbtt.
    </Details>
    </Error>

  15. Most GAN images are great for the first minute you're looking at it, then there's an artifact of an earring or hairstyle, tattoo or background set piece that breaks the illusion. Almost as effective to use your own brain for detection because we know what an object is supposed to look like. The technology is getting better though, and I can see the value in building a detector GAN as the output will certainly improve.

  16. Mmm cool technique but it leaves me with some questions. In a GAN, you have a generator and discriminator. The discriminator essentially does the fake detection already. So how come this technique is so much better than these GAN discriminators?

  17. So wouldn't hooking the Gen nets up to this make up a GAN which could get closer to undetectable image synthesis?

  18. So … is the thumbnail real or fake?

    edit: Never mind. Just read the description. It's a stock image.

  19. I wonder if you can attach the "deep fake spotting" algorithm as a fitness rule – the less recognized, the better – and thereby make the result resistant to debunking software. Also, I wonder, if that were possible, would it look more realistic to humans? Would that be closer to reality, or just tricking the algorithm?

  20. Yes but the nature of GANs is such that one should be able to train a generator specifically to beat the detector network.

  21. What happens if we train an image-generating AI based on an AI that was trained to detect AI-generated images?

  22. I wonder if the common operations between all these methods that this AI detects could be isolated and optimized separately to increase performance across AI.

  23. Love the Lego analogy. I suppose it's still possible that, if the number of neurons and layers was vastly increased, you could effectively be using a larger variety of smaller bricks, which would only be detectable by an equally 'fine-grained' network.

    Perhaps I'm taking the analogy too far.

  24. Isn't every detection tool just more fodder for training the GAN?
    In other words, you simply add the detector to the discriminator.
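
    In code, that idea might look something like the following minimal sketch, where the generator, discriminator, and detector modules are all hypothetical stand-ins rather than anything from the paper:

        # Hypothetical sketch: fold a trained fake-detector's score into the
        # generator's loss so the generator also learns to evade the detector.
        import torch
        import torch.nn.functional as F

        def generator_loss(generator, discriminator, detector, noise, evasion_weight=1.0):
            fake_images = generator(noise)
            # Standard GAN objective: fool the co-trained discriminator.
            gan_logits = discriminator(fake_images)
            gan_loss = F.binary_cross_entropy_with_logits(
                gan_logits, torch.ones_like(gan_logits))
            # Extra term: drive the external detector's "fake" score toward zero.
            det_logits = detector(fake_images)
            evasion_loss = F.binary_cross_entropy_with_logits(
                det_logits, torch.zeros_like(det_logits))
            return gan_loss + evasion_weight * evasion_loss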

  25. I wonder if poor quality mono audio recordings will ever come back to their true stereo audio majesty.. it looks like almost anything is possible with this technology.

  26. In the future, the AI just needs to create these illusions, like converting a painting to real-life scenery, then inject them into our brains for us to think we are actually living in those places. It can simulate everything else – the people, the creatures, the flow of life itself – and we would emotionally interact like it really had any meaning; we would never know.

  27. Next month's paper: "GANs have been improved using last month's detection network until that detection no longer works"

  28. Next month: a paper training a neural image generator adversarially against this new detector, rendering it useless

  29. The fact that the likely scenario is someone being hired to read the positives and publish the test results is astoundingly terrifying. It means that that person can be picked to lie about whether something is real or fake. (Conspiracy theories? Nah… just think about what people are willing to do for a million dollars.)

  30. What if you train a neural network to learn from the one used to detect images generated by another neural network, and then use it to correct the original neural network so that the one which detects real versus generated images fails?

  31. Lovely to hear the phrase, "Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér"

  32. Oooh, data poisoning of image datasets by synthetic images is going to be a thing. I wonder if it will matter!?

  33. I assume this cannot be used as a discriminator to produce even better deepfakes, at least not directly? That's promising.

  34. This is awesome, but I also kind of feel like these algorithms should in some way be classified, to prevent bad guys from training their neural net generators against these detection algorithms. Then at the same time, I want this stuff to be public to aid other researchers.

  35. If you use a GAN to generate a higher resolution image and then downsample it through compression algorithms, maybe all the unnatural artifacts disappear and just get blended out by compression artifacts. What about someone using their phone to photograph a generated image vs. a real image? The real image is still real and the fake one is still fake, even though it's been produced naturally by a camera.

  36. Hey Károly
    I came across this paper today about a new method of training NNs (not using backprop). Could it be interesting material for a future video?
    https://techxplore.com/news/2020-03-deep-rethink-major-obstacle-ai.html
    arxiv: https://arxiv.org/abs/1903.03129
    Cheers
    Dário

  37. Hi Károly.

    I heard that positive comments make your day.

    I found your channel a few days ago, and it's my new favorite one since I saw all the videos on Vsauce.

    I hope you got a little smile like me right now, if you even read this.

    I wish you and your loved ones all the best. And I hope you keep enjoying making this channel.

    I'm a poor student of philosophy and psychology, so I can't afford your Patreon yet.

    Greetings from Germany

  38. Me: well, by being careful you can definitively identify the fakes.
    Me, 30 seconds later: wait, you mean those oranges were actually apples?!
