We Taught an AI To Synthesize Materials | Two Minute Papers #251

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Due to popular request, here’s a more intuitive explanation of our latest work. Believe it or not, when I started working on this, Two Minute Papers didn’t even exist. In several research areas, there are cases where we can’t talk about our work until it is published. I knew that the paper would not see the light of day for quite a while, if ever, so I started Two Minute Papers to keep my sanity and deliver a hopefully nice piece of work on a regular basis. In the end, this took more than 3000 work hours to complete, but it is finally here, and I am so happy to finally be able to present it to you.

This work is at the intersection of computer graphics and AI, which, as you know, is among my favorites. So what do we see here? This beautiful scene contains more than 100 different materials, each of which has been learned and synthesized by an AI. None of these daisies and dandelions are alike; each of them has a different material model. The goal is to teach an AI the concept of material models such as metals, minerals, and translucent materials.

Traditionally, when we are looking to create a new material model with a light simulation program, we have to fiddle with quite a few parameters, and whenever we change something, we have to wait 40 to 60 seconds until a noise-free result appears.

In our solution, we don’t need to play with these parameters. Instead, our goal is to grab a gallery of random materials, assign a score to each of them, saying that I liked this one and I didn’t like that one, and get an AI to learn our preferences and recommend new materials for us. This is quite useful when we’re looking to synthesize not only one, but many materials. So this is learning algorithm number one, and it works really well for a variety of materials.

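To make this concrete, here is a minimal sketch of this kind of preference learning. Given the paper’s title, “Gaussian Material Synthesis”, Gaussian Process Regression is a natural stand-in here, but the gallery size, parameter count, and library choice below are illustrative assumptions rather than the paper’s exact setup:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical setup: each material is a vector of shader parameters
    # (albedo, roughness, specularity, ...); the user scores a small gallery.
    rng = np.random.default_rng(0)
    gallery = rng.uniform(size=(60, 19))   # 60 materials, 19 shader parameters
    scores = rng.uniform(0, 10, size=60)   # user ratings from 0 to 10

    # Learn the (parameters -> score) mapping from the scored gallery.
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(gallery, scores)

    # Recommend: sample many candidate materials and keep the ones
    # the model predicts the user will like best.
    candidates = rng.uniform(size=(10000, 19))
    recommendations = candidates[np.argsort(gpr.predict(candidates))[-9:]]

The key point is that the model learns a score over the shader parameters themselves, so new candidates can be ranked without the user ever touching those parameters directly.
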
However, these recommendations still have to be rendered with a light simulation program, which takes several hours for a gallery like the one you see here. Here comes learning algorithm number two to the rescue: a neural network that replaces the light simulation program and creates photorealistic visualizations. It is so fast that it not only does this in real time, but more than 10 times faster than real time. We call this a neural renderer.

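As an illustration of the idea, and not the paper’s actual architecture, a neural renderer can be thought of as a decoder network that maps a shader parameter vector straight to an image of a fixed test scene, sidestepping the light simulation entirely. The layer sizes, resolution, and parameter count in this PyTorch sketch are hypothetical:

    import torch
    import torch.nn as nn

    class NeuralRenderer(nn.Module):
        """Maps a shader parameter vector to an RGB image of the test scene."""
        def __init__(self, n_params=19, image_size=64):
            super().__init__()
            self.image_size = image_size
            self.net = nn.Sequential(
                nn.Linear(n_params, 512), nn.ReLU(),
                nn.Linear(512, 2048), nn.ReLU(),
                nn.Linear(2048, 3 * image_size * image_size), nn.Sigmoid(),
            )

        def forward(self, params):
            out = self.net(params)
            return out.view(-1, 3, self.image_size, self.image_size)

    # Once trained against ground-truth renders, a whole 3x3 gallery is
    # one cheap forward pass instead of hours of light simulation.
    renderer = NeuralRenderer()
    images = renderer(torch.rand(9, 19))
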
So we have a lot of material recommendations, all of them photorealistic, and we can visualize them in real time. However, there is always the possibility that a recommendation is almost exactly what we had in mind but needs a few adjustments. That’s an issue, because to do that, we would have to go back to the parameter fiddling, which is what we really wanted to avoid in the first place.

No worries, because the third learning algorithm comes to the rescue. What this can do is take our favorite material models from the gallery and map them onto a nice 2D plane where we can explore similar materials. If we combine this with the neural renderer, we can explore these photorealistic visualizations, and everything appears not in a few hours, but in real time.

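Purely as an illustration of such a map: we can project the high-dimensional shader parameters of our favorite materials down to two dimensions with an off-the-shelf dimensionality-reduction method, so that similar materials land close together. The paper learns its own latent space, so the t-SNE call below merely stands in for the general idea:

    import numpy as np
    from sklearn.manifold import TSNE

    # Hypothetical favorite materials, as rows of shader parameters.
    rng = np.random.default_rng(0)
    favorites = rng.uniform(size=(30, 19))

    # Embed them in a 2D plane; coords[i] is now a point we can click on
    # to preview material i through the neural renderer.
    coords = TSNE(n_components=2, perplexity=10.0).fit_transform(favorites)
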
However, without a little further guidance, we get a bit lost, because we still don’t know which regions of this 2D space are going to give us materials that are similar to the one we wish to fine-tune. We can further improve this by exploring different combinations of the three learning algorithms. In the end, we can color the background to describe either how much the AI expects us to like the output, or how similar the output will be to our chosen material.

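As a sketch of the similarity variant of this coloring, and under the simplifying assumption that similarity is just distance in the 2D map (the paper’s actual scoring is more involved), shading the plane around the material we want to fine-tune could look like this:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    coords = rng.normal(scale=10.0, size=(30, 2))  # stand-in 2D map points
    chosen = coords[0]                             # material to fine-tune

    # Shade the background by (negative) distance to the chosen material,
    # then overlay the mapped materials themselves.
    xs, ys = np.meshgrid(np.linspace(-30, 30, 200), np.linspace(-30, 30, 200))
    plt.pcolormesh(xs, ys, -np.hypot(xs - chosen[0], ys - chosen[1]), shading='auto')
    plt.scatter(coords[:, 0], coords[:, 1], c='white', s=15)
    plt.scatter(chosen[0], chosen[1], c='red', s=40)
    plt.show()
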
A nice use case of this is where we have this glassy still life scene, but the color of the grapes is a bit too vivid for us. Now, we can go to this 2D latent space and adjust it to our liking in real time. Much better. No material modeling expertise is required.

So I hope you have found this explanation intuitive. We tried really hard to create something that is both scientifically novel and also useful for the computer game and motion picture industries. We had to throw away hundreds of other ideas until this final system materialized. Make sure to have a look at the paper in the description, where every single element and learning algorithm is tested and evaluated one by one. If you are a journalist and would like to write about this work, I would be most grateful, and I am also more than happy to answer questions in an interview format. Please reach out if you’re interested. We also try to give back to the community, so for the fellow tinkerers out there, the entirety of the paper is available under the permissive Creative Commons license, and the full source code and pre-trained neural networks are available under the even more permissive MIT license. Everyone is welcome to reuse them or build something cool on top of them. Thanks for watching and for your generous support, and I’ll see you next time!

100 thoughts on “We Taught an AI To Synthesize Materials | Two Minute Papers #251”

  1. The paper "Gaussian Material Synthesis" and its source code are available here:
    https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/

  2. Do you have plans to be able to change the color of the selected object in real time as you explore the color space, instead of having the proposed color/material shown on the standard spherical model you showed and then, once a sufficient material is found, rendering it on the target object, i.e. the grapes?

  3. This channel needs to be bigger! Maybe the name of the channel doesn't quite resemble its content? At least for me it didn't. Anyway, keep it up, great work!

  4. Slightly more usable examples this time – sadly not elaborated on in terms of workflow, etc. The example object materials all look horrible to me, and I would never have a use for them. Your SIGGRAPH video on this ( www.youtube.com/watch?v=6FzVhIV_t3s ) has a much clearer explanation with tech details that people here love. So a link to that, coupled with some hands-on examples of how this would work in practice, would have made my two minutes. Sorry to be such a negative Nancy today :/

    Do you see any practical implementation of this in the near future, particularly for Blender? Would you be involved in its development?

  5. Wow! Exquisite, congratulations on your work! It looks extremely professional and has real world uses. Amazing

  6. As a solo game developer this is the most glorious thing. Not having to waste tons of time on art or depend on other people would be so liberating!

  7. Great job. Good to see people taking ideas and creating novel progressions rather than just “watching” … <guilty>

  8. Awesome paper and video! ~ However, I am wondering: could this algorithm render videos for animations in real time?

  9. Awesome work and explanation. I wish all papers were presented like this, so that they were accessible to everyone and we could keep ourselves updated on a variety of domains without too much effort.

  10. Excellent paper and good job finding a research problem that has some direct practical applications. Really happy to see success in your personal work. I'll.. see you next time

  11. noice mannnnnnnn

  12. Incredible! Since I have some knowledge of computer graphics and how time-consuming photorealistic rendering is, this is truly astonishing work! Congratulations!

  13. I was actually working with something really similar to that, but for generating maps and not materials… Superb work anyways, love your videos!

  14. Thank you for all of your fantastic vlog publications and work! I have been watching your channel for about a year now and thought I should extend a salutation. I look forward to future posts.

  15. You work on really cool stuff; I'm just envious over here 🙂 … As a sort of ex-programmer, at 34, it would interest me, but I feel it would take a huge amount of time until I learned my way into e.g. TensorFlow or anything similar.

  16. Your channel is amazing. So much work goes into these but you keep it short and concise. Thanks for bringing the cutting edge of research to the plebs like me.

  17. Awesome work, as a free culture advocate I want to especially thank you for releasing the works under CC/MIT licenses.

  18. What type of neural network did you use to train this, and how did you get an output that would create such an object? I’m just not even sure how you made the inputs and outputs; it’s amazing what you’ve done!

  19. This is the kind of stuff I've been waiting for — AI using its 'imagination' to synthesize environments, graphics, and play areas for games instantaneously as opposed to meticulous light and material calculations. A kind of area of fuzzy, neural logic whose latent spaces can be explored through game play. This is definitely a great start, and a large accomplishment in and of itself.

    Great job guys!

  20. Could you please stop using the word "AI" when "ML model" would be much more appropriate? Thank you!

  21. You should really say "material models" rather than "materials" since you're not actually synthesising materials.

  22. No way around it, you rock. This is amazing, and I cannot wait to see how it gets used! I'm in the game dev industry myself, and with the popularity of procedural worlds so hot right now… This sort of thing is sure to be a hit.

  23. You put 3000 hours of your time into this and put all the source code under an MIT license 🙂 Good to see the like/dislike ratio reflects that, man. Not all heroes wear capes!

  24. Holy cow, this is amazing! I wonder if a neural network could be trained to make photorealistic images of more things.

  25. I'm not getting the scripts to run:
    # Parameters and paths – adjust this according to your OS
    #folder = 'c:/Users/Károly/Desktop/gpr/'      # Windows
    #folder = 'c:/'                               # Windows
    folder = '/home/user/Desktop/5-source-code/'  # Linux <- files are here

    I adjusted it, but it keeps failing. It's looking for recommend.txt, right? After I attempt to run the script, I just get "python script fail, look in the console for now…".

  26. Haaa… just create a universe that is similar to our universe in a computer, then wait a few weeks to see if intelligent life emerges, then take all the output of their brains… and that's it, you have a smart robot AI… Just use nature's mechanisms, don't try to reinvent the wheel.

  27. I can't wait to see this implemented in game engines. A variety of similar flowers / grass / trees with no additional "cost" for the renderer. Awesome!

  28. This could be huge for assisting artists in tools like Blender.
    It could easily create desired materials without hours of fiddling.
    Perhaps mapping a bunch of base concepts (metallics, glass, translucents, iridescents, fleshes), and further improving it so that instead of taking an input of preferred materials, it takes an input of descriptions of preferred materials.
    A version of this with textures would also be super cool, especially since fiddling with procedural textures is about as hard as fiddling with materials.

  29. That's what I had in mind for a long time!
    Imagine scanning buildings, where materials are recognized and placed in the game without much work, together with material properties and realistic destruction/interaction.
    Face textures could also be generated dynamically; I hope we will see neural networks in graphics cards for that!
