This AI Learned To Animate Humanoids🚶

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we have an animated movie or a computer game with quadrupeds, and we are yearning for really high-quality, lifelike animations, motion capture is often the go-to tool for the job. Motion capture means that we put an actor, in our case a dog, into the studio, ask it to perform sitting, trotting, pacing, and jumping, record its motion, and transfer it onto our virtual character. In an earlier work, a learning-based technique was introduced by the name Mode-Adaptive Neural Network, and it was able to correctly weave together these previously recorded motions. Not only that, it also addressed the unnatural sliding motions produced by previous works. As you see here, it also worked well on more challenging landscapes.

We talked about this paper approximately a hundred videos ago, or in other words, a little more than a year ago, and I noted that it was scientifically interesting and well evaluated; it had all the ingredients of a truly excellent paper. But one thing was missing. So what is that one thing? Well, we hadn’t seen the characters interacting with the scene itself. If you liked that previous paper, you are going to be elated by this one, because this new work is from the very same group, goes by the name Neural State Machine, and introduces character-scene interactions for bipeds. Now, we suddenly jumped from a quadruped paper to a biped one, and the reason is that I was looking to introduce the concept of foot sliding, which will be measured later for this new method too. Stay tuned!

So, in this new problem formulation, we need to guide the character to a challenging end state, for instance sitting in a chair, while being able to maneuver through all kinds of geometry. We’ll use the chair example a fair bit in the next minute or two, so I’ll stress that this method can do a whole lot more; the chair is just a vehicle to get a taste of how the technique works. And the end state needn’t be one particular chair, it can be any chair! The chair may have all kinds of different heights and shapes, and the agent has to be able to change the animations and stitch them together correctly regardless of the geometry.
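
For the technically inclined, here is a minimal sketch of what such a goal-driven control loop could look like. This is not the authors’ code: the feature layout, the crude occupancy-grid sensing, and the function names are illustrative assumptions, and the real system is, of course, far more elaborate.

```python
# Hypothetical sketch of goal-driven character control: each frame, a trained model
# looks at the current pose, the goal (e.g. "seated in this chair"), and the nearby
# scene geometry, and outputs the pose for the next frame.
import numpy as np

def sense_environment(scene_points, root_position, radius=1.0, bins=4):
    """Stand-in for environment sensing: a coarse occupancy grid around the character's root."""
    offsets = scene_points - root_position
    near = offsets[np.linalg.norm(offsets, axis=1) < radius]
    grid, _ = np.histogramdd(near, bins=bins, range=[(-radius, radius)] * 3)
    return (grid > 0).astype(np.float32).ravel()

def step(model, pose, goal, scene_points):
    """Advance the character by one frame toward the goal state."""
    geometry = sense_environment(scene_points, pose[:3])  # assume pose[:3] is the root position
    features = np.concatenate([pose, goal, geometry])
    return model(features)                                 # model: feature vector -> next pose

def animate(model, pose, goal, scene_points, max_frames=600, tol=1e-2):
    """Run the controller until the pose is close to the goal (e.g. a sitting pose)."""
    frames = [pose]
    for _ in range(max_frames):
        pose = step(model, pose, goal, scene_points)
        frames.append(pose)
        if np.linalg.norm(pose - goal) < tol:              # crude "goal reached" test
            break
    return np.stack(frames)
```
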
To achieve this, the authors propose an interesting new data augmentation scheme. Since we are working with neural networks, we already have a training set to teach them about motion, and data augmentation means that we extend this dataset with lots and lots of new information to make the AI generalize better to unseen, real-world examples. So, how is this done here exactly? Well, the authors propose a clever five-step recipe. One, use motion capture data, have the subject sit down, and see what the contact points are when that happens. Two, record the curves that describe the entire motion of sitting down. So far so good, but we are not interested in one kind of chair, we want the character to sit in all kinds of chairs, so three, generate a large selection of different geometries and adjust the locations of these contact points accordingly. Four, change the motion curves so they indeed end at the new, transformed contact points. And five, move the joints of the character to make it follow this motion curve and compute the evolution of the character pose. We then pair up this motion with the chair geometry and add it to the new, augmented training set.
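
To make these five steps a little more tangible, here is a minimal sketch of such an augmentation loop, under the assumption that the recorded contact points and motion curves are available as arrays. The chair sampling, contact retargeting, curve warping, and the inverse-kinematics stand-in below are simplified placeholders, not the fitting procedure from the paper.

```python
# Hypothetical sketch of the sit-down data augmentation described above.
# All helper logic here is illustrative; the paper's actual fitting and IK are far more involved.
import numpy as np

def sample_chair(rng):
    """Step 3a: generate a randomized chair geometry (seat height and depth only)."""
    return {"seat_height": rng.uniform(0.35, 0.60), "seat_depth": rng.uniform(0.35, 0.55)}

def retarget_contacts(contacts, chair):
    """Step 3b: move the recorded contact points (e.g. hips, hands) onto the new chair."""
    moved = contacts.copy()
    moved[:, 2] = chair["seat_height"]          # assume z is up; snap contact height to the seat
    return moved

def warp_curves(curves, old_contacts, new_contacts):
    """Step 4: shift the motion curves so they end exactly at the new contact points."""
    offset = new_contacts.mean(axis=0) - old_contacts.mean(axis=0)
    blend = np.linspace(0.0, 1.0, len(curves))[:, None]  # apply the offset gradually over time
    return curves + blend * offset

def solve_ik(curve_frame):
    """Step 5 stand-in: turn a root/end-effector target into a full character pose."""
    return np.concatenate([curve_frame, np.zeros(60)])   # placeholder joint angles

def augment(contacts, curves, num_samples=1000, seed=0):
    """Steps 1-5: produce (motion, chair) training pairs from one recorded sit-down clip."""
    rng = np.random.default_rng(seed)
    dataset = []
    for _ in range(num_samples):
        chair = sample_chair(rng)
        new_contacts = retarget_contacts(contacts, chair)
        new_curves = warp_curves(curves, contacts, new_contacts)
        poses = np.stack([solve_ik(frame) for frame in new_curves])
        dataset.append((poses, chair))
    return dataset
```
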
Now, make no mistake, the paper contains much, much more than this, so make sure to have a look in the video description.

So what do we get for all this work? Well, have a look at this trembly character from a previous paper, and then look at the new synthesized motions. Natural, smooth, creamy, and I don’t see artifacts. Also, here you see some results that measure the amount of foot sliding during these animations, a quantity we would like to minimize. That means the smaller the bars, the better. You can see how the Neural State Machine, NSM, produces much less foot sliding than previous methods, and now we see how cool it is that we talked about the quadruped paper as well, because it even beats MANN, the Mode-Adaptive Neural Network from that previous paper. That one already had very little foot sliding, and apparently, it can still be improved by quite a bit. The positional and rotational errors in the animations it offers are also by far the lowest of the bunch.
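
As a quick aside on how such a number can be computed: whenever a foot is labeled as being in ground contact, any horizontal displacement between consecutive frames counts as sliding. Here is a minimal, illustrative version of such a metric, assuming per-frame foot positions and contact labels are available; it is not necessarily the exact metric used in the paper.

```python
import numpy as np

def foot_sliding(foot_positions, in_contact, fps=60.0):
    """Average horizontal foot displacement (per second) during frames labeled as ground contact.

    foot_positions: (T, 2) array of horizontal foot coordinates per frame
    in_contact:     (T,) boolean array, True when the foot should be planted
    """
    deltas = np.linalg.norm(np.diff(foot_positions, axis=0), axis=1)  # per-frame horizontal motion
    contact = in_contact[1:] & in_contact[:-1]                        # both ends of each step in contact
    if not contact.any():
        return 0.0
    return float(deltas[contact].mean() * fps)                        # lower is better
```
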
Since it works in real time, it can also be used for computer games and virtual reality applications. And all this improvement within a year of work. What a time to be alive!

If you’re a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I’ve talked about Lambda’s GPU workstations in other videos, and I’m happy to tell you that they’re offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19! Lambda’s web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com/papers and sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I’ll see you next time!

100 thoughts on “This AI Learned To Animate Humanoids🚶”

  1. Did you enjoy our new logo animation? We love it. It was made by Tom Wilusz – check his reel out here if you need some pixel magic in your life (or projects). https://www.wilusz.tv/

  2. If we attached servo motors to this neural network, wouldn't that make it easy to make it move like a real animal? If so, why doesn't Boston Dynamics do it?

  3. Hi, I get a feeling of interest in my heart when I see these videos. I am young and thought, why don't I study this?
    Does anyone know a good university to start a bachelor's at?
    I live in Berlin, Germany, but I am interested, so I could also imagine studying abroad, maybe England? Does anyone have experience with where to study AI and deep learning, in Germany or abroad?

  4. I can’t wait until machine learning is widely used in games. I know it was used in the recent Microsoft Flight Simulator to generate 3D buildings from satellite images giving you an entire planet to explore. Projects like that would have been impossible to do by hand.

  5. Will be interesting to see how the technique can be adapted to incorporate the subject's mood into the animation too; e.g. how a subject approaches something fearfully vs. enthusiastically, or how someone who's happy walks compared to someone who's sad (at least similar to how an actor might portray the subject).

  6. The movement looks incredible, save for one slight artifact: the feet. Bipedal movement across even flat terrain changes the shape of the foot as the actor shifts their center of gravity, and the foot also changes shape when moving on uneven terrain. It's a small issue, but if perfect-looking simulation is our goal, I'd wager it's the proper next step (no pun intended).

  7. 2:40 Siggraph Asia, using Egyptian, i.e. African, mythology… that just seems out of place.
    But it reminds me of Tad Williams' Otherland series, a series of novels about VR/AI, much of which takes place in an Egyptian mythology world.

  8. This could actually be revolutionary in the game industry!
    -low to nonexistent input lag.
    -better quality animations.
    -good for crunch times.
    In general this would help devs more than gamers but still has so many possibilities!

  9. Just gotta say this right away: that's not an AI… It might contain things with the name "neural network", but those things were created for this specific purpose. An AI shouldn't need any type of guidance at all, and it must be able to change its own programming code to be an actual self-learning AI. Hence the name "Artificial Intelligence".

  10. Typical graphics level in 2019:

    Amazing texture depth!
    Incredibly realistic lighting!!
    Awesome level of detail in the models!!!
    Unbelievable animation!!!!!


    The legs still go right through that skirt at every step…

  11. I hope this AI will dance like a club dancer while I play any music, and that it will dance in different styles according to the different music, like a real dance master. Then I will create a 3D model of my crush, integrate it with that AI, and watch her dance with all kinds of cute, lovely expressions on her face.😍🙈

  12. I am an industrial designer. I'd like to go to school to learn this type of AI. What should I search for to find more info?

  13. Welp, guess you could say that AI ruined my entire career

    But at the same time, I didn't even start a career in animation; all I've done so far is scraps and only one full animation

  14. Can't we create an AI which learns how to create an AI capable of doing anything, and which also learns how to rewrite itself better and better by using itself? It may start out weaker, but with consecutive procedures and time, wouldn't it make itself and its product perfect?

  15. @Two Minute Papers
    If you want to improve the movement to perfection, you should take a look at "Dr. Peter Greb". https://youtu.be/ba6fNq1XuMs?t=93
    He brings back the natural human walking style. Over 99% of humans don't know how to walk naturally.
    If you watch toddlers or indigenous tribes walk you will see it immediately: the heel walk is not natural, it is a fashion imitation.

    While studying Sport Science we looked at how cruciate ligaments tear in different sports.
    I realized that it happens when athletes switch to the heel walk, lowering the heel to the ground and fixating the foot.
    With the foot now fixated to the ground, the rotation of the upper leg will tear the cruciate ligament apart.
    This will not happen if the weight is only on the footpad, allowing the footpad to rotate on the ground.
    Take a look at the natural movements here. https://youtu.be/yrwXgnHYkZA?t=40

    Scientists made a replica of Ötzi's shoes, and when they used the heel walk with them, the shoes fell apart in three days.
    You can even take a look at some archaeological wall paintings; humans never used the heel walk.

  16. I like that developments like this will increase the overall standard/quality of games. Hopefully, developers will spend more time on the creative aspects of their games, like character and story building.

  17. The future: everything at the push of a button. Right now I have to learn animation – in 5 years I might only need software that does it all automatically xD

  18. The number one giveaway that you are watching a video game today is the awkward animations, when it transitions between different motion capture clips or just doesn't know how to animate right for the scene. This is going to take games to a whole new level!!!

  19. On the one hand, I find this an amazing achievement. On the other, I recognize this is an abomination, because it automates ART. AI will take everything from us, every job, from farm worker to engineer. But even art? Humanity is looking forward to a very boring and empty future…

  20. Is this really necessary? So many things will become irrelevant in the future, because machines do them for us. When is it going too far?

  21. Oh wow, that motion capture data for the walking looks SO lazy.

    Not the work done for it, but the actual actor performing it. Put your shoulders back, man!

  22. Hold on, did you say motion capture data? Blended, not artificially learned data sets via interpretation; ah, that's what I thought until I noticed the chair illustration. Now it feels unrevolutionary to fill the expensive costs of motion capture.

  23. So I just had to watch two ads to watch a video that itself is flagged by YT as containing paid advertisement? I'm outta here…
