This AI Senses Humans Through Walls 👀


Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Pose estimation is an interesting area of research where we typically have a few images or video footage of humans, and we try to automatically extract the pose a person was taking. In short, the input is one or more photos, and the output is typically a skeleton of the person.

So what is this good for? A lot of things. For instance, we can use these skeletons to cheaply transfer the gestures of a human onto a virtual character, for fall detection for the elderly, for analyzing the motion of athletes, and many, many others.

This work showcases a neural network that measures how WiFi radio signals bounce around in the room and reflect off of the human body, and from these murky waves, it estimates where we are. Not only that, but it is also accurate enough to tell us our pose. As you see here, since the WiFi signal also traverses in the dark, this pose estimation works really well in poor lighting conditions. That is a remarkable feat. But now, hold on to your papers, because that's nothing compared to what you are about to see now. Have a look here. We know that WiFi signals go through walls. So perhaps, this means that…that can't be true, right? It tracks the pose of this human as he enters the room, and now, as he disappears, look, the algorithm still knows where he is. That's right! This means that it can also detect our pose through walls! What kind of wizardry is that? Now, note that this technique doesn't look at the video feed we are now looking at; the feed is there only for visual reference.

It is also quite remarkable that the signal being sent out is a thousand times weaker than an actual WiFi signal, and that the method can detect multiple humans. This is not much of a problem with color images, because we can clearly see everyone in an image, but radio signals are more difficult to read when they reflect off of multiple bodies in the scene.

The whole technique works through a teacher-student network structure. The teacher is a standard pose estimation neural network that looks at a color image and predicts the pose of the humans therein. So far, so good, nothing new here. However, there is also a student network that looks at the correct decisions of the teacher but has the radio signal as its input instead. As a result, it learns what the different radio signal distributions mean and how they relate to human positions and poses. As the name says, the teacher shows the student neural network the correct results, and the student learns how to produce them from radio signals instead of images.

If anyone had said that they were working on this problem ten years ago, they would have likely ended up in an asylum. Today, it's reality. What a time to be alive!

Also, if you enjoyed this episode, please consider supporting the show at Patreon.com/twominutepapers. You can pick up really cool perks like getting your name shown as a key supporter in the video description, and more. Because of your support, we are able to create all of these videos smooth and creamy, in 4K resolution and 60 frames per second, and with closed captions. And we are currently saving up for a new video editing rig to make better videos for you. We also support one-time payments through PayPal and the usual cryptocurrencies. More details about all of these are available in the video description. And as always, thanks for watching and for your generous support, and I'll see you next time!
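The teacher-student idea described above — a "teacher" that already solves the task in one modality producing pseudo-labels for a "student" working in another modality — can be sketched in a toy form. Everything below is an illustrative assumption, not the paper's actual architecture: both networks are reduced to linear models, the "image" and "RF" inputs are synthetic features generated from a shared hidden pose vector, and the student is trained only to match the teacher's outputs, never the ground truth.

```python
# Toy sketch of cross-modal teacher-student training (distillation).
# All names, shapes, and models here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scene": a hidden pose vector generates both modalities.
n_samples, pose_dim, img_dim, rf_dim = 200, 4, 16, 12
pose = rng.normal(size=(n_samples, pose_dim))
A_img = rng.normal(size=(pose_dim, img_dim))   # pose -> image features
A_rf = rng.normal(size=(pose_dim, rf_dim))     # pose -> RF features
images = pose @ A_img
rf = pose @ A_rf + 0.05 * rng.normal(size=(n_samples, rf_dim))

# "Teacher": assumed already trained on the visual modality; here it is a
# stand-in that recovers pose from image features via the pseudo-inverse.
teacher_W = np.linalg.pinv(A_img)              # image features -> pose
teacher_pose = images @ teacher_W              # pseudo-labels for the student

# "Student": a linear model on the RF input, trained by gradient descent
# to reproduce the teacher's predictions (never sees the true pose).
W = np.zeros((rf_dim, pose_dim))
lr = 0.01
losses = []
for _ in range(500):
    pred = rf @ W
    err = pred - teacher_pose                  # distillation error
    losses.append(float(np.mean(err ** 2)))
    W -= lr * (rf.T @ err) / n_samples         # MSE gradient step

print(f"distillation loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The point of the sketch is only the training signal: the student's loss is measured against the teacher's output, so the visual network's competence transfers to the radio modality without any RF data ever being hand-labeled.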

100 thoughts on “This AI Senses Humans Through Walls 👀”

  1. I'm shocked by the quality, the topics, the editing… your content is amazing, and your résumé too 🙄.
    The style is just great.

  2. That is some super cool teacher/student machine learning! I love this concept, especially how good that motion tracking looks from video footage. If that could be optimized for handheld use and VR/AR, it would be wonderful for input. People really want to create animation; it's just not so easy to do and not as accessible at the moment. But that is changing fast! Also, keep that wifi and cell phone radiation to a minimum when it comes to your vital organs 🤣! Don't wanna be slowly cooked alive by EMF (electromagnetic frequencies). I turn my phone on airplane mode when I sleep and try not to keep it in my pockets when it's on. The farther away from your organs, the better. The fewer EMFs beaming through your body, the better. Consider the noise filtering that machine learning has to do to make proper decisions. How much noise do our cells pick up from our contaminated and harsh environments? But that will change as we figure out better solutions. Perhaps machine learning will be part of some of those solutions to greener, healthier tech.

  3. I like the fact that it sometimes looks like skeleton A is absorbing B. I imagine it saying "There can be only one" as the life force is drained from the second one. Still, cool use of RF.

  4. This method requires an array of antennas, which is not mentioned in this video and only briefly mentioned in the paper.

  5. The machine burst through the wall, tracking its target with its wifi sonar. As it crushed the man's throat, it extrapolated his last thought by analyzing micro-details in his expression:

    "What a time to be alive".

  6. Yo, that's some Batman shit. Does anyone remember that in The Dark Knight, Batman turns every cell phone in Gotham City into some kind of satellite to locate the Joker?

  7. They promised us that technology would give us more freedom…in the end it will be used against us to keep us subservient.

  8. While very cool, the big limitation is that it seems to be trained on one specific room.
    For any other space, it would require retraining with cameras and enough data.
    Edit: *After actually reading the paper… I was wrong; the method is not limited to one room. I assumed it learned the RF reflections off the walls to infer the pose, but apparently the walls are transparent to that RF range. Overall, really cool tech!

  9. This is an old idea brought to life. Batman used this in the movie to locate the bad guys in the building; he didn't want to use it because of privacy issues, but Morgan Freeman convinced him to. I guess they had this tech back then and are showing it to us now.

  10. We're getting closer and closer to the robopocalypse by the day.

    FFS, please stop. At the very least, start development on a fail-proof way to stop a robopocalypse before you resume development on the tools of the robopocalypse. (Which, by the way, is much easier said than done. You can't outsmart a superintelligence.)

  11. So I am a bit confused: does the "camera" send a weak wifi signal, with the AI then interpreting the bounceback, or do the humans have something on them?
