Product design: how to build better products with Android Things (Google I/O ’18)


[MUSIC PLAYING] KRISTIN GRAY: Hi,
and welcome to How to Make Better Products
with Android Things. I’m Kristin, and I’m UX
lead for Android Things. MICHAEL DELGAUDIO:
My name is Michael. I’m the UX design manager
for Android Things. You can think about this
talk as product design 101 for people who may
not be designers. But if you are a designer,
we got you covered. We’ll be covering
hardware prototyping and the possibilities
of what you can create using Android Things. In this talk,
we’ll cover how you can accelerate the prototyping
and product creation process using Android Things. We’ll talk about
a design framework that you can use
starting today to help you think about
who your users are and how they can play a more
prominent role in crafting the products that
you’re creating. And we’ll also talk about a
concept project called Lantern to demonstrate how
we’ve applied the design framework to use Android Things
to create better products. KRISTIN GRAY: Thanks, Michael. So we know that
hardware design is a long and difficult process. It can take anywhere
from two to five years to bring a product from ideation
all the way to production. So you start with
ideation, and then you move on to the prototype
phase, and then you choose your hardware, and
then you design your software and get that all coded up, and
then you send it to a factory. You finally get it on the store
shelf, and then you cycle back and you have to go
through updates. So we live in a world
that’s rapidly changing, and technology can change
right in the middle of your production process. So how can the design
process keep up? That’s one of the main reasons
that we created Android Things. It’s made for a world
that’s rapidly changing and enables people to
be part of the creation process from ideation all
the way through maintenance. So at the heart
of Android Things there is something called a
SOM, or System on Module, and the development board around it is
called a carrier board. The SOM can be used
for prototyping, and it can also be
placed on a custom board. So you can snap this SOM off and
use it on your own custom PCB. And on this carrier board,
everything that surrounds the SOM is an accessory– everything from Ethernet
to power to the headphone jack over here. This is a powerful tool for
prototyping because you already have a lot of tools that
you need to get connected. And of course, if you need
a different peripheral, you can easily connect it using
traditional methods like pins, a breadboard, and resistors.
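To make that concrete, here is a minimal sketch (ours, not the speakers') of Android Things' Peripheral I/O API reading a breadboard button and mirroring it to an LED. The pin names "BCM6" and "BCM21" are hypothetical; query PeripheralManager for the names your board actually exposes.

```kotlin
import com.google.android.things.pio.Gpio
import com.google.android.things.pio.GpioCallback
import com.google.android.things.pio.PeripheralManager

fun wireButtonToLed(): Pair<Gpio, Gpio> {
    val pio = PeripheralManager.getInstance()
    println("Pins on this board: ${pio.gpioList}")

    // LED on a hypothetical output pin, initially off.
    val led = pio.openGpio("BCM21").apply {
        setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW)
    }

    // Button on a hypothetical input pin, wired through a pull-up resistor.
    val button = pio.openGpio("BCM6").apply {
        setDirection(Gpio.DIRECTION_IN)
        setActiveType(Gpio.ACTIVE_LOW)
        setEdgeTriggerType(Gpio.EDGE_BOTH)
        registerGpioCallback(object : GpioCallback {
            override fun onGpioEdge(gpio: Gpio): Boolean {
                led.value = gpio.value  // light the LED while the button is held
                return true             // keep listening for edges
            }
        })
    }
    return button to led                // close() both when you're done
}
```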
MICHAEL DELGAUDIO: So Android Things offers a number of tools for
you to get started easily. There is the kit, which has an
i.MX7D developer board, a touch screen, and a stand. So for those of you who
have gotten your kit already or maybe been to some
of the code labs, the kit assembles
into a useful stand that you can use to
prototype right on your desk. In addition to the kit, we
offer the Android Things toolkit app to help you get
onto Wi-Fi really easily. One of the pain points that
we heard from developers in the code labs– at Droidcon, for example–
was that provisioning the devices onto the Wi-Fi
network was difficult. So with the toolkit app, you can
get it onto Wi-Fi in a breeze. It will also step you
through the process of making sure that
your hardware is connected correctly. And with recent updates
to the toolkit app, we also have some samples
that you can load from the app onto your device to see some of
the powerful things that Android Things can do, like running
the TensorFlow demonstrations before you get into
Android Studio. In addition to the toolkit
app and the hardware kit, we also recently updated the
androidthings.withgoogle.com community hub. So now we offer code snippets,
samples, drivers, and projects from the community. So if you do build something
cool, you can submit it, and we’ll feature
it on the site. We also recently
updated the site to include driver
submissions, so that if you do
write a cool driver and want to submit
it for other people to use, we can have that on
the site as well. If you haven’t already
gotten your kit, head over to the IoT
dome, and they’ll give you information
about how to get one. KRISTIN GRAY: Thanks. So Android Things provides
an end-to-end solution. It offers tools from
prototype to production, as Michael mentioned. The SOM makes hardware
selection easier by offering modular
hardware solutions so you can use the same
SOM for prototyping as you do production. For prototyping, the kit offers
peripherals such as displays, a camera, a Rainbow
HAT for sensor input and interface output,
and also an antenna to connect the device to Wi-Fi.
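As a hedged illustration of how little code those kit peripherals need, here is a sketch using the Rainbow HAT contrib driver (the com.google.android.things.contrib:driver-rainbowhat artifact); the function name is ours:

```kotlin
import com.google.android.things.contrib.driver.rainbowhat.RainbowHat

fun helloRainbowHat() {
    // Show a short message on the HAT's alphanumeric display.
    val display = RainbowHat.openDisplay().apply {
        setEnabled(true)
        display("IO18")
    }
    // Light the red LED while button A is held down.
    val led = RainbowHat.openLedRed()
    val button = RainbowHat.openButtonA()
    button.setOnButtonEventListener { _, pressed -> led.value = pressed }
    // close() display, led, and button when the activity is destroyed.
}
```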
The toolkit app, as Michael mentioned, also makes it easier to assemble your
hardware, helps you get familiar
with your carrier board, and helps you
connect to Wi-Fi. And finally, when you’re
ready for production, the developer console can help
you create builds, configure your firmware, and release
those builds to devices. MICHAEL DELGAUDIO: Great. So we wanted to share with
you a concept called Lantern. And we’re using this
as a demonstration to help you
understand how we were able to use Android Things
to bring products to life. So Lantern is not
a Google product, but it’s a project
that we worked on with Nord Projects that
creates augmented reality anywhere around you. And so you can see this as
an easy-to-understand example of how we were exploring
creativity through prototyping. And, again, we
wanted to show you the possibilities of
what can be created with Android Things. So at its heart, Lantern
is obviously a lamp. But it’s a lamp that enables
you to create augmented reality anywhere around you,
and it’s created using off-the-shelf parts. And we thought that
was really important, because we wanted
to make sure that it has a recipe that you could
potentially build on your own. So what is augmented reality? You may have heard
this term, AR, and of tools like ARKit. But how can we create this
sense of augmented reality using Android Things? So using Lantern
and Android Things, we wanted to project
onto everyday objects interesting pieces of
information and content that may be trapped inside
the phone or on the Web but that may enhance
the world around us. So say, for example, here
the currently playing Cast song– we’re projecting
it onto a speaker. And none of this was
done using After Effects. This is all using the projection
system and the prototype that we created. Another example of how we’re
augmenting everyday objects is, in this example, a clock. So we’re using Google
Calendar and a wall clock with Lantern to project
the calendar information around the clock. And, again, this is all real. We shot this in the
studio using Lantern. As an exploration, here we felt
like it looked particularly good because it was on this
nice, curved, round surface, so it gave this ticker
tape kind of look. But we’re excited about
these possibilities, and that’s why we
wanted to create this to share with you to
demonstrate not only our design process, which we’ll get
into, but also to give it to the community to
see what you guys may want to create with it. And so what is Lantern made of? I mentioned before
that it was created using off-the-shelf parts. So there’s a lamp. Inside is a laser
projector, an accelerometer, a 3D-printed housing,
and a Raspberry Pi running Android Things. It’s important to
recognize that there are two boards that
Android Things runs on. It’ll run on Raspberry
Pi, and it will also run on the i.MX7D boards
that are in the kit. The Raspberry Pi is a
little bit more prevalent at this point in
the maker community, so we felt like building it
on that platform with the HDMI output was going to be
better for this case because we could
connect it directly to the laser projector. And so once it’s assembled,
it looks like this. You may have seen it
over in the IoT dome. We have one running over
there as an example. And we really believe
that this is only now possible because of the
democratization of design and hardware and prototyping and
access to these kinds of tools that we’re talking about today. So it was really
difficult in the past to, say, print a
3D form like this and assemble it into
a hardware shell because hardware was expensive,
3D prototyping tools were inaccessible, and tools
like Android Things were not readily available
for creating new hardware prototypes. And so Lantern can
also be assigned content for its
particular context, because it's aware of
its orientation. And so using that
accelerometer in Lantern, we can change its
base position and then project different content
onto different surfaces. So say we wanted to project
a star chart on the ceiling, or, in the examples
that we saw earlier, you could see the calendar
information projected onto the wall. This code is available today at
github.com/nordprojects/lantern if you want to check it out,
download the source code, and build your own. So we didn’t set out
to create Lantern. So where did Lantern come from? You can see a number of
sketches that we created, and if you’re
designers, you may be familiar with ideating
through sketching. But we had an inkling
that projected systems would be interesting when
we started prototyping using Android Things. But we used design to turn
our idea into a real thing. And today we want
to share with you the process that we went through
and the frameworks that we used to create amazing
products so you can too. KRISTIN GRAY: Thank you. So, as Michael mentioned, design
helps create better products. How many of you,
by a show of hands, have created something that
was used by another person? Go ahead and raise your hands. Oh, that’s awesome. So when we design
things, you might know that we use common
principles to ground our work. A lot of the things
that we design can be carried
over from software into hardware,
from a banking app to a theater
app, for instance. So we use these processes and
principles to ground our work, but then we use
the design process to move forward as well. And then we continue iterating. With Android Things, we’re
taking some of the software UI design concepts and
applying them to hardware. So what is design? Design is the creation of
tools for people in a context to help them achieve a goal. If any of you are familiar
with the development process, this is very similar
to a user story. So for example,
as a dog owner, I want to connect a
dog feeder so that I can feed my dog from work. Or as a person
who orders pizza– I order pizza all the time– I want visibility into
the delivery route so I know when my
pizza will arrive. Or something like
a simple story– as a knight, I want
a stronger sword so that I can defeat the dragon. Now taking the
knight example, we framed out that the knight
wants a stronger sword. This is a tool that
the knight uses to achieve his goal of
defeating the dragon. We call this tool an interface. So people typically think of
a user interface as a touch screen or a mobile
phone or a tablet because you can tap on the
screen and things happen, right? It’s magic. And while this is
true, a user interface is much broader than that. Like in the example I used
before, the user interface is the sword. But we can see here
from this slide that a button can be used
to build upon an interface to create a joystick,
which can be used to create a
game that shoots down aliens from the sky. And then there could
be feedback on top of that where there’s LEDs
inside of a breadboard that light up when you push
the button, right? So all of these are
examples of a user interface. But one of the
simplest user interfaces that we use to design things
is a piece of paper and a pen. And, admittedly,
design is iterative and sometimes it feels
like this, right? Like the hamster in the wheel. That’s OK. The truth is that
you’re never done. Teams need to collectively
learn through the experience of observation and iteration. That being said, Android Things
enables you to iterate faster by allowing you to work
through many design issues by using the design kit as a
base for your prototype phase and allowing for early
over-the-air updates using the developer console. MICHAEL DELGAUDIO: So
design is a process that can be used to create
better products for everyone. And we know that design is
agnostic of medium, time, trends, or technology company. And we believe it’s important
to think about design as a process that’s
agnostic of these things so that no matter what changes,
you have the right tools to apply that process, whatever
problem you’re working on. So we talked about design being
in the context of people– or people in the
context of goals. But how is it done and
what does it look like? So each milestone outlined
here– planning, prototyping, getting feedback,
and iteration– needs to be vetted, right? And this
iterative framework helps
us make fewer mistakes, produce better products, and
have a cheaper production process along the way
because we learned earlier how the product needs to
take shape as it’s evolving. This will truly help you
make decisions sooner. So thinking about planning. Planning takes
many forms. First, we aim to create
a baseline understanding of the needs that may exist for our users. So we may begin by talking
to people about pain points or drawing inspiration
from pain points that we have ourselves. We may look at
competitive products to think about how they are
solving specific problems and how we may want to
do things differently. If you’re familiar with
design for software systems, you may be familiar with
creating user personas to get an idea of how
you can gain empathy into the mind of somebody who’s
actually using your product. We also create things
like wireframes and
tell the story of how we’ve seen the product’s
use unfolding over time. We then create something, right? So based on what we
know and the hypothesis of how something
should work, we begin by creating
medium-fidelity designs. So we saw those really
preliminary sketches, right? We may create something
like a video simulation to think about how it
might look and feel before it actually works. We might make something
on a breadboard to get an idea of what
it functionally might do or what some of the
key characteristics of that functionality
might be like. And then finally,
getting feedback. So it’s important to get
feedback in this cycle because we need to understand
how people are actually using things. So qualitative
feedback, understanding the user’s perception
of how they feel about a particular feature
or what you’re proposing. Quantitative
research can be used to gain data to understand
how specific features are being used or not used. Internal feedback– we’re
constantly sharing projects with each other internally to
get feedback from other people. It’s really helpful to get
an objective eye on something so that that person
can point out something that you may not have seen. Guerrilla research–
showing your prototype to somebody who may not be
familiar with the project– can give you tremendous
insight because then you can have an objective
set of eyes on the features you're creating.
And finally, in
later stages, you may employ something
more formal, like a lab study:
asking participants to use your prototype in
a controlled environment to compare, side by side, what different
variations may be like. And we’re not
saying that you need to do all of these
things along the way. So for example, we may
create a rough storyboard, which can then
translate to a click-through and then gain
internal feedback and iterate on that cycle. Or for example, we may do a
bit of competitive analysis or define some
must-have requirements and then create a looks
like/feels like prototype and then perform some
guerrilla research with somebody who’s not
familiar with the product we’re creating. KRISTIN GRAY: So
something else to mention is that hardware isn’t hard– it’s just different. You need to consider
what parts you need to make what you want. You also need to think about
designing a system, possibly with non-customized parts. Sometimes it’s like Jenga– one requirement can
actually affect another. And you also need to think
about future-proofing. How much space do you need
to allocate for user data, for example, if you’re
designing a camera? This might be
important for users. And finally, you also need
to think about form factor. How will all of
the parts that you want to fit into your design
fit into a form factor that’s delightful for your users? One other thing to consider
is your interaction design. So when you connect the
product to the Internet, do you make a
companion app for that? How do you make sure that
the notifications can be seen from a reasonable view? How do you make it accessible? Take something as easy
as a software update. Do you tell the user at 8:00
AM when people are actually using your product, or
do you wait until 2:00 AM when people might not
be using your product and they might be asleep? Also, what if there’s no screen? Something else to consider
for the land of IoT. So we can apply
our design process using Android Things to help
us iterate faster and get that feedback sooner, make
better informed decisions for these questions,
and also ultimately help us design better products. And one thing to note– if you’re a software
designer, something that you’re really familiar
with is the Undo button. In hardware, you don’t
have an Undo button. It’s really easy to
roll back a release when you’re a software
engineer as well. But when you’re
designing for hardware, it’s a lot more permanent. So you need to have a
strong iteration cycle, because each time that you move
forward in your design cycle, the more expensive it
gets to move backwards. For example, in
1966, NASA’s budget was over 4% of US spending, and
the undertaking was mammoth. Now, for example, we can
shoot 3D printers into space, assemble technology up there,
update it from your laptop, and reuse the rockets– all at a fraction of the price. We can shoot cars into space
simply because it’s fun and because we can. One other thing to note
here is the design process. So with the Mercury,
Gemini, and Apollo missions, they were all built
to get us to the Moon. Mercury put a man
into space, Gemini extended the capsule stay from
hours into days, and Apollo got us to the Moon. These were some pretty
big iteration cycles, but, you know, they were
really onto something. You may also notice that even
today, some hardware design is actually based on
a waterfall process. It requires everything
to be perfect and is expensive in
end-to-end cycles. So for example, you start
with your requirements, you hand those off once those
are done to the designers, who finish the designs,
and then they hand that off to the engineers
for implementation. Once the implementation
is finished off, then they hand it off to the
QA engineers for verification, and then you cycle back
through a maintenance cycle. And you do that
over and over again. And if you make a mistake or you
decide that you want to change, if you’re late in
the process, you have to go all the way back up
to the Requirements section. So on the right,
you’ll see something that looks a little bit more
like the process that Michael was talking about, except
we have a few milestones sprinkled in there. So each one of these
is an iteration cycle. You plan, prototype,
feed back, and iterate, and you keep continuing to do
that throughout your process. Iteration is flexible, and
it helps keep costs contained and the users included
in the process. In the past, there were
also very specific roles that contributed to the
creation of a project, including fabrication engineers. Now with access to tools
like on-demand 3D printing, product creation has
become more democratized, as Michael mentioned before. Anyone can print 3D parts. Designers can code
visualizations now with the ease of
prototyping tools. App developers can also apply
their skills to hardware, and Android Things makes
that easy to do so. Specifically dealing with
software, in the past, it was also labor-intensive
to create and release a build, and it was difficult to
set up testing environments for those devices as well. With Android Things, we’ve
introduced the developer console so that users can update
their app in Android Studio, open up the dev console, and
create a build and release it, all in a few easy steps. MICHAEL DELGAUDIO:
So now that we’ve talked a little bit
about design and process, we want to bring this back
to the Lantern project that I mentioned earlier. So how are we able to use
the process of planning, prototyping, getting feedback,
and iterating to help us improve this concept? We were able to think about
people, context, and goals and then apply this
to this simple idea. So as a designer
sitting at my desk, I’d like to bring my room to
life through projections– simple idea that connects
people, context, and goals. We started by sketching,
but what happened next? We began to create a looks
like/feels like prototype. So before we were even
assembling hardware or putting together the housing, we
started to think about what would a time rendering
look like as a projection if it was sitting next to me? Or what would I want to project
onto the desk in front of me as I was typing? Would it be some
Bitcoin information or a price of
something else or how I’m doing in a certain game? A looks like/feels
like prototype can help you understand how
something exists in its end state without actually
having to get there through the full
creation process. So again, we went
back to sketching to think about how the
housing may come together. If you’re thinking about
a lamp and thinking about the parts that
needed to go in it, well, as I mentioned earlier, we’d
need the Raspberry Pi board, we’d need some
sort of projector, and it would start to fit
inside of this specific shape. And this was one of
the first prototypes that we created using foamcore
and some of the parts. So once we knew the
parts that we needed, we began to put them together
and assemble them not into a 3D print right away, but just
using foam, creating slices, cutting them out,
and making the form so that we knew that they
would fit into the lamp. And before we put
everything together, we started to
prototype what some of these content simulations
may look like functionally. So on the left,
what you’re seeing is an accelerometer test showing
how we could change content based on orientation. And, again, it’s really simple. We had the projector, we had
the Raspberry Pi connected to an accelerometer
running Android Things, and we just said, can we
show you the direction or show us the direction
of which way the object is pointing? So does it know if it’s
pointing up, down, or sideways?
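That test boils down to a few lines. Here is our own minimal sketch of the idea (not the Lantern source), assuming the accelerometer is registered with Android's sensor framework, as Android Things user drivers allow:

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import android.util.Log

class OrientationClassifier(private val sensorManager: SensorManager) : SensorEventListener {

    fun start() {
        val accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL)
    }

    override fun onSensorChanged(event: SensorEvent) {
        // Gravity (~9.8 m/s^2) loads whichever axis points down, so the
        // z reading alone is a rough cue for where the lens is aimed.
        val z = event.values[2]
        val aim = when {
            z > 8f -> "up"        // projecting onto the ceiling
            z < -8f -> "down"     // projecting onto the desk
            else -> "sideways"    // projecting onto a wall
        }
        Log.d("Lantern", "Pointing $aim")
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```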
And then on the right-hand side, these were some of the
initial tests that we did looking at how we could get
the currently playing song off of the Wi-Fi when you’re
casting to a nearby device and just projecting it
onto a notepad to see whether that was even possible
using the hardware we thought we wanted to use.
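The discovery half of that test can be sketched with stock Android APIs: Cast receivers advertise a "_googlecast._tcp" service over mDNS, which NsdManager can find. (This sketch is ours; actually reading out the current track would take the Cast SDK on top of it.)

```kotlin
import android.content.Context
import android.net.nsd.NsdManager
import android.net.nsd.NsdServiceInfo
import android.util.Log

fun discoverCastDevices(context: Context) {
    val nsd = context.getSystemService(NsdManager::class.java)
    nsd.discoverServices("_googlecast._tcp", NsdManager.PROTOCOL_DNS_SD,
        object : NsdManager.DiscoveryListener {
            override fun onServiceFound(info: NsdServiceInfo) {
                Log.d("Lantern", "Found Cast device: ${info.serviceName}")
            }
            override fun onServiceLost(info: NsdServiceInfo) {
                Log.d("Lantern", "Lost Cast device: ${info.serviceName}")
            }
            override fun onDiscoveryStarted(serviceType: String) {}
            override fun onDiscoveryStopped(serviceType: String) {}
            override fun onStartDiscoveryFailed(serviceType: String, errorCode: Int) {}
            override fun onStopDiscoveryFailed(serviceType: String, errorCode: Int) {}
        })
}
```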
And looking at physical prototyping, here’s Joe with the first
assembled prototype, looking at the accelerometer
changing the content based on the orientation of
the physical prototype. And here, it flips
up at the ceiling, and then you can see the
content start to come to life. So, again, we built these
pieces up individually and then started to put them
together into their final form. So one tool that I wanted to
mention that the team found highly beneficial in creating
this project was Processing. And so if you’re a
designer, you may already be familiar with Processing
as a lightweight IDE that enables you to quickly
create visualizations. For designers like
me, it helps me because I can create
visualizations independently, and there’s a really nice
library that was recently released from the Processing
Foundation called Processing for Android. And what that enables you to
do is write Processing sketches and then test them on an Android device. You can test that
on your phone, you can test that on your
Wear device. But you can also use that
on Android Things, which is really nice because you
could work on a visualization and then load that onto
your hardware independently. What this enabled us to do was
work with the visualizations, get them to a place
where we wanted them, and then integrate
them into the hardware.
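Here is a hedged sketch of that flow: a Processing sketch hosted in an Android activity through the processing-android library's PFragment. The PApplet and PFragment class names are the library's; the sketch itself and the activity are illustrative only.

```kotlin
import android.os.Bundle
import android.support.v4.app.FragmentActivity
import android.view.View
import android.widget.FrameLayout
import processing.android.PFragment
import processing.core.PApplet

// A stand-in visualization: seconds since launch, centered on screen.
class TickerSketch : PApplet() {
    override fun settings() = fullScreen()
    override fun draw() {
        background(0)
        textSize(64f)
        text("${millis() / 1000}s", width / 2f, height / 2f)
    }
}

class LanternActivity : FragmentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val frame = FrameLayout(this).apply { id = View.generateViewId() }
        setContentView(frame)
        // Attach the Processing sketch to the view hierarchy.
        PFragment(TickerSketch()).setView(frame, this)
    }
}
```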
A few gotchas– when you’re moving from Processing, you can export an
Android Studio project. However, you’ll need to
upgrade the minimum SDK version in the Gradle
file in order for you to work with Android Things. So Android Things requires
a slightly higher Android build number. And once you do that, you’ll be
able to connect your processing visualizations to hardware
to manipulate them through the standard
Android Things GPIO inputs. And so thinking about
our iterative process, how did we go about getting
feedback on Lantern? So first we started asking team
members around us to use it. We couldn’t really
go out to the public because this was a private
thing that we were working on, but we were able to
find other people inside of Google who were not
familiar with the project– team members such as the
ML team on Android Things, who thought it was
pretty interesting. But one thing that we had
really kind of pushed away from was the idea of integrating
interactivity into our MVP, or our first iteration
of the project. We were specifically averse
to incorporating interactivity because we wanted to prove
that we could project content in different ways. Interactivity adds another
layer of complexity. So, again, we wanted to simplify
this into its basic form so that we could prove that the
independent interactions were working. However, when we shared the
prototype with the ML team, they were really keen
on integrating a camera. And so because we were
working with Android Things and working with 3D
printing, we were able to make some modifications,
integrate the camera. And the Nord team happened
to be working in London, and the Mountain View
team was over here. And it was really
an interesting story of how this evolved
because, again, never before have you had access
to tools like this, like 3D printing from the web. We were able to actually
build prototypes in two physical locations
and collaborate on them and build them up
simultaneously, which is really cool. And, again, Android
Things made it really easy to integrate new
hardware like this and connect it to the
visualization pieces in a snap. And so with the camera
in place, we now had the possibility
of a greater range of interactivity and
interactive input. And so this led us
to creating Quick, Draw!, pen-and-paper edition. So for those of you who
were at I/O last year, you may have seen Quick,
Draw!, which is a Creative Lab project from Google that
prompts you with a word, and then as you draw it
on a tablet, on the web, or on your phone, it starts
to guess what you’re drawing. And we thought that
would be really cool to do in the
physical world now that we had this
projected AR system. And so what we did is
we did just that, right? So previously, it was limited
to screen-based inputs, but we thought,
wouldn’t it be cool if we could use just a pen,
paper, capture that input, feed that into the Quick, Draw! Engine, and then have an
interactive game that you could use in the physical world? This is an example
of the demonstration that we have set up
over in the IoT dome. If you haven’t visited the
dome, you can check it out, and you can try it
out for yourselves. We’ve got a Lantern set
up, and it works great. So you’re prompted
with a word, and as you start to draw with
the pen and the paper, it starts to guess
on this projected interface in front of you. So, again, thinking about this
as a mixed reality surface. And there you go. So now we’ve gone
through one cycle, right? We’ve planned, we’ve prototyped
it, and we’ve gotten feedback, and we’ve iterated
on that cycle. We made some improvements,
and so we’re done, right? We’re ready to ship the project. KRISTIN GRAY: Um,
no, no, no, no. We’re not going to
ship our product yet. So what we’re going to do is
start the production process. And so what this means is
we’re going to move forward, maybe mass producing
something like Lantern. So Android Things has
many prototyping tools to get you started. After you’ve completed
your proof of concept and the initial prototypes,
everything is working, and you know what you
want it to look like, and you know you want to
move over to a custom board. This is easy because of the
SOM, like I mentioned before. You shouldn’t have to
redo all of the work that you did in the
prototyping phase, just because you’re using
the same architecture to create your products. So now let’s say that I want
to mass produce Lantern, right? And as part of the
feedback process, I wanted to learn more about
factory production and bring-up processes, so I visit a
few factories in China. As it turns out, they
have a process too, and it maps well to the
overall product design process. When you mass produce
a product, the factory will build out a
line with stations. Each station is
staffed with people who put together the product. If I want millions of products,
or millions of Lanterns, the factory might
automate that process and make some really
cool robots and automate things with conveyor
belts and those robots. During the whole process,
though, it’s good for you as a product
designer to continue iterating on your product
but probably leaning more toward software instead
of hardware changes, and I’ll show you why. So you’ve created
10 units, perhaps, in your prototyping
phase, right? And then you send those
off to the factory, and they’re going to run
everything through something called a validation test. And you start out with
engineering validation tests, so you send your
prototypes to the factory, and they’ll send
back maybe 100 units. And during that time,
you need to make sure that everything works with the
materials that you’ve selected. They’re probably
going to be using soft tools, possibly
hard tools in this stage, to create all of the forms
for your industrial design. And then after that, after
you’ve verified everything, then you send any feedback
back to the factory, and you move on to
design validation test. This is where you move
on to stainless tooling, all of the stations
are set up, and they’re staffed with people. And at this point, the
product design team should be using
Android Things’ console to update software and update
all of the testing channels to help make your
iteration cycles go better. In the meantime, you should
also be doing user testing throughout all of these phases. Then when you have
feedback for the factory and you send things
back, then you move on into the
product validation test. This phase makes
sure that everything is moving as fast as possible. This is more for the
factory than for you. They’ll send you back
around 1,000 units, and you should be continuing to
use the console to run and test metrics on these devices. And finally for a
mass production, you should be
using the developer console to send in
easier updates if needed and gather more metrics
on these devices. So as you can see here, as
you move through the process, they should be sending devices
back to you the whole time, and you should be testing
those devices, giving feedback extensively through QA. And one other note from
a design perspective– as you move forward through
this framework, your solutions might need to get more creative,
more MacGyver-y and lean more toward software
solutions, because it might be too expensive or too
late to go backwards and start over again if you’re
working with a product team. One of the
tools that you can use to make this all
go a lot smoother is the developer console. So we’ve included something
new called the app library. This allows you to add
apps to the app library and use them on
multiple projects. So if you have
multiple Lanterns, for example, you can
write one APK, upload it to the app library, and then
you could use it throughout each of your different products. When you’re ready to build,
you can actually go through and you can use
the build settings, and then you can create
a build individually and run through
each of the steps to set all of the firmware
and software for your device, and then you create a
build very seamlessly. And when you’re ready to
test over-the-air updates, you can create
releases and channels. You can push those
releases to channels so you can test your software
in different groups for detailed testing. So for example, here I’m
creating a custom channel, I’m creating an
update, and I’m going to push these updates to my
devices or my fleets of devices wirelessly. And finally, after
your release, you can gather metrics to help
you gather quantitative data on your products to help
you make better decisions about what you need to do next. So for example, in
Lantern version 0.1, you have maybe a set of
features that you want to use, and then if you want to move
on and add Quick, Draw!, you can do so fairly easily
by using the build tool. And then you can
check in metrics to see if it’s doing better. And once your device
is in the market, the feedback doesn’t stop there. So you should be gathering
feedback, doing user testing, and checking everything as
your device is in the market. Through any updates
that you’ve created, continue doing user
testing, continue doing guerrilla testing. Anything that’s in
that research column, you should be continuing to
do to help you create updates for your products and also help
you define the next iteration cycles through your [INAUDIBLE]. And that’s the process. So we’ve taken you
through ideation, prototyping with
Android Things kits, making final
hardware selections, creating builds and hardware
bring-up, the factory process, all the way to the end product
with maintenance and updates, running through
design iterations each step of the way. MICHAEL DELGAUDIO:
Thanks, Kristin. And so we’ve talked
about today how design is people in context with goals. And it’s a process. We can plan,
prototype, feed back, and iterate on that
process in order to help those people in those
contexts achieve those goals. And now it’s easier
than ever to participate in the creation of
physical products than it has been ever before. Android Things offers tools to
help accelerate the prototyping process that you can use
today to kickstart your ideas. Processing enables the
fast experimentation of rich graphics and, when
combined with hardware, enables you to make really
powerful experiences. All of these techniques
really enable you to make better connected
products for everyone using Android Things. You can get started
today at experiments.withgoogle.com/lantern. You can check out some
of the samples, projects, and more at
androidthings.withgoogle.com. If you’ve picked up
a kit today, there’s lots of sample code over
there for you to try out. If you haven’t gotten a
kit, you can get one over at the IoT dome. A big thank you to the Nord
Projects team– Ben, Joe, and Mike– for helping out
with this project, Ding on the Android
Things UX team for managing all of the
intercontinental prototyping that was required for this, and
Chloe on the Android Things ML team for making all of
the ML magic come to life. Please check it out. We’d love to see you over there. Thank you guys for coming,
and if you have feedback, we do want to hear from you. You can do so right here. Thanks so much. [MUSIC PLAYING]
