From YouTube: Google I/O Keynote (Google I/O '17)
Description
Join us to learn about product and platform innovation at Google.
See all the talks from Google I/O '17 here: https://goo.gl/D0D4VE
Watch more Android talks at I/O '17 here: https://goo.gl/c0LWYl
Watch more Chrome talks at I/O '17 here: https://goo.gl/Q1bFGY
Watch more Firebase talks at I/O '17 here: https://goo.gl/pmO4Dr
Subscribe to the Google Developers channel: http://goo.gl/mQyv5L
#io17 #GoogleIO #GoogleIO2017
A beautiful day. We've been joined by over 7,000 people, and we are live-streaming this, as always, to over 400 events in 85 countries. Last year was the tenth year since Google I/O started, and so we moved it closer to home, at Shoreline, back where it all began. It seems to have gone well. I checked the Wikipedia entry from last year; there were some mentions of sunburn, so we have plenty of sunscreen all around. It's on us, use it.
Two years ago at Google I/O we launched Google Photos as a way to organize users' photos using machine learning, and today we are over 500 million active users, and every single day users upload 1.2 billion photos to Google. So the scale of these products is amazing, but they're all still working their way up towards Android, which I'm excited about. As of this week, we crossed over 2 billion active Android devices, and as you can see, the robot is pretty happy too behind me.
Mobile made us reimagine every product we were working on. We had to take into account that the user interaction model fundamentally changed, with multi-touch, location, identity, payments and so on. Similarly, in an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems, and we are doing this across every one of our products. So today, if you use Google Search, we rank differently using machine learning. Or, if you're using Google Maps, it recognizes restaurant signs and street signs using machine learning. Duo, with video calling, uses machine learning for low-bandwidth situations. And Smart Reply in Allo last year had great reception, and so today we are excited that we are rolling out Smart Reply to over 1 billion users of Gmail. It works really well. Here's a sample email: if you get an email like this, the machine learning systems learn to be conversational, and it can reply and find what Saturday works, or whatever. So it's really nice to see.
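The Smart Reply behavior described above can be sketched in miniature: rank a small fixed set of candidate replies against the incoming message and surface the best matches. Everything in this toy is a stand-in: the candidate list, and the word-overlap scoring, which substitutes for the learned sequence models the production system actually uses. It only illustrates the "score candidates, suggest the top few" shape of the feature.

```python
# Toy illustration of reply suggestion: score a fixed set of candidate
# replies against an incoming email by shared-word count. The real Smart
# Reply system uses learned conversational models; this only shows the
# overall shape of the feature.

CANDIDATES = [
    "Saturday works for me.",
    "Sorry, I can't make it.",
    "Let me check and get back to you.",
]

def score(message: str, reply: str) -> int:
    """Count lowercase words shared between the message and a candidate."""
    return len(set(message.lower().split()) & set(reply.lower().split()))

def suggest_replies(message: str, k: int = 3) -> list[str]:
    """Return the k candidate replies that best match the message."""
    return sorted(CANDIDATES, key=lambda r: score(message, r), reverse=True)[:k]

print(suggest_replies("Are you free on Saturday for dinner?", k=2))
```

For the sample message, the Saturday candidate shares the words "saturday" and "for" with the question, so it ranks first, mirroring the keynote's "find what Saturday works" example.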
Like with every platform shift, how users interact with computing changes. Mobile brought multi-touch; we evolved beyond keyboard and mouse. Similarly, we now have voice and vision as two new important modalities for computing. Humans are interacting with computing in more natural and immersive ways. Let's start with voice. We've been using voice as an input across many of our products, because computers are getting much better at understanding speech.
We have had significant breakthroughs, but the pace, even since last year, has been pretty amazing to see. Our word error rate continues to improve, even in very noisy environments. This is why, if you speak to Google on your phone or Google Home, we can pick up your voice accurately, even in noisy environments. When we were shipping Google Home, we had originally planned to include eight microphones so that we could accurately locate the source of where the user was speaking from.
Deep learning is what allowed us, about two weeks ago, to announce support for multiple users in Google Home, so that we can recognize up to six people in your house and personalize the experience for each and every one. So voice is becoming an important modality in our products. The same thing is happening with vision: similar to speech, we are seeing great improvements in computer vision. So when we look at a picture like this, we are able to understand the attributes behind the picture.
We realize it's your boy at a birthday party, there was cake and family involved, and your boy was happy. So we can understand all that better now, and our computer vision systems, for the task of image recognition, are now even better than humans. It's astounding progress, and we are using it across our products. So if you use the Google Pixel, it has the best-in-class camera, and we do a lot of work with computer vision.
Google Lens is a set of vision-based computing capabilities that can understand what you're looking at and help you take action based on that information. We will ship it first in Google Assistant and Photos, and it will come to other products. So how does it work? For example, if you run into something and you want to know what it is, say a flower, you can invoke Google Lens, and we can automatically do the hard work for you. Or, if you're walking on a street downtown and you see a set of restaurants across from you, you can point your phone, and because we know where you are, and we have our knowledge graph, and we know what you're looking at, we can give you the right information in a meaningful way. As you can see, we are beginning to understand images and videos. All of Google was built because we started understanding text and webpages.
So the fact that computers can understand images and videos has profound implications for our core mission. When we started working on search, we wanted to do it at scale. This is why we rethought our computational architecture: we designed our data centers from the ground up, and we put a lot of effort into them. Now that we are evolving for this machine learning and AI world, we are rethinking our computational architecture again. We are building what we think of as AI-first data centers. This is why, last year, we launched the Tensor Processing Units.
They are custom hardware for machine learning. They were about 15 to 30 times faster and 30 to 80 times more power-efficient than CPUs and GPUs at that time. We use TPUs across all our products: every time you do a search, every time you speak to Google. In fact, TPUs are what powered AlphaGo in its historic match against Lee Sedol. As you know, machine learning has two components: training, that is how we build the neural net (and training is very computationally intensive), and inference, which is what we do in real time, so that when you show it a picture, we recognize whether it's a dog or a cat, and so on. Last year's TPU was optimized for inference. Training is computationally very intensive: to give you a sense, each one of our machine translation models takes a training run of over three billion words for a week on about a hundred GPUs. So we've been working hard, and I'm really excited to announce our next generation of TPUs: Cloud TPUs, which are optimized for both training and inference.
What you see behind me is one Cloud TPU board. It has four chips in it, and each board is capable of 180 trillion floating point operations per second. We have designed it for our data centers, so you can easily stack them: you can put 64 of these into one big supercomputer. We call these TPU pods, and each pod is capable of 11.5 petaflops. It is an important advance in technical infrastructure for the AI era.
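The pod figure follows from the per-board figure by simple arithmetic; a quick check, using only the two constants quoted above:

```python
# Check the quoted pod performance: 64 Cloud TPU boards at 180 teraflops
# each should give roughly the stated 11.5 petaflops per pod.

TFLOPS_PER_BOARD = 180   # 180 trillion floating point ops per second
BOARDS_PER_POD = 64      # boards stacked into one pod

pod_tflops = TFLOPS_PER_BOARD * BOARDS_PER_POD  # 11,520 teraflops
pod_pflops = pod_tflops / 1000                  # convert to petaflops

print(pod_pflops)  # 11.52, which the keynote rounds to 11.5 petaflops
```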
We want Google Cloud to be the best cloud for machine learning, and so we want to provide our customers with a wide range of hardware, be it CPUs or GPUs, including the great GPUs Nvidia announced last week, and now Cloud TPUs. So this lays the foundation for significant progress. We have focused on driving the shift and applying AI to solving problems at Google, and we are bringing our AI efforts together under Google.ai.
It's a collection of efforts and teams across the company focused on bringing the benefits of AI to everyone. Google.ai will focus on three areas: state-of-the-art research; tools and infrastructure, like TensorFlow and Cloud TPUs; and applied AI. So let me talk a little bit about these areas.
Talking about research: we are excited about designing better machine learning models, but today it is really time-consuming. It's a painstaking effort by a few engineers and scientists, mainly machine learning PhDs. We want it to be possible for hundreds of thousands of developers to use machine learning. So what better way to do this than getting neural nets to design better neural nets?
We call this approach AutoML: it's learning to learn. The way it works is we take a set of candidate neural nets, think of these as little baby neural nets, and we actually use a neural net to iterate through them till we arrive at the best neural net. We use a reinforcement learning approach, and the results are promising. Doing this is computationally hard, but Cloud TPUs put it in the realm of possibility, and we are already approaching state of the art in standard tasks like CIFAR image recognition.
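The "iterate over candidate nets, keep the best" loop described above can be sketched in a heavily simplified form. In the real AutoML work each candidate is an actual neural net trained on data, and the sampler is itself a learned controller network updated with reinforcement learning; in this toy, candidates are just lists of layer widths and the validation score is a stand-in function, so only the search shape is illustrated.

```python
import random

# Toy sketch of architecture search: sample candidate "architectures"
# (here, just lists of layer widths), score each one, and keep the best.
# mock_validation_score is a stand-in for training a candidate net and
# measuring its validation accuracy; the random sampler is a stand-in
# for AutoML's learned RNN controller.

def mock_validation_score(arch: list[int]) -> float:
    """Pretend 'validation accuracy': prefers three layers near width 64."""
    return -abs(len(arch) - 3) - sum(abs(w - 64) for w in arch) / 100

def sample_architecture(rng: random.Random) -> list[int]:
    """Sample a candidate: 1 to 5 layers, widths drawn from a small menu."""
    return [rng.choice([16, 32, 64, 128]) for _ in range(rng.randint(1, 5))]

def search(trials: int = 200, seed: int = 0) -> list[int]:
    """Iterate through sampled candidates and return the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        s = mock_validation_score(arch)
        if s > best_score:
            best, best_score = arch, s
    return best

print(search())  # best architecture found among the sampled candidates
```

The real system replaces the random sampler with a controller that is rewarded for proposing high-accuracy children, which is why the keynote calls it a reinforcement learning approach.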
So we are taking all these AI advances and applying them to newer, harder problems across a wide range of disciplines. One such area is healthcare. Last year I spoke about our work on diabetic retinopathy, a preventable cause of blindness. This year we published a paper in the Journal of the American Medical Association, and Verily is working on bringing products to the medical community. Another such area is pathology. Pathology is a very complex area.
E
If
you
take
an
area
like
breast
cancer
diagnosis,
even
amongst
highly
trained
pathologist
agreement
on
some
forms
of
breast
cancer
can
be
as
low
as
48
percent.
That's
because
they're
each
pathologist
is
reviewing
the
equivalent
of
thousand
ten
megapixel
images
for
every
case.
This
is
a
large
data
problem,
but
one
which
machine
learning
is
uniquely
equipped
to
solve.
So
we
built
neural
nets
to
detect
cancer
spreading
to
adjacent
lymph
nodes
it's
early
days,
but
our
neural
nets
show
a
much
higher
degree
of
accuracy,
89
percent
compared
to
previous
methods
of
73
percent.
There are important caveats: we do have higher false positives. But already, putting this in the hands of pathologists can improve diagnosis. In general, I think this is a great approach for machine learning: providing tools for people to do what they do better. And we are applying it across even basic sciences. Take biology: we are training neural nets to improve the accuracy of DNA sequencing. DeepVariant is a new tool from Google.ai that identifies genetic variants more accurately than state-of-the-art methods.
E
Reducing
errors
is
important
applications.
We
can
more
accurately
identify
whether
or
not
a
patient
as
genetic
disease
and
can
help
with
better
diagnosis
in
Freedman.
We
are
applying
it
to
chemistry.
We
are
using
machine
learning
to
predict
the
properties
of
molecules
today.
It
takes
an
incredible
amount
of
computing
resources
to
hunt
for
new
molecules,
and
we
think
we
can
accelerate
timelines
by
orders
of
magnitude.
This
opens
up
possibilities
in
drug
discovery
or
material
sciences.
I
am
entirely
confident
one
day.
I
will
invent
new
molecules
with
that
behave
in
predefined
ways.
Even I can draw it this time. So it may look like fun and games, but pushing computers to do things like this is what helps them be creative and actually gain knowledge. So we're very excited about progress even in these areas as well. We're making impressive progress in applying machine learning, and we are applying it across all our products, but the most important products we are using this for are Google Search and the Google Assistant. We are evolving Google Search to be more assistive for our users.
"Hey Google, talk to Domino's." "Talk to Lonely Planet." "Talk to Quora." "Show me my photos from last weekend." "Your car is parked at 22B." "Today in the news..." "Turn the living room lights on." "Okay, turning on the lights." "Back, baby!" "Hey Google, drop a beat." "Flip the coin." "Call Jill." "Set a timer." "Talk to Headspace." "And then, just for a moment, I'd like you to let go of any focus at all. Just let your mind do whatever it wants to do."
Everyone, last year at I/O we introduced the Google Assistant: a way for you to have a conversation with Google to get things done in your world. Today, as Sundar mentioned, we're well on our way, with the Assistant available on over a hundred million devices. Just as Google Search simplified the web and made it more useful for everyone, your Google Assistant simplifies all the technology in your life. You should be able to just express what you want throughout your day, and the right thing should happen. That's what the Google Assistant is all about.
It's your own individual Google. That video we saw really captures the momentum of this project. We've made such big strides, and there's so much more to talk about today. The Assistant is becoming even more conversational, always available wherever you need it, and ready to help get even more things done. First, we fundamentally believe that the Google Assistant should be, hands down, the easiest way to accomplish tasks, and that's through conversation. It comes so naturally to humans, and now Google is getting really good at it.
Now, recently we made the Assistant even more conversational, so each member of the family gets relevant responses just for them by asking with their own voice, and we're continuing to make interacting with your Assistant more natural. For example, it doesn't always feel comfortable to speak out loud to your Assistant, so today we're adding the ability to type to your Assistant on the phone. This is great when you're in a public place and you don't want to be overheard. The Assistant is also learning conversation beyond just words.
It's really natural to talk about what you're looking at. Sundar spoke earlier about how AI and deep learning have led to tremendous strides in computer vision. Soon, with the smarts of Google Lens, your Assistant will be able to have a conversation about what you see. This is really cool, and Ibrahim is here to help me show you a couple of examples of what we'll launch in the coming months.
So the last time I traveled to Osaka, I came across a line of people waiting to try something that smelled amazing. I don't speak Japanese, so I couldn't read the sign out front, but Google Translate knows over a hundred languages, and my Assistant will help with visual translation. I just tap the Google Lens icon, point the camera, and my Assistant can instantly translate the words into English. And now I can continue the conversation: "What does it look like?"
Yummy. Now notice, I never had to type the name of the dish: my Assistant used visual context and answered my question conversationally. Let's look at another example. Some of the most tedious things I do on my phone stem from what I see: a business card I want to save details from, a receipt I need to track, and so on. With Google Lens, my Assistant will be able to help with those kinds of tasks too. I love live music, and sometimes I see info for shows around town that look like fun.
Now I can just tap the Google Lens icon and point the camera at the marquee. My Assistant instantly recognizes what I'm looking at. If I wanted to, I could tap to hear some of this band's songs, and my Assistant offers other helpful suggestions right in the viewfinder: there's one to buy tickets from Ticketmaster, and another to add the show to my calendar. With just a tap, my Assistant adds the concert details to my schedule. "Saving event. Saved The Stone Foxes for May 17th at 9 p.m." Awesome.
My Assistant helped me keep track of the event, so I won't miss the show, and I didn't have to open a bunch of apps or type anything. Thanks, Ibrahim. So that's how the Assistant is getting better at conversation: by understanding language and voices, with new input choices, and with the power of Google Lens. Second, the Assistant is becoming a more connected experience that's available everywhere you need help, from your living room to your morning jog, from your commute to errands around town.
Your Assistant should know how to use all of your connected devices for your benefit. Now, we're making good progress in bringing the Assistant to those 2 billion phones and other devices powered by Android, like TVs, wearables and car systems. And today I'm excited to announce that the Google Assistant is now available on the iPhone.
So no matter what smartphone you use, you can now get help from the same smart Assistant throughout the day, at home and on the go. The Assistant brings together all your favorite Google features on the iPhone: just ask to get package delivery details from Gmail, watch videos from your favorite YouTube creators, get answers from Google Search, and much more. You can even turn on the lights and heat up the house before you get home. Now, Android devices and iPhones are just part of the story.
We think the Assistant should be available on all kinds of devices where people might want to ask for help. The new Google Assistant SDK allows any device manufacturer to easily build the Google Assistant into whatever they're building: speakers, toys, drink-mixing robots, whatever crazy device all of you think up can now incorporate the Google Assistant. We're working with many of the world's consumer brands and their suppliers, so keep an eye out for the badge that says "Google Assistant built-in" when you do your holiday shopping this year. Now, obviously, another aspect of being useful to people everywhere is support for many languages. I'm excited to announce that, starting this summer, the Google Assistant will begin rolling out in French, German, Brazilian Portuguese and Japanese, on both Android phones and iPhones. By the end of the year we'll also support Italian, Spanish and Korean.
So that's how the Assistant is becoming more conversational, and how it will be available in even more contexts. Finally, the Assistant needs to be able to get all kinds of useful things done for people. You know, people sometimes ask if the Assistant is just a new way to search. Of course, you can ask your Assistant to get all sorts of answers from Google Search, but beyond finding information, users are also asking the Assistant to do all sorts of things for them. Now, as you've already seen, the Assistant can tap into capabilities across many Google apps and services, but Google's features are just part of the story. We also opened the Assistant to third-party developers, who are building some really useful integrations. I'll turn it over to Valerie to share more about how the developer platform is getting stronger.
Hi! Okay, so: the Actions on Google platform. It's been awesome to see how developers like you have been engaging with the Google Assistant. Honestly, you have built some really cool integrations: I can ask Food Network about the recipe that's on TV right now, I can work out with FitStar, ask CNBC about the news, or my husband and I can play name-that-tune with SongPop, which he is surprisingly good at. Until now [...] you'll be able to do more: starting today, Actions on Google will be supporting transactions. It's a complete end-to-end solution for developers, including payments, identity, notifications, receipts, even account creation. The platform handles all the complexity. Let me show you how one will work. "Hi, how can I help?" "I'd like delivery from Panera."
"This is Panera. I'll need your delivery address. Which one can I get from Google?" "We'll go with work."
"Okay, the total is eighteen dollars and forty cents. Are you ready to place the order?" "Yes." I'll just scan my fingerprint to pay with Google, and that's it. "Thanks, you're all set." Yeah, super easy, like I was talking to someone at the store. So here I was a new Panera customer, and I didn't have to install anything or create an account.
So the developer platform is also getting much stronger for home-automation integrations. Actions on Google can now support any smart home developer that wants to add conversational control. Today, over 70 smart home companies work with the Google Assistant, so now, on my Google Home or from my phone, I can control them.
With support for multiple users, we can unlock the full potential of Google Home to offer a truly personal experience. So now you can schedule a meeting, set a reminder, or get your own daily briefing with My Day, by using your own voice, and get your commute, your calendar appointments and your news sources. Today I'd like to share four new features that will be rolling out over the coming months. So first, we're announcing support for proactive assistance.
Google Home will spot really helpful information and provide it for you in a hands-free way. So, for example, let's say I'm relaxing and playing a game with the kids. Well, I can see that the Google Home lights just turned on. "Hey Google, what's up?" "Hi Ritchie, traffic's heavy right now, so you'll need to leave in 14 minutes to get to Shoreline Athletic Fields by 3:30 p.m." That's pretty nice. The Assistant saw the game coming up on my calendar and got my attention, because I had to leave earlier than normal.
So now my daughter can make it to that soccer game right on time. We're going to start simple, with really important messages like reminders, traffic delays and flight status changes, and with multiple-user support you have the ability to control the types of proactive notifications you want over time. All right. Second: another really common activity we do in the home today is communicating with others, and a phone call is still the easiest way to reach someone. So today I'm excited to announce hands-free calling coming to Google Home.
So whoever you call knows it's coming from you. Now, we're rolling out hands-free calling in the US to all existing Google Home devices over the next few months. It's the ultimate hands-free speakerphone: no setup required, call anyone, including personal contacts or businesses, and even dial out with your personal number when we detect your voice. We can't wait for you to try it out. Okay.
Third, let's talk a little bit about entertainment. We designed Google Home to be a great speaker, one that you can put in any room in the house or wirelessly connect to other Chromecast built-in speaker systems. Well, today we're announcing that Spotify, in addition to their subscription service, will be adding their free music service to Google Home, so it's even easier to play your Spotify playlists.
We'll also be adding support for SoundCloud and Deezer, two of the largest global music services today, and these music services will join the many others already available through the Assistant. And finally, we'll be adding Bluetooth support to all existing Google Home devices, so you can play any audio from your iOS or Android device.
So just say what you want to watch and we'll play it for you, all in a hands-free way. With Google Home, we want to make it really easy to play your favorite entertainment. Okay, finally, I want to talk a little bit about how we see the Assistant evolving to help you in a more visual way. Voice responses are great, but sometimes a picture's worth a thousand words. So today we're announcing support for visual responses with Google Home. Now, to do that, we need a screen.
Fortunately, many of us already have a ton of screens in our home today: our phones, our tablets, even our TVs. The Google Assistant should smartly take advantage of all these different devices to provide you the best response on the right device. For example, with Google Home I can easily get location information. "Okay Google, where's my next event?" "Your Pokémon Go hike is at Rancho San Antonio Reserve." That's for my kids. My kids, relax. But if I want to view the directions, the best place to do it is on my phone.
Well, soon you could just say: "Okay Google, let's go." "All right, I'm sending the best route to your phone." It will automatically notify your phone, whether it's Android or iOS, and take you straight to Google Maps, so you can glance at directions, interact with the map, or just start navigation. It's really simple. Now, TVs are another natural place to get help from the Google Assistant, and we have a great place to start, with over 50 million Chromecast and Chromecast built-in devices.
So today we're announcing that we'll be updating Chromecast to show visual responses on your TV when you ask Google Home for help. For example, I can now say: "Okay Google, show my calendar for Saturday." "Showing it on your TV." It'll show up right on the TV screen: I'll immediately get results from the Assistant, and since the Assistant detected my voice, we're showing my calendar; others will see their calendar by using their voice. We can personalize the experience even on the TV, and I can continue to follow up on the conversation.
Looks like I have a biking trip to Santa Cruz. "What's the weather in Santa Cruz this weekend?" "This weekend in Santa Cruz it will be clear and sunny most of the time." So it's really easy, and it's all hands-free: your Assistant can provide a visual response on the TV to a lot of different types of questions. You know, we talked about how easy it is to play what you want to watch on the TV screen, but what about those times you don't know what to watch? Well, you could just ask: "Hey Google..."
It's really simple. Again, no remotes or phone required: in a short conversation I found something really interesting to watch using Google Home, and I can do other things too. "Okay Google, what's on my DVR?" "Here you go." Here we're showing how it works with YouTube TV, a new live TV streaming service that gives you live sports and shows from popular TV networks, and YouTube TV includes a cloud DVR, so I can easily play my saved episodes. "Play Modern Family." "Okay, playing Modern Family from YouTube TV." You guys have it too easy.
Everything can be done in a hands-free way, all from the comfort of my couch, and over time we're going to bring all those developer actions that Valerie already talked about right to the TV screen, so we can do even more over time with Google Home. And when you're done, just say: "Okay Google, turn off the TV." "Sure." And that's our update for Google Home: proactive assistance to bring important information to you at the right time, simple and easy hands-free calling, more entertainment options, and evolving the Assistant to provide visual responses in the home.
This is the secret ingredient behind Google Photos, and the momentum we've seen in these two short years is remarkable. As Sundar mentioned, we now have more than half a billion monthly active users, uploading more than 1.2 billion photos and videos per day. And today I'm excited to show you three new features we're launching to make it even easier to send and receive the meaningful moments in your life. Now, at first glance it might seem like photo sharing is a solved problem.
After all, there's no shortage of apps out there that are great at keeping you and your friends and family connected, but we think there's still a big and different problem that needs to be addressed. Let me show you what I mean. If there's one thing you know, it's that you're a great photographer. If there's a second thing you know, it's that you're kind of a terrible person.
What? Yeah, you heard me. The only photo of the birthday girl in focus: never sent it. The best picture of the entire wedding: kept it to yourself. This masterpiece of your best friend: you were gonna send it, but then you were like, "oh, remember that sandwich? I love that sandwich." If only something could say, "hey, Eric looks great in these, do you want to send them to him?" and you could be like, "great idea." Well, it can. It can? Yep, with Google Photos.
So today, to make us all a little less terrible people, we're announcing suggested sharing. Because we've all been there, right? Like when you're taking that group photo, and you insist that it be taken with your camera, because you know if it's not on your camera, you are never seeing that photo ever again. Now, thanks to the machine learning in Google Photos, we'll not only remind you so you don't forget to share, we'll even suggest the photos and the people you should share with. In one tap, you're done. Let's have a look at suggested sharing in action.
All right, so here are a bunch of photos Dave took while bowling with the team last weekend. He was too busy enjoying the moment, so he never got around to sharing them, but this time Google Photos sent him a reminder, via notification and also by badging the new sharing tab. The sharing tab is where you're going to be able to find all of your Google Photos sharing activity and, at the top, your personal suggestions, based on your sharing habits and what's most important to you. Here is the sharing suggestion that Dave got from his day bowling. Google Photos recognized this was a meaningful moment, it selected the right shots, and it figured out who he should send it to, based on who was in the photos. In this case it's Janvi, Jason and a few others who were also at the event. Dave can now review the photos selected, as well as update the recipients, or, if he's happy with it, he can just tap send, and that's it.
Google Photos will even send an SMS or an email to anyone who doesn't have the app, and that way everyone can view and save the full-resolution photos, even if they don't have Google Photos accounts. And because Google Photos sharing works on any device, including iOS, let's have a look at what Janvi sees on her iPhone. She receives a notification, and tapping on it lets her quickly jump right into the album and look at all the photos Dave shared with her. But notice here at the bottom: she's asked to contribute the photos she took at the event, Google Photos automatically identifying and suggesting the right ones. Janvi can review the suggestions and then simply tap add.
I've now gone ahead and shared my library with my wife Jess, so let's switch to her phone to see what the experience looks like for her. She receives a notification, and after accepting, she can now go and see all the photos that I've shared with her. She can access it easily from the menu. If she sees something she likes, she can go ahead and select those photos and save them to her library, and we'll even notify her periodically.
She just wants every photo I take of her or the kids to automatically be saved to her library, just as if she took the photos herself. With shared libraries, she can do just that, choosing to auto-save photos of specific people. Now, anytime I take photos of her or the kids, without either of us having to do anything, they'll automatically appear in the main view of her app. Let me show you. Now, I couldn't justify pulling the kids out of school today just to have their photo taken, but I do have the next best thing, right?
May I introduce to you Ava and Lily. All righty, here we go: I'm gonna go ahead and take a photo with the girls. Smile, kids! Fantastic. And since this is too good of an opportunity, I'm gonna have to take one with all of you here too. All right, there we go. Oh, brilliant. All right, okay. So thank you, girls, much appreciated. Back to school
we go. All right: so, using nothing more than the standard camera app on my phone, I've gone ahead and taken one photo with my kids and one photo with all of you here in the audience. Google Photos is going to back these two photos up, it's going to share them with Jess, and then it's going to recognize the photo that has the kids in it and automatically save just that one to her library, like you can see right here.
These sharing features will be rolling out on iOS and the web in the coming weeks. Finally, we know sharing doesn't always happen through apps and screens. There's still something pretty special about looking at, and even gathering around, an actual printed photo. But printing photos and albums today is hard: you have to hunt across devices and accounts to find the right photos, select the best among the duplicates and blurry images, upload them to a printing service, and then arrange them across dozens of pages. It can take hours of sitting in front of a computer just to do one thing.
O
They're
beautiful
high
quality
with
a
clean
and
modern
design,
but
the
best
part
is
that
they're
incredibly
easy
to
make
even
on
your
phone.
What
used
to
take
hours
now
only
takes
minutes.
I
recently
made
a
book
for
Jess
on
Mother's
Day,
and
let
me
show
you
just
how
easy
and
fast
that
was.
First
thanks
to
unlimited
storage,
all
my
life's
moments
are
all
up
here
in
Google.
Photos
no
need
to
upload
them
to
another
website
or
app.
Now
my
favorite
way to start
a
book
is
to
use
people
search.
Since
this
is
a
Mother's
Day
gift,
I'm
gonna
simply
find
photos
of
Jess,
Ava
and
Lily
There they are. All right, I thought I took more photos. All right,
So
why
don't
we
just
go
and
pick
another
set
of
photos, Dave, if that one's not coming up. It'll be
a
fun
Mother's
Day
gift
for
her
she'll
get
a
different
surprise.
So
I'll
select
a
bunch
of
photos here,
and
the
good
news
is
I,
don't
have
to
figure
out
which
are
the
right
photos
and
which
are
the
good
ones,
because
this
is
where
Google
photos
really
shines.
I'm
just
going
to
go
ahead
and
hit
plus
select
photo
book
I'm
going
to
pick
a
hardcover
book.
We
offer
both
a
soft
cover
and
a
hard
cover
and
notice
what
happens.
Google
photos
is
going
to
pick
the
best
photos
for
me
automatically, suggesting photos (40 in this case). How cool is that?
and
it's
even
going
to
go
ahead
and
lay
them
all
out.
And
soon
we'll
make
it
even
easier
to
get
started,
applying
machine
learning
to
create
personalized
photo
books,
you'll
love.
So
when
you
go
to
photo
books
from
the
menu
you'll
see
pre-made
books
tailored
just
for
you,
your
trip
to
the
Grand
Canyon
time
with
your
family
during
the
holidays
or
your
pet,
or
even
your
kids,
artwork
all
easily
customizable,
we'll
even
notify
you
when
there
are
new
photo
book
suggestions.
It's
a
great
example
of
machine
learning
at
work,
so
those
are
the
three
big
updates
related
to
sharing
in
Google
Photos: suggested sharing, shared libraries
and
photo
books.
Three
new
features
built
from
the
ground
up
with
AI
at
their
core. I
can't
wait
for
all
of
you
to
try
them
out
real
soon. Now, before I go, I
want
to
touch
on
what
Sundar
mentioned
earlier,
which
is
the
way
we're
taking
photos
is
changing. Instead
of
the
occasional
photo
with
friends
and
family.
We
now
take
30
identical
photos
of
a
sunset,
we're
also
taking
different
types
of
photos,
not
just
photos
to
capture
a
personal
memory,
but
as
a
way
to
get
things
done.
Whiteboards we want to remember, receipts we need to file, books we'd like to read,
and
that's
where
Google Lens and its vision-based computing capabilities come in.
It
can
understand,
what's
in
an
image
and
help
you
get
things
done,
Scott
showed
how
Google
lens
in
the
assistant
can
identify
what
you're
looking
at
and
help
you
on
the
fly.
But
what
about
after
you've
taken
the
photo?
There
are
lots
of
photos
you
want
to
keep
and
then
look
back
on
later
to
learn
more
and
take
action,
and
for
that
we're
bringing
google
lens
right
into
Google
photos.
Let
me
show
you
so.
Let's
say
you
took
a
trip
to
Chicago,
there's
some
beautiful
architecture
there
and
during
your
boat
tour
down
the
Chicago
River.
You
took
lots
of
photos,
but
it's
hard
to
remember
which
building
is
which later on. Now, by activating Lens,
you
can
identify
some
of
the
cool
buildings
in
your
photos.
like the second-tallest skyscraper in the US, the
Willis
Tower,
you
can
even
pull
up
directions
and
get
the
hours
for
the Skydeck. And later, while visiting the Art Institute,
you
might
take
photos
of
a
few
paintings.
You
really
love
in
one
tap.
You
can
learn
more
about
the
painting
and
the
artist
and
the
screen
shot
that
your
friend
sent
you
of
that
bike
rental
place
just by activating Lens,
You
can
tap
the
phone
number
and
make
the
call
right
from
the
photo.
New
and
unique
voices
and
we're
hard
at
work
to
make
sure
that
we
can
reach
the
next
billion
viewers
which
you'll
hear
about
in
a
later
I/O
session.
Today
we
want
to
give
everyone
the
opportunity
to
watch
the
content
on
YouTube,
so
YouTube
is
different
from
traditional
media
in
a
number
of
ways.
First
of
all,
YouTube
is
open.
Anyone
in
the
world
can
upload
a
video
that
everyone
can
watch.
You
can
be
a
vlogger
broadcasting
from
your
bedroom,
a
gamer
live-streaming
from
your
console
or
citizen
journalist
documenting
events
live
from
your
phone
on
the
front
lines,
and
what
we've
seen
is
that
openness
leads
to
important
conversations
that
help
shape
society
from
advancing
LGBTQ
rights,
to
highlighting
the
plight
of
refugees
to
encouraging
body
positivity
and
we've
seen
in
our
numbers
that
you
just
really
want
to
engage
with
this
type
of
diverse
content.
We
are
proud
that
last
year
we
passed
a
billion
hours
a
day
being
watched
on
YouTube
and
our
viewership
is
not
slowing
down.
The
second
way
that
YouTube
is
different
from
traditional
media
is
that
it's
not
a
one-way
broadcast.
It's
a
two-way
conversation.
Viewers
interact
directly
with
their
favorite
creators
via
comments,
mobile
live-streaming
fan
polls,
animated
gifs
and
VR,
and
these
features
enable
viewers
to
come
together
and
to
build
communities
around
their
favorite
content.
So
inspired
by
this
video,
the
professor
posted
a
single
comment
on
the
video
asking
for
volunteers,
with
3d
printers
to
help
print
affordable,
prosthesis.
The
network
has,
since
grown
into
a
community
of
over
6,000
people
who
have
designed
printed
and
distributed
these
prosthetics
to
children
in
over
50
countries.
So
the
third
feature
of
this
new
medium
is
that
video
works
on
demand
on
any
screen
over
60%
of
our
watch time
now
comes
from
mobile
devices,
but
actually
our
fastest
growing
screen
isn't
the
one
in
your
pocket.
It's
the
one
in
your
living
room.
Our
watch
time
in
our
living
room
is
growing
at
over
90
percent
a
year.
Thank
You
Susan,
so
earlier
today
you
heard
from
Rishi
about
how
people
are
watching
YouTube
on
the
TV
via
the
assistant.
But
another
way
people
are
enjoying
video
is
through
the
YouTube
app,
which
is
available
on
over
half
a
billion
Smart
TVs
game
consoles
and
streaming
devices,
and
that
number
continues
to
grow
around
the
world.
So
when
I
think
about
why
YouTube
is
so
compelling
in
the
living
room,
it
isn't
just
about
the
size
of
the
screen.
It's
about
giving
you
an
experience
that
TV
just
can't
match.
Now, one
of
my
personal
interests
outside
of
work
is
to
travel
and
one
place.
I'd
love
to
visit
is
Alaska
to
check
out
the
Northern
Lights.
So
let's
do
a
voice search: "aurora borealis 360." Great.
Let's
choose
that
first
video
and
now
using
my
TV
remote
I'm,
able
to
pan
around
this
video
checking
out.
This
awesome
view
from
every
single
angle.
We
are
also
introducing
live
360
in
the
living
room
soon,
you'll
be
able
to
witness
moments
and
events
as
they
unfold
in
a
new,
truly
immersive
way,
so
whether
you
have
a
Sony
Android
TV
or
an
Xbox One
console
soon
you'll
be
able
to
explore
360
videos
right
from
the
comfort
of
your
couch
and
along
with
your
friends
and
family
and
now
to
help
show
you
another
way,
we're
enabling
interactivity.
Please
join
me
in
welcoming
Barbara
McDonald
who's,
the
lead
of
something
we
call
super
chat.
Good
morning
I/O,
and
to
everybody
on
the
livestream,
as
Susan
mentioned,
what
makes
YouTube
special
is
the
relationships
that
creators
are
able
to
foster
with
their
fans,
and
one
of
the
best
ways
to
connect
with
your
fans
is
to
bring
them
live
behind
the
scenes
of
your
videos
offering
up
can't-miss
content.
In
the
past
year,
the
number
of
creators
live
streaming
on
YouTube
has
grown
by
4x.
Up
the
stream
and
right
from
within
live
chat,
I'm
able
to
enter
my
message, select my amount, make
the
purchase
and
send
boom
see
how
much
that
message
stands
out
and
it
gets
pinned
to
the
top
cool
right.
Yeah
thanks
Barbara,
it's
actually
lovely
at the
minute,
although
I
feel
that
there's
a
high
chance
of
showers
right
local
showers,
like
specifically
to
this
stage.
Yeah, I wonder, I wonder. Well,
because
we
know
developers
are
incredibly
creative.
We
wanted
to
see
what
you
can
do
to
make
super
chat
even
more
interactive,
so
we've
launched
an
API
for
it
and
today
we're
taking
it
to
the
next
level
with
a
new
developer
integration
that
triggers
actions
in
the
real
world.
This
means
that
when
a
fan
sends
a
super
chat
to
a
creator,
things
can
happen
in
real
life,
such
as
turning
the
lights
on
or
off
in
the
creator
studio
flying
a
drone
around
or
pushing
buttons
on
their
toys
and
gadgets.
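The integration described here boils down to watching the live chat stream and firing a real-world trigger whenever a paid Super Chat meets a dollar threshold. The sketch below shows that dispatch logic with a hypothetical message shape and trigger names; it is a toy illustration, not the actual YouTube Live Streaming API schema.

```python
# Toy sketch of a Super Chat-triggered action dispatcher.
# The message format and trigger thresholds here are illustrative,
# not the real YouTube Live Streaming API schema.

def dispatch_super_chats(messages, triggers):
    """For each Super Chat message, fire every trigger whose
    dollar threshold the purchase amount meets or exceeds."""
    fired = []
    for msg in messages:
        details = msg.get("superChatDetails")
        if not details:
            continue  # ordinary chat message, nothing to do
        amount = details["amountDollars"]
        for threshold, action in triggers:
            if amount >= threshold:
                fired.append((action, amount))
    return fired

messages = [
    {"text": "hello"},                           # regular message
    {"superChatDetails": {"amountDollars": 5}},  # $5 Super Chat
    {"superChatDetails": {"amountDollars": 500}},
]
triggers = [(1, "launch_one_balloon"), (100, "dump_the_truck")]

for action, amount in dispatch_super_chats(messages, triggers):
    print(f"${amount}: {action}")
```

In a real integration the message list would come from polling the live chat endpoint, and each action name would map to hardware control code (lights, a drone, a water-balloon truck).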
There
on
the
lawn
to
unleash
a
truckload
of
water
balloons
at
the
Slow Mo Guys,
I'm
scared,
yeah,
that's
right,
but
every
dollar
we're
gonna
take
another
balloon.
So
more
money
means
more
balloons.
Although
I
did
hear
a
guy
over
here,
go
we're
gonna
totally
nail
these
guys.
All
right,
that's
got
to
be
at
least
four
dollars
right
there,
so
yeah
each
dollar
donated
goes
to
the
cause
that
Susan
mentioned
earlier.
The
Enable
network.
Okay.
So
how
much
do
you
think
we
can
send? I
can
start
at
$1
and
go
anywhere
upwards
from
there.
That
was
amazing.
Thank
you,
everybody
for
your
help.
So
this
obviously
just
scratches
the
surface
of
what
is
possible
using
Super Chat's open APIs,
and
we
are
super
excited
to
see
what
all
of
you
will
do
with
it.
Next
so
Susan,
how
about
you
come
back
out
here
and,
let's
check
out
the
video
we
all
made.
So
that
360
living
room
demo
and
the
super
chat
demo,
those
are
just
two
examples
of
how
we
are
working
to
connect
people
around
the
globe,
together
with
video
now
I
hope
that
what
you've
seen
today
is
that
the
future
of
media
is
a
future
of
openness
and
diversity,
a
future
filled
with
conversations
and
community
and
a
future
that
works
across
all
screens
together
with
creators,
viewers
and
partners.
We
are
building
the
platform
of
that
future
Thank you, I/O.
Hi everybody,
it's
great
to
be
here
at
Google,
I/o
2017.
As
you
can
see,
we
found
some
new
ways
to
hardware-accelerate
Android
this
time
with
jetpacks,
but
seriously.
Two
billion
active
devices
is
incredible
and
that's
just
smartphones
and
tablets.
We're
also
seeing
huge
momentum
in
areas
such
as
TVs
and
cars
and
watches
and
laptops
and
beyond.
So
let
me
take
a
moment
and
give
you
a
quick
update
and
how
Android
is
doing
in
those
areas.
Android
Wear
2.0,
launched
earlier
this
year
with
a
new
update
for
Android
and
iPhone
users
and
with
new
partners
like
Emporio Armani, Movado
and
New
Balance.
We
now
enable
24
of
the
world's
top
watch
brands,
brands. Android Auto has seen
a
10x
user
growth
since
last
year,
it's
supported
by
more
than
300
car
models
and
the
Android Auto
mobile
app
and
just
this
week,
Audi and Volvo announced that
their
next-generation
nav
systems
will
be
powered
by
Android
for
a
more
seamless
connected
car
experience.
Android TV: we've
we've
partnered
with
over
a
hundred
cable
operators
and
hardware
manufacturers
around
the
world,
and
we're now
seeing
1
million
device
activations
every
two
months
and
there are
more
than
three
thousand
Android
TV
apps
in
the
Play
Store
this
year,
we're
releasing
a
brand
new
launcher
interface
and
bringing
the
Google
assistant
to
Android
TV. Android Things
previewed
late
last
year,
and
already
there
are
thousands
of
developers
in
over
60
countries
using
it
to
build
connected
devices
with
easy
access
to
the
Google
assistant,
tensorflow
and
more.
The
full
launch
is
coming
later this year.
Chromebooks
comprise
almost
60
percent
of
K
to
12
laptops
sold
in
the
US,
and
the
momentum
is
growing
globally
and
now,
with
the
added
ability
to
run
Android
apps,
you
get
to
target
laptops
too.
Now,
of
course,
platforms
are
only
as
good
as
the
apps
they
run.
The
Google
Play
ecosystem
is
more
vibrant
than
ever.
Android
users
installed
a
staggering
82
billion
apps
and
games
in
the
last
year.
That's
11
apps
for
every
person
on
the
planet,
all
right.
So
let's
come
back
to
smartphones
and
the
real
reason
I'm
here
is to talk about Android O.
Two
months
ago
we
launched
our
very
first
Developer
Preview,
so
you
could
kick
the
tires
and
some
of
the
new
APIs,
and,
of
course,
it's
very
much
a
work
in
progress,
but
you
can
expect
the
release
later
this
summer. Today
we
want
to
walk
you
through
two
themes.
that we're excited about. The first is
something
we
call
fluid
experiences. It's
pretty
incredible
what
you
can
do
in
a
mobile
phone
today
and
how
much
we
rely
on
them
as
computers
in
our
pockets,
but
there
are
still
certain
things that are
tough
to
do
in
a
small
screen.
So
we're
doing
a
couple
of
features that
we
think
will
help
with
this,
which
I'll
cover
in
just
a
moment.
The
second
theme
is
something
we
call
vitals,
and
the
concept
here
is
to
keep
vital
system
behavior
in
a
healthy
state.
So
we
can
maximize
the
user's
battery
performance
and
reliability.
So,
let's
jump
straight
in
and
walk
through
four
new
fluid
experiences.
With
live
demos
done
wirelessly.
What
could
possibly
go
wrong?
Alright,
and
these
days
we
do
a
lot
at
once
on
our
phones,
whether
it's
paying
for
groceries.
While
reading
a
text
message
you
just
received
or
looking
up
guitar
chords
while
listening
to
a
new
song
but
conventional
multi-window
techniques,
don't
translate
well
to
mobile
they're,
just
too
fiddly
to
set
up
when
you're
on
the
go.
We
think
picture-in-picture
is
the
answer
for
many
cases.
So,
let's
take
a
look.
My
kids
recently
asked
me
to
build
a
lemonade
stand,
so
I
opened
up
YouTube
and
I
started,
researching
DIY
videos
and
I
found
this
one
now,
at
the
same
time,
I
want
to
be
able
to
jot
down
the
materials.
I
need
to
build
for
this
lemonade
stand
so
to
multitask
all
I
do is press
the
home
button
and
boom
I
get
picture-in-picture.
You
can
think
of
it
as
a
kind
of
automatic
multi
window.
I can move it out of the way, I can launch Keep, I can add
some
more
materials,
so
I
know
I
need
to
get
some
wood
glue
like
so
then,
when
I'm
done
I
just
simply
swipe
it
away
like
that,
it's
brilliant
picture-in-picture
lets
you
do
more
with
your
phone.
It
works
great for video calling with Duo,
for
example,
maybe
I
need
to
check
my
calendar
while
planning
a
barbecue
with
friends,
and
there
are
lots
of
other
great
use
cases,
for
example,
picture-in-picture for Maps
navigation
or
watching
Netflix
in
the
background
and
a
lot
more
and
we're
also
excited
to
see
what
you
come
up
with.
For
this
feature,
we're
also
making
notification
interactions
more
fluid
for
users
from
the
beginning.
Android
has
really
blazed
a
trail
when
it
comes
to
its
advanced
notification
system.
In O, we're
extending
the
reach
of
notifications
with
something
we
call
notification
dots,
it's
a
new
way
for
app
developers
to
indicate
that
there's
activity
in
their
app
and
to
drive
engagement.
So,
let's
take
a
look.
You'll
notice
that
the
Instagram
app
icon
has
a
dot on it,
and
this
is
indicating
that
there's
a
notification
associated
with
the
app.
So,
if
I
pull
down
the
shade
sure
enough,
you
can
see
there's
a
notification
in
this
case
someone's
commented on a photo I'm tagged in.
What's
really
cool
is
I
can
long
press
the
app
icon,
and
we
now
show
the
notification
in
place.
One
of
the
things
I
really
like
about
the
notification
mechanism
is
that
it
works
with
zero
effort
from
the
app
developer.
We
even
extract
the
color
of
the
dot
from
your
icon.
Oh
and
you
get
to
erase the dot
by
simply
swiping
the
notification
like
that,
so
you're
always
in
control.
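The dot-color detail above can be pictured as simple palette extraction. The toy sketch below picks the most common fully opaque pixel color from an icon; the real Android implementation is more sophisticated, and the icon data here is made up.

```python
from collections import Counter

def dot_color(icon_pixels):
    """Pick a badge color by taking the most common fully-opaque
    pixel color in the app icon (a rough stand-in for the palette
    extraction Android performs)."""
    opaque = [(r, g, b) for (r, g, b, a) in icon_pixels if a == 255]
    color, _count = Counter(opaque).most_common(1)[0]
    return "#%02x%02x%02x" % color

# A tiny fake "icon": mostly magenta, a few white pixels, some transparency.
icon = ([(201, 48, 132, 255)] * 20
        + [(255, 255, 255, 255)] * 5
        + [(0, 0, 0, 0)] * 10)
print(dot_color(icon))  # → #c93084
```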
Another great feature that helps make your experience more fluid is autofill.
Now,
if
you
use
Chrome,
you're
probably
already
familiar
with
autofill
for
quickly
filling
out
a
username
and
password
or
credit
card
information
with
a
single
tap. With O,
we've
extended
autofill
to
apps,
let's
say
I'm
setting
up
a
new
phone
for
the
first
time
and
I
open,
Twitter
and
I
want
to
log
in. Now,
because
I
use
Twitter.com
all
the
time
on
Chrome
this
system
will
automatically
suggest
my
username; I
can
simply
tap
it.
Autofill
takes
the
pain
out
of
setting
up
a
new
phone
or
tablet.
Once
the
user
opts
in
autofill
will
work
for
most
applications.
We
also
provide
APIs
for
developers
to
customize
autofill
for
their
experience.
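The idea behind the flow just described (save a credential once, suggest it on the next login form) can be sketched as a tiny credential store. The class and names below are hypothetical stand-ins, not the actual Android Autofill Framework API, which works through an AutofillService.

```python
# Minimal sketch of the autofill idea: a credential store keyed by
# domain, queried when a login form appears. Names are hypothetical;
# the real framework works through Android's AutofillService.

class AutofillStore:
    def __init__(self):
        self._saved = {}

    def save(self, domain, username):
        # Remember the username the user entered for this domain.
        self._saved[domain] = username

    def suggest(self, domain):
        # Return the saved username for this domain, or None.
        return self._saved.get(domain)

store = AutofillStore()
store.save("twitter.com", "@example_user")
print(store.suggest("twitter.com"))  # → @example_user
print(store.suggest("unknown.app"))  # → None
```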
I
want
to
show
you
one
more
demo
of
how
we're
making
Android
more
fluid
by
improving
copy
and
paste.
The
feature
is
called
smart
text
selection.
So,
let's
take
a
look
in
Android,
you
typically
long
press
or
double
tap
a
word
to
select
it.
For
example,
I
can
open
gmail
I
can
start
composing
if
I
double
tap
the
word
bite
it
gets
selected
like
so. Now, we
know
from
user
studies
that
phone
numbers
are
the
most
copied
and
pasted
items.
The
second
most
common
are
named
entities
like
businesses
and
people
and
places. We're applying on-device machine learning, in this case a feed-forward neural network,
to
recognize
these
more
complicated
entities
so
watch this: I
can
double
tap
anywhere
on
the
phrase
Old Coffee House
and
all
of
it
is
selected
for
me.
A
It
even
works for
addresses,
so
if
I
double
tap
on
the
address,
all
of
it
is
selected
and
what's more, the machine-learned model
classifies
this
as
an
address
and
automatically
suggests
Maps.
So
I
can
get
directions
to
it
with
a
single
click
and,
of
course
it
works.
As
you
expect
for
phone
numbers,
you
get
the
phone
dialer
suggested
and
for
email
addresses
you
get
Gmail
suggested. All of this neural network
processing
happens
on
device
in
real
time
and
without
any
data
leaving
the
device.
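A rough way to picture the select-and-suggest behavior: find entity spans in the text, and when a tap lands inside one, select the whole span and suggest a handler app. The real feature uses an on-device feed-forward network; the toy sketch below substitutes two regex patterns, and the patterns and app names are illustrative only.

```python
import re

# Toy stand-in for smart text selection. The real Android O feature
# uses an on-device feed-forward network; here, simple regexes find
# entity spans, and a tap anywhere inside a span selects the whole
# entity and suggests an app to handle it.
PATTERNS = [
    (re.compile(r"\(\d{3}\) \d{3}-\d{4}"), "Dialer"),          # US phone
    (re.compile(r"\d+ [A-Z][a-z]+ (St|Ave|Blvd)"), "Maps"),    # address
]

def smart_select(text, tap_index):
    for pattern, app in PATTERNS:
        for m in pattern.finditer(text):
            if m.start() <= tap_index < m.end():
                return m.group(), app
    # Fall back: select the characters from the tap to the next space.
    word = re.search(r"\S*", text[tap_index:]).group()
    return word, None

text = "Meet me at 350 Main St around noon, or call (415) 555-0199."
print(smart_select(text, 15))  # tap inside the address
```

A double tap inside "350 Main St" selects the full address and suggests Maps; a tap inside the phone number selects the full number and suggests the dialer.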
It's
pretty
awesome. Now, on-device machine learning helps make your phone smarter,
and
we
want
to
help you
build
experiences
like
what
you
just
saw
so
we're
doing
two
things
to
help.
First,
I'm
excited
to
announce
that
we're
creating
a
specialized
version
of
TensorFlow, Google's open-source machine learning library,
which
we
call
TensorFlow Lite:
it's
a
library
for
apps,
designed
to
be
fast
and
small,
yet
still
enabling state-of-the-art techniques like convolutional neural networks and LSTMs.
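One standard trick behind "fast and small" on-device models is quantization: storing 32-bit float weights as 8-bit integers plus a scale and offset. The sketch below illustrates the idea only; it is not actual TensorFlow Lite code.

```python
# Toy illustration of linear quantization: float32 weights become
# 8-bit integers plus (scale, offset), cutting storage roughly 4x
# at the cost of a small round-off error. A sketch of the idea,
# not actual TensorFlow Lite code.

def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # guard against all-equal weights
    q = [round((w - lo) / scale) for w in weights]  # each fits in a byte
    return q, scale, lo

def dequantize(q, scale, lo):
    return [v * scale + lo for v in q]

weights = [-1.2, -0.3, 0.0, 0.7, 1.5]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
# Round-off error per weight is at most half a quantization step.
assert all(abs(a - b) < scale for a, b in zip(weights, restored))
print(q)  # → [0, 85, 113, 179, 255]
```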
Second,
we're
introducing
a
new
framework
in Android to hardware-accelerate neural computation. TensorFlow Lite
will
leverage
a
new
neural
network
API
to
tap
into
silicon
specific
accelerators
and
over
time
we
expect
to
see
DSPs
specifically
designed
for
neural
network
inference
and
training.
We
think
these
new
capabilities
will
help
our
next
generation
of
on
device
speech,
processing,
visual
search,
augmented
reality
and
more. TensorFlow Lite
will
soon
be
part
of
the
open
source TensorFlow
project
and
the
neural
network
API
will
be
made
available
later
in
an update to O this year.
Hi
everyone,
okay,
so
all
the
features
they've
talked
about
are
cool,
but
we
think
your
phone's
foundations
are
even
more
important
battery
life
security,
startup
time
and
stability.
After
all,
if
your
battery
dies
at
4
p.m.
none
of
the
other
features
that
Dave
talked
about
really
matter,
so
in O, we're
investing
in
what
we
call
vitals,
keeping
your
phone
secure
and
in
a
healthy
state
to
maximize
power
and
performance.
We've
invested
in
three
foundational
building
blocks
security,
enhancements,
OS,
optimizations
and
tools
to
help
developers
build
great
apps.
First
security
Android
was
built
with
security
in
mind
from
day
one
with
application sandboxing. As
Android
has
matured
we've
developed
vast
mobile
security
services.
Now
we
use
machine
learning
to
continuously
comb
apps
uploaded
to
Play, flagging
potentially
harmful
apps.
Then
we
scan
over
50
billion
apps
every
day
scanning
every
installed
app
on
every
connected
device,
and
when
we
find
a
potentially
harmful
app,
we
disable
it
or
remove
it,
and
we
found
most
Android
users.
So
here
you
can
see,
Play Protect
has
recently
scanned
all
your
apps,
no
problems
found.
That's
Google
Play
Protect. It's
available
out
of
the
box
on
every
Android
device
with
Google
Play.
Second, OS optimizations.
the
single
biggest
visible
change
in
O is boot time. On Pixel,
for
example,
you'll
find
in
most
cases
your
boot
time
is
now
twice
as
fast
and
we've
made
all
apps
faster
by
default.
We
do
this
through
extensive
changes
to
our
runtime. Now,
This
is
really
cool
stuff,
like
concurrent
compacting,
garbage
collection
and
code
locality.
But
all
you
really
need
to
know
is
that
your
apps
will
run
faster
and
smoother.
Take
Google
sheets
aggregate
performance
over
a
bunch
of
common
actions
is
now
over
two
times
as
fast
and
that's
all
from
the
OS.
There
are
no
changes
to
the
app,
but
we
found
apps
could
still
have
a
huge
impact
on
performance.
Some
apps
were
running
in
the
background
and
they
were
consuming
tons
of
system
resources,
especially
draining
battery.
So
in
O we're
adding
wise
limits
to
background
location
and
background
execution.
These
boundaries
put
sensible
limits
on
usage,
protecting
battery
life
and
freeing
up
memory.
Now, our
third
theme
is
helping
developers
build
great
apps,
and
here
I
want
to
speak
directly
to
all
the
developers
in
the
audience.
Wouldn't
it
be
cool
if
Android's
engineering
team
could
show
you
what
causes
performance
issues
today,
we've
launched
play
console
dashboards
that
analyze
every
app
and
pinpoint
six
top
issues
that
cause
battery
drain
crashes
and
slow
UI
For each issue the app has, we
show
how
many
users
are
affected
and
provide
guidance
on
the
best
way
to
fix it. Now imagine
if
developers
could
also
have
a
powerful
profiler
to
visualize,
what's
happening
inside
the
app
in
Android
studio.
We've
also
launched
new,
unified
profiling
tools
for
network,
memory
and
CPU.
So
developers
can
now
see
everything
on
a
unified
timeline
and
then
dive
into
each
profiler.
For example, on the CPU profiler you can see
every
thread
you
can
look
at
the
call
stack
and
the
time
every
call
is
taking.
You
can
visualize
where
the
CPU
is
going
and
you
can
jump
to
the
exact
line
of
code.
Okay,
so
that's
Android Vitals: how
we're
investing
in
your
phone's
foundational
security
and
performance. Later today,
you'll
see
Android's
developer
story
from
end
to
end
our
hard
work
to
help
developers
build
great
apps
at
every
stage,
writing
code
tuning
launching
and
growing.
But
there
is
one
more
thing.
One
thing
we
think
would
be
an
incredible
complement
to
the
story,
and
it
is
one
thing
our
team
has
never
done
for
developers.
So: Kotlin. Kotlin is one our developer community has already asked for;
it
makes
developers
so
much
more
productive.
It
is
fully
Android
runtime
compatible.
It
is
totally
interoperable
with
your
existing
code.
It
has
fabulous
IDE
support
and
it's
mature
and
production
ready
from
day
one.
We
are
also
announcing
our
plans
to
partner
with
JetBrains,
creating
a
foundation
for
Kotlin. I
am
so
happy.
JetBrains CEO Maxim Shafirov
is
here
today.
This
new
language
is
wonderful,
but
we
also
thought
we
should
increase
our
investment
in
our
existing
languages.
So
we're
doing
that
too.
Please
join
us
at
the
developer
keynote
later
today
to
hear
our
story
from
end
to
end.
Okay,
so,
let's
wrap
up.
There
are
tons
more
features
in
Android
O,
which
we
don't
have
time
to
go
into
today,
everything
from
redesigned
settings
to
Project Treble,
which
is
one
of
the
biggest
changes
to
the
foundations
of
Android
to
date,
downloadable
fonts
with
new
emoji
and
much
more.
Nice stuff. Hi everyone! From the beginning, Android's mission
has
been
to
bring
the
power
of
computing
to
everyone,
and
we've
seen
tremendous
growth
over
the
last
few
years
from
the
high
end
to
entry
level
devices
in
countries
like
Indonesia,
Brazil
and
India.
In
fact,
there
are
now
more
users
of
Android
in
India
than
there
are
in
the
US
and
every
minute
seven
Brazilians
come
online
for
the
first
time.
All
this
progress
is
amazing
for
those
of
us
who
have
a
smartphone,
we
intuitively
understand
the
profound
impact
that
computing
is
having
on
our
daily
lives
and
that's
why
our
team
gets
so
excited
about
how
we
can
help
bring
this
technology
to
everyone.
So
we
took
a
step
back
to
think
about
what
it
would
take
to
get
smartphones
to
more
people.
There
are
a
few
things
that
are
clear:
devices
would
need
to
be
more
affordable
with
entry-level
prices
dropping
significantly.
This
means
hardware
that
uses less powerful processors
and
far
less
memory
than
on
premium
devices,
but
the
hardware
is
only
half
the
equation.
The
software
also
has
to
be
tuned for users'
needs
around
limited
data
connectivity
and
multilingual
use.
We
learned
a
lot
from
our
past
efforts
here
with
Project Svelte
and
KitKat
and
the
original
Android
One
program,
but
we
felt
like
the
time
was
right
to
take
our
investment
to
the
next
level.
So
today,
I'm
excited
to
give
you
a
sneak
peek
into
a
new
experience.
Let's
take
one
example:
memory
is
an
expensive
component,
so
we're
making
a
number
of
optimizations
to
the
system
UI
and
the
kernel
to
allow
an
Android O
device
built
with
the
go
configuration
to
run
smoothly
with
as
little
as
512
megabytes
to
1
gigabyte
of
memory. Now, on-device
performance
is
critical,
but
data
costs
and
intermittent
connectivity
are
also
big
challenges
for
users.
One
person
put
it
best
to
me
when
she
said
mobile
data
feels
like
currency
and
she
wanted
more
control
over
the
way
she
spent
it.
So on these devices we're putting
data
management
front
and
center
in
quick
settings
and
we've
created
an
API
that
carriers
can
integrate
with
so
you
can
see
exactly
how
much
prepaid
data
you
have
left
and
even
top
up
right
there
on
the
device,
but
beyond
the
OS.
The
google
apps
are
also
getting
smarter
about
data,
for
example,
on
these
devices,
the
chrome
data
saver
feature
will
be
turned
on
by
default.
Beyond
data
management,
the
Google
Apps
will
also
make
it
easier
to
seamlessly
go
between
multiple
languages,
which
is
a
really
common
use
case
for
people
coming
online
today,
for
example,
Gboard
now
supports
over
a
hundred
and
ninety-one
languages,
including
the
recent
addition
of
22
Indian
languages
and
there's
even
a
transliteration
feature
which
allows
you
to
spell
words
phonetically
on
a
QWERTY
keyboard
to
type
in
your
native
language
script.
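Conceptually, transliteration maps phonetic Latin-script input to native-script output. The toy sketch below does this word by word from a tiny hand-built Hindi dictionary; the real Gboard feature models full phonetic input rather than using a lookup table.

```python
# Toy transliteration: map phonetic Latin-script tokens to Devanagari,
# word by word. The dictionary is a tiny hand-built sample; the real
# Gboard feature handles open-ended phonetic input for 22 Indian
# languages rather than a fixed table.

HINDI = {
    "kaise": "कैसे",      # "how"
    "ho": "हो",           # "are (you)"
    "namaste": "नमस्ते",  # greeting
}

def transliterate(phrase):
    # Unknown words pass through unchanged rather than failing.
    return " ".join(HINDI.get(word, word) for word in phrase.lower().split())

print(transliterate("kaise ho"))  # → कैसे हो
```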
Gboard
is
super
cool,
so
I
want
to
show
it
to
you.
I
grew
up
in
the
US,
so
for
any
of
my
family,
that's
watching
don't
get
too
excited
by
the
demo.
I
haven't
learned,
Hindi
yet
and
I'm
sorry,
mom.
Okay,
so
let's
say
I
want
to
send
a
quick
note
to
my
aunt.
In
India,
I
can
open
up Allo and,
using
Gboard,
I
can
type
how
it
sounds phonetically. Boom:
"kaise ho,"
which
means
how
are
you
in
Hindi
and
transliteration
automatically
gives
me
Hindi
script?
Wow, my family is apparently a tough audience. All right. Well, the Google apps are being optimized for Go.
What
has
always
propelled
Android
forward
is
the
apps
from
all
of
you
and
no
surprise.
Many
of
our
developer
partners
have
optimized
their
apps
already
so
to
better
connect
users
with
these
experiences
we'll
be
highlighting
them
in
the
Play
Store.
One
example
is
right
here
on
Play's
home
page
to
be
eligible
for
these
new
sections.
We've
published
a
set
of
best
practices
called
building
for
billions,
which
includes
recommendations we've
seen
make
a
big
difference
in
the
consumer
experience
things
such
as
designing
a
useful
offline
state,
reducing
your
APK
size
to
less
than
10
megabytes
and
using
GCM
or
job
scheduler
for
better
battery
and
memory
performance,
and
also
in
building
for
billions
you'll
find
best
practices
for
optimizing your web experience;
we've
seen
developers
build
amazing
things
with
new
technology
such
as
progressive
web
apps.
We
hope
you
can
come
to
our
developer
keynote
later
today
to
learn
a
whole
lot
more.
Ok,
that
was
a
quick
walk
through
of
some
of
the
things
coming
in
Android
Go. Starting with Android O,
all
devices,
with
one
gigabyte
of
RAM
or
less
will
get
the
Go configuration,
and
going
forward, every
Android
release
will
have
a
Go configuration. We'll
be
unveiling
much
more
later
this
year,
with
the
first
devices
shipping
in
2018.
We
look
forward
to
seeing
what
you'll
build
and
how
we
can
bring
computing
for
the
next
several
billion
users.
Thank you, Sameer. So Sundar talked about
how
technologies,
like
machine
learning
and
conversational
interfaces
make
computing
more
intuitive
by
enabling
our
computers
to
work
more
like
we
do
and
we
see
VR
and
AR
in
the
same
light
they
enable
us
to
experience
computing.
Just
as
we
experience
the
real
world
virtual
reality
can
be
transporting.
You
can
experience
not
just
what
it's
like
to
see
someplace,
but
what
it's
like
to
really
be
there.
And
augmented
reality
uses
your
surroundings
as
context
and
puts
computing
into
the
real
world.
A
lot
has
happened
since
Google
I/O
last
year
and
I'm
excited
to
share
a
bit
of
what
we've
been
up
to.
So,
let's
start
with
VR.
Last
year
we
announced
Daydream, our platform for mobile virtual reality,
and
then
in
October
to
kick-start
the
Daydream
ecosystem.
We
released
Daydream View,
a
VR
headset
made
by
Google.
That's
super
comfortable.
It's
really
easy
to
use
and
there's
tons
to
do
with
it.
You
can
play
inside
alternate
worlds
in
games
like
Virtual Virtual Reality.
A
You
can
see
any
part
of
our
world
with
apps
like
Street
View,
and
you
can
visit
other
worlds
with
apps
like
Hello
Mars,
there's
already
a
great
selection
of
Daydream-ready phones
out
there
and
we're
working
with
partners
to
get
Daydream
on
even
more
first
I'm
pleased
that
LG's
next
flagship
phone,
which
launches
later
this
year
will
support
Daydream. And there's another:
I'm
excited
to
announce
that
the
Samsung,
Galaxy,
s8
and
s8
plus
will
add.
Daydream
support
this
summer
with
a
software
update.
Samsung, of course, makes
many
of
the
most
popular
phones
in
the
world
and
we're
delighted
to
have
them
supporting
Daydream.
So
great
momentum
in
Daydream's first six months.
Let's
talk
about
what's
next. With Daydream,
we
showed
that
you
can
create
high
quality,
mobile
VR
experiences
with
just
a
smart
phone
and
a
simple
headset,
and
there
are
a
lot
of
nice
things
about
smart
phone
VR.
It's
easy!
There
aren't
a
bunch
of
cables
and
things
to
fuss
with
you
can
choose
from
a
bunch
of
great
compatible
phones
and,
of
course,
it's
portable.
You
can
throw
your
headset
in
a
bag.
We
asked:
how
can
we
take
the
best
parts
of
smartphone
VR
and
create
a
kind
of
device
with
an
even
better
experience?
Well,
I'm
excited
to
announce
that
an
entirely
new
kind
of
VR
device
is
coming
to
Daydream:
what
we
call
standalone VR headsets,
and
we're
working
with
partners
to
make
them.
So
what's
a
standalone headset?
Well,
the
idea
is
you
have
everything
you
need
for
VR
built
right
into
the
headset
itself,
there's
no
cables,
no
phone,
and
certainly
no
big
PC
and
the
whole
device
is designed for VR. WorldSense
enables
what's
known
as
positional
tracking
with
it.
Your
view
in
the
virtual
world
exactly
matches
your
movement
in
the
real
world.
It
works
by
using
a
handful
of
sensors
on
the
device
that
look
out
into
your
surroundings,
and
that
means
it
works
anywhere.
There's
no
setup,
there's
no
cameras
to
install
and
with
it
you
really
feel
like
you're
there
now,
just
as
we
did
with
Daydream-ready smartphones,
we're
taking
a
platform
approach
with
standalone
headsets
working
with
partners
to
build
some
great
devices.
These
devices
will
start
to
come
to
market
later
this
year.
So
that's
the
update
on
VR,
great
momentum
with
apps,
more Daydream-ready phones
on
the
way
and
a
new
category
of
devices
that
we
think
people
are
going
to
love.
So,
let's
turn
to
augmented
reality.
A
lot
of
us
were
introduced
to
the
idea
of
AR
last
year
with
Pokemon
go,
the
app
gave
us
a
glimpse
of
AR
and
it
showed
us
just
how
cool
it
can
be
to
have
digital
objects
show
up
in
our
world.
Well, we've been working in this space since 2013 with Tango,
a
sensing
technology
that
enables
devices
to
understand
space
more
like
we
do.
Two
years
ago
in
2015
we
released
a
Developer
Kit.
Then
last
year
we
shipped
the
first
consumer,
ready
Tango
phone
and
I'm
excited
to
announce
that
the
second-generation
Tango
phone,
the
Asus
ZenFone
AR,
will
go
on
sale
this
summer.
Now,
looking
at
the
slides,
you
may
notice
a
trend,
the
devices
are
getting
smaller
and
you
can
imagine
far
more
devices
having
this
capability
in
the
future.
It's been awesome to see what developers have done with the technology, and one thing we've seen clearly is that AR is most powerful when it's tightly coupled to the real world, and the more precisely, the better. That's why we've been working with the Google Maps team on a service that can give devices access to very precise location information indoors. It's kind of like GPS, but instead of talking to satellites to figure out where it is, your phone looks for distinct visual features in the environment and triangulates with those.
A
We call this VPS, Google's visual positioning service, and we think it's going to be incredibly useful in a whole bunch of places. For example, imagine you're at Lowe's, a home-improvement store that has basically everything, and if you've been there, you know it's really big. We've all had that moment when you're struggling to find that one weird random screwdriver thing. Imagine in the future your phone could just take you to that exact screwdriver and point it out to you on the shelf. It turns out we can do this with VPS.
A
Let me show you how, and this is working today. So here we are, walking down an aisle at Lowe's, and the phone will find these key visual feature points, which you can see there in yellow. By comparing the feature points against previously observed ones, those colorful dots in the back, the phone can figure out exactly where it is in space, down to within a few centimeters. So GPS can get you to the door, and then VPS can get you to the exact item that you're looking for.
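The geometry behind this kind of localization can be sketched in a few lines. This is only an illustration of the idea, not Google's VPS pipeline: assume the phone has recognized a few landmarks with known 2D map positions and estimated its distance to each; subtracting the circle equations linearizes the problem, and a least-squares solve recovers the phone's position.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 2D position from distances to known landmarks.

    Linearizes the circle equations (x-xi)^2 + (y-yi)^2 = di^2 by
    subtracting the first one, then solves the resulting linear
    system in a least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = anchors[0]
    # Each remaining anchor contributes one linear equation in (x, y).
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three recognized landmarks (hypothetical shelf features) at known positions.
landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
true_pos = np.array([4.0, 3.0])
dists = [np.linalg.norm(true_pos - np.array(l)) for l in landmarks]
print(trilaterate(landmarks, dists))  # close to [4. 3.]
```

In the real system the "distances" would come from matching camera feature points against a previously built visual map, but the position solve rests on the same kind of over-determined geometry.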
A
Further out, imagine what this technology could mean to people with impaired vision. For example, VPS and an audio-based interface could transform how they make their way through the world. And it combines so many things that Google is good at: mapping, computer vision, distributed computing. We think precise location will be critical for camera-based interfaces, so VPS will be one of the core capabilities of Google Lens. We're really excited about the possibilities here. So the last thing I wanted to share is something that we've been working on.
A
It brings many of these capabilities together in a really important area, and that's education. Two years ago we launched Expeditions, which is a tool for teachers to take their classes on virtual-reality field trips, and two million students have used it. Today we're excited to announce that we're adding a new capability to Expeditions: AR mode, which enables kind of the ultimate show-and-tell, right in the classroom. If we could roll the video, please. All right, who wants to see a volcano? Three, two, one.
A
Look at that lava that's coming out of that! Pretend you're an airplane and fly over the tornado. What do you see? We're learning about DNA and genes, things that we can't see, and so the most exciting thing for me with the AR technology was that I could see kids get an aha moment that I couldn't get by just telling them about it. The minute I saw it pop up on the screen, I immediately wanted to get up and walk to it.
A
You actually get to turn around and look at things from all angles, so it gave us a nice perspective. See if you can figure out what that might be, based on what you know about the respiratory system. I got to see where the alveoli branched off, and I could look inside them and see how everything works, which I never saw before. It was really, really cool.
A
We're just delighted with the response we're seeing so far, and we'll be rolling this out later in the year. So VR and AR are two different flavors of what you might call immersive computing: computing that works more like we do. We think that's a big idea, and in time we see VR and AR changing how we work and play, live and learn. All that I talked about here, these are just the first steps, but we can see where all this goes, and we're incredibly excited about it.
A
It's incredible, when you open-source a platform, to see what people can do on top of it. We are really excited about the momentum behind TensorFlow; it's already the most popular ML repository on GitHub, and we're going to push it further. We are also announcing the TensorFlow Research Cloud: we are giving away 1,000 Cloud TPUs, which is 180 petaflops of computing, to academics and researchers for free so that they can do more with it.
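Taking the announced figures at face value, the aggregate number follows from simple arithmetic: each second-generation Cloud TPU device was rated at 180 teraflops, so a pool of 1,000 of them totals roughly 180 petaflops. A quick sanity check:

```python
# Aggregate compute of the TensorFlow Research Cloud, assuming the
# published 180-teraflop rating per second-generation Cloud TPU device.
TFLOPS_PER_TPU = 180        # teraflops per Cloud TPU device
NUM_TPUS = 1_000            # devices in the research cloud

total_tflops = TFLOPS_PER_TPU * NUM_TPUS
total_pflops = total_tflops / 1_000  # 1 petaflop = 1,000 teraflops
print(total_pflops)  # 180.0 petaflops in aggregate
```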
I'm always amazed by the stories I hear from developers when I meet them.
A
My name is Abu. I am a high-school student, 17 years old. My freshman year, I remember googling "machine learning" and had no clue what it meant. That's a really cool thing about the Internet: someone's already doing it, you can just YouTube it and it's right there. The minute I really saw what machine learning can do, it kind of, like, hit something within me, this need to build things to help people. My parents are immigrants from Afghanistan.
A
And then it kind of hit me, a way where I could actually, genuinely help people. Mammograms are the cheapest imaging format there is; it's the most accessible to people all around the world. But one of the biggest problems that we see in breast cancer is misdiagnosis, so I decided I was going to build a system for early detection of breast cancer tumors that's accessible to everyone and that's more accurate. How was I going to do it? Machine learning. The biggest, most extensive resource that I've used
A
is this platform called TensorFlow. I've spent so many hours going really deep into these open-source libraries and just figuring out how it all works. Eventually, I wrote a whole system that can help radiologists make their decisions. All right, I'm by no means a wizard at machine learning; I'm completely self-taught, I'm in high school. I YouTubed and just fought my way through it. You don't know about that kid in Brazil that might have a groundbreaking idea, or that kid in Somalia.
A
Thank you for joining us, and enjoy I/O. We've been talking about machine learning in terms of how it will power new experiences and research, but it's also important that we think about how this technology can have an immediate impact on people's lives by creating opportunities for economic empowerment. 46% of US employers say they face talent shortages and have issues filling open job positions, while job seekers may be looking for openings right next door. There is a big disconnect here.
A
Just like we focused our contributions to teachers and students through Google for Education, we want to better connect employers and job seekers through a new initiative, Google for Jobs. Google for Jobs is a commitment to use our products to help people find work. It's a complex, multifaceted problem, but we've been investing a lot over the past year and we've made significant progress.
A
Last November we announced the Cloud Jobs API. Think of it as a first fully end-to-end, pre-trained vertical machine-learning model, offered through Google Cloud, which we give to employers: FedEx, Johnson & Johnson, HealthSouth, CareerBuilder, and we are expanding to many more employers. On Johnson & Johnson's career site, they found that applicants were 18 percent more likely to apply to a job, suggesting the matching is working more efficiently, and so far over four and a half million people have interacted with this API.
A
But as we started working on this, we realized the first step for many people when they start looking for a job is searching on Google, so it's like other search challenges we have worked on in the past. So we built a new feature in Search, with the goal that no matter who you are or what kind of job you are looking for,
A
you can find the job postings that are right for you. As part of this effort, we worked hard to include jobs across experience and wage levels, including jobs that have traditionally been much harder to search for and classify: think retail jobs, hospitality jobs, and so on. To do this well, we have worked with many partners: LinkedIn, Monster, Facebook, CareerBuilder, Glassdoor, and many more. So let's take a look at how it works. Let's say you come to Google and you start searching for retail jobs, and you're from Pittsburgh.
A
We understand that. You can scroll down and click into this immersive experience, and we immediately start showing the most relevant jobs for you. And you can filter: you can choose full-time, and as you can see, you can drill down easily. Say I want to look at jobs posted in the past three days; you can do that. Now you're looking at retail jobs in Pittsburgh posted within the last three days. You can also filter by job titles. It turns out employees and employers use many different terminologies.
A
For example, retail could mean a store clerk, a sales representative, a store manager. We use machine learning to cluster these automatically, so that we can bring all the relevant jobs to you. As you scroll through it, you will notice that we even show commute times; it turns out to be an important criterion for many people, and we will soon add a filter for that as well.
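Grouping job titles that name the same role can be sketched with a simple similarity measure. This is only an illustration of the clustering idea, not the Cloud Jobs API's actual model: it greedily merges titles by word-overlap (Jaccard) similarity, where a production system would use learned embeddings of titles and descriptions.

```python
def jaccard(a, b):
    """Word-overlap similarity between two job titles."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster_titles(titles, threshold=0.3):
    """Greedily assign each title to the first cluster whose
    representative is similar enough, else start a new cluster."""
    clusters = []  # list of lists; clusters[i][0] is the representative
    for t in titles:
        for c in clusters:
            if jaccard(t, c[0]) >= threshold:
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters

# Hypothetical titles an employer might post for the same retail roles.
titles = [
    "retail store clerk",
    "store clerk",
    "retail sales representative",
    "sales representative",
    "registered nurse",
]
for cluster in cluster_titles(titles):
    print(cluster)
```

With the threshold above, the two clerk titles and the two sales titles land in separate clusters, while the unrelated nursing title stays on its own; a real system would also handle synonyms that share no words at all, which is where machine learning earns its keep.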
A
And if you find something that's of interest to you, maybe this retail position, you can click on it and you end up going right to it. You can scroll to find more information if you want, and you're one click away from applying. It's a powerful tool. We are addressing jobs of every skill level and experience level, and we are committed to making these tools work for everyone; that was part of building it.
A
We literally talked to hundreds of people. So whether you're in a community college looking for a barista job, a teacher who's relocating across the country and wants teaching jobs, or someone who is looking for work in construction, the product should do a great job of bringing that information to you. We are rolling this out in the U.S. in the coming weeks, and then we are going to expand it to more countries in the future. I'm personally enthusiastic about this initiative because it addresses an important need and taps our core capabilities
A
as a company, from searching and organizing information to AI and machine learning. It's been a busy morning. You know, we've talked about this important shift from a mobile-first to an AI-first world, and we are driving it forward across all our products and platforms so that all of you can build powerful experiences for new users everywhere. It will take all of us working together to bring the benefits of technology to everyone. I believe we are on the verge of solving some of the most important problems we face.