Description
What does it take to connect a Google Home to the cloud? Surprisingly, not much! In this talk, we will create a Dialogflow app to get a Google Home device to talk to a container managed by Kubernetes/App Engine.
It's about four in the morning here, Pacific Standard Time, but we are having an amazing time with .NET Conf. Thanks so much to everybody that's tuning in, wherever you are around the world. We're broadcasting now so that it's a lot more convenient for you: you've got folks from your regions, your areas, presenting to you. We've got folks on six of the seven continents. It's amazing! It's amazing.
Great, that's why I love online conferences, you know, because the whole world is your audience, basically. So I don't know where people are, but I'm based in London. It's lunchtime in London, and I'm sacrificing my lunch so I can talk to you, because I did this conference last year. Last year I talked about ASP.NET Core containers on Kubernetes. It was a lot of fun, so I'm very happy to be here again today.
My name is Mete Atamel; I am a developer advocate at Google. This is my Twitter, and I already have a version of the slides on my Twitter, so if you want to get the slides, just follow me. The application that I'm going to show is already on GitHub; I don't know if you can see the link at the bottom of my slides, but at the end of my talk I will also provide the link to that as well.
So if you want to run the app yourself, get the code, all that kind of stuff, you can get it from GitHub. To save some bandwidth, I'm going to turn off my video and then we'll start the presentation. Let me do that... I think that worked. And let me move this around. Okay, all right. So the talk is about connecting...
Share screen... yeah, I got it now. I can turn off the video, and now everything should be good, right? Okay, cool. Now you should be able to see my screen. So anyway, what was I saying? What I was saying is that my coworker is Chris Bacon, a C# developer on the Google Cloud libraries team.
We wanted to do a talk at the beginning of the year and we were thinking of something cool to show, and then we said: okay, well, Google Home is kind of hot nowadays, why don't we just look into Google Home and see if we can program it? So that's how the idea started. Initially we thought it would be really difficult, because, you know, when you talk about a device like a Google Home, you need to talk to it; it needs to understand what you're saying.
Then, once you get the input from the user, you need to parse it, try to understand what the user is saying, get the entities out of those words, and then you need to somehow respond to that, right? Just thinking about this, our initial assumption was that it would be quite difficult to do, but we got the hello world application working really quickly, and then we were like: okay, what else can we do with this? And that's when the idea came along.
You know, we have the device and people talking to it, and we have the cloud with all the big data, the machine learning, all the APIs, and everything else that the cloud provides. So what does it take to connect the device to the cloud? Again, initially our idea was that it would take a long time, but it was really quick. We were able to relay a user's request to the cloud quite fast, and from that point on it was pure fun.
We basically said: okay, let's use some machine learning to do something interesting, and there are ways to use some big data processing to elevate the Google Home application, and in the end we have this application. So this talk is about that: we'll go through it and see how it works. But before I start my presentation, I want to first make sure that my application is working. I don't have a Google Home device here right now, but I'm going to use a simulator. So what I'm going to do is this.
First, let's start with this. One thing to mention: when you write an application for Google Home, you need to choose the phrase that triggers your application, because normally, when you talk to a Google Home device, if you ask something like "how is the weather", that will be handled by Google, right? Google has the weather data, so that will be handled by Google. But if at some point you want the control to go to your application, then there has to be some key phrase that triggers your application.
So, for example, if you write a stock application, you would say "talk to my stock application", or if it's a shopping application, you would say "talk to my shopping application", something like that. But since this is a test application, the way you test it is that you just say "talk to my test application". That's how you start.
So it seems that my application is working. We'll get back to this; I just wanted to double check that everything is working. So let's go back to the presentation. Before I get into the details, I just want to give you an overview of what this application looks like and get some terminology straight. First of all, the idea of the application was that we would talk to the Google Home application, and then that would be caught by Google Assistant.
Actually, I call all these applications Google Home applications, but they're really Google Assistant applications. Google Assistant is the thing that captures the voice on many devices: for example, if you have an Android phone, you have Google Assistant there; if you have a Google Home, you have Google Assistant there; this simulator has it too. So any device that has Google Assistant has this application enabled, basically. What happens is that the user talks to whatever device they have.
Then it goes to Google Assistant, and at that point Google Assistant decides: is this something I'm going to handle, or is this something I need to pass on to a custom application? If it's a Google kind of question, like "how is the weather" or "what's the stock price" or something like that, then it will be handled directly by Google. But if you trigger your application, then the control passes to your application. In this case I'm using something called Dialogflow, and we'll talk about Dialogflow in a moment.
Basically, if you want to extend Google Assistant, you use something called Actions on Google; Actions on Google is the framework to extend Google Assistant. And then there's something else called Dialogflow, which is a framework that wraps Actions on Google. I'm going to explain the difference between what Actions on Google is and what Dialogflow is and why we used Dialogflow, but eventually the control goes to Dialogflow, and Dialogflow, in a similar way, makes the same kind of decision.
It first tries to see if it can handle the user's request within Dialogflow itself, and if it can, it just handles it and returns the response. If not, it can also call an endpoint, and that endpoint can live anywhere, but it has to be an HTTP endpoint; that's the only requirement. So a request comes in, Dialogflow asks "can I handle this?", and if it can, it handles it. Otherwise it passes it to this HTTP endpoint.
You define that endpoint, and from then on your code handles the user's request. In our case we are running an ASP.NET Core application on Google Cloud, and this application is basically an ASP.NET Core container. We deployed it on App Engine, but you can also deploy it on something like Kubernetes Engine; it can run on Kubernetes as well. As long as it's HTTP, it doesn't really matter where it's running, and I'm going to talk about the differences between the two and why we deployed to App Engine.
Once we made that connection, we basically integrated it with Google Search to search for some images that I'm going to show. Then we integrated with the Vision API to get some intelligence out of the images using machine learning. Then we integrated with BigQuery to analyze some big data and get some intelligence back to the user. And finally, no application is complete without logging, tracing, monitoring, and debugging, right? For that we integrated with Stackdriver, and that gives us all the things that we need to maintain the application.
So, in a nutshell, this is the application, and I just want to take you through this journey of how we built it and what kind of stuff you can do with it. All right, so let's first talk about Dialogflow. What is Dialogflow? Dialogflow is an end-to-end development platform for building conversational applications.
So if you want to build an app for Google Home, or if you want to build a chatbot or something like that, where people can actually talk to it, or type to it, or text to it, then you can use Dialogflow. The great thing about Dialogflow is that it has integrations with many technologies.
It has an integration with Google Assistant, which I'm going to show, but you can also integrate it with Skype, or with Slack, or with Twitter. So there are lots of integration points; it's like the one thing you can use to integrate with many places. In this demo I'm only integrating with Assistant, but I think it's useful to know that you can integrate it with multiple sources if you want to.
All right: it works on phones, it works on home devices; it pretty much tries to work on everything it can, and it also tries to work all around the world. So it's not a US-only service, and it's not a Europe-only service; it works across the globe, and it supports multiple languages as well.
I mean, it doesn't support every single language, but there's a list of languages, I think around ten languages right now, and the list is growing as well. And as I mentioned, it's a channel for Google Assistant. You can use the Actions on Google SDK to program Google Assistant, and that works, but the Actions on Google SDK is kind of limited: basically, when you use it, it will get you what the user said, but then you need to make sense of what the user said yourself,
and then you need to program that yourself. So it's not so easy, whereas Dialogflow provides natural language processing, so you can actually extract entities out of the text, out of the expressions your user said. But it also provides a really nice UI where you can define the conversation, basically, instead of doing it all in code, and I'm going to show this shortly.
That's why we chose Dialogflow: it makes it really easy to take what the user said, extract things, and display them. I mentioned the natural language processing that it does. For example, if you say something like "book a flight from Los Angeles to Hawaii for less than $300", what Dialogflow does is, first of all, detect this, and then try to extract entities from this expression.
So in this expression, Los Angeles is a city, so it will pick out the fact that this is a city; Hawaii is a state; and three hundred dollars is an amount, a currency, basically. It will pick those out for you and give them to your application to process, and that's very useful, because you just mark what you want to be extracted and it does that job for you, so you don't have to do it yourself. And it's quite smart.
C
You
don't
actually
have
to
give
the
full
list
or
things
that
it
should
extract.
You
just
give
some
examples
and
it
learns
from
it
and
I'm
gonna
show
you
this
shortly.
All
right,
so
that's
dialog
flow.
What
I
want
to
do
is
I
want
to
show
you
the
dial
for
consumers,
so
there's
a
console
for
diode
flow.
So
this
is
where
you
would
stagger
your
project.
The
first
thing
that
you
need
to
do
is
you
need
to
create
an
agent
you
can
think
of
agent
as
kind
of
like
the
project
or
your
application
here.
C
C
Once you have the basic agent, you usually don't need to change much here. What you can do, though, is export and import agents. Once you have your agent set up the way you want, you can export it as a zip file and someone else can import it and use it. And actually, if you get the code for this application, I have the agent exported already for you.
So you can just open that file, import my agent, and you'll have all this stuff that I'm showing you, straight from the zip file. Once you have the agent, you need to tell the agent what kind of things it should listen for, and those are called intents. If you look under intents, there are a couple of things to note. First, there's something called the default welcome intent.
This is the intent that gets triggered when you say "talk to my test app" or "talk to my stock application" or something like that. When you say that, it triggers this default welcome intent and you get here. So this is the welcome intent, and you define the response here; here I'm saying "hello from Google Home" (not from the container). That's the response that I defined, and that's how we can trigger this.
This is an intent that's handled solely in Dialogflow: a request comes in, like when I say "talk to my test app", it goes to Dialogflow, and Dialogflow can handle it directly, right here, using this text. The other one is the default fallback intent. If you say something to Dialogflow and Dialogflow doesn't understand it, it falls back here, so it's kind of like the default way of handling a request. And here, as you can see, there's no training or anything like that; there are only responses.
So when you say something that it doesn't understand, it picks one of these text responses and just says something like "what was that?", "I didn't get that", "I missed that", stuff like that, but you can choose whatever you want in here. And the last thing I want to show here: remember I said "say hi to everyone"? For that I use this greeting.conference intent, so let's look at that intent.
The first thing you notice is that I have some training phrases. These training phrases are "greet everyone", "say hi to everyone", "say hi", so any of these phrases will trigger this; but not just these phrases, anything that's similar to them. If I say "say hello to everyone", it will still trigger this. That's what I like about Dialogflow: you just give examples, you don't list everything. And if you say something like this, it gets here, and the text response, again, is handled by Dialogflow.
So we got the initial Dialogflow application working quite quickly. The next thing we wanted to do was connect it to the cloud, and nowadays containers are the default way, for many people, of packaging and running applications. So I just want to talk briefly about containers on Google Cloud and the choices that we made, and then I'll show you more stuff.
Basically, what we did is create an ASP.NET Core application and deploy it to App Engine, but it can also work on Google Kubernetes Engine, and I just want to briefly cover what the differences are. When it comes to deploying your code on Google Cloud, this is kind of the spectrum that you have. On one side of the spectrum you have Compute Engine, which is basically virtual machines on Google Cloud: you can have Windows virtual machines, Windows Server virtual machines with different versions.
You can have a bunch of different Linux virtual machines, and you can also have container-optimized virtual machines, so if you want to run containers on a VM, for whatever reason, you can do that as well. On the other end of the spectrum we have Cloud Functions: these are basically serverless Node.js or Python functions that you can deploy, and you don't care where they're running; they're just automatically maintained and run by Google for you. And then in the middle we have App Engine and Kubernetes.
The easiest way to run containers is App Engine. In App Engine you basically just define your application, define a Dockerfile, and say gcloud app deploy, and it will deploy it to App Engine. Under the covers, App Engine runs your containers on a couple of VMs and will automatically scale up to 20 VMs, but all of that is abstracted away from you, so you don't have to worry about it.
And then there's Kubernetes Engine, for those people who want to run containers on Kubernetes. The choice is really up to you; you can do both. If you want more control, you probably want to go with Kubernetes Engine, but if you just want ease of use, you just want to say "okay, this is my code, just run it", then you'll probably go with App Engine. So App Engine: what does it give you?
C
Basically,
you
just
say
this
is
my
container
run
it
and
it
will
just
run
your
container
and
it
will
scale
it
for
you
automatically.
It
gives
the
dashboards
it
gives
you
version.
So,
every
time
you
deploy
your
application,
you
have
multiple
versions.
Once
you
have
the
versions
you
can
do,
traffic
splitting
and
it
has
auto
scaling.
So
it
gives
you
the
default
things
that
you
want
with
just
a
single
command.
You just say gcloud app deploy, and with that you get all these features. If you want more control, as I mentioned, you can use Kubernetes. I think in the previous talk we talked about Kubernetes, so I'm not going to cover it too much, but it's basically one of the most popular open-source container management platforms, and there's something called Google Kubernetes Engine, which you can think of as Kubernetes as a service: it's Kubernetes maintained by Google for you, so you can get a cluster with a single command.
You can see now it's telling us that it's running on App Engine, so we've kind of made the connection from our application, from Google Home and Dialogflow, to the cloud. Now I want to show you briefly what you need to make this connection, and then we'll move on to more interesting things with machine learning and big data. So first, let's go to the Dialogflow console again.
One thing you'll notice in the Dialogflow console is that in one of the intents I have something called platform.describe. This is the intent that handles it when I say "what environment are you running in?" or "where are you running?", things like that. When you say that, it triggers this intent, and, as you can see, there's no response here; there's no text response. What's happening instead is that there's this checkbox here that says "enable webhook call for this intent". That tells Dialogflow: don't try to handle this yourself.
Just call this webhook that I want you to call. And where is this webhook? It's defined under Fulfillment: in Fulfillment there's a URL that you have to enter, and it has to be HTTPS. So this is basically my App Engine endpoint that I already deployed, and that's going to handle the webhook calls from Dialogflow. By the way, I should also mention there's an inline editor here, where you can define a function in Node.js and deploy it to Google Cloud using something called Firebase.
So you can basically define the function in here and deploy it right from here, and that's the easiest way to define webhooks. But we chose not to do this, because we're going to have lots of intents with a lot of logic in them, and trying to do all of that in here doesn't make sense. That's why we wanted to do it separately, ourselves. Also, to be honest, who wants to write Node.js
when you have C#, right? I know some people are going to @ me for that, but you want to write in C#, at least I do. So that's why we didn't bother with this one. All right, so that's how you define the webhook, and at this point the request gets into our code. Let me just show you some of our code (this code is on GitHub, by the way), but let me just show you the flow.
So what happens: this is an ASP.NET Core web app and we have a conversation controller. This conversation controller is the thing that handles the calls from Dialogflow, okay? The request basically comes in here, to this conversation route, we get the HTTP request right here, and this calls DialogflowApp.HandleRequest. So let's look at that DialogflowApp.
Under the Dialogflow folder there's a DialogflowApp, and if you look at HandleRequest, it gets the HTTP request, and what it tries to do is extract a WebhookRequest from the body of the HTTP request. What is this WebhookRequest class? It's basically a Dialogflow thing: if you look at the top of the file, there's a Google.Cloud.Dialogflow.V2 NuGet package that we are using, and once I have that package I can use it to extract the WebhookRequest.
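For illustration, here is a minimal sketch of what that controller and parsing step can look like with the Google.Cloud.Dialogflow.V2 package. The route, class name, and the echoed response are placeholders for this sketch, not necessarily what the actual repository does:

```csharp
using System.IO;
using System.Threading.Tasks;
using Google.Cloud.Dialogflow.V2;
using Google.Protobuf;
using Microsoft.AspNetCore.Mvc;

[Route("conversation")]
public class ConversationController : Controller
{
    // Dialogflow's JSON can contain fields the generated protobuf classes
    // don't know about, so unknown fields are ignored while parsing.
    private static readonly JsonParser jsonParser =
        new JsonParser(JsonParser.Settings.Default.WithIgnoreUnknownFields(true));

    [HttpPost]
    public async Task<ContentResult> PostAsync()
    {
        // Read the raw request body and turn it into a typed WebhookRequest.
        string body;
        using (var reader = new StreamReader(Request.Body))
        {
            body = await reader.ReadToEndAsync();
        }
        WebhookRequest request = jsonParser.Parse<WebhookRequest>(body);

        // In the real app this is where the request is handed to the rest of
        // the pipeline (session lookup, intent matching and so on). Here we
        // just echo the matched intent name back as a placeholder.
        var response = new WebhookResponse
        {
            FulfillmentText = $"You triggered {request.QueryResult.Intent.DisplayName}."
        };

        // Dialogflow expects the protobuf JSON form of WebhookResponse back.
        return Content(response.ToString(), "application/json");
    }
}
```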
So, basically, I get the contents of the Dialogflow request from the body of the HTTP request. Once I have that, I have things like the session (every conversation has a session in Dialogflow), and I can see the intent and the intent name and things like that, so I have the basics and the context of the call from this webhook request. What's happening here is that I have this session ID from Dialogflow, and what I need to do is decide whether this is a brand new conversation,
meaning someone just started talking to me right now, or whether it's an existing conversation. This GetOrCreateConversation method figures out whether to create a new conversation or get an existing one, and once we have the conversation, either way, we call its HandleAsync method. So we are passing the request down the chain.
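A simple way to do that kind of session bookkeeping, assuming an in-memory dictionary is enough for a demo (the real project may well do it differently), is sketched below; only the lookup part is shown here:

```csharp
using System.Collections.Concurrent;
using Google.Cloud.Dialogflow.V2;

public class Conversation
{
    // Per-conversation state (selected image, current context, etc.) lives here.
    public string SessionId { get; }
    public Conversation(string sessionId) => SessionId = sessionId;
}

public static class ConversationStore
{
    private static readonly ConcurrentDictionary<string, Conversation> Conversations =
        new ConcurrentDictionary<string, Conversation>();

    // Dialogflow sends the same Session value for every turn of one conversation,
    // so it works as a key: reuse an existing Conversation or create a new one.
    public static Conversation GetOrCreateConversation(WebhookRequest request) =>
        Conversations.GetOrAdd(request.Session, id => new Conversation(id));
}
```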
If you look at the conversation, it gets the webhook request and takes the intent name from the request, and now we want to match the intent that we defined in Dialogflow to a handler on the server, right? This FindHandler method does that: it looks at the intent name and finds a handler for it. How it does it, I'm not going to go through; the code is there
if you want to take a look at it. But in a nutshell: all the intents are under the Intents folder, and if you look at the platform describe handler, you'll notice that at the top there is an Intent attribute, and this Intent attribute has the same intent name as the one in Dialogflow. So by having this attribute with this value, we are basically matching the Dialogflow intent to a handler on the server.
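As a sketch of one way to implement that matching (the attribute, base class, and reflection-based registry below are illustrative, not copied from the repo), a lookup table can be built from every class tagged with the intent name:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Threading.Tasks;
using Google.Cloud.Dialogflow.V2;

// Marks a handler class with the Dialogflow intent name it serves.
[AttributeUsage(AttributeTargets.Class)]
public sealed class IntentAttribute : Attribute
{
    public string Name { get; }
    public IntentAttribute(string name) => Name = name;
}

// Base class for intent handlers; concrete handlers override HandleAsync.
public class BaseHandler
{
    public virtual Task<WebhookResponse> HandleAsync(WebhookRequest request) =>
        Task.FromResult(new WebhookResponse { FulfillmentText = "Sorry, I can't handle that yet." });
}

public static class HandlerRegistry
{
    // Intent display name -> handler instance, built once via reflection
    // over every class carrying [Intent(...)].
    private static readonly Dictionary<string, BaseHandler> Handlers =
        Assembly.GetExecutingAssembly().GetTypes()
            .Where(t => typeof(BaseHandler).IsAssignableFrom(t)
                        && t.GetCustomAttribute<IntentAttribute>() != null)
            .ToDictionary(
                t => t.GetCustomAttribute<IntentAttribute>().Name,
                t => (BaseHandler)Activator.CreateInstance(t));

    // The intent name configured in the Dialogflow console arrives in QueryResult.
    public static BaseHandler FindHandler(WebhookRequest request) =>
        Handlers.TryGetValue(request.QueryResult.Intent.DisplayName, out var handler)
            ? handler
            : new BaseHandler();
}
```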
That's how it happens. So the request gets here, and what we are doing in this handler is calling Platform.InstanceAsync, which is a Google Cloud API call: Google is basically figuring out where we are running, and at this point it will figure out that we are running on App Engine. Then, once we have the detailed description of where we're running, we do two things. We have a DialogflowApp.Show call, which displays on the web page whatever we pass to it.
We pass in the text we get and display it on the page, and we also return a webhook response where we set the fulfillment text. This fulfillment text is basically telling Dialogflow: just say whatever I told you to say. So this spoken description here is what I'm telling it to say. Okay, so that's the whole chain: Dialogflow calls an HTTP endpoint, from the HTTP endpoint we go to the conversation, from the conversation we go to the intent handler, and the intent handler handles the request.
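Put together, a handler along those lines might look like the following sketch. Platform.InstanceAsync comes from the Google.Api.Gax package; the exact wording of the response and the omitted web-page call are assumptions for this sketch:

```csharp
using System.Threading.Tasks;
using Google.Api.Gax;
using Google.Cloud.Dialogflow.V2;

[Intent("platform.describe")]
public class PlatformDescribeHandler : BaseHandler
{
    public override async Task<WebhookResponse> HandleAsync(WebhookRequest request)
    {
        // Ask the environment where this process is running
        // (App Engine, GKE, GCE, or unknown).
        Platform platform = await Platform.InstanceAsync();

        string description = platform.Type == PlatformType.Gae
            ? $"I am running on App Engine, project {platform.GaeDetails.ProjectId}, " +
              $"version {platform.GaeDetails.VersionId}."
            : $"I am running on {platform.Type}.";

        // The demo also pushes the same text to its web front end at this point.

        // FulfillmentText is what Dialogflow speaks back to the user.
        return new WebhookResponse { FulfillmentText = description };
    }
}
```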
Now, at this stage we have our Dialogflow app connected to Google Cloud, running on App Engine, so things can get more interesting, and this is the part of the talk that I have the most fun with. At this stage we said: okay, let's just see what we can use on Google Cloud to make this more interesting, and the first thing that we looked at was the machine learning APIs. So, you know, on Google Cloud, and on other clouds as well,
there are these APIs for consuming machine learning. By consuming machine learning, I mean this: usually when you want to do machine learning, you need to have data, you need to determine what you want to do with the data, and then you need to train on that data using machine learning.
Then, once you have a trained model, you need to expose that model using an API, and once you have that API, you can consume the intelligence that you got from machine learning, right? That's a lot of work. But you can also rely on what are called machine learning APIs: these are APIs that expose a model that's already been trained for you. For example, there's something called Cloud Speech-to-Text; all it does is take voice and turn it into text using machine learning.
It uses a model that Google already built to do that. It's the same thing for Text-to-Speech, so you can convert text to human-sounding speech. There's the Video Intelligence API, which lets you extract intelligence from videos. Natural Language is also really good: you can pass in some text in English and it can detect whether the text is positive, negative, or neutral, things like that about the text. And my favorite is the Vision API.
With the Vision API, you can pass in an image and it will try to extract information about that image. Just to show you an example, there's a demo that you can go and try yourself, called Vision Explorer, and what we can do in Vision Explorer is pass in some images and see what we get back.
For example, we pass in an image of a cat, and what you get back is basically something like this: it's just JSON with some descriptions and scores. But if you look at it in a graphical way, you can see that we get some labels. We passed this image to the Vision API and what we got back is some labels, and the Vision API is telling us that it's a cat (it's 99% sure), it's an animal (96%), and it also knows that it's a British Shorthair cat (93%).
So it gives you quite accurate information about the image. It gives you the colors in the image, the dominant colors, and it also tells you whether this image is an adult image or a medical image or a violent image, stuff like that. If you pass in an image with text, you can extract the text from the image as well. For example, in this one we have a traffic sign, and Vision can pull the text off it.
But for me the interesting part is that if you turn this on, it can detect people's faces, and it can also detect people's expressions. For person 1, which is the person here, the Vision API tells us that the person is very neutral, there's not much expression, which seems to be true; and person 2, which is this person, is joyful, because she's smiling. So you can get this kind of information from the Vision API.
So we wanted to use the Vision API, and instead of me talking about what we did, let me just show you, and then I can explain the details afterwards. Let's go back to my simulator, let me move this around, and I will say: "I want to use Vision API."
What happened right now is that we used Google Custom Search to search for images of London and we got some images back. We take one, and now I'm going to use the Vision API to describe the image, get the labels out of it, do landmark detection to see if there's a landmark, and also do a safe search check to see whether this image is safe or not. So let's see if it works: "Can you describe the image?"
So it's telling us about the picture, right? So yeah, that's what we're using here in our application. Let me show you quickly how this looks in the code. Well, before the code, let's look at the Dialogflow console first; we can close this. In the Dialogflow console, let's start with the intents: we have a number of vision-related intents.
The first one is vision.intro. When you say "I want to use Vision API", it triggers this, and all it does is set what are called an input context and an output context. An input context on an intent means that the intent will only be triggered when you already have that context; in this case there's no input context, so this intent can be triggered no matter what. The output context means that when this intent gets called, at the end of it, it sets this vision context.
So when you say "I want to use Vision API", it basically sets the context to vision. All right, that's all it does, and this is important, because if you look at the other vision intents, the first one, vision.search, has the vision context as its input. So vision.search will only be triggered if we are in the vision context. That's how you can control what gets triggered when, and then it also has some output contexts, like search and vision.
But one cool thing here is the training phrases in vision.search. I say "let's see some dog pictures", and, as you can see, I didn't say "dogs" in my demo, I said "London". What's happening is that we provide these expressions and we mark "dog" as a special entity that we want Dialogflow to pick out for us, and we call this entity the search term. What happens is that Dialogflow will,
when you say "show me images of London", pick out "London" and insert it as the search term in the request, and we can pick up that search term and use it later, right? So this is the entity detection of Dialogflow that we are getting for free, which is nice. Let me show you the other intents as well, and then we'll look at the code. So that's the image search, vision.search.
Then vision.select, the intent where we select the image: that's not very interesting, it's basically selecting one of the images. And then vision.describe is the one that actually makes the call to the Vision API. Here, as we can see, the input context is vision.select, so this only gets called when we have an image that we selected; otherwise this intent doesn't get called, which makes sense. These are the phrases that trigger this intent, such as "describe this image", and the response is a webhook; there's no text response.
Our code will handle this, and it's the same for the other ones as well: the vision landmarks intent, for example, will also be triggered and will call our code as well. So now we can actually look at our code. Let's go to Visual Studio Code; in here, under Intents, there's a Vision folder, and let's look at the search one first.
If you look at the search handler, eventually the webhook request ends up in this method, and what we do is pick up the search term. This is the search term that was picked up by Dialogflow and passed to us, right? So we pick that up, and then we create a search client. This is the Google Custom Search API that we are using.
We search for images, we get some images back, we display an image in our front end, and then we send a response to Dialogflow. This is setting what we want Dialogflow to say, and it will say "found some pictures of London, now select a picture", right? So that's how the search works; it's quite easy.
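For reference, a search step like that could look roughly like the sketch below. This assumes the Google.Apis.Customsearch.v1 NuGet package; the exact client surface varies a bit between package versions, and the API key and search engine ID are placeholders:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Google.Apis.Customsearch.v1;
using Google.Apis.Services;

public static class ImageSearch
{
    // apiKey and searchEngineId (cx) come from the Custom Search setup;
    // both values passed in here are placeholders.
    public static async Task<string[]> SearchImagesAsync(
        string searchTerm, string apiKey, string searchEngineId)
    {
        var service = new CustomsearchService(new BaseClientService.Initializer
        {
            ApiKey = apiKey
        });

        // Ask for image results only.
        var listRequest = service.Cse.List(searchTerm);
        listRequest.Cx = searchEngineId;
        listRequest.SearchType = CseResource.ListRequest.SearchTypeEnum.Image;

        var results = await listRequest.ExecuteAsync();
        return results.Items?.Select(item => item.Link).ToArray() ?? new string[0];
    }
}
```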
Then vision.select: basically, we don't have to look at the details, but when you say "select the first picture", that gets passed in as an index from Dialogflow, and we use the index to select the picture. And then the machine learning happens in describe. In describe, the first thing you notice is that we are using Google.Cloud.Vision.V1; that's the NuGet package to talk to the Vision API. What we do is get the request again and create a vision client.
This is the actual class we use to call the Vision API, and we just make a single call, in this case DetectLabelsAsync. That says: given this image, which you basically just point at the image URL, call label detection on it. That gives us some labels, and then we basically take those labels and say "this picture is labeled such-and-such", which is what we just heard in the demo. So that's it: it's just one API call to the Vision API.
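In code, those calls look roughly like this minimal sketch against the Google.Cloud.Vision.V1 package (the handler plumbing and the exact response wording are omitted or assumed):

```csharp
using System.Linq;
using System.Threading.Tasks;
using Google.Cloud.Vision.V1;

public static class ImageDescriber
{
    public static async Task<string> DescribeAsync(string imageUrl)
    {
        var visionClient = ImageAnnotatorClient.Create();
        var image = Image.FromUri(imageUrl);

        // One call per feature: labels, landmarks, and safe-search verdicts.
        var labels = await visionClient.DetectLabelsAsync(image);
        var landmarks = await visionClient.DetectLandmarksAsync(image);
        var safeSearch = await visionClient.DetectSafeSearchAsync(image);

        var labelText = string.Join(", ",
            labels.Select(l => $"{l.Description} ({l.Score:P0})"));
        var landmarkText = landmarks.FirstOrDefault()?.Description ?? "no landmark";

        return $"This picture is labeled: {labelText}. " +
               $"Landmark: {landmarkText}. Adult content: {safeSearch.Adult}.";
    }
}
```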
It's the same thing with landmark detection: if you look at landmark detection, again it's the vision client and DetectLandmarks, almost the same kind of code. And safe search detection is also the same: you create a client and call DetectSafeSearch, and that's it. So it's cool that we can do machine learning with a single call; well, we're not really doing the machine learning ourselves, that's what the APIs are for, but yeah, that's what we use. So that was one thing, and the second thing that we wanted to use was BigQuery.
Let me go back to my presentation briefly. So that was the Vision API; BigQuery is Google's massively parallel processing engine, basically. The idea of BigQuery is that you ingest your data, and here I mean terabytes of data; BigQuery works better and better with more and more data.
So if you have that sort of data, you can ingest it, because BigQuery has a lot of storage behind it and a lot of compute behind it, and the idea is that you run SQL queries against this big data and they run really fast, because BigQuery is really optimized to take the query, split it into small queries, and run them as fast as possible. And it's fully managed.
So you don't have to worry about clusters, machines, or anything like that. And the nice thing about BigQuery is that it comes with public datasets. If you go to the BigQuery public data page on Google Cloud, there are all these public datasets available in BigQuery that you can use to try it out, or even use in your application if you want: things like the GitHub data, for example. You can search all GitHub commits; everything that's on GitHub is ingested into BigQuery.
There's someone who actually analyzed the GitHub data using BigQuery and figured out which companies are the top contributors to open source, and he has a blog post about it where he explains what you can do. I won't go into the details, but you can go to Google Cloud, go to BigQuery, go to the query editor, and I just copied and pasted his SQL statement; I have one here, "github top contributors". Let's just move this around.
Yeah, in BigQuery there are two public datasets that we were interested in. One was Hacker News: we want to search everything that happened on Hacker News. The other dataset was global temperatures: there's a dataset that keeps track of all the global temperatures in all countries since something like 1910. So let's try Hacker News first: what was the top Hacker News story on May 1st, 2018?
I don't know where that is, but that looks like a really high number; 48 degrees is probably around 120 Fahrenheit, I guess, I don't know, but that's a lot. And you can ask for the coldest temperature as well, in pretty much any country.
The way we did this, again just to show you briefly: if we go to the Dialogflow console intents, bigquery.intro sets the bigquery context, just like before, and then, once we have the context, let's look at the Hacker News intent. The Hacker News intent gets called when we are in the bigquery context, and the training phrases are "what was on Hacker News yesterday", "what was the top Hacker News yesterday".
We are just picking up the date, and here I'm giving one example with a date, but you can also say "yesterday", and as long as you mark these and tell Dialogflow to pick this thing up as a date, it will figure out what "yesterday" is and give you a proper date. That's the cool thing about it. So we basically pick up the date and pass it to the BigQuery code that I'm going to show you. It's the same kind of thing with the temperatures.
In temperatures we have more things to look at: we are looking for the hottest temperature in France in 2015, so we are picking up hottest or coldest (highest or lowest), we are picking up the country, France, and we are picking up the year. So the entities are more complicated here, but again, Dialogflow is doing the work for us. And by the way, if you say "what was the highest temperature in 2015" and you don't say the country, it will prompt you and ask which country, which is pretty cool as well.
That's because these are marked as required; since they're marked as required, Dialogflow will make sure that the user provides them, so I don't have to worry about that myself. All right, and just to look at the code again: we go to the BigQuery folder and look at Hacker News first. The first thing we do is use the BigQuery NuGet package that we have, and then it gets into HandleAsync. From the webhook request that Dialogflow gives us, we pick up the date.
That's the only thing we're interested in for Hacker News. Then we create a BigQuery client that we'll use to talk to BigQuery, we specify the table, the public dataset that we want to get information from, we define a SQL statement, and we define the parameter, in this case the date; that's the only thing we pass to the SQL statement. Then we show the query on our web page, and we start a clock
so we can time how long it takes. Then we execute the query, we get the results, and at the end we show the query to people in the browser and we also send a response: the fulfillment text basically gives the stats about the query, so it scanned this many megabytes in this amount of time, stuff like that.
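As a sketch of that flow with the Google.Cloud.BigQuery.V2 package (the SQL, the table choice, and the response wording here are illustrative assumptions, not necessarily the exact query from the talk), a parameterized query against the public Hacker News dataset could look like this:

```csharp
using System;
using System.Threading.Tasks;
using Google.Cloud.BigQuery.V2;

public static class HackerNewsQuery
{
    public static async Task<string> TopStoryAsync(string projectId, DateTime date)
    {
        // The query runs in your own project; the data lives in the public
        // bigquery-public-data.hacker_news dataset.
        var client = await BigQueryClient.CreateAsync(projectId);

        var sql = @"SELECT title, score
                    FROM `bigquery-public-data.hacker_news.full`
                    WHERE type = 'story' AND DATE(timestamp) = @date
                    ORDER BY score DESC
                    LIMIT 1";

        var parameters = new[]
        {
            new BigQueryParameter("date", BigQueryDbType.Date, date)
        };

        var results = await client.ExecuteQueryAsync(sql, parameters);
        foreach (var row in results)
        {
            return $"The top Hacker News story was '{row["title"]}' with {row["score"]} points.";
        }
        return "I couldn't find any stories for that date.";
    }
}
```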
The global temperatures one is pretty much the same; the only thing that's really different is that there are more things to extract from the request, so that logic is here. I'm looking at all the things that Dialogflow should provide to me and doing some error checking, so if there is something that Dialogflow is not giving me, I'm going to say something. The rest is pretty much the same, so I won't show you the rest of the code. So that's how we got BigQuery integrated.
The last thing that I want to show you: I want to wrap up in five minutes, but I want to make sure I show this to you, because it's kind of important. You know, this is all cool, we can integrate with the cloud, we have machine learning, we have big data, but all of this means nothing if you cannot maintain your application.
For that, we thought it was really important to use something like Stackdriver to maintain and monitor our application. Stackdriver is basically Google Cloud's monitoring, logging, debugging, error reporting, and tracing tool. Logging is like a central place where all the logs can go. Error reporting: anything that your application throws that's not caught will be reported there, and you get stats about the errors. Tracing is HTTP tracing.
All the calls into your application are traced, and you can see the stats about them as well. And finally, the thing that I want to show you is debugging: you can actually point at your code on GitHub, get it loaded in your browser, put a breakpoint, and get a snapshot of a live application running in the cloud. Okay, so in the last five minutes let me just show you quickly what we have here. But before that: to enable Stackdriver, all I have to do is this in my application.
If I go to, not Startup, but Program.cs: when I create my application, I just say UseGoogleDiagnostics, passing my project ID, a service name, and the version of my application, and that's it. This will enable Stackdriver for me, so with this I have Stackdriver enabled.
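For reference, that one call looks roughly like this in an ASP.NET Core 2.x Program.cs, using the Google.Cloud.Diagnostics.AspNetCore package; the project ID, service name, and version strings are placeholders:

```csharp
using Google.Cloud.Diagnostics.AspNetCore;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            // One call wires up Stackdriver logging, tracing and error
            // reporting; all three values below are placeholders.
            .UseGoogleDiagnostics("my-gcp-project", "dialogflow-app", "1.0")
            .UseStartup<Startup>()
            .Build()
            .Run();
}

public class Startup
{
    // The app's usual Startup configuration goes here.
    public void Configure(IApplicationBuilder app) { }
}
```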
Also, in your Dockerfile, if you want to use the debugger, you need to start the app in a special way, but it's nothing special: your dotnet call just gets wrapped with a call that starts the debugger, and that's it. Once you do that, what you get is this: I come here and go to Stackdriver in the Google Cloud console. Let's look at logging first. Logging is not exciting, but it's useful: it's a central place where I can see, for example, my App Engine application, version 6 of my application, and all my logs, and I can search and so on, which is really useful. The other thing is the tracing.
All the HTTP calls in my application are traced. I have a long poll endpoint that my web page calls once in a while, so I can see the long poll calls, and I have the conversation endpoint that Dialogflow is calling, so I can click on conversation and see the calls that are being made and when they were made, and if I click on an actual call, I can also see what's being called underneath.
If you look at error reporting, my application has some timeout issues that I didn't fix, on purpose, so I can show them to you. As you can see, there are timeout exceptions that have been happening; I can see how often they happened, I can link them to an issue if there's an issue for them, and I can get someone notified if I want to. So it's very useful as well. But the thing that I want to show before I finish my talk is debugging, so let me show you what you can do.
What you can do in debugging is this: first, you point to your source code. You can go here, add your source, and point it at GitHub; it will load your code here. It won't load it into Google Cloud, so we won't see your code; it only loads it in the browser. Then you can see your code, just like here, and you can put breakpoints. For example, this vision search handler is the handler that gets called when we search for an image, right? So I can come here and say:
yeah, I want to look at the search term, for example; maybe there's something wrong with the search term. So I set a breakpoint here, and now what's happening is that Stackdriver is waiting for this to be hit. When it gets hit, it will take a snapshot of the call and all the variables, but it will continue running, so you won't stop anything. So, just to see that this is working, let's go back to our application and say "I want to use Vision API".
Go back here and, as you can see (I mean, you couldn't really see it happen), what happened is that the snapshot was actually captured; it was already captured. I can already see that the search term is London, right, and my application is running in production. To me this is really valuable, because anyone who has done any kind of production debugging knows how hard it is, and how hard it is to instrument your code and application. So having this is very, very valuable for me, and I think that's all I want to say.
Let me just double check... yeah, that's Stackdriver, and yeah, that's all I have to say. If you want the slides, this is my Twitter; I already posted these slides a couple of days ago. And if you want the code, this is the link to GitHub; there's a readme there that explains how to set it up yourself. It's not that complicated, so you can play with it if you want. And yeah, I guess in the last five to ten minutes we can take some questions.