From YouTube: gRPC June Meetup: Demo: Configuring gRPC probes with the latest versions of Kubernetes by Sergey K.
Description
Kubernetes introduces native support for gRPC probes for health and readiness checks. Many Kubernetes users who run gRPC services on Kubernetes will be able to migrate to these probes from the ones they currently use. The new built-in probes have limitations that will be described in this talk, which will also cover potential future improvements. As the Kubernetes maintainers are looking at promoting gRPC probe support to GA soon, this session is also a way to gather feedback from practitioners.
A: I wanted to ask a question — if you can turn on your camera just for a second, or just nod or something: how many of you know Kubernetes very well? I want to understand how deep to go into Kubernetes.

A: Oh, I see a hand — okay, so just a little bit, I see. I will try to cover as much as needed for this presentation, but feel free to interrupt me and ask questions.

A: Ask clarifying questions — I'm happy to stop at any moment and talk about what you see and what you may not understand, or what I may have just skipped over because I went too fast. So yeah, let's make it quite informal; I hope it will be educational as well. And I really need feedback from the community — that's why I came here. In the last slides I will talk about exactly what kind of feedback I'm looking for, so bear with me, and let's get started. I will switch to the next slide.
A: Kubernetes is used to run applications, and for those applications we need to know whether they are healthy and happy, and which stage of their lifecycle they are currently in. To answer all those questions, we have different kinds of probes that we run against applications.

A: A probe is a kind of callback that Kubernetes calls into to ask questions. The liveness probe asks an application: are you still there? Are you running? Are you healthy? If it's not healthy, then after a certain number of attempts we will restart the application — depending on the configuration, we may shut it down completely or restart it. It all depends on how you configure the application.
A: The readiness probe lets an application say "please wait a second": after a certain configured number of failed probes, Kubernetes will stop sending traffic to the application, and when you want to receive traffic again, you just start returning an OK status from the probe and Kubernetes will keep pushing traffic to that container. Finally, the startup probe that we support in Kubernetes is designed to make startup easier. Let's say your application needs to load a lot of data, and it's not going to be ready or live before this data is loaded from a cache or a database.
A: You can implement a startup probe, and Kubernetes will keep asking "are you started yet? are you started yet?" Once you return "started", it will switch you into a different mode and start asking the readiness and liveness probes.
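The three probe roles just described can be combined on a single container. A minimal sketch (the image name, port, paths, and thresholds here are placeholders, not from the talk):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo              # hypothetical pod name
spec:
  containers:
  - name: app
    image: example.com/app:1.0  # placeholder image
    # Startup probe: runs first; liveness/readiness are held back
    # until it succeeds, giving slow-starting apps time to load data.
    startupProbe:
      httpGet: { path: /healthz, port: 8080 }
      periodSeconds: 1
      failureThreshold: 60      # allow up to ~60s of startup
    # Liveness probe: repeated failures restart the container.
    livenessProbe:
      httpGet: { path: /healthz, port: 8080 }
      periodSeconds: 10
    # Readiness probe: failures stop traffic, successes resume it.
    readinessProbe:
      httpGet: { path: /ready, port: 8080 }
      periodSeconds: 10
```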
A: So those are the types of probes that Kubernetes has. I think it's pretty clear what their purpose is, and without these probes it's really hard for Kubernetes to know how to manage applications and make sure they're healthy. So what kinds of probes does Kubernetes support today? Today we support TCP probes, HTTP probes, and exec probes. A TCP probe basically tries to open the port: if it can be opened, it's okay; if it can't be opened, the probe failed.
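In a pod spec, the three supported mechanisms look roughly like this (a sketch; a real container would pick whichever handler fits each probe role, and the ports, path, and command are placeholders):

```yaml
# TCP probe: succeeds if the port accepts a connection.
readinessProbe:
  tcpSocket:
    port: 50051
# HTTP probe: succeeds on a 2xx/3xx response.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
# Exec probe: runs a binary in the container; exit code 0 = success.
startupProbe:
  exec:
    command: ["/bin/started-check"]   # placeholder executable
```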
A: So that's all Kubernetes supports today, and you may notice that there is no gRPC in this list. So if you're running a gRPC application — or you're running some other type of application, but let's consider gRPC today — what are your choices?

A: Today, if you run a gRPC application, you have very few options. First, you can do a TCP check. Let's say you have a gRPC endpoint: all you can check against that endpoint is that the port is open. It's quite a weak check, so you cannot test much, and it will probably report okay even when the application is — I mean, not completely down, but barely surviving. So a TCP check is not very helpful in most cases.
A: Another way people work around this limitation of Kubernetes is, besides the gRPC endpoint, to open an HTTP port for health checks. So you have a gRPC application: it only works with gRPC, you're happy with gRPC, but you still have to open an HTTP port just to allow Kubernetes to ask about your status. Then there is another thing you can do, which is a very heavy solution.
A: In Kubernetes we have the concept of sidecars. A sidecar is a container that you deploy alongside your main container; it runs in the same address space and shares things like networking. What you can do is run a sidecar container that exposes an HTTP endpoint and proxies HTTP status questions into gRPC status questions.
A: As I said, it's not an ideal solution, because it carries at least 35 megabytes of additional things, it runs an HTTP service in your pod, and it's quite heavy. And lastly, there is a very popular solution called exec probes. An exec probe, as I said, is a way for Kubernetes to call any executable inside your container, passing it any parameters; this executable will do whatever checks it wants and return a status. If it returns zero, everything is fine; if it returns non-zero, your health check failed.

A: This is also quite a popular solution — it's used in many, many services — and the problem with it is that usability and security still suffer. From a usability perspective, you still need to package something extra with your application.
A: You also need to vet the security of this extra executable. Let me show how it works. So, typically — I mean, not typically; this is an example of something that works, but not ideally. I took an etcd container — etcd is a database — and then I created another container that downloads the grpc-health-probe executable from GitHub (you can see this github.com grpc-health-probe reference) and puts it into a volume called "probe" that I mounted into the pod.
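That download-into-a-shared-volume pattern might look like this (a sketch: the downloader image, release version, and etcd details are illustrative, not the exact manifest from the slide):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: etcd-with-grpc-probe
spec:
  volumes:
  - name: probe                  # shared volume for the downloaded binary
    emptyDir: {}
  initContainers:
  - name: fetch-health-probe
    image: curlimages/curl       # placeholder downloader image
    command: ["sh", "-c"]
    args:
    - >
      curl -sSL -o /probe/grpc_health_probe
      https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/v0.4.11/grpc_health_probe-linux-amd64
      && chmod +x /probe/grpc_health_probe
    volumeMounts:
    - name: probe
      mountPath: /probe
  containers:
  - name: etcd
    image: gcr.io/etcd-development/etcd:v3.5.4   # illustrative tag
    volumeMounts:
    - name: probe
      mountPath: /probe
    livenessProbe:
      exec:
        command: ["/probe/grpc_health_probe", "-addr=localhost:2379"]
```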
A: Alternatively, if you have etcd, you can have a Dockerfile that is based on the etcd image but adds this grpc-health-probe executable on top of it; that becomes the new image you base your container on.
C: I just wanted to mention something I often see when people do a more custom integration: they just have their application write a file into a /tmp folder, and the liveness probe's exec command is simply `cat` on that file. So you don't have to install anything specific, like any other checker — you're basically just checking that the file is there.
A: I see — okay, yeah, that makes sense; it works. I guess I forgot to mention it, and it's a good call-out. So let's come back to this slide. As you said, there is another way that still uses exec probes, but instead of the exec probe calling into your application and asking for its status, it checks for a file on the file system, and if this file is healthy or was updated recently, it returns okay.
A: What I typically see with this pattern is that, for liveness, for instance, this file is constantly updated by your application with the latest timestamp, and then the exec probe checks: if the timestamp is recent, it's fine; if the timestamp is too far in the past, it says "oh, this seems to be unhealthy" and fails the check. For readiness and startup it's very similar. It's another approach — I would say less popular, but still quite popular.
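A minimal sketch of that heartbeat-file pattern (the path and the 30-second freshness window are arbitrary choices for illustration):

```yaml
livenessProbe:
  exec:
    # Fail if /tmp/heartbeat is missing or its mtime is older than
    # 30 seconds; the application is expected to touch it periodically.
    command:
    - /bin/sh
    - -c
    - test "$(( $(date +%s) - $(stat -c %Y /tmp/heartbeat) ))" -lt 30
  periodSeconds: 10
```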
A: Yeah, startup probes were introduced in, I would say, 1.14 or 1.16 of Kubernetes — quite a while ago. I don't think many people know about them, but the problem they're trying to solve is that you want to start the application quite fast, but you don't want to run your readiness probes too often. If you only use readiness probes, then let's say your application is being started...

A: I mean, say you want your application to start in approximately five seconds. If the period of your readiness probe is 10 seconds, which is the typical default, then you either start right away at zero, or you start at 10 — you cannot start at, say, five.
A: That's why the startup probe was introduced. It's very similar to readiness probes, but it runs at the beginning, and you can change the interval — you can say "run the startup probe every second" — and once your pod is ready, you switch over to the readiness probe, which can be configured completely differently.
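The five-second startup case above could then be handled like this (a sketch; the endpoint and thresholds are illustrative):

```yaml
# Poll every second while starting, for at most 30 seconds...
startupProbe:
  httpGet: { path: /ready, port: 8080 }
  periodSeconds: 1
  failureThreshold: 30
# ...then fall back to the cheaper ten-second readiness cadence.
readinessProbe:
  httpGet: { path: /ready, port: 8080 }
  periodSeconds: 10
```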
C: Oh, so you don't call it periodically while the container is running to decide whether to take traffic away from it — it's just "hey, on startup, I do it once" and then... okay, got it. Thank you.
A: And it's very handy in many scenarios for this specific reason. Yeah, I will move forward. So, as I said, this downloading step is sometimes done during container build time. I looked at GitHub...
A: I looked at many implementations of this, and I have yet to see people check the SHA of the downloaded file — they just download it from GitHub and package it into their production image, which is a bit of a security hole, I would say. And it's not only insecure; it also exposes your application to many troubles.
A: Then another problem with exec probes — I mean, not too many problems, but they have some limitations in Kubernetes, mostly because they are heavy to execute: you need to instantiate a separate process space and run a process for every single probe.

A: Let's say you want your application to be very responsive, so you want to run your readiness probe every second. Sometimes it's hard to keep up with every-second probes if they are exec probes, because exec probes have such a big overhead themselves.
A: Also, this is not a fully functioning etcd — I just put it here, so don't call me on that — because it doesn't expose any ports; it's basically a database that doesn't open anything. I will come back to this note about ports a little bit later.
A: It will be needed. So, what we did in 1.23 — things are getting better in Kubernetes for gRPC. I wanted to point out that the idea of introducing gRPC probes had been floating around for a very long time. You can see this comment from February 2016: a long time ago, people were already trying to introduce gRPC probes into Kubernetes, and at that time the biggest concern was that we wanted to avoid dependencies on anything extra.
A: Basically, if you have some RPC protocol, we don't want Kubernetes to take a dependency on that RPC protocol — especially keeping in mind that Kubernetes and the kubelet are written in Go, and Go dependencies are very hard to manage.

A: Then, Kubernetes always strives to be fair and open, so we want to make sure that if we support one RPC protocol, we have a clear answer for why we don't support other RPC protocols. And finally, we wanted to allow customers to have some...
A: ...generic solution, and exec probes were thought of as that generic solution: by calling an exec probe you can proxy into whatever you want — into gRPC as well as into any other RPC protocol — so exec probes were thought of as a generic replacement for any RPC protocol you can imagine. But it didn't quite work.
A: As I said, there are many problems with that, mostly around usability, but also real problems — as I said, writing a responsive gRPC application may be harder with exec probes. So we thought again about gRPC. First, we realized there are no new dependencies: Kubernetes and the kubelet heavily depend on gRPC already, so we have the gRPC dependency packaged up, and we don't envision any future where it wouldn't be — at least not anytime soon.
A: Second, we see a big demand for gRPC. As I said, I just searched GitHub for this grpc-health-probe executable that is part of the gRPC ecosystem, and I think I got thousands of hits where people have a Dockerfile with this grpc-health-probe defined, or some other way of downloading it. So there is clearly a big demand for gRPC, and those thousands are just the open-source cases.
A: I can imagine how many of them are in closed source. And then, finally, built-in is better: we want usability, we want people to have a good time writing gRPC services and hosting those gRPC services on Kubernetes. That's why we introduced gRPC probes in alpha in a previous version of Kubernetes. Yep — there is a question. Eric?
D: Yeah, I'll just mention that if the gRPC dependency ever did become a problem — which, I mean, you happen to already have it — it wouldn't be that hard to do some of this directly within Kubernetes and avoid the client library.

D: Protobuf would be a big deal — protobuf is probably the bigger dependency, and I don't see that changing anytime soon either. But...
A: Yeah, exactly — this is why we knew that, even in 2016. I mean, when I say "we knew", I wasn't with Kubernetes at that time, but I read a lot of the comments — GitHub is an amazing place where you can look at people's interactions, kind of a social network for engineers — and the engineers were exchanging comments like "no, it's not a new dependency, let's not kid ourselves." But in the end, the approach of "let's try something generic...

A: ...let's try exec probes for everything" prevailed, and we started using that. Now we're changing our minds and supporting gRPC for usability.
A: I would call it one of the quality-of-life improvements that we make in Kubernetes, which are always hard, because Kubernetes is typically for hardcore engineers, and quality-of-life improvements are sometimes not very easy to implement.
A: Okay, so, as I said: alpha in 1.23, beta in 1.24. 1.24 was released very recently — a little over a month ago, I think — and it's already deployed by some cloud vendors, so you can try it out. In 1.26 we expect it to go GA; that's when you can definitely take a dependency on the feature. In beta it's already enabled by default, but some people prefer not to take a dependency yet.
A: That's why I came here: you will probably be early adopters, or you can spread the word and ask people to try it out. Okay, let's move on. This is how it looks, just as a refresher: gRPC has this health checking protocol, and you can see that the health check request carries a service name as a string. The service name is, mind you, just a label.
A: I didn't find any good explanation of how to use the service name in health checks; generally, it's just a label for whatever you want to health-check. Then there is a Check and a Watch API. Obviously, Check immediately responds with the status, while Watch continuously looks at the status and returns as soon as the service changes state.
A: And yeah, there is SERVING, which is what you're looking for; the other statuses indicate some problem. So how do we support this in Kubernetes? In Kubernetes you just add this probe: set the probe type to grpc and specify the port. The service name is optional — it's sometimes good practice to set it — but the port is, I think, the only thing you need. It's pretty straightforward and very easy to implement, and you can see that, since this is a well-known interface, it was very easy for us to just say: call the gRPC health check.
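In the pod spec, the new built-in probe looks roughly like this (the port number and service label are placeholders):

```yaml
readinessProbe:
  grpc:
    port: 5000            # the only required field
    service: my-service   # optional; sent as HealthCheckRequest.service
  periodSeconds: 10
  failureThreshold: 3
```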
C: I would imagine that the service name is there especially for Watch, the streaming flavor of the protocol: when you have a streaming version of this protocol, you might be sending requests for different services, and when you get a response you need to understand which service the server is responding about.
B: Actually, I think it's more than that, certainly: a given application could be exposing multiple services, with a different health status for each of those services.
D: Normally we were expecting that people would use just the empty string as the service for their main entry point, but the problem with singletons like that is that the moment it's no longer a singleton, you hate things — so there's a string there to allow you to disambiguate for other things.

D: The SERVICE_UNKNOWN status for Watch, though, exists because normally, if you do a Check on the unary RPC and the service isn't known, it will return a NOT_FOUND status code; but for Watch, if the service comes and goes, you'd just like to keep the stream open.
C: Yeah, with unary it's clear that you're making a request and getting the response for that same thing, while with Watch you might be sending different requests and getting different responses for all the other services.
A: It returns only a status, yeah. That was one of my questions, and on one of the slides I put a suggestion. Kubernetes has three types of probes — readiness, liveness, and startup — so I would imagine people will start using the service name as a way to indicate which one they are asking about.

A: So if you have multiple services, you will probably end up with three entries for each service — probably something like main_readiness or main_liveness.
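That naming suggestion might look like this (the `main_*` labels are just the convention proposed on the slide, not something Kubernetes interprets):

```yaml
livenessProbe:
  grpc:
    port: 5000
    service: main_liveness    # watchdog-style check
readinessProbe:
  grpc:
    port: 5000
    service: main_readiness   # traffic-gating check
```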
D: So liveness — there's liveness, and what's the other one, the common one? Readiness, yeah. Readiness and health checking are really aligned; those are trying to do the same thing. Ideally, for a liveness probe you would actually process something in the application, because you want to make sure it's not deadlocked or something like that; ideally that has a bit more integration into the application in some way, shape, or form, so it's not as strong a candidate for the health checking API.

D: You can do it, and it can get you some of the way, but liveness is a lot like a watchdog sort of thing — it's good for making sure the application is still processing.
A: Yeah, this is what liveness will do, right? If liveness doesn't respond within a certain amount of time, or responds with some unknown status, you will fail the application. I think this is how HTTP probes are typically used: for HTTP there are different URLs you can use for the different types of probes, while for gRPC there is one endpoint. I mean, you can expose different ports for the different probe types, but another way people may start using it is to define different service names, or to qualify service names with a suffix for the probe type.
A: Great, so I got it right — thank you for the feedback. Next: differences from the other probes. One of the interesting values of Kubernetes — I joined this project two years ago and I'm still trying to understand all the values the community lives by; I'm not sure how it is for gRPC, but in Kubernetes, every time something new is being developed...
A: ...there is a lot of emphasis on engineering and good practices — on making it right — and less emphasis on making it consistent with past experience. So when we looked at gRPC health checks, we thought about the different features we already support for HTTP and other probes, and we realized, for instance, that before this we had consistency between probes and lifecycle hooks. Okay, let me step back: lifecycle hooks.
A: I explained what probes do; lifecycle hooks are a different beast. Lifecycle hooks are a type of callback that Kubernetes calls before the application is initialized — kind of a "please do your startup" — and then, when it shuts the application down, it calls another callback so the application can clean up after itself.
A: So before this, we had parity between the kinds of probes we support and the kinds of lifecycle hooks we support. With gRPC, we only support probes right now, and this will be one of the questions I raise in the future directions: would gRPC lifecycle hooks be useful, and if they would, what kind of API do you want to see there?
A: Let's not discuss it now — let's park it until the future directions slide. Then: a custom host can be configured for HTTP probes. Typically, what we do is take the IP address of the container and call that IP address at the specified port; for HTTP there is a feature to override this and say "instead of this IP address, use this host". It's typically used in very edge-case scenarios — when a container is using the host network and registers itself at, say, localhost, 127.0.0.1.
A: In that case people don't have any option but to override the host. But it also opens up scary possibilities: you can override the host to any host — google.com, say — and start pinging google.com instead of your application, which is obviously not what you wanted. So this is another limitation we kept: we don't support a host override in gRPC probes.
A: We don't want those edge-case scenarios, and if we get feedback that this edge case is really, really needed, we will reconsider — it's an easy feature to add, but it's not obviously needed right away. Named ports are another feature; it's very Kubernetes-specific and goes into the depths of how Kubernetes was designed.
A: In Kubernetes, when you define a port, you can specify a name for it and then use this name to configure probes or to configure Services. Named ports make a lot of sense when you configure other objects that refer to your pod or your container: you can change your container's port while the name stays the same, so the Service doesn't change and keeps sending traffic to the right address — whatever port is specified. In the case of gRPC — in the case of a single port definition...
A: ...where you have the port defined and your probe defined, we decided that we don't want to support named ports, mostly because they caused a lot of confusion: people forgetting quotes in a name, or naming their port "port-name" because they think it's a required field. It caused a lot of confusion and doesn't add much value.
A: That's kind of choosing good design over consistency with the past — consistency is not necessarily what is being valued here — but everything else is much the same, so it's easy. Then I wanted to spend a little bit of time discussing migration from the grpc-health-probe. I'm not sure whether anybody here has ever run this executable.
A: This executable basically pings the gRPC endpoint that you specify in its argument.

A: The first difference is that, since this executable runs inside your container, it can ping your container on localhost, and it will just work even if your application is only listening on localhost. So you can have one working port that you expose for cross-service interaction, and another port for health checking; you don't even need to open the latter publicly — you can keep it inside your container network, and nobody would even see this port out there.
A: That is a limitation of the built-in probes versus exec probes, but I don't think it's a big one. Then, we don't support any authentication.

A: So if you want certificates, any mTLS, you just don't have the option: we will ignore certificates, and if client authentication is needed, we will simply fail — we have no certificate to present. It's really hard to configure and manage certificates, so we support only one mode: we basically don't check any certificates. Next one:
A: There are no error codes. The grpc-health-probe executable can check whether it's a client problem, a server problem, or a timeout, and it returns different exit codes, so you can say "I want to fail my probe on this type of error but not on that type."

A: We don't support that with the built-in probes: either the endpoint returns SERVING within the specified amount of time, or the check fails. You also cannot chain multiple checks: with exec probes you can say "call this endpoint and that endpoint, then return a combined status."
A: You obviously cannot do that with the built-in probe: you can only ping one port and specify only one service. And then there is a very minor detail about a timeout problem — basically a bug fix that we now have in Kubernetes. Now we're going into the demo. I will re-share — oops.
A: You should see my VS Code now. Okay, so, for the demo I created a cluster with the 1.24 version of Kubernetes — I did it before the demo — and you can list the nodes and see that it's 1.24.
A: Okay, so it's 1.24. Then, I have a small pod defined with one container: agnhost — short for "agnostic host" — a container that we use in Kubernetes for testing. When it's run with grpc-health-checking, it will expose a gRPC health check endpoint, and it will also expose a command interface; I will show this command interface a little bit later.
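The demo pod might be sketched roughly like this (the image tag, ports, and thresholds are assumptions based on the description, not the exact manifest shown on screen):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grpc-probe-demo
spec:
  containers:
  - name: agnhost
    image: registry.k8s.io/e2e-test-images/agnhost:2.39  # illustrative tag
    command: ["/agnhost", "grpc-health-checking"]
    ports:
    - containerPort: 5000    # gRPC health endpoint
    - containerPort: 8080    # HTTP command interface used later in the demo
    readinessProbe:
      grpc:
        port: 5000
      periodSeconds: 10
      failureThreshold: 3    # matches the "failed three times" behavior shown
```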
A: Again, this port needs to be specified here, and it is, so you need to expose your gRPC health checking port — as opposed to what you did before with exec probes, where you could hide it and didn't necessarily have to expose it.
A: So I created the pod, and I will now check that it's ready and running — you can see that it's ready. Okay. Now, this 8080 is the HTTP command protocol — an HTTP endpoint exposed alongside the gRPC endpoint. I'll just port-forward it so I'll be able to curl against it. And what I will do is switch the service into not-serving mode.
A: What you will see is that the probe failed, but we only mark it as failed after three attempts. That's why ready is still true: it hasn't failed three times yet, it has only failed once. If I run it again — 30 seconds have already passed — I will see that it has already failed four times, so it will change to not ready. Yeah, so now it's not ready and it doesn't receive any traffic. I mean, I don't have any traffic here...
A: I didn't configure any Service, so Kubernetes doesn't have any traffic to route to the pod, but at least it knows that it's not ready — if you had a Service, it wouldn't send any traffic to the pod. And then, if I switch back to serving, it will get into the ready state right away, because we mark it failed after three probes but succeeded after just one. So see: even though it's at seven failures by now, it has switched back to not-failed — and we don't show "not failed" events here...

A: ...that would be too much. We have it ready now. A very short and straightforward demo — I don't know what else to demonstrate in Kubernetes beyond how to make it work — so I will switch back to the presentation now. If you have any questions about this demo, please ask now.
C: When you were configuring this pod, you just said grpc and the port. Is there a way to specify the service?
A: Okay — after the demo, let's reiterate all the limitations we have. First, we ignore certificates; we don't support any way of authenticating, we just ignore all of that. We don't support Watch — we only support Check; we don't support streaming, mostly because that aligned with how we do things for HTTP.
A: But this is something I really want to see supported in the next versions, because there are a lot of scenarios where Watch could help make quick gRPC applications even quicker. Right now you can only configure probe periods down to one second.
A: Some applications want sub-second startup or sub-second readiness — they want to react really quickly to probes — and I think Watch support would help with that. Then, the host cannot be configured, as I mentioned. You cannot run multiple checks the way you can with exec probes. And for different probe types you either need to expose different services or open different ports. At first I was wondering how I would configure readiness and liveness, and I tried a few things, and then — okay...
A: ...I decided I don't want more ports, I want just different services; that's my conclusion. So these are the limitations, and I think the biggest one I can see is ignoring certificates. I was asking around: is it fine? Do you have an authenticated service?
A: Do you want health checks to be authenticated as well? Somebody pointed me to this — I'm taking the example from Google Cloud, because Google is where I work, but apparently it's a common practice: even if you use authenticated endpoints, you also expose some port, the port number plus one, for health checks. This port-plus-one is not authenticated, and you can use this health checking service for load balancers or whatever your cloud supports.
A: So, based on that, Kubernetes wouldn't be the first solution that requires you to open a non-secure port alongside your secure port — but at least this one doesn't require you to open HTTP.
C: That was kind of a chicken-and-egg problem. We did it — and, you know, Sanjay brought it up — we did it because Kubernetes didn't support it, so we decided to add port-plus-one, and before that we even tried to use a TCP probe — the fact that the port is open — as the notification, but obviously that's not enough for things like readiness. But yeah, I think it makes sense to expose some things on a separate unencrypted protocol that just has no security enabled. Sanjay?
B: This is specifically talking about client certificates, though, right? In the case of server TLS alone, if your server presents a certificate, your client can ignore it or not authenticate it — that's what you do, right? Yeah.
A: That's why we just ignore it. And that's another question: do we need to support it — do we need to not ignore it? The problem is that it's really hard to check certificates; the check may run from a different host, so it may not be a very straightforward solution.
B: But do you use a typical trust store, like the trust store from the host? So if there is a certificate issued by a certificate authority that you trust, that would be okay, right? You don't have any cluster...
A: Perhaps. I think if this feature is heavily requested — "don't ignore the certificate, please validate it as well" — then we may need to think about it and look into implementing it. One solution would be to only support certificates trusted on the node — whatever the node trusts.

A: That may be one solution. Supporting custom certificates is much harder, because we would need to distribute the certificates securely alongside the pod somehow, and that introduces a lot of complications.
B: Hey, maybe you mentioned it and I missed it, but the readiness probe is used to mark the container as ready or not ready, right? And if it's not ready, you just don't send traffic to it — so it's used only for traffic management?
A: There is also the liveness probe — I can switch to that slide really quick. Liveness means that when we have a certain number of failures, we will terminate the container, and depending on the configuration it may be a restart of the container, or the pod may be rescheduled on a different node entirely, or something like that.
A: Well, you can configure it either way. There is not too much flexibility, but you can configure either: the container will be restarted, or the pod will be marked as failed completely. It's not very granular — you cannot do it per container exactly as you'd like — but you can configure it at least a little bit. So yeah, the last slide I have is the future. I want to reiterate that we want feedback, and from what I hear in this session today, we did everything right.
A
It seems like we don't need more features, at least for the first version, so I think it will be okay to release it in 1.26. But please spread the word, get people to try it, and try it yourself if you have any service to try it on. Then, I already talked about Watch support. There is a lot of effort to make Kubernetes run faster: start applications faster, run them faster.
A
We have different bottlenecks right now, and startup probes and readiness probes are one of those bottlenecks. We want applications to be agile and able to report their status very quickly, and Watch, I think, will be ideal for this purpose. And finally, lifecycle hooks.
A
What we did is simplify life for gRPC applications to expose those probes, but applications also often need cleanup, like initialize and tear-down kinds of callbacks for the lifecycle. We didn't support that from the get-go, from the very beginning, because there is no interface defined in gRPC for this kind of hook, and we didn't want to introduce it as a Kubernetes-declared interface for lifecycle hooks.
A
I think it might not be received well. Maybe it's something that we need to introduce in the gRPC ecosystem first, and then reuse it in Kubernetes, and maybe in whatever other management platforms there will be.
D
Yeah, I think we mainly just expect people to use TERM right now, the TERM signal. I think we'd mainly just be expecting the normal TERM signal.
D
The kill -TERM sort of idea, which isn't necessarily perfect, but overall that's how a lot of systems work. There could maybe be something done here. There is a little bit of a problem, though, in the question of: do you feel like you need security around this? Because it can actually bring down your server, whereas the health checking we're exposing is not really...
D
No one really cares, but if you have something here that can shut down a server, do you need security around that? Do you need authentication around that, that sort of thing? Whereas kill is already semi-protected, so you're okay.
A
That's true, yeah. I'm not sure how, okay, it's a good question how it is protected for HTTP. I'll double-check on that.
D
Yeah, and if people are okay with it for HTTP, then we could think of doing something similar in gRPC.
B
Yes, I just had a question. Actually, for the Istio echo server, are we using the gRPC probe for the SDK server? We're currently just using TCP.
B
Oh okay, so we can start using this now, yeah. That sounds really exciting. I've got one question: why can't we use the same port for the three different probes?
A
We can use the same port, but to distinguish different types of probes we'll need to pass some argument, and the service name will work for that purpose, I think. It may look strange when the service name contains the type of probe: let's say you have a service like reporting, and then reporting_readiness. It's not quite a service name, but this is a workaround that I think is good enough.
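The workaround can be sketched as a minimal status map, which is the shape a gRPC health service keeps internally. This is a stdlib-only stand-in, not a real gRPC servicer, and the reporting_* names are the hypothetical ones from the example above:

```python
from enum import Enum

class ServingStatus(Enum):
    UNKNOWN = 0
    SERVING = 1
    NOT_SERVING = 2
    SERVICE_UNKNOWN = 3

class HealthRegistry:
    """Minimal stand-in for the per-service status map of grpc.health.v1."""
    def __init__(self):
        # "" is the conventional name for overall server health.
        self._statuses = {"": ServingStatus.SERVING}

    def set(self, service: str, status: ServingStatus) -> None:
        self._statuses[service] = status

    def check(self, service: str) -> ServingStatus:
        # Health/Check reports unregistered names as SERVICE_UNKNOWN.
        return self._statuses.get(service, ServingStatus.SERVICE_UNKNOWN)

reg = HealthRegistry()
# One application, probe-type-specific "service" names:
reg.set("reporting_liveness", ServingStatus.SERVING)
reg.set("reporting_readiness", ServingStatus.NOT_SERVING)  # e.g. cache not warm yet
```

Each Kubernetes probe would then set its own `service:` field (`reporting_liveness`, `reporting_readiness`) while sharing one port.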
C
That's where I got confused too. Apparently Watch is one-directional streaming; I thought it was bi-directional, so the server can ask, hey, what's your status. Maybe we could add that, or defaulting to yours is fine.
D
Yeah, the history behind Watch: originally we just had the normal unary polling, which is what we expected most systems to do, like Kubernetes or the like. That's a pretty common approach.
D
Watch was for clients to be able to listen to every single backend, and we didn't want all of those probes happening constantly for every single client a server might have. So Watch was really about scaling to a higher number of ongoing health checks, basically.
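This matches the standard grpc.health.v1 definition, where Check is the unary poll and Watch is a server-streaming RPC: the client subscribes once and the server pushes status changes, which is why it is one-directional:

```proto
syntax = "proto3";
package grpc.health.v1;

message HealthCheckRequest {
  string service = 1;  // "" means overall server health
}

message HealthCheckResponse {
  enum ServingStatus {
    UNKNOWN = 0;
    SERVING = 1;
    NOT_SERVING = 2;
    SERVICE_UNKNOWN = 3;  // used by Watch when the name isn't registered
  }
  ServingStatus status = 1;
}

service Health {
  // Unary poll, the style the built-in Kubernetes probe uses.
  rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
  // Server-streaming subscription for long-lived clients.
  rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}
```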
A
Yeah, we're talking about HTTP sub-second probes, so something that will be invoked maybe once every 100 milliseconds. But at some point you reach the limit, so you need to switch to something where streaming works better.
D
Yeah, even with streaming it probably doesn't help the startup case too much, because in that case you're waiting for the port to start listening and some stuff like that, so you might still need some slower polling. But yeah, once you get connected, the Watch can notify you very rapidly.
C
But also, if you have a probe that is keeping the stream, so, you know, when you just establish the channel it's also streaming. You mean that if you haven't lost the channel, you have very similar guarantees to streaming. With streaming you just don't get the overhead, but otherwise the TCP connection stays open.
C
It expects an affirmatively positive response once every X seconds, and if it doesn't get it, it transitions the pod to unhealthy, or not.
A
Yeah, one thing I was trying to play around with: in HTTP we have these query string parameters that you can pass for HTTP probes, and for exec probes you can have different arguments that you pass to the executable.
A
I wonder if people will start using the service as a kind of query string of different arguments, packing more configuration into the probes. I wonder if this may happen.
D
I don't know. I sort of figure that most of the time this is pretty simple; you have to work really hard to make it complicated. Sure, someone might add in a slash and have the first part of the name and the second part of the name, or something like that, but I imagine, you know, 99% of the cases will remain very, very simple, having like two different services.
A
Yeah, and I didn't find any good examples of how the service is being used in health checks, because most people just use the default, like a single service. So it's literally...
C
Right, does it align with what we call services, like, for example, if you go through reflection?
D
The name here is really unrelated to all other names. Okay, it doesn't relate to the name of the service like the host name, which you might call the service name, and you might call the gRPC service, along with the method, the service name. It's none of those; it's just some other name that was meaningful to the person who set it on the server.
D
I thought the documentation had that and it was misleading, and I thought I fixed that. We might need to double-check some things.
B
Eric is correct. It's supposed to be whatever the service owner, whatever the server owner and the health checking client owner agree upon. It's deliberate, because the use cases are such that they vary.
A
Okay, if there are no more questions or comments, we can finish. You can reach me in many places; just google my last name and there will be a lot of hits.