From YouTube: Kubernetes SIG Apps 20170821
Description
Demo of Searchlight, discussion around proposals for: 1. lifecycle hooks with StatefulSet and 2. auto-pausing feature for Deployments, DaemonSets
A
There is an upcoming Helm Q&A at the next stand-up meeting, which is next week, and there is a link to a forum where you can submit questions beforehand. This is just to satisfy our retro item from the last retro; people had some questions about Helm. So looking forward to that. We have a demo today: Tamal is going to show off Searchlight, and we have some discussion topics around proposals. So let's get started. Tamal, are you around to give us a demo? Yeah? All right, take it away.
B
So hi, I'm Tamal. Searchlight is a Kubernetes controller for Icinga. If you are familiar with Nagios, Icinga is a successor of Nagios. The way it works is that you can set up alerts on various conditions and it will periodically check those alerts, and if some check fails or an error condition occurs, it will notify you. The notification mechanism can be configured: you can get notifications via email, SMS, or chat, however you like.
B
It can also send you notifications based on a critical state or a warning state, so you can set up different levels of notification. The Searchlight controller is just a TPR (third-party resource) controller for Icinga. It natively allows for different types of Kubernetes objects: alerts can be configured at the cluster level, the node level, or the pod level, and it will automatically configure the running Icinga accordingly.
B
So there is Icinga running as a sidecar, and it will send you notifications as you configure them. I'm going to show you how to do it. First you install the controller; we have two separate deployment scripts, so you can deploy with RBAC or without RBAC. Once you deploy (it is just a one-line command), it will deploy the Searchlight controller and the Icinga sidecar. After that you can also use Helm; we have a Helm chart you can use as well.
B
It runs Icinga in a pod, and that pod has predefined containers: one is for the Icinga core itself, and there is also a database that Icinga needs to run. It also comes with a web UI called Icinga Web, if you are familiar with that; it is installed in the same pod as well. Currently I'm doing a port-forward.
B
So if you do kubectl get pods, you find the pod name and then port-forward the port number. The port number is 60006, just to make sure it is not used by anything else. Then if you go to localhost, you will be able to see that this is the Icinga Web that is running locally, in my local Minikube cluster.
B
Now you can create, as I said, a number of different types of alerts. For example, I'm going to show you a couple of the alerts that we use for our own clusters, for example on the certificate (CSR) that is used by the Kubernetes API. We usually want to keep an eye on it so it doesn't expire, because we have seen cases where it expired after a year. So you can set up an alert like this: you have an apiVersion and a kind of ClusterAlert.
B
You give it thresholds in days, say three days: once it gets close to expiring within three days, it will be in a critical state, and when the critical threshold kicks in, as defined in the alert YAML, you get an alert, for example via email. I have set that up here.
B
You know, it's all okay for now. For example, this is the one that I've set up: the alert you can see here is the csr-demo one. Right now the Minikube cluster has plenty of expiration time left; it shows that the certificate is going to expire in about two thousand days, so you're fine. It is going to keep checking, and you can also configure how frequently it is checked.
B
So the check interval: right now it is going to check every 30 seconds, and when you get an alert, essentially when the certificate is close to expiring within the configured number of days, it will send an email, and until you acknowledge that alert it will keep sending you an email every two minutes; that is the standard way Icinga works. So if you get an email, you can come here and actually acknowledge it by adding a comment.
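A hedged sketch of the certificate-expiry ClusterAlert being described, with the 30-second check interval and two-minute repeat. The apiVersion, check name, field names, notifier, and thresholds are assumptions and may not match the exact Searchlight schema of that release.

```yaml
# Hedged sketch of a certificate-expiry ClusterAlert; field names and the
# check name are assumptions, not the verified Searchlight schema.
apiVersion: monitoring.appscode.com/v1alpha1
kind: ClusterAlert
metadata:
  name: csr-demo
  namespace: demo
spec:
  check: ca-cert              # check remaining lifetime of the cluster certificate
  checkInterval: 30s          # run the check every 30 seconds
  alertInterval: 2m           # repeat the notification every 2 minutes until acknowledged
  vars:
    warning: 240h             # assumed warning threshold (~10 days)
    critical: 72h             # critical when less than 3 days remain
  receivers:
  - notifier: mailgun         # assumed notifier; email, SMS, or chat are all possible
    state: Critical
    to: ["ops@example.com"]
```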
B
In a similar fashion, there are a bunch of alert commands that we have. If you go to the docs section, you will see the different alert commands that are supported. At the cluster level, one is the certificate (CSR) check, you can check for component status, and then we have an interesting one, which is the JSON path check.
B
So, for example, you have an API or some sort of HTTP server, and you want to check it: say it returns a JSON response, and you want to check that a certain field path has a certain value. It can do that for you. In this example, the API doesn't need to be inside the cluster; it can be anywhere, as long as it is accessible from the cluster.
B
If you're familiar with jq, essentially you write a jq query and it will check against that. So you can say something like .metadata plus a field at the end, and if it is not present you can get an alert or something. Right now it passes, so you don't get the alert. You can set it on any field you want to check. So that's a different one, and then I am going to go to another one that is not set at the cluster level.
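A hedged sketch of the JSON-path check just described; the check name, URL, and jq-style condition are illustrative assumptions.

```yaml
# Hedged sketch of a JSON-path check against an HTTP endpoint;
# the check name and vars are assumptions.
apiVersion: monitoring.appscode.com/v1alpha1
kind: ClusterAlert
metadata:
  name: json-path-demo
  namespace: demo
spec:
  check: json-path
  checkInterval: 30s
  vars:
    url: https://api.example.com/healthz   # may live outside the cluster, as long as it is reachable
    critical: '.metadata.field != true'    # jq-style condition evaluated against the response
  receivers:
  - notifier: mailgun                      # assumed notifier
    state: Critical
    to: ["ops@example.com"]
```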
B
You can also check for particular nodes. For example, some node gets disconnected from your cluster and you want to get an alert; you can get an alert using something like this. You say you want to check for a node like this, and then you also set the parameters, which are passed into the Icinga check as parameters. So it's a selector.
B
So there is one for pod_exists, for example. I am sure, if you have used Kubernetes in production, you have seen cases where sometimes your node goes down and comes back up, and for some reason maybe the persistent volume did not detach correctly or something; we have seen this problem on Google Cloud quite a few times. So even though you're running with a Deployment, your pod is not running because it cannot actually mount that volume, or something like that.
B
So there are a few others. At the node level, we currently have a couple of commands. You can check with an InfluxDB query, if you are, you know, using the InfluxDB backend that stores the Heapster data, and you can check for node status, so you can set an alert that checks all the node statuses and fires if any node is not in Ready state.
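A hedged sketch of the node-status alert as described; the check name and fields are assumptions.

```yaml
# Hedged sketch: fire when any node is not in Ready state;
# check name and field names are assumptions.
apiVersion: monitoring.appscode.com/v1alpha1
kind: NodeAlert
metadata:
  name: node-ready-demo
  namespace: demo
spec:
  selector: {}                # empty selector covers every node; labels could narrow it down
  check: node-status
  checkInterval: 30s
  alertInterval: 5m
  receivers:
  - notifier: mailgun         # assumed notifier
    state: Critical
    to: ["ops@example.com"]
```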
B
It can also check for node volume. I'm not going to go into that here right now, but if you look at the detailed description you'll find that information. To check a node volume you need to deploy another server; we call it hostfacts. It is essentially a tiny server that exports memory and volume metrics from the node. It is actually a wrapper around a psutil-type library.
B
That library exports this kind of information. You can deploy hostfacts using systemd, so when you provision your clusters it will be running this server, and then you can set up alerts. You can say, if your volume is 70% full it's a warning; if it is 95% full, then you get the critical state, and maybe you send an SMS or something like that. You should take a look at that.
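A hedged sketch of the node-volume alert with the 70% and 95% thresholds he mentions; the check name, mount point, and notifier are assumptions, and it presumes hostfacts is running on each node.

```yaml
# Hedged sketch: warn at 70% disk usage, critical at 95%;
# requires the hostfacts server on each node; field names are assumptions.
apiVersion: monitoring.appscode.com/v1alpha1
kind: NodeAlert
metadata:
  name: node-volume-demo
  namespace: demo
spec:
  selector: {}
  check: node-volume
  checkInterval: 1m
  vars:
    mountpoint: /             # assumed mount point to watch
    warning: 70
    critical: 95
  receivers:
  - notifier: twilio          # assumed SMS notifier
    state: Critical
    to: ["+15550100"]
```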
B
You can also do pod volume. The interesting thing about the pod volume check is that it will work with either a PVC or, if you mount, you know, an AWS disk or a Google Cloud disk directly, it will actually give you an alert for that too. Since the hostfacts server is running directly on the node, it can access the volume information directly from there, and that's what is used by Searchlight.
B
So that's sort of my demo. If you go to pod status, you can set up a pod alert. For example, here you can say that for any pods that have the label app: nginx, if they are not in a Ready state, you get an alert. It checks every 30 seconds, but you can tweak the interval.
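A hedged sketch of that pod-status alert; the label, check name, and notifier are assumptions drawn from the description.

```yaml
# Hedged sketch: alert when pods labelled app=nginx are not Ready;
# check name and field names are assumptions.
apiVersion: monitoring.appscode.com/v1alpha1
kind: PodAlert
metadata:
  name: nginx-ready-demo
  namespace: demo
spec:
  selector:
    matchLabels:
      app: nginx
  check: pod-status
  checkInterval: 30s          # tweak the interval as needed
  alertInterval: 2m
  receivers:
  - notifier: mailgun         # assumed notifier
    state: Critical
    to: ["ops@example.com"]
```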
B
It will automatically sync: if, you know, your pod goes away and then a different pod comes up, you'll actually see that the alert is automatically cleared, Icinga is configured for that new pod, and you'll get another one. Yeah, so that's sort of a very quick demo of how it works. You know, you can start using this with...
B
...you know, just configuring it for your cluster based on the different commands that are supported. If you go to the tutorial page you'll see the full list of the different types of alerts that we have; they are all covered there. And in terms of sending notifications, I showed that in a previous demo.
D
My name is Mike; thanks for the demo, Tamal, it's pretty cool stuff. I'm curious, going forward: I see you can monitor a lot of stuff with Searchlight. Do you see this mainly being used to monitor, like, internal Kubernetes pods, or monitoring the actual Kubernetes cluster itself? Is it more about monitoring what's happening inside Kubernetes, or monitoring, you know, Kubernetes and the hosts? What do you see going forward?
B
We did this mainly to monitor the cluster, but you can also monitor the applications, if that's what you want to do. I mean, yeah, for example there is an exec-style check: you can actually exec into the pod and you can run some commands if you want to check for something specific. So it's supposed to be, you know, anything you want to make it; you can add additional plugins and do that.
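A hedged sketch of the exec-style application check he mentions; the check name, container, command, and script are assumptions.

```yaml
# Hedged sketch: run a command inside matching pods and alert on failure;
# check name and vars are assumptions.
apiVersion: monitoring.appscode.com/v1alpha1
kind: PodAlert
metadata:
  name: app-exec-demo
  namespace: demo
spec:
  selector:
    matchLabels:
      app: my-app                        # assumed label
  check: pod-exec
  checkInterval: 1m
  vars:
    container: app                       # assumed container name
    cmd: /bin/sh
    argv: /scripts/health-check.sh       # assumed script baked into the image
  receivers:
  - notifier: mailgun                    # assumed notifier
    state: Critical
    to: ["ops@example.com"]
```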
E
One is to actually show you, with a small demo, how to drive lifecycle hooks using auto-pausing and get a bit of feedback on that, and also to propose that it be extended: right now it's implemented just for StatefulSets, and it would be nice to actually extend it to DaemonSets and Deployments as well. But I will leave that until after the demo, also because we kind of implemented it with a simple controller using auto-pausing.
E
So before we start, on the lifecycle hooks terminology: it's basically a job that you can run during a rolling update, say a pre, mid, or post hook, that will trigger a job during the update, and you can use it to notify external systems or run some kind of acceptance check that is broader and more time-consuming than...
E
...the normal liveness or readiness probe. So I will start with the demo and we'll leave the discussion until after. Here's a simple demo which actually has a controller that is using a StatefulSet, because that's the only one this is implemented for, and auto-pausing to drive pre, mid, and post hooks that will notify IRC and run an acceptance check on the rolling update that is happening right now. I've started it. And yes, so we use auto-pausing to actually trigger the pre-hook, which will just notify IRC.
E
Usually that is something like a notice of degraded performance while the hooks are running. This demo is actually using a new database, deployed with a StatefulSet. So after the pre-hook, we will set the partition to the middle and wait for it to trigger the mid-hook, which will run acceptance checks to make sure that the rolling update is going well; after that we move the partition to the end, and once that is reached we run the post-hook.
E
The post-hook will run acceptance checks as well: it will check that the deployment is all right and maybe notify external systems, and if it fails there would be an option to roll back. And that's it, yeah; it was a really short demo. The reason why I am bringing this here is mainly because in OpenShift we have something similar to Deployments, which are actually DeploymentConfigs, and they already support these lifecycle hooks.
E
Customers use them mostly to run acceptance checks, to make sure that the deployment actually went really well, and they are kind of broader than the liveness and readiness probes that are used internally in Kubernetes. You can also plug in any other functionality you want, because it's basically a job, so you can run anything there: you can notify external systems, send IRC messages, things that normally you would have to write your own controller watching rollouts for, and we don't want users to have to write their own controllers just for this functionality.
E
We want to have something standardized that they can use, and lately we have been thinking about actually using just upstream Deployments, having upstream support this, working on that with upstream as well, and deprecating what we have, the DeploymentConfigs. This is actually the main thing that is missing there for us, so we wanted to talk to you guys and see.
A
Of course. Does anybody have any questions or any feedback right now on that?
E
So right now it is running on top of StatefulSets, and this was actually our first intention, to use auto-pausing for this. The other part is that we actually came to the conclusion that it would be better to implement it upstream in these controllers as well, but we were not sure if this is something that the community would accept, so I guess that is part of the question as well.
F
Okay. I mean, I'm not very sure about the community's or the SIG leads' take on whether or not to take this feature in, but I just want to clarify it for myself. So if you start a StatefulSet upgrade, there are like three places for adding hooks to it, right: before the upgrade, during the upgrade, and after the upgrade. So is there a way for me to add hooks for other things?
E
So, I guess, to the question: in this example we use pre, mid, and post, and this is just a simple example to show you how it works. But generally, how we want to implement the lifecycle hooks is that you specify a progress point, which will say either the percentage or the number of the replicas which are updated. So assuming you have a master and slaves, the master is usually number zero, right, the first one.
F
Well, the scenario is slightly different. I mean, you could not always say that replica zero is the master, because, you know, you have workloads, something like etcd or Consul, which keep re-electing the master, so you would never know which one the master is. So the scenario is: if you suddenly decide to reduce the number of replicas, or increase the number, then it would be very beneficial to add a hook.
E
Sure. The approach we were thinking about is that we would define it basically as a job, plus a failure policy, so if the hook fails, should we abort the deployment, roll back, or continue, and also the progress point, which will specify at what percentage or number of replicas it should be triggered. But mostly it would be a job, so it takes any arguments that a job takes.
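To make the shape of the proposal concrete, a purely hypothetical spec along these lines might look like the sketch below. This is not an existing Kubernetes or OpenShift API; every field name is an assumption, used only to illustrate the job-plus-failure-policy-plus-progress-point idea.

```yaml
# Hypothetical sketch only; not an existing API.
# Each hook is a Job template plus a failure policy and a progress point.
lifecycleHooks:
- name: mid-acceptance-check
  progressPoint: "50%"          # could also be an absolute number of updated replicas
  failurePolicy: Abort          # assumed values: Abort | Rollback | Continue
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: check
            image: example/acceptance-check:latest   # assumed image
            args: ["--target", "my-statefulset"]
```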
E
So I think this could be a good use case for auto-pausing, aside from the obvious canary deployments, which it is kind of intended for as well. The only downside of implementing lifecycle hooks with auto-pausing was that you can't actually set the partition to the point where it's done and run the lifecycle hook there, because the controller will consider the rollout done once you reach updating the last replica, so we can't actually stop there and wait.
E
So, what auto-pausing is, or why; sorry, can you repeat? Oh right. From what I know, auto-pausing was implemented to easily do canaries, which was one of the aims. So you can actually set the partition point during or before the rolling update. You set a partition point, so say you have twelve replicas and you set it to three...
E
...then the replicas get updated and it will leave those last three replicas with the old version. So you have a canary, or not half of it, but part of it is updated and the rest is not, and it stops right there. That's auto-pausing, and you can also build on top of it with a controller, like the one I have introduced, which will drive lifecycle hooks using it.
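For reference, the partition he is describing is part of the StatefulSet rolling update strategy. A minimal sketch of the twelve-replicas, partition-of-three example follows; the names and image are placeholders.

```yaml
# Minimal sketch of the partitioned rolling update described above.
# With partition: 3, only pods with ordinal >= 3 move to the new revision;
# pods 0-2 keep the old version until the partition is lowered.
apiVersion: apps/v1beta1            # as in Kubernetes 1.7; apps/v1 in later releases
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db
  replicas: 12
  selector:
    matchLabels:
      app: demo-db
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 3
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
      - name: db
        image: example/db:v2        # the new version being rolled out
```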
A
Let's start over there. So, do you want to take that into your next proposal?
C
So we just got this implemented in 1.7; the initial rollout is for StatefulSet. And I think, I don't know, I think we've got pretty good consensus that we want an auto-pause feature for ReplicaSet, Deployment, and potentially DaemonSet going forward. Is that kind of the consensus that we've come around to, or...?
C
Well, from what we've been doing, at least at Google, we've been looking at it as something we probably want, but not something we want to gate on. But we haven't put a huge amount of consideration into what it means for Deployment and DaemonSet in particular; we put more consideration into what it means for StatefulSet, because with Deployment it's stateless, right, so I can spin up a second Deployment and turn down another one.
C
But I have a lot of options to do a canary and to do a staged rollout; I can do many things without potentially getting myself into trouble. We just thought it mattered more for StatefulSet, because you don't necessarily want to spin up a second cluster of a storage system in order to deploy a new version of it, right?
C
You want to try to deploy it in place as much as possible, and you may need to do it one at a time, or you may need to at least soak a set of them to ensure that what you deployed is stable before progressing out to the entire storage system. There wasn't really a good way to do that with StatefulSets, whereas you can probably accomplish something with Deployment as it is, in terms of the auto-pause feature itself.
C
The one thing fundamentally different about ReplicaSets and Deployments versus StatefulSets is that for a StatefulSet everything has an ordinal, right? If I have a storage system and there are, like, ten nodes, each node is unique and, depending on how my keyspace is partitioned, contains unique data, unless it's identically replicated, like a MySQL deployment where I'm replicating read replicas. With a Deployment, does it make sense to do it on an individual node basis?
C
You might at one point have twelve or thirteen, scale back to nine, and then scale up the canary; so at any given instant, if you freeze the entire system, you're not guaranteed to only have ten of them up. I just don't like the concept of partitioning here; I don't know if it would work as well for Deployment, and it's generally working on a probabilistic algorithm to begin with. So it might have to be, I don't know if you can actually do it based on 'I want n of them'; you might have to say 'I want this percentage of them.'