From YouTube: 20190419 cluster api node lifecycle workstream
A
Hello, today is Friday, April 19th, 2019. This is the initial kickoff for the Cluster API node lifecycle management workstream, so I'm going to talk about the beginnings of this effort. I wanted to frame the start of the discussion as a conversation around scope: what is in scope and what is out of scope for what we call lifecycle management. It's kind of an umbrella of things, and there are aspects that I know people talk about.
B
I think I generally agree with that. However, if we are talking about integrating with the Kubernetes autoscaler, it gets a little bit more complicated, because there's some logic right now within the Kubernetes autoscaler that determines which exact node to delete from the available machines, and I think there's some work that needs to be done between the autoscaling group and the Cluster API group to determine how we handle scale down consistently.
B
How do we make sure that we have the right interface between the autoscaler and Cluster API? Because to me it seems odd that we would have completely distinct and different scale-down behavior depending on whether it's an autoscaler scaling down, say, a machine set, or whether it's a manual scale down of the machine set.
A
See, I think it's indistinguishable, or it should be: we provide the API or constructs that other people can use. We can bikeshed on this in the project conversation later too, but there are aspects of who the consumer of the API is, and I think we should address those consumers. But I think the behavior, the autoscaling behavior, should be the same whether or not it's a person or an actual tool. Right, I think the behavior itself of scale down should be consistent.
D
I tend to separate, and I know this isn't traditionally what's been done, so take it for what it's worth: I tend to separate the provisioning of the server from making that server into a node, and to me, making the server into a node is what would be in scope for this work stream, as opposed to the IaaS-layer integrations.
D
Yeah, right now I think we have this machine concept, which is expected to be a node, which is fine for us. But to me, provisioning that layer and making it into a node are pretty distinct, and I understand that maybe in this project we have a need for that and we just have to build it, because other projects don't need it right now, but yeah.
B
So I agree that we want separation between the provisioning of the machine and the bootstrapping steps, but I think it gets a little complicated, because you can have provisioning config and then you can also have post-provisioning config. In the pre-provisioning case you basically take what you need for the bootstrapping config and you inject it into the machine provisioning; for example, if you're using cloud-init and you're on AWS, you would inject it into the user data for that machine request.
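A rough sketch of the injection path described here, assuming a kubeadm-style join command and EC2-style base64-encoded user data; the endpoint, token, and helper names are placeholders for illustration, not the actual Cluster API bootstrap contract:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// buildUserData renders a minimal cloud-init document that joins the new
// instance to an existing cluster, then base64-encodes it the way EC2
// expects user data to be supplied on the instance request.
func buildUserData(apiEndpoint, token, caHash string) string {
	cloudConfig := fmt.Sprintf(`#cloud-config
runcmd:
  - kubeadm join %s --token %s --discovery-token-ca-cert-hash %s
`, apiEndpoint, token, caHash)
	return base64.StdEncoding.EncodeToString([]byte(cloudConfig))
}

func main() {
	// Placeholder values; a real bootstrap controller would generate these
	// per machine and set the result as the instance's user data field.
	userData := buildUserData("10.0.0.10:6443", "abcdef.0123456789abcdef", "sha256:<hash>")
	fmt.Println(userData)
}
```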
B
However, if you're talking more bare metal and you don't have the ability to impact any of the pre-provisioning config, then you would have to actually have some type of remote session to that host and do that config more interactively. So I think it's still tied to the machine provisioning in a sense, but I think we could potentially get a nice, relatively clean separation of responsibilities in the implementation.
A
Yeah, so what I wanted to do is take a step back here, and let's talk about what the behavior is that we expect, and then from there the boundary lines of what's in scope and out of scope will make even more sense. So if we started from zero and we have nothing, what is the lifecycle of a node? So, like, step...
D
That's a good question. I think, for taints here: to me taints are a node-level thing, and the ability to inject configuration is still at the instance level. So I think there are probably two separate things. You inject this instance configuration, and there are going to be things on that instance like the IAM policies associated with the instance and probably other cloud-provider-specific stuff on that instance; and then there's the node-related policy, like the taints associated with this node, possibly.
A
Can you help me? I'm struggling with language. So if you can help me write in the doc those two subsections of optional configuration, things around policy, because I do want to try to enumerate what we consider to be the state space here, and then that will determine what's in scope and out of scope for componentry.
D
As I see it, right... you want me to just type it into the doc some more? Yes, yes, yes, and then we'll refine it. So I would say that there are two aspects to create: there's instance creation and configuration, and then there's conversion to a node, which includes injecting policies.
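A minimal sketch of that split, purely illustrative; the phase names are made up for this discussion and are not an existing Cluster API type:

```go
package main

import "fmt"

// Phase labels one stage of the node lifecycle as discussed here: first the
// instance is created and configured at the infrastructure level, then the
// instance is converted into a Kubernetes node (bootstrap, policies, taints).
type Phase string

const (
	InstanceCreation      Phase = "instance-creation"      // provision the server/VM
	InstanceConfiguration Phase = "instance-configuration" // IAM policies, provider-specific settings
	NodeConversion        Phase = "node-conversion"        // join the cluster, apply node policies/taints
)

func main() {
	for _, p := range []Phase{InstanceCreation, InstanceConfiguration, NodeConversion} {
		fmt.Println(p)
	}
}
```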
A
Do we want to think, just thinking outside the box, just to entertain the idea, about anything where we have all-or-nothing type of behavior? Like, say you have a machine set or a machine deployment: if I'm going to view things in a gang model, that means that I either have all of them together or I have none of them. Does that matter to anybody? Are we thinking about that?
H
But wouldn't that basically go into, like, the machine set or machine deployment lifecycle, like a StatefulSet-style lifecycle? I'm assuming that's what you're referring to, right? If you define one of those and say I need, like, five replicas, that means five nodes, and whether I can get all or none... no, I think...
A
You're both... you both have part of what I'm thinking of. So there would be something, part of the conversion of a node, which would say "all or nothing", right? I don't know if that matters to us, right? So I tried to play a little bit of devil's advocate on purpose to get more folks to talk. Does it make sense? Is there a user...?
B
I think this gets complicated in that it might be something that's provider-dependent. I think in the general case we should discourage update, because it greatly simplifies the lifecycle management. But if we are talking about, say, the bare metal use case, where there really isn't much around automated provisioning, then update may be the only way to really accomplish upgrade in those situations.
E
The other use case that I know of is: suppose you have a heavy database that takes a long time to restart, and it's a singleton. You don't want it to go down, and you decided to pin it in a pod for reasons you could imagine; you could imagine that we could do a kubelet update or we could do an SSH update without necessarily disrupting your database, for example.
I would also agree with that; I've seen something very similar. Basically, there are existing things running on the nodes, and the ask is that some small knob has to be reconfigured across all the machines, and they can't really afford to restart all of them, so they'd like a way to do an in-place change which can, in a sense, change some small knobs here and there.
D
So that's the way you can do it with existing APIs, and I think the question at hand is: should there be APIs that are part of the Cluster API that facilitate that sort of operation, like "upgrade the kubelet across all nodes", for example? One way to sort of sidestep it is to say no; but of course you can always still do that, right, because you can almost do what you just said: create a privileged container and do whatever you want.
A
I mean, I want to be explicit: there are multiple ways to slice this one, and this is the hard conversation of scope, right? I know that people will want an update for legacy types of workloads, but there are multiple ways around this, and whether or not we should support this in this API, whether we should be opinionated, or whether we should try to... there's always the trade-off, right?
A
You can either be opinionated and say it's possibly out of scope, or you try to entertain some hard user stories; but once you entertain those hard user stories you get into degenerative use cases, right? Like where you start to get into the space where you have to maintain... what happens if you fail an update, right? Do you roll that back? There are a lot of issues that come along with updates.
K
Then, you know, that's something that is going to need updating at some point, but we might say: look, if you want to update, say, the container runtime, you can't do it through the API, and that is because once you support updating the container runtime, you may end up having to support changing the storage driver of the container runtime, and that becomes very, very difficult and interacts with a lot of other pieces.
A
Well, there are a bunch of degenerative user stories there; you've kind of proven my point with my statement. So the question is: do we want to be immutable from the API perspective and sort of have the same mantra that we have inside of Kubernetes? Like, you can't take a container that's live and patch it in place, that's crazy, right? Or do you want to be able to say these containers are deployed and you do the rolling update, like you do the rollout of your deployment, I mean.
A
It depends. If you don't use local storage, if you use network-attached storage, it should be that you design your applications to be tolerant to certain SLO guarantees, with disruption budgets for your pods, and if you have a controller or an operator for a stateful, legacy-style workload running on Kubernetes, then your controller should be able to manage the timing as you start to destroy nodes.
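For reference, a minimal sketch of the kind of pod disruption budget being alluded to, using the upstream policy API types; the workload name and the minAvailable value are illustrative only:

```go
package main

import (
	"fmt"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Keep at least one replica of the hypothetical "heavy-db" workload
	// available while nodes are drained and destroyed underneath it.
	minAvailable := intstr.FromInt(1)
	pdb := policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "heavy-db-pdb"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "heavy-db"}},
		},
	}
	fmt.Printf("%+v\n", pdb.Spec)
}
```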
E
It's a very slippery slope. I think I agree with the gut sentiment here: we probably don't want to open the door. The mechanism I was imagining, if we did open the door, is that there would be a webhook on the machine which would evaluate the change and block any change that was not supported as a live or hot change, and so then the machine set controller would still not have to actually understand what the individual provider does or does not support.
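A toy sketch of that validation idea, independent of any real admission webhook wiring; the field names and allow-list are invented for illustration. The provider publishes which machine fields it can change in place, and anything else is rejected, so the controller never has to know the details:

```go
package main

import (
	"errors"
	"fmt"
)

// validateMachineUpdate rejects an update unless every changed field is one
// the provider has declared hot-swappable. In a real system this check would
// run in a validating webhook on the Machine object.
func validateMachineUpdate(oldSpec, newSpec map[string]string, hotSwappable map[string]bool) error {
	for field, newVal := range newSpec {
		if oldVal, ok := oldSpec[field]; ok && oldVal != newVal && !hotSwappable[field] {
			return errors.New("field " + field + " cannot be changed on a live machine")
		}
	}
	return nil
}

func main() {
	oldSpec := map[string]string{"instanceType": "m5.large", "labels": "pool=a"}
	newSpec := map[string]string{"instanceType": "m5.xlarge", "labels": "pool=a"}
	hotSwappable := map[string]bool{"labels": true} // provider says only labels can change in place

	if err := validateMachineUpdate(oldSpec, newSpec, hotSwappable); err != nil {
		fmt.Println("blocked:", err) // the instanceType change would require destroy/create
	}
}
```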
D
I would worry about drift with something like that, right? I mean, you start creating little individual pieces, and now your nodes start to look different because there's some failure mode we didn't consider, and so things drift. I mean, I guess you can reconcile it, I don't know. Having delivered... one of the projects I did was delivering appliances on customer sites and doing in-place upgrades on those: you just miss something and all of a sudden you literally end up with problems.
A
For this particular group to tackle this one, I have strong opinions, but I also want to entertain the topic just so that, you know, my mental model of the hellscape that I've lived through can sometimes be distilled down and shared amongst the group, right? Because mutability can be so dangerous: installing software in an environment that you don't control, or having someone install something that is not part of your stack, can get you into this weird space where you no longer have the ability to make changes anymore.
E
I guess one way to ask this is: is there anyone that wants to work on this in the short-term horizon? It feels like we think it's something that eventually we might have to tackle, but we know that it would be a lot of work and we don't want to tackle it in this effort; we believe there to be a viable path without tackling it.
H
I know, but when we say node, what do we mean? Essentially, I mean, it's like when you do kubectl get nodes, you'll get masters and workers both, depending on how you set it up, I guess. But I just want to make sure, because if this touches the master nodes at all, then there are definitely some other things that we have to consider.
H
For example, things like the configuration that you put in for the cloud provider, say the vSphere cloud provider configuration, right? Things like passwords could change; you might have to do some things in order to keep the cluster running, like password updates or cert updates, things like that. So you can't really ignore them, and you can't really say, oh no, you're just going to do a replace just because I have a new cert to inject into it, right?
H
Not to disagree; I'm actually not disagreeing with that. But the only thing, and the reason I asked that clarifying question, is because one concrete example that came to my mind is: in order for, let's say, this target Kubernetes cluster to work, given that it's backed by some sort of cloud provider, whichever it is, it doesn't matter, if you want to roll out an update to the configuration of that cloud provider itself, for example, now that's something that, to me, you probably want to put in a bucket of saying that...
H
...oh, this is something that we should be able to change on the live system, because it's just configuration. So I guess there is a distinction, building on the previous comment that Daniel made, right? I mean, there are probably two different separations of things here that we could say, as a matter of policy.
H
There are certain things that could change, and we should be able to do those changes on a live system; and then the other case is things that are too disruptive, or that we should call out of scope and should not support, where the only way you can do it is to basically bring up a brand-new node with whatever change you have and then replace the old one, I mean, so...
A
People can help me with taxonomy here, but I view those as distinctly different. I do realize that "update" sounds similar, but I would call that something made up, like "reconfiguring", right? Sure, if you specify a new configuration, it's a declarative configuration: I've updated something in my spec and then the controller rectifies the status. I do absolutely think that we should be able to do a reconcile, but at the lowest-level primitive layer that's not a mutable upgrade; that's a destroy and create.
D
I think it's in that spec: if something's in the spec, then we want to reconcile it; if something's not in the spec, it's out of scope. But of course that just tells us to fight about what's in the spec. Like, kernel parameters, or some certificates that need to be on the node: are those things part of that spec, or are they external to this altogether?
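Purely to make the question concrete, a hypothetical spec shape; none of these field names come from the actual Cluster API types. The argument above is that anything declared here gets reconciled, in the lowest-level case by destroying and recreating the machine, and anything left out is out of scope:

```go
package main

import "fmt"

// MachineSpec is a made-up illustration of the "what belongs in the spec"
// debate. Changing any declared field triggers reconciliation, which for an
// immutable machine means replace rather than in-place mutation.
type MachineSpec struct {
	InstanceType string   // clearly in spec: drives the infrastructure request
	Image        string   // clearly in spec: OS / machine image
	KernelParams []string // debated: node-level tuning, in spec or external?
	ExtraCACerts []string // debated: certs for cloud-provider endpoints, etc.
}

func main() {
	spec := MachineSpec{InstanceType: "m5.large", Image: "ubuntu-18.04"}
	fmt.Printf("%+v\n", spec)
}
```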
A
It's a good question. Certificates get a little bit weird. We've actually baked the behavior of certificate management into SIG Cluster Lifecycle and the distribution lifecycle of Kubernetes, so we've done that indirectly: you're going to be forced to upgrade within the supported time window. You could always rotate your own certificates, but in the current mantra that would mean you would have to destroy and recreate. So certificate management is a good question.
K
I have a general question: if we decide, for instance, that reconfiguration or any kind of update is going to be a destroy followed by a create, why have any sort of update or upgrade API at all?
K
Why not just say there is a create and there is a destroy, and that's the method you should use, instead of us exposing some kind of upgrade or update method and then doing the create and destroy in there and being opinionated about that? And then later on, if somebody is interested in working on this in-place upgrade, which sounds like an optimization, albeit an important one for some production use cases, then, you know...
D
...controlling upgrade. And we're talking about the node here, and what you're referring to is that the thing we want to update isn't the node per se, it's the control plane, but that may imply updates to the node, I guess. If there's a kernel vulnerability you want to patch on your nodes, with the kernel, right, then yeah, you're talking about just the node, not the control plane, and you keep the same Kubernetes version, so I guess it's both.
A
I still think, for every example we've had other than too much data gravity, you should be able to do a create and destroy. The one exception that I can think of is certificate management, but I believe that should be a part of kubeadm: cert rotation should be built-in machinery of kubeadm. I don't think it should be on us to do that particular behavior, like a controller inside of kubeadm being a cert rotation manager.
E
I think it's a great thing for us to call out here: we believe this to be out of scope; we believe this to be something that should be done in a different project. And if there is massive pushback on it being done in a different project, then we can reevaluate, but I think it's a success if we can say that.
H
So, when I said certificates: yes, there is one section of certificates that is already dealt with, which are, for example, created by kubeadm and which are completely within the Kubernetes operational requirements. But then there is another section of certificate management that I can think about: for example, when the kubelet or any part of the system is going to talk to, let's say, a cloud provider, which could be an AWS endpoint or could be a vSphere endpoint for that matter, right?
H
Those endpoints may or may not have self-signed certs, so there may be a requirement where you have to also provide the cert that will allow the system to talk to these remote endpoints in a good way, right? And those certs don't necessarily participate in the same way as the certs that kubeadm generates, and the rotation of those is slightly different. So I just want to make sure we recognize that there is a little bit more than just the kubeadm certs that may be in play in a given system. Yes.
H
But they would be part of the node lifecycle, the reason being: let's say, as a user, my intent is to deploy a Kubernetes cluster, right, and I have my cloud provider that I want to configure it against. Now the ultimate goal is that when I use the Cluster API, the VMs come up, or the physical nodes get configured, with the ability to talk to my provider, so that the desired behavior can be achieved. To me, that should come under this; I mean, you can't really say no to that.
H
I mean, yeah, it's configuration. I like the fact that you broke it down into the instance creation and configuration, and the conversion to node; those are two buckets, and there's a part of configuration that goes into both of them. So one could treat, for example, this particular certificate that I mentioned as part of the first bucket, the instance creation and configuration in general, and it could also be part of that lifecycle to say, you know, inject these additional certs.
A
The only other lifecycle thing I can think of would be that a lot of VM-based systems have, like, the difference between hibernate and destroy, and I don't necessarily know if we can support that distinction, because it's not ubiquitous: destroy is ubiquitous, but hibernate is not.
E
There's the time to boot, which I think is more of an implementation detail, I would argue, and doesn't need to be exposed. But I think what might need consideration, and I got into a lot of trouble, or debates, on AWS over this, is what happens when you stop an instance and it comes back, when you eventually unstop it.
E
Should it be the same node or should it be a different node? The people that were stopping instances seemed to believe it should be the same node, and I think the current behavior is that we actually delete the Node entirely when the instance is stopped, which was making them unhappy. I think we basically did not reach a clear conclusion on this front, so I don't know.
A
I like "simpler is more gooder"; I've always been a fan of that, because complexity breeds complexity. So I think this is a use case worth thinking about, but I don't think this is something that we need for v1alpha2; that's my opinion. If somebody feels strongly otherwise, please speak up.
I
I'd probably also lean that way. I think hibernation makes a lot of sense at the cluster level: at some point, users who have a hundred development clusters might be willing to shut those clusters down and say, when they get back in the morning, I want the same state. But at the node lifecycle level, I'm not sure whether it's a good idea to put machines to sleep and then start those machines again; I guess maybe it doesn't belong here.
A
One thing I've always talked about with this group, and I've talked about with other folks, is the idea of blast radius on destroy: shouldn't there be parameters specified to controllers that allow you to have things like destruction budgets on destroy? That way, even though you should have these on your pods, it reduces the blast radius for people who haven't necessarily updated all their pods to have things like disruption budgets. Or you could pre-bake the amount, just like controllers inside of Kubernetes bake in defaults.
A
I mean, in Kubernetes we followed the mantra, in the very beginning of the project, that disruption budgets didn't exist: we had hard-coded behavior, you could always override that hard-coded default behavior, and then we developed disruption budgets when it was more mature.
K
Is it fair to say that anything we want to add, any kind of method that we want to expose, should be very tightly coupled to the lifecycle of the machine? So something like a drain or a cordon, you can do that but keep the machine around; so we shouldn't expose those operations through our API. We should only expose something like a destroy, or anything else that...
D
...that basically up-levels the existing operations in a sort of opinionated way, although it's a pretty non-controversial opinion that you should drain and cordon your nodes. So the Cluster API has no lifecycle pieces; it's just dealing with machines as a whole entity. It kind of goes back to the immutability, right?
K
Yeah, exactly. So Cluster API should only expose destroy, because that is going to affect... okay, it's going to have some intermediate effect on the node, but then finally it will affect the machine lifecycle. We don't want to expose something that will not have any effect on the machine lifecycle but just repeats some functionality that you can also get through the Node API in Kubernetes. So perhaps, yeah.
K
Of course, there's this interaction, right, like with drain and cordon, but perhaps we can have a strategy that the user can specify: by default the strategy is drain and cordon, but then you can override it, I don't know. But the operation is always going to be destroy, and it's always going to affect the machine lifecycle; it won't stop at some node configuration, then.
A
So I think I get what you're trying to say: that you want to potentially have, like, a "force", right, and that would be passed through; and there's a very similar analogous thing that exists today where you have a "graceful", which basically drains and cordons, has a default timeout specified for your workloads, and then after that timeout it kills the machine. I can't think of other common strategies for destruction besides those two primary ones, which you already kind of get inside of Kubernetes for nodes.
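A rough sketch of that "graceful with a timeout, then kill" flow; cordon, drain, and deleteInstance here are placeholder stubs for illustration, not real Cluster API or client-go calls:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Placeholder operations; a real controller would call the Kubernetes API to
// cordon/drain and the infrastructure provider to delete the instance.
func cordon(node string) error { fmt.Println("cordoned", node); return nil }
func drain(ctx context.Context, node string) error {
	select {
	case <-time.After(2 * time.Second): // pretend eviction finished
		fmt.Println("drained", node)
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}
func deleteInstance(node string) error { fmt.Println("deleted instance for", node); return nil }

// destroyMachine is the opinionated "graceful" strategy: cordon, drain with a
// deadline, and delete the instance whether or not the drain finished in time.
func destroyMachine(node string, drainTimeout time.Duration) error {
	if err := cordon(node); err != nil {
		return err
	}
	ctx, cancel := context.WithTimeout(context.Background(), drainTimeout)
	defer cancel()
	if err := drain(ctx, node); err != nil {
		fmt.Println("drain did not finish, destroying anyway:", err)
	}
	return deleteInstance(node)
}

func main() {
	// The five-minute default mentioned later in the discussion would live here.
	_ = destroyMachine("node-1", 5*time.Minute)
}
```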
D
Should this be something that the user can control? Like, should there be a top-level function that does this, where they set some state that says this node should not be schedulable or should be drained, or do we just leave that to the existing facilities that are already there in Kubernetes? Or would we just serve as a proxy towards that, essentially?
B
If you talk about multi-machine management, like a machine set managing the scale of a group of nodes, and you scale that down, you don't really have some external process to go and cordon and drain the nodes that you're scaling down. In that case there needs to be some level of automation to automatically handle that.
I
I think at some point we will have to give the user the ability to also configure the timeouts, at least for the drain, right? Because for different cloud providers and for different kinds of workloads the user might want different kinds of timeouts. I think the default is five minutes generally, but for a certain set of providers the user might want to keep ten minutes or fifteen minutes or more, so some kind of configuration for that, I guess, should be added at the top level, and then probably the drain itself becomes...
A
So my thought process is as follows. I think we have a decent rubric to start. What I'd like to do offline is to start to bin and bucket componentry from behavior into in scope and out of scope and have that as the discussion, and then also re-walk through this behavior model one more time next week, and then have the bins or buckets for the things that we would be working on using this behavior model.