From YouTube: Kubernetes SIG Cluster Lifecycle 20180926 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.r801lpnsj9zd
Highlights:
- Representing target cluster status
- Update on CRD migration
- Addon management
- Proposal of Machine States and Phases
B
All right, thanks Robin. So I just wanted to bring this topic up as part of the session here today. For instance, I realized that the cluster object, at a high level, doesn't have any good way of reporting back the status of the cluster (and please correct me if I'm wrong, that's quite possible). So essentially my question was, first of all, I realized the term "ready" for the target cluster might have very different potential meanings. The way I was thinking of it, at least, was saying that, you know, the APIs are ready to go. And then, depending on this, there could be a second sub-status somewhere which could report on the desired, let's say, number of nodes: are those added or not, are they active or not. That's a bit different, but I just wanted to throw it out and see what everybody's opinions are. So one question is what the definition of "ready" is, and the second is, how and where do we actually show this status, the cluster status?

We have an object as it exists today, but I didn't see any specific field that would be used to specify this status. There is, however, a provider status, which is a provider-specific status reserved at the cluster level. So do we use that, or do we want to maybe add some additional provider-agnostic status fields or objects to represent the state?
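To make the two options concrete, here is a minimal sketch (v1alpha1-style Go; only ProviderStatus exists today, and the ControlPlaneReady and LastReadyCheck names are hypothetical) of a provider-agnostic field sitting next to the provider-specific one:

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// ClusterStatus sketch: ProviderStatus is the existing opaque,
// provider-specific blob; the other two fields are hypothetical
// provider-agnostic additions of the kind discussed here.
type ClusterStatus struct {
	// Existing: opaque status reserved for the provider.
	ProviderStatus *runtime.RawExtension `json:"providerStatus,omitempty"`

	// Hypothetical: is the target cluster's API server responding?
	ControlPlaneReady bool `json:"controlPlaneReady,omitempty"`

	// Hypothetical: when readiness was last observed.
	LastReadyCheck *metav1.Time `json:"lastReadyCheck,omitempty"`
}
```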
A
Yeah, I think you raise a really good point about cluster status not saying whether the cluster is ready. I was just poking through the clusterctl code, trying to see how it decides to say that it's finished successfully, and I think right now it's just sort of fire-and-forget: for the most part, you know, clusterctl applies the API objects to the target cluster.
A
So it would be really nice to have a way in the API to signal that, yeah, the control plane is ready and can be interacted with, and the nodes are ready. The other thing that we've checked for in the past is add-ons, and whether the other things that you're adding onto the cluster that you're managing, as the cluster administrator, are also ready and ready to be used.
A
You know, if you are trying to create a cluster and deploy kube-dns to it, it's nice to actually know that the kube-dns pods are running and reporting healthy before you say that the cluster is fully functional. And representing some of those intermediate states is actually probably pretty useful, because then you can say: oh, the control plane's up but the nodes haven't registered, or the nodes are registered but the add-ons aren't all reporting healthy. So I agree, that's something we should have.
B
Completely agree, and the add-ons are a very good point actually. Just not to sidetrack, but I know there was an earlier discussion around something like an add-on manager, and I'm not really sure what the status of that is. Maybe later on, if we have time and somebody knows about it, an update would be great.
B
At a very high level, my initial thought around how we could potentially do this was, one, a control plane API status. That could be representative of whether the Kubernetes API of the target cluster is responding or not. I think there are three different things you could mention. The second is whether the nodes that are anticipated are all added and reporting. This would be some sort of enum-like thing which would say, you know, nodes added but not ready, or all added and ready, and some combination of that. And the third could be the add-ons, which are actually also important: of the add-ons requested, are they all reporting the desired state or not?

It kind of reminds me, to make an analogy here, of how Helm behaves with Helm charts when you deploy one. It can basically say whether a particular thing is up, it has a sort of dependency management, and it can report back: is this service that I want to apply really up or not, something to that effect. We could apply similar logic here and, to begin with, bring up these three granular statuses. We can probably also think about breaking them down further if we want. For example, for the add-on manager, let's say you have three add-ons that you want to add. One option could be that we break down that add-on status into: for the three things you requested, this is what the status of each is, if needed. That would be something useful as well.
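A sketch of the three granular statuses described here (control plane, nodes, add-ons), with the per-add-on breakdown; all type and field names are hypothetical:

```go
package v1alpha1

// NodesStatus captures "anticipated vs. registered vs. ready".
type NodesStatus struct {
	Desired    int32 `json:"desired"`
	Registered int32 `json:"registered"`
	Ready      int32 `json:"ready"`
}

// AddonStatus reports, per requested add-on, whether it has
// reached its desired state (the Helm-style breakdown).
type AddonStatus struct {
	Name  string `json:"name"`
	Ready bool   `json:"ready"`
}

// TargetClusterStatus rolls the three granular statuses together.
type TargetClusterStatus struct {
	ControlPlaneReady bool          `json:"controlPlaneReady"`
	Nodes             NodesStatus   `json:"nodes"`
	Addons            []AddonStatus `json:"addons,omitempty"`
}
```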
A
Right, yeah. For the machines, if you say all the machines are registered and ready, there is another API you can go look at to figure out which ones you expect to be there and which ones have nodes, and so forth. But right now, for add-ons, we don't have an API for those, and I'm a little leery of baking that into the cluster API, as opposed to making it its own sort of first-class API.
A
To start with, I think we'd have to think a little bit about how we want to do add-ons. The first thing that comes to mind here is to use conditions, because if you think about the way that nodes start up, they start up and they say, you know, networking not ready; they set a condition that maybe says disk pressure, et cetera. That seems like a sort of natural mapping.
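The condition-list shape mentioned here, modeled on node conditions, might look like the following sketch (hypothetical types, not the actual Cluster API ones):

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ClusterConditionType names one aspect of cluster readiness.
type ClusterConditionType string

const (
	ConditionControlPlaneReady ClusterConditionType = "ControlPlaneReady"
	ConditionNodesReady        ClusterConditionType = "NodesReady"
	ConditionAddonsHealthy     ClusterConditionType = "AddonsHealthy"
)

// ClusterCondition mirrors the shape of node conditions
// (type, True/False/Unknown status, reason, timestamps).
type ClusterCondition struct {
	Type               ClusterConditionType   `json:"type"`
	Status             corev1.ConditionStatus `json:"status"`
	Reason             string                 `json:"reason,omitempty"`
	Message            string                 `json:"message,omitempty"`
	LastTransitionTime metav1.Time            `json:"lastTransitionTime,omitempty"`
}
```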
A
I think Eric might have mentioned just putting in top-level status fields: like you said, the status of "is the control plane ready" as a top-level field, instead of making it one of a list of conditions that are all nested in a condition structure. So if we can identify the key things that we want to use to roll up into "is the cluster ready", as a cluster-healthy status, I think just making those top-level fields might be useful.
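For contrast, the top-level alternative rolls the same signals up directly; a minimal sketch with hypothetical names:

```go
package v1alpha1

// TopLevelStatus makes each key signal a plain field rather than
// an entry in a nested condition list.
type TopLevelStatus struct {
	ControlPlaneReady bool `json:"controlPlaneReady"`
	NodesReady        bool `json:"nodesReady"`
	AddonsHealthy     bool `json:"addonsHealthy"`
}

// Ready rolls the key fields up into "is the cluster ready".
func (s TopLevelStatus) Ready() bool {
	return s.ControlPlaneReady && s.NodesReady && s.AddonsHealthy
}
```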
A
I was going to ask, for the machines part: what do you think should be keeping that field up to date? When you create a cluster it makes sense, because you say "I want to create a cluster with three nodes", you wait for those three nodes to show up, and then you can say all the nodes are ready. But over the lifetime of the cluster you're going to scale the number of nodes up and down, the autoscaler is going to scale that number up and down, and you're going to do upgrades of those nodes, so the actual number of nodes reporting in the cluster is going to vary over time, and it's not always going to match the target number. So would you anticipate one or more of the machine, machine set, or machine deployment controllers constantly keeping that field in the cluster object up to date with the status of the fleet of machines assigned to that cluster? Or would the cluster controller be watching machines and deciding when it thought the machines were all healthy?
B
With the autoscaling, I guess, based on what we saw in the last meeting, if the autoscaling is done by the cluster autoscaler, then it's potentially going to talk to the cluster API and tell the machine set itself to scale up, you know, increase the replica count. If that happens, then the machine objects are basically right in the center of it, and it becomes easier, because that's what is actually driving the scaling up or down. Yeah, I would imagine that the machine controller, for example, would be responsible for keeping the status of the current set of machines that it is servicing up to date in the object. Okay.
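A sketch of that division of labor: on each reconcile, a machine controller recomputes a fleet summary for the machines it services and would write it back to the cluster object (types are hypothetical stand-ins, and the patch step is elided):

```go
package controller

// Machine is a stand-in for the machine object's relevant bits.
type Machine struct {
	ClusterName string
	NodeReady   bool
}

// FleetSummary is the aggregate the cluster status would carry.
type FleetSummary struct {
	Total, Ready int
}

// Summarize recomputes the aggregate from the machines currently
// being serviced; the controller would run this every reconcile
// and patch the result into the cluster object's status.
func Summarize(machines []Machine) FleetSummary {
	var s FleetSummary
	for _, m := range machines {
		s.Total++
		if m.NodeReady {
			s.Ready++
		}
	}
	return s
}
```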
A
Okay, yeah. I'm just trying to think about who's responsible for understanding that, because the cluster autoscaler can be pointed at machine deployments in general, and you can have machines underneath controllers at multiple levels, right: you can create a machine directly, or create a machine deployment. So it's not clear whose responsibility it is to watch all the machines and figure out if they're all in the correct state for the cluster.
C
So we're doing a periodic reconciliation of the machine objects. Considering a normal cluster API deployment, we're doing that through the machine controller and the machine actuator right now, so it almost seems, at least for an initial implementation, that that's where we would do some type of validation. Obviously we'd have to take into account the health of the node object that's associated with that machine, if we can, as well. So it seems like it would be actuator-dependent, based on the current architecture.

I think it gets muddied a little bit if we start talking about providing an extension mechanism directly to either machine deployments or machine sets, for example with ASG-backed instances or things like that, but it almost seems like the machine actuator is the way to go, at least for the current architecture.
A
I think that makes sense for detecting whether each individual machine is healthy, but that's already represented in the machine status. The question is how you roll it up and aggregate it for the overall cluster. Say I want to create a cluster with 15 nodes: how do I say that I'm done creating that cluster, because all 15 nodes exist? And is it different if I create 15 nodes by specifying a single machine deployment of size 15, versus specifying 15 individual machines using separate machine objects? Who's responsible for saying your target for the entire cluster was 15? Maybe you did that with three different machine deployments; each of those individual controllers has its own slice of view on the world and is doing its own thing. But what aggregates them all together and says your set of machines, across multiple deployments or across multiple machine sets, is all healthy and has finished being created, you know, in the initial case?
C
And this seems like maybe the abstraction that we currently have doesn't quite fit that concept, because the cluster object doesn't actually create or manage the entire cluster per se. It really only manages common cluster components right now, and potentially, I know we've discussed extending it to also manage the control plane, but as far as I know those discussions haven't made it very far to date. So it almost seems like the naming maybe needs to be improved before we exit alpha, to improve that situation.
A
That's a good point. I guess right now we have the potential (though I don't think we're actually doing it in clusterctl) that when you use a CLI to create the cluster in the first place, you do know what your target size is, and the CLI could go look at the cluster and ask: is the cluster reporting the control plane as healthy, and do I have the right number of machines in the cluster? It could actually do that validation. So if we just had a status field for "is the control plane healthy" as part of the cluster object, then the client side could aggregate the two bits together. I guess what I'm trying to figure out is: for "do we have all the machines we expect, and are they all healthy", how would you represent that in the cluster, and who would be responsible for keeping it current? It's one thing to set it initially when you create the cluster, but how do we keep it up to date over time? So I think the control plane one makes a lot of sense; I'm just trying to poke at the other pieces of this and see how they fit.
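A sketch of that client-side aggregation: the CLI already knows the target size it asked for, so it only needs the control-plane flag from the cluster object plus a count of ready machines (all names here are hypothetical):

```go
package main

import "fmt"

// clusterUp combines the two bits the CLI can observe: the
// control-plane health flag from the cluster object, and the
// ready-machine count versus the target it was invoked with.
func clusterUp(controlPlaneHealthy bool, readyMachines, targetMachines int) bool {
	return controlPlaneHealthy && readyMachines >= targetMachines
}

func main() {
	fmt.Println(clusterUp(true, 15, 15)) // true: all 15 expected nodes ready
	fmt.Println(clusterUp(true, 12, 15)) // false: still waiting on 3 machines
}
```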
D
So another option that we could do: this topic sounds very similar to the discussion about Docker and Kubernetes health checks, where they don't hard-code any checks in the runtime or the code itself; they let the user define them. I know that we're focused on bringing up the cluster, just the basic cluster and the control plane.
A
Yeah, so to Locke's point, there are a couple of ways you could implement that. You could have the cluster controller itself quote-unquote hard-code those health checks, because those health checks would be specific to how that cluster is being provisioned, and the notion of a healthy control plane might be different for the cluster actuator on Google versus Amazon. It's probably consistent for all the clusters created by that cluster controller on Google, though it might vary by version and so forth.
D
Well, what I was proposing is that we could have hooks. You know, we don't know what these clusters are going to look like in the future (I hesitate to say more, because I have other ideas, but that's for the future), but a lot of it is allowing users to also have some kind of rudimentary check.
A
And that might be a good way to handle the add-on thing I was mentioning earlier, for now, until we have a good add-on API: the baseline check is, you know, the Kubernetes API server is up, its pod is reporting healthy, etcd has quorum, and the health checks for those things are passing.
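As a minimal sketch of such a baseline check (the endpoint is a placeholder, and a real check would use the cluster's kubeconfig and CA rather than a bare HTTPS GET), treating a 200 from the API server's health endpoint as "control plane up":

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// controlPlaneUp reports whether the API server's /healthz
// endpoint answers with HTTP 200 within a short timeout.
func controlPlaneUp(endpoint string) bool {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println(controlPlaneUp("https://203.0.113.10:6443"))
}
```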
A
Okay, it seems like the discussion is petering out a little bit, but I think there was general agreement that we would like, in the cluster object, a way to represent the health of the control plane, and that there is the potential for breaking apart what we currently call the cluster object. Maybe we keep something that's called a cluster object, but it's an aggregation of the common control plane pieces and the machine pieces.
A
Makes sense. So I think what makes sense in the short term is to at least add some sort of field in the cluster status for "is the control plane healthy", and at a minimum we can then update clusterctl to take that and wait for all the nodes to be reporting ready before it says the cluster is up and running and ready to use. That's pretty consistent with what we do in kube-up, and I think with what things like kops do as well.
A
Okay, go ahead, did you have another question? Well, no, I came late, so I hesitate to belabor this conversation. I'll just mention, anecdotally, that in trying to solve this problem for us, I've tried to stay away from defining some abstract "ready", because it's difficult to know what that means, and instead I've tried to substitute concrete things I can check for. So, for instance, say I want to know that the control plane is ready; what that really means is that I want to know there is an API endpoint for the control plane.
A
Actually, that's not what "ready" means; I don't ever check whether it's ready, because I don't even know what that means. I check to see if there's an API endpoint, because that's what I need to know to proceed. So I just wanted to mention that I've been trying to remove abstract fields that are hard to define and replace them with concrete fields that mean a precise thing. That's what I've been trying to do.
A
On the other hand, what I have found in my experience is that people writing interfaces on top of this don't like having this conceptual indirection to find out what they want to know, and they really do want a ready status field. So what we've done to provide that to them is build a different layer on top of the Kubernetes API, which I think is just not a great solution, but yeah.
A
So we have that concrete field today, and I think what clusterctl does is actually fill it in. We want to flip that around, so it waits for the field to be there before proceeding, and one way to do that would be to have the cluster controller that fills it in not put anything there until it believes the control plane is ready.
C
I worry about that a little bit, just because in the current implementation that we're looking at for AWS, we're looking at backing that API endpoint with a load balancer. So we can actually set that API endpoint before the control plane is up and ready, and that also allows us, when we're provisioning the actual control plane nodes, to use the cluster object to get that API endpoint and drive the config for the control plane nodes.
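For reference, a sketch of the shape of the endpoint field under discussion, a host/port pair recorded in the cluster status (which is why a load balancer can populate it before the control plane is serving):

```go
package v1alpha1

// APIEndpoint is where the target cluster's API server can be
// reached; with a load balancer in front, this can exist before
// any control plane node is actually healthy.
type APIEndpoint struct {
	Host string `json:"host"`
	Port int    `json:"port"`
}
```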
A
If you wanted to know whether the control plane was usable, what you would do is take that endpoint and do a list, for instance. If that worked, you would say it's listable. That doesn't even really mean it's usable; it just means the list worked once. So I guess I'm arguing that, to the extent possible, we should make the fields as concrete as possible and not abstract, because it's difficult to know what the abstraction means to different people or providers.
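A sketch of that concrete "listable" check using client-go (2018-era, pre-context signatures; the kubeconfig path is a placeholder): a single successful list is all the field would claim.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// listable reports whether one list against the endpoint worked,
// which is exactly (and only) what "listable" would mean.
func listable(kubeconfig string) bool {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false
	}
	_, err = cs.CoreV1().Namespaces().List(metav1.ListOptions{})
	return err == nil
}

func main() {
	fmt.Println(listable("/path/to/kubeconfig"))
}
```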
B
Would having that API status really just mean, literally, that the endpoint is in the endpoints list? Is that the concrete meaning, that it refers to one specific field, rather than, essentially, an overall API status? Is that something that might work?
A
Right, it's not the overall health of the cluster. I think it's hard to guarantee the state of a distributed system, and so we decouple it and say that we're actually not going to guarantee this about every node; we're going to say that we know this about this particular observation.
A
Okay, let's move on to the next thing on the agenda, which is the CRD migration status. Phil is actually next to me; he's been in Seattle this week, and we are working hard on it. Phil is getting the code compiling, the tests passing, and the actual bootstrap part working in the initial cut of the PR.
A
It seems like that's kind of slick and working reasonably well, so we've almost got deployments of clusters working on GCP. Once we have that working and all the tests passing, we'll go back and try to clean the code up a bit and get it into a reviewable state, so people can take a look and see if they have any major concerns about the way the code has been restructured, because a lot of it has changed pretty significantly during the migration slash refactor.
A
Somebody put a question in the doc about the time frame. Phil is here through the end of the day tomorrow, and we're really hoping that before he flies back down to California we're at a good point where we can share progress and start getting eyes on PRs, instead of us just hacking on code next to each other.
B
Well, there was something in the works to really be an add-on manager, but I don't know much detail about it. I don't think it was really publicly released; I think it was still kind of a work in progress, and I don't think there are any updates around that, but yeah.
A
So I don't see Justin Santa Barbara on the call, but Justin and another engineer here at Google have been working on this internally and are, I think, excited to release it to the community, but haven't quite jumped through all of the hoops to do so yet. So it is still in flight; I think they have a design they are very excited to share, but they can't quite do it yet.
A
It's similar: I don't know if any of you were there or watched the recording of the cluster lifecycle meeting we had yesterday, but another engineer at Google named Josh has been working on a concept of cluster bundles, which is basically a declarative way to describe the components that make up a cluster and group them in a reasonable way. That's something he'd been working on internally for our systems here, and now it's published as open source, and we're trying to figure out how that interacts with, you know, the changes we were making to the kubeadm API, and we'll look at how that works with the cluster API and so forth going forward. And I know that Justin presented it to the kops community recently, and they're excited and starting to adopt it. So it's along that vein: something is brewing, we're working on it, and once we feel like it's ready for a wider review, we're ready to put it out there.
A
Then, you know, Justin and Jeff will show up here and talk to you all about how that works. Actually, they'll probably go to the main SIG meeting first, and then I'll drag it to this one as well, since it's pertinent to us too. We also have a quote-unquote add-on manager in the kube-up scripts right now, which is also used by GKE, and which is basically a shell script inside of a container. kops has an add-on manager that uses a concept of channels that they run as part of their cluster.
A
This is one of those things where everybody's definition of add-ons is a little bit different. During one of the contributor meetups, I think it was at KubeCon Austin last year, there was a breakout session about add-ons, and Tim Hockin and I polled the audience before we started, just to ask people how they defined add-ons for their cluster, and everybody who raised their hand had a different definition. So part of the difficulty is getting people to agree.
A
If I look at things that we call add-ons today, I see things that are sort of control plane extensions, like the ingress controller, which generally runs as part of the control plane and is something we call an add-on; things like kube-proxy, which are per-node things that we run, but that are managed with the lifecycle of the node and the kubelet, as opposed to user applications, and we call those add-ons too; and things like kube-dns and the metrics server. There are lots of different things.
A
Okay, and then Hardik isn't here today (it sounded like he was travelling at the moment), but he sent me a doc to share with people, which I just linked from the meeting notes. It's a doc he wrote, which I've reviewed already, called "Proposed States for Machines". It's similar to the doc that I put together earlier this year with the state diagram of how we think the machine lifecycle should look.
A
This doc is largely based on the fields that exist in Gardener today, which they would like to effectively upstream into the cluster API, so that the machine set controller can actually determine the health of machines and determine when to replace them. I think Hardik has mentioned this in previous meetings, with the fields for phases and last operation, and to make it a little more concrete, he created a document where he wrote up what he thinks those fields should be and what the meaning of each of them is.
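As a rough illustration of the kind of fields the proposal covers (a machine phase plus a "last operation" record, modeled loosely on the Gardener-style fields mentioned; the names here are illustrative, not the proposal's exact ones):

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// MachinePhase is a coarse lifecycle state for a machine.
type MachinePhase string

const (
	MachinePending MachinePhase = "Pending"
	MachineRunning MachinePhase = "Running"
	MachineFailed  MachinePhase = "Failed"
)

// LastOperation records what the controller last tried to do,
// which lets a machine set controller decide on replacement.
type LastOperation struct {
	Description    string       `json:"description,omitempty"`
	State          string       `json:"state,omitempty"`
	LastUpdateTime *metav1.Time `json:"lastUpdateTime,omitempty"`
}

// MachineStatusFields sketches the additions under discussion.
type MachineStatusFields struct {
	Phase         MachinePhase   `json:"phase,omitempty"`
	LastOperation *LastOperation `json:"lastOperation,omitempty"`
}
```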
A
It's in a format that's really easy for people to comment on, so it would be great if people could take a look at the doc and add questions or comments, and then when Hardik is back next week we can go over it in detail, look through those, and see what people think about either adding these fields to machines, to do health checks and deal with the phases and states of machines, or whether there are other proposals for how we want to represent the state of a machine.