From YouTube: Kubernetes SIG Cluster Lifecycle 20180307 - Cluster API
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
A
All right, so it looks like people are adding stuff to the agenda now; feel free to sign in if you'd like. But I had some stuff I wanted to bring up, so I guess we'll start with the two items I added last night. The first one is: I don't know if we've targeted a stable release of the API yet, or if we're working towards that. Can anybody give me an update on where we are with getting this thing to a stable release?
B
Yeah, I can give an update from our perspective here at Google. I think we've scoped things out quite a bit over the last couple of months. The work that we have tries to be as comprehensive as possible about the design decisions and the implementation that we need for alpha.

We've defined a bar for alpha and for what we have for beta, and I believe all those issues are tracked; they've been reflected on GitHub, so you can see what's there for alpha. Maybe what you're calling a stable release is what I'm thinking of as the alpha release.

So I'm welcoming feedback on that release as well, but that's kind of how we've been framing it in our own terms: thinking over the releases, with alpha being that first release that we have on kube-deploy. Beta has a few issues, but I would say the big addition for beta would primarily be multi-master support, plus any other feedback, hopefully smaller issues. So that's kind of what would go into beta.
B
Yeah, actually, this is a really good point. We've been so close to the API that it's been a little confusing, and I think internally we're also going to try to make it clear what this is about. It's not only the API spec: there is an implementation, and there is tooling as well. The bootstrapping is an example of that, and so is the pluggable architecture.
B
Then it's going to come with some of the provider implementations, or actuators, etc. So the API itself has been fairly stable, but there are still some design decisions that we're tracking, and I don't know if all of those have tracking issues. One of them, for example, is MachineClass; I'd like to make a final call on MachineClass.
B
Even if we don't have an implementation that uses MachineClass for now. On MachineSet, I think we're in a good place. The only other one that I remember off the top of my head would be MachineDeployment; that's the one that was brought up last week. So I'm tracking those few decisions as we work towards stability.
A
The reason I'm asking is that I would like us to push to at least explicitly call out a stable version of the API. We're welcome to mutate it and make changes to it later, but if we could just lock down a version so that we can implement it in other projects, that would be helpful. Could we push for that in this next release, perhaps?
B
Yeah, I would say that my personal timeline for making these decisions, for an alpha, would be by end of quarter. I don't know if that's too aggressive; I don't want to finalize it without MachineClass, the story on upgrading, and MachineDeployment and how we're going to be using those. That's kind of already marginal, so we may have like three weeks or so; maybe the next month, that could be a target.
B
Maybe we could actually get settled on that: over the next month we have an alpha, or sorry, a stable release of the API itself, and then that's going to be followed by the implementation, with MachineSet and other things. Yeah, I think that's a good suggestion. Okay.
A
I'll add a note about that really quick, and I think a lot of what we're talking about might actually play into the next topic I'm going to bring up, which is that Tim St. Clair was fortunate enough to give us a new repository. I'm going to write down what we just talked about in the notes, and then I'll put a link to the repository; and if anybody else wants to volunteer to take notes, that would save some time.
A
Okay, so the new repository is here, on github.com under kubernetes-sigs: "cl", underbar, "cluster API". I tweeted yesterday about suggestions for a name; I talked to Joe and Tim both about it, and we talked about it on the call last week. This is what we decided on. I think we can still change it, so it's not like it's written in stone or anything: "cl" being the short name for SIG Cluster Lifecycle, underbar as the delimiter, and then "cluster API" for this particular working group project. So I think we have a repository.
A
We talked about needing one, possibly, last call. Yeah, or three, that's a good point as well. I think it'd be meaningful to talk about what this particular repository should hold, what it shouldn't, and how we want to start working on whatever it is we need to work on out of this thing. One of the topics we brought up last week was having the API in a standalone repository, to prevent cyclical imports.
C
When you say "not the implementation", you mean not the generated extension API server code, just the types and generated types? Correct. That one's reasonable, and I think what we tried to do before was just keep it in a separate directory in the same repo, and I guess we've moved away from that since we started. But that would be the other option: just keep it in a separate directory at the top level of our repo, to make it easy to import just that part.
A
It was pretty easy to get this repo set up. I think the hardest part was picking a name, but once we had a name it only took a few seconds. I imagine that process will get harder as these repositories start to grow more, but right now it's easy: if we need repos, we can get them very easily.
A
The problem is we're getting into cyclical imports. Justin, I think, hit this with kops; you can probably speak more to that use case exactly. But just in general, I've always favored having an API definition live in its own repository, so that if you're trying to vendor it, you're not having to deal with any of the noise of the implementation. A simple example would be the implementation of the API vendoring one package, and the program I'm using vendoring another, and dealing with that conflict.
E
Most of the dependency tools simply take the entire repository, and from there it becomes very messy. There are a lot of different things, and especially with this use case, where there are multiple provider implementations for different clouds, it's a nightmare if you try to import the right repositories at a consistent set of revisions. You can spend several hours, even days, wrestling with this.
F
A related problem, and I don't know whether it was actually the same in my case, I can't remember the details, but the one I definitely hit was that kops is using the 1.9 machinery, and the generated code is 1.10, or will be 1.10, I presume; until we upgrade it, it's incompatible. It struck me that the Kubernetes API has different branches for the different releases, so I guess as long as we start doing that once we hit beta, until then I'll just hack around it in some way.
A
Yeah, I have weak opinions about whether or not this should go in the same repo as just a different package in a different directory; I'm kind of indifferent. As long as it is vendorable, that's really all I care about.
A
To be fair, though, I think the problem was the same issue that Justin had: I just needed to make sure I got the right version of apimachinery and client-go that the rest of the repo is using, and as soon as I updated my constraints in my Gopkg.toml, or the lock file, whichever one it is, it was able to compile. Very good, yeah.
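The dep constraint being described might look something like this in Gopkg.toml. This is a hedged sketch; the version and branch names below are illustrative, not the revisions the project actually pinned:

```toml
# Illustrative dep constraints pinning apimachinery and client-go to the
# same release line the rest of the repo uses (versions are examples only).
[[constraint]]
  name = "k8s.io/apimachinery"
  branch = "release-1.9"

[[constraint]]
  name = "k8s.io/client-go"
  version = "6.0.0"
```

After editing constraints like these, running `dep ensure` regenerates the lock file so the whole tree compiles against one consistent set of revisions.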
B
The question is, let's say we move to the new repo: it brings back the discussion I started about clarity of what we're doing, right? With kube-deploy it's not quite as clear. So, okay, the new repo makes sense; but if we set it up to have everything in a single repo, and later the API gets split out, and all the controllers, is that much work? Is that something we should think about ahead of time?
A
Yeah, and again, whether that split happens in the form of a different repo or in the form of just a different package, I'm kind of fine either way. It's really important to me, though, so that I can start vendoring it and counting on its stability, that we explicitly call out that this copy of the API in this location is stable and we're going to maintain it moving forward, because once I've vendored it, I really don't want to change locations.
G
So, in my experience, it's done in a way to speed things up. In the future, when our API is fully stable, nobody is going to modify the fields; we don't change it; the API is stable. We can talk about separating it by then. If you separate now, you're adding a lot of work, because you have to manually copy the code whenever you generate the code, and the generated code is not just the types; it also includes general code and the controller implementation.
E
However, for the internal types, the type you're trying to convert to has to be known to the API server. So if you're using some type that is not defined, that the API server doesn't know about, it won't work; the end result will be an object that is, in effect, an unknown type. And you also have to write a custom converter as well, but that's another key point. So my suggestion is to use raw extensions for both the internal and external versions, and optionally...
A
My original approach to migrating to the Cluster API involved relying heavily on the fact that that directive was a string, so that I could just serialize the existing API we were using and embed it in the Cluster API as a providerConfig. I think in this case it would still work, but it's just something to bear in mind if anybody else was planning on doing that as well.
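As a sketch of what's being described, embedding an existing provider-specific object inside a Machine's providerConfig could look like the manifest below. The group, kinds, and field names here are illustrative assumptions loosely following the early Cluster API draft shape, not an authoritative layout:

```yaml
# Hypothetical Machine manifest carrying a pre-existing provider API
# object opaquely under providerConfig; names are illustrative only.
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: worker-0
spec:
  providerConfig:
    # The serialized form of the provider's own config type, which the
    # cluster API passes through without interpreting it.
    value:
      apiVersion: exampleprovider/v1alpha1
      kind: ExampleMachineProviderConfig
      instanceType: n1-standard-2
```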
I
So last week I shared a proposal for how to approach cluster bootstrapping, for folks to take a look at and give feedback. I think some folks did, with some comments on the doc, but overall it doesn't seem like there are any big open issues or contentious points. I wanted to see whether anyone wanted to talk about it a bit more in the meeting before I consider it reviewed.
A
I had a question here on our end. Okay, yeah: what is the use case for having the intermediary API server up and running? "Intermediary"? Yeah, I just made that word up, but it looks like we were setting up an API server external to the cluster that we were mutating moving forward. What was the point of having it?
I
If you don't have an external cluster management API stack, then you will need a tool that knows how to provision enough to get the cluster running enough to have an internal cluster management API. And then you have two pieces, two places that have provisioning logic, and there's also a lot of edge case handling you need to do in both as well. So it seemed a lot cleaner just to spin up that one stack that knows how to do things, and then move it in, and then continue from there.
H
So the question that I have with that is: is there any goal to keep that external cluster synchronized, in the case of DR, where you have to bring...
I
Keep the external cluster synchronized... so, in terms of running state, you either have that cluster management API stack outside of the cluster or inside of the cluster, and so I would say, regardless of where you keep it, you probably want to have regular backups of that stack so that you can recover in case of some disaster.
I
From my perspective, if the customer is choosing to do a pivot and have the management stack be internal, they should move all the state internal; we shouldn't be leaving an extra copy floating around on someone's desktop. Now, if they want a DR story, they should definitely be doing backups of said stack, but not going back to whatever stale copy they have on their desktop.
F
There are disaster scenarios anyway, one of which is the one where you lose your cluster, you lose the state of it; the other of which is where you have the state, but the state is borked and you can't bring up the API server or scheduler or controller manager, and so you sort of need to revert to another state. And you don't...
I
Yeah. So the cluster management API stack has the ability to provision machines, right? So it has a certain ability to heal the control plane, in the sense that if the machines are bad, it can destroy the machines and recreate machines. Admittedly, if that cluster management API stack is inside the cluster and it happens to die at about the same time, then you need someone to come in, because there's no piece that can heal the other piece.
I
If the API stack is external to the cluster, then it has the ability to fix the cluster even if all the machines involved in the cluster are dead or borked or whatever, but then something has to take care of that external cluster management API stack. So there are definitely trade-offs, and situations in which something has to come in.
C
The only thing that happened recently was I got pulled into a conversation with the cloud provider extraction working group, where they're trying to figure out how to represent suspended nodes in Kubernetes. In some cloud environments, they'd like to be able to suspend a node and not have it disappear, right?
C
It sounds like that feature is basically... they'd like to be able to represent that now in the kubelet. So I've got it on my list to go and talk to Walter Fender and the other folks that are driving that extraction, and try to reconcile, basically, this state diagram with what they're doing on their side, so that we can have an agreed-upon set of states that the node thinks it should be reporting, which we can then also put into the machines.
C
Also, what they're proposing is adding a state that's not represented in the state transition diagram as it's currently drawn, right? Right now the sort of steady states are "running" and "drained", which is what is called "standby"; those are the two states we have expressed right now. A node is either running, and it may be unhealthy, but it's running; or it's not able to have things scheduled on it, right, unschedulable.
C
What they want to do is add a sort of third state, where the node itself is suspended, which is not in the state diagram. So I'd like to add it to the state diagram and then go sort of socialize it with those folks and talk about the transitions, and how we can maybe more explicitly represent those transitions, either in the node status API or in the cluster API, and make them consistent.
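To make the states being discussed concrete, they can be sketched as a small transition table. Note that the specific transition rules below, in particular that a node must be drained before being suspended, are an assumption for illustration, not the diagram the group actually agreed on:

```go
package main

import "fmt"

// NodeState sketches the machine/node lifecycle states discussed above.
// "Suspended" is the new state the cloud provider extraction group wants.
type NodeState string

const (
	Running   NodeState = "Running"
	Drained   NodeState = "Drained" // what the cloud providers call "standby"
	Suspended NodeState = "Suspended"
)

// allowed maps each state to the states it may transition to directly.
// These edges are illustrative assumptions, not an agreed-upon diagram.
var allowed = map[NodeState][]NodeState{
	Running:   {Drained},
	Drained:   {Running, Suspended},
	Suspended: {Drained},
}

// CanTransition reports whether a direct transition from -> to is permitted.
func CanTransition(from, to NodeState) bool {
	for _, s := range allowed[from] {
		if s == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(CanTransition(Running, Drained))   // true
	fmt.Println(CanTransition(Running, Suspended)) // false: drain first
}
```

Encoding the transitions as data like this makes it easy to compare and reconcile two diagrams (the node status view and the cluster API view) mechanically.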
F
Yes. So, at least on AWS, it means you turn off the machine; the state still exists, you're not being billed for it, and you're then allowed to make certain changes to it which you can't make when it's running: you can change the instance type, you can change some other stuff. Instance type is like the primary one. It's a state where the instance isn't backed by a virtual machine, right, yeah.
D
I believe one of the speed-ups you gain is that you don't have to redo network programming: the node is still in the network, even if it's routing to a machine that's not technically running, and the same for other metadata programming across the system. They still have a logical VM device in the API that they can attach other objects to.
C
Yeah, Justin, on GCP it looks like you're not billed for the core hours of a suspended machine, but you are still billed for things like the IP address that you're holding on to, if you have an external IP, and your boot disk, right? Your boot disk is a persistent disk, and you keep getting billed for that too, even though you're not being billed for the whole machine. So there's sort of a nominal charge for holding those suspended machines around.
C
Presumably the folks that are trying to do the cloud provider extraction actually have a use case, because they're saying they're running into issues trying to represent that state in some of the code paths that they have. I don't know what those actual use cases are, but it sounds like they must have some.
A
Yeah, I can add some notes really quick, if somebody else wants to.
A
Okay, one more thing I wanted to bring up: earlier in the call we had talked about getting the API to a stable release. I would like to take a stab at going through the backlog and figuring out which issues are there that would need to be taken care of before we could tag a copy of the API stable, if that works for everyone.
B
Sure, yeah, and keep us posted on the Slack as well, as you have questions or make any major changes or something, yeah.
B
We might want to actually create a new milestone that will just track the API changes, because I think the alpha milestone right now has the implementation in it. So we can do that as well: have a new milestone and start moving some issues to that, just for the API. Okay. Can somebody...? Yeah, yeah, I can do that.