From YouTube: Kubernetes SIG Cluster Lifecycle 20180228 - Cluster API
Description
Link to meeting: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
A
B
B
A
D
D
D
D
Although AWS has some best-effort-ish guarantees that if you have, for example, an auto scaling group with three AZs listed and three instances, you will get one in each, it's not a hard-and-fast guarantee, and so we are very strict about creating. And we have the same problem with stateful sets, right, where there is an ongoing discussion of how zones tie into stateful sets. So there is a... which has not really been resolved. There are some heuristics in Kubernetes, which are not very popular, that sort of work, and I...
B
No, so it's mostly... it's strongly targeted at nodes right now; it's not targeted at masters. So for nodes it doesn't make a difference, because you have machine deployments in each availability zone that you want anyway. So it's like: how would you go from one machine deployment in one zone to an upgraded version of that in the same availability zone? We haven't really touched on how that would apply to masters; that's still to be addressed.
F
A
F
A
D
I just want to say, like, we do want to be... like, one of the trickiest things, that I know kops doesn't do absolutely correctly, is sequencing the updates of the master components and the node components. So we definitely wanted that to be in scope of the API in terms of an upgrade of the cluster. Even if we didn't use machine deployment, we would need something else.
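A rough sketch of the upgrade ordering described above: control-plane (master) machines are rolled and verified before any node machines. The Machine type and upgradeCluster function are invented for illustration; they are not actual Cluster API code.

    package main

    import "fmt"

    type Machine struct {
        Name         string
        ControlPlane bool
        Version      string
    }

    func upgradeCluster(machines []Machine, target string) {
        // Phase 1: masters first, so the API server and etcd are on the new
        // version before any node is touched.
        for i := range machines {
            if machines[i].ControlPlane {
                machines[i].Version = target
                fmt.Printf("upgraded control-plane machine %s to %s\n", machines[i].Name, target)
            }
        }
        // Phase 2: only after the control plane is done do we roll the nodes,
        // e.g. by bumping the version on each machine deployment.
        for i := range machines {
            if !machines[i].ControlPlane {
                machines[i].Version = target
                fmt.Printf("upgraded node machine %s to %s\n", machines[i].Name, target)
            }
        }
    }

    func main() {
        upgradeCluster([]Machine{
            {Name: "master-0", ControlPlane: true, Version: "1.9.3"},
            {Name: "node-0", Version: "1.9.3"},
        }, "1.10.0")
    }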
B
D
Yeah, I think that's a good idea. I think we... if, like, someone then comes along and says they want etcd separately, then I guess the question becomes: do we have an API type for each, or do we start doing roles or something like that in a machine deployment? That's a design question, I guess. So, hey, I will read the doc, yeah.
B
H
Thank you. One interesting and maybe somewhat related question is: if there are stateful sets running on top of kube, and let's say there are three pods and they all have a persistent volume, let's say across three AZs — is there any kind of relationship between the decision that there really are three AZs and that the underlying nodes, I guess, would be landed in three AZs, such that you can actually bring back the pod with the applicable volume?
D
It varies. I mean, what we have actually seen in kops a lot, for the nodes: we do have a single auto scaling group that spans AZs, and we rely on those sort of best-effort spread guarantees. But some people want a stricter guarantee on their nodes as well, specifically because of this exact PV or EBS affinity, and so we'll create separate instance groups, or machine deployments, one for each AZ, so that they can really see, you know, exactly the state of each and scale them independently.
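A minimal sketch of the one-machine-deployment-per-AZ pattern just described, using a simplified, hypothetical MachineDeployment type rather than the real API: the zone lives in each deployment's provider config, so each AZ can be inspected and scaled independently.

    package main

    import "fmt"

    type MachineDeployment struct {
        Name     string
        Replicas int
        // Opaque, provider-specific settings; for AWS this would carry the AZ.
        ProviderConfig map[string]string
    }

    // perZoneDeployments builds one deployment per availability zone.
    func perZoneDeployments(zones []string, replicasPerZone int) []MachineDeployment {
        var out []MachineDeployment
        for _, z := range zones {
            out = append(out, MachineDeployment{
                Name:           "nodes-" + z,
                Replicas:       replicasPerZone,
                ProviderConfig: map[string]string{"availabilityZone": z},
            })
        }
        return out
    }

    func main() {
        for _, md := range perZoneDeployments([]string{"us-east-1a", "us-east-1b", "us-east-1c"}, 2) {
            fmt.Printf("%s: %d replicas in %s\n", md.Name, md.Replicas, md.ProviderConfig["availabilityZone"])
        }
    }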
D
I
H
F
I think the reason is: how do you specify that outside of Amazon or AWS? An AZ doesn't translate to every single provider. So there needs to be thought put into the API if you want to codify different availability zones, or fault domains, or whatever you call them, in the API; it can't be specific to a single provider.
H
F
H
A
H
At that point you would have to figure out how... the higher-level machine set wouldn't necessarily know that that provider config is for a particular AZ. So maybe then we're saying that the machine set really deals with multiple flavors of, I guess, the provider config, and somehow correlates that to which AZ it needs to be in, sort of equalizing the number of actual instances based on that provider config, or something like that. But that seems to be very, very generic, versus introducing something more first-class, like an AZ, or maybe under a different name.
H
H
Yeah, I'm actually... one of the projects that I work on is Kubernetes on BOSH. We actually do quite a lot of this, right, from our own kind of experience. For example, on AWS the provider-specific thing is just the availability zones, right. On Azure, for example, there is no notion of first-class AZs today, so our AZ configuration is entirely empty. Now for GCP, I believe it's just zone. And what else is there that's interesting — OpenStack, again, they also have just availability zone.
H
There's no first-class AZ configuration there; for example, we use clusters to represent AZs. So we have to answer that in, you know, the cluster-specific provider configuration itself. You know, for example, the orchestrating component — BOSH, in this case — actually doesn't know what it means, what an AZ is. It just knows that it is configuration that will be passed down to a lower-level component, you know, like what we call CPIs. But I guess there is a...
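A rough illustration of the pass-through model being described: a generic machine controller that never interprets the provider config and just hands the opaque blob to a provider-specific actuator. All names here are invented for the sketch.

    package main

    import "fmt"

    // Actuator would be implemented once per provider (AWS, Azure, GCP, OpenStack, ...).
    type Actuator interface {
        Create(name string, providerConfig []byte) error
    }

    type awsActuator struct{}

    func (awsActuator) Create(name string, providerConfig []byte) error {
        // Only here does the AZ (or any other provider detail) get decoded.
        fmt.Printf("AWS: creating %s with config %s\n", name, providerConfig)
        return nil
    }

    // reconcileMachine is the provider-agnostic part: it stays oblivious to
    // what the config means and simply passes it down.
    func reconcileMachine(a Actuator, name string, providerConfig []byte) error {
        return a.Create(name, providerConfig)
    }

    func main() {
        cfg := []byte(`{"availabilityZone":"us-east-1a","instanceType":"m4.large"}`)
        _ = reconcileMachine(awsActuator{}, "node-0", cfg)
    }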
J
Yeah, I think that ties back to the other topic also. We want to have a high-level property like AZs, right — like fault domains — and, of course, if you don't have fault domains you have only a single fault domain there, but with multiple you'd be able to specify them. And I took an action item to file an issue on that; I think we should follow up with maybe a more concrete proposal, based on this discussion, on what that would look like.
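A sketch of what a provider-neutral fault-domain property might look like, per the proposal above: a generic, optional field that each provider maps to its own notion of a zone, and that can simply stay empty on providers without one. The types are purely illustrative, not the actual Cluster API.

    package main

    import "fmt"

    type MachineSpec struct {
        // FailureDomain is the provider-neutral placement hint
        // ("us-east-1a", "europe-west1-b", or "" when the provider has
        // no fault domains at all).
        FailureDomain string
        // ProviderConfig stays opaque; anything non-generic lives here.
        ProviderConfig []byte
    }

    func main() {
        specs := []MachineSpec{
            {FailureDomain: "us-east-1a"}, // AWS: AZ as the fault domain
            {FailureDomain: ""},           // Azure at the time: no first-class AZs
        }
        for _, s := range specs {
            fmt.Printf("failureDomain=%q\n", s.FailureDomain)
        }
    }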
K
A
Okay, is there anything else on the machine deployment discussion?
I
I
I
I
I
But then you need something to actually keep that stack healthy that's external, or, if it's internal, then the cluster itself can keep that stack healthy. But if everything goes wrong, or the cluster doesn't exist yet, you get into these weird chicken-and-egg problems. So my suggestion for bootstrapping is that what we should do is have our command line tool first spin up an external stack and then put those objects into that stack. That will provision the cluster, and then, optionally, we can actually move the external stack into an internal stack by pivoting, which is basically: stop the external stack, copy over the etcd, and then start that stack back up. So that's, broadly, what I think is a good way to bootstrap clusters with the cluster API. Any thoughts on this, or...?
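A very rough sketch of the bootstrap-then-pivot flow just described. Each step is a stub with an invented name; a real tool would start an apiserver and etcd, apply the Cluster/Machine objects, and copy etcd data into the newly provisioned cluster.

    package main

    import "fmt"

    // Stubs standing in for the real work of each bootstrap phase.
    func startBootstrapStack() error {
        fmt.Println("external apiserver+etcd+controllers up")
        return nil
    }
    func applyClusterObjects() error {
        fmt.Println("Cluster/Machine objects applied to the bootstrap stack")
        return nil
    }
    func pivotIntoNewCluster() error {
        fmt.Println("external stack stopped, etcd copied, controllers restarted in-cluster")
        return nil
    }

    func main() {
        steps := []func() error{startBootstrapStack, applyClusterObjects, pivotIntoNewCluster}
        for _, step := range steps {
            if err := step(); err != nil {
                panic(err)
            }
        }
    }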
A
I
J
And outside of that we are, like, open — these are not just suggestions... we don't know what this command line tool should look like, or if it should be based on an existing tool. You know, we have a lot of tools out there, and if it makes sense, and if we envision our functionality to be reused there — to be reusable — and that we should essentially base it on some existing code base, you know, we could also do that.
J
D
I mean, I would love to see how, for example, kops would use the provided command tool. And certainly that pattern — and maybe some code — absolutely sounds great. But I wouldn't — this is my point of view — I wouldn't worry too much about creating a generic tool, because I expect most people will follow the pattern rather than trying to reuse the tool, I suspect.
I
D
D
D
I
J
Yeah, so one question about tooling: do you guys think that this is something that other tools that are out there would be interested in — incorporating some of this functionality for bootstrapping, so they can enhance that, instead of actually using this common tool? Because, essentially, I think the challenge is finding the right trade-off here, so we can have common logic, so that the bootstrapping and, you know, pivoting, etc. don't need to be replicated across all the tooling.
J
D
This is hard, and I would love to figure out the right way to do this, so that there is a common pattern and we do exactly the same thing in kops. The thing which is really hard is: where do you manage the objects after you've created them? Like, do you manage them in the command-line tool, or do you manage them on the cluster? And then the complexity is what happens when the cluster blows up, for whatever reason: how do you then manage the recovery when you no longer have, like...
D
Presumably the answer is: you manage it through Kubernetes afterwards. But then the problem being: how do you do recovery-type scenarios, or, like, if you have to go back to the command line, what do you do? And then that also happens with add-ons, right. So figuring out what we want the answer to be would be wonderful, and kops would definitely leverage that pattern, yeah.
A
This is something Wes Hudgens and I had talked about a few weeks ago, which was the importance of defining the sequence here. And then, as far as actual implementation goes, if I can borrow code from kube-deploy, that would be great; if not, having step-by-step quick scripts, like we have here, would be equally as valuable.
H
One question I guess I have regarding bootstrapping is: how would this integrate with some of the platform-provided Kubernetes experiences — like, for example, on GKE, or on Azure, or on IBM? They typically provide you the masters already, I guess; they have their own tooling, right. Would the cluster API be too low-level to create those masters, given that, I guess, the customer doesn't have direct control over them and can't necessarily — you know, they don't see the nodes for them, they don't see any of it, I think.
A
A
I was gonna say, having the master — the control plane — set up is already a solved problem there, and in my eyes that kind of alleviates the value of having a bootstrap process. Because then, at that point, you can just deploy the CRDs and run your controller, and that should take care of scaling your cluster from then on.
H
I see, thank you. Yeah, I'm just curious in terms of whether there's some longer-term plan to somehow convince some of the larger vendors to maybe come up with some consolidated CI experience of some kind that would work against their APIs. A lot of people potentially create their clusters in their CIs and whatnot, and if they want to kind of decouple themselves from a particular vendor, then, you know, the benefit of a generic, CLI-like experience for getting a cluster up and running, I think, is pretty big. Well...
A
I think that's a possibility here. If we look at the importance of defining the sequence, and if that is an implementation — to use Google, for example: GKE or GCE — those could be two independent implementations, and ultimately you still end up with a Kubernetes cluster, one of which is homegrown, the other one is a managed service.
H
J
There is kind of an interesting point, which is, like, cluster adoption, so to speak. You know, if there's a cluster that is being managed, how do you get the cluster API into it? You know, like, is it running as a separate node or something, so that you can actually use the same tooling against that cluster, right, which might be managed by a provider?
F
H
Actually, one kind of interesting idea — it might be something to investigate — is, you know, we just mentioned master deployment a few minutes ago: should there be some high-level object that represents the master that's provided by the IaaS, and then this tool just has a certain level of API — a generic API — to kind of interact with it, but doesn't see it as individual machines; versus the rest of the machines, which are seen as, you know, the individual nodes, the lower-level components.
J
I
I
A
I'd like to at least give it a little bit of time for users in the broader community to provide feedback somehow; if that's in a Google Doc, that seems fine to me, yeah. So maybe we can pick it up next time and see if there's any concrete action items to come out of it. Cool, moving on, if nobody has anything else.
A
J
Kind of a part of the discussion here — Sebastian suggested this a couple of days back — and it's like, yeah, kube-deploy doesn't really convey what we're doing, you know, and I don't know if that's a possibility. We talked about, you know, whether to move this code to a new repo or something, and also I don't know the history, and, like, what we could do, you know — so, for the record...
A
To speak a little bit to the history and how we ended up here: when we started this effort we were in the locked-repository phase of the Kubernetes project, so we ended up here because the repository already existed. There was originally talk of migrating this out to a different repository. If that comes in the form of a name change, then it seems fine to me. One thing kind of related to this that I would want to call out
A
is we've been talking a lot about implementing the cluster API for kubicorn, and having the API separate from an implementation seems like something that would be really valuable, so maybe we actually end up needing two or more repositories here.
J
D
Yeah, I definitely hit this using the API in kops, which is using 1.9 apimachinery, because that's the released version; I think kube-deploy is using 1.10 apimachinery, which is incompatible, and honestly I don't know how we solve that. I think the way to solve it is to split it out into a separate repo — which, I don't even know if we generate code in that repo or not, and...
D
D
Because, to clarify, it would be very easy: I think we're almost at the point where it's easy to get a repo at github.com/kubernetes-sigs — sig-cluster-lifecycle-cluster-api, I think; it's not really clear yet, but, like, that seems easy-ish to do. I don't know whether stripping the cluster-lifecycle part makes it harder; I feel the prefix makes it harder.
J
D
D
N
D
We could make the suggestion that working groups should also get their top-level names under kubernetes-sigs, so that we don't have this problem — and maybe even that there's some sort of global allocation, so that we don't even need the WG/SIG prefix, so that, should, for example, someone decide that SIG AWS should be a working group of SIG Cloud, we don't have to rename things. We can make the case. This is not gonna be fast.
D
A
A
K
My only concern is that the types should be separate from the client, more or less, because in some cases you might not want to use the client that's provided, if you want to generate your own using the types, that's all. If you look at the main repos, you have api, then you have apimachinery, and then you have client-go. So if you could do the same thing, it would be perfect, especially for vendoring, okay.
H
Is tooling like node repair and whatnot, and maybe future additional new tooling — maybe a formal CLI — something that is considered part of the API, if you will? Right, because it would be nice if cluster operators don't necessarily have to worry too much about having different tooling wherever they go between.
A
In my mind, the command-line-tool implementation of the cluster API would encompass things like the sequence of bootstrapping a cluster, and probably, like, mutating CRDs within a cluster after they have been created — you know, creating them, changing them, updating them. And then, I think, as far as implementations go, there are a lot of different ways: we can look at having a plug-in-based model, or having even just, like, a cloud-provider-based style of working.
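A hedged sketch of the plug-in-based option just mentioned: the generic command-line tool owns the common bootstrap/CRD sequence and calls a small per-provider plug-in at the provider-specific points. All names are made up for illustration.

    package main

    import "fmt"

    // ProviderPlugin is the hypothetical per-cloud extension point.
    type ProviderPlugin interface {
        Name() string
        ProvisionControlPlane() error
    }

    var plugins = map[string]ProviderPlugin{}

    func register(p ProviderPlugin) { plugins[p.Name()] = p }

    type gce struct{}

    func (gce) Name() string { return "gce" }
    func (gce) ProvisionControlPlane() error {
        fmt.Println("gce: control plane up")
        return nil
    }

    func main() {
        register(gce{})
        // The generic tool picks the plug-in by name (e.g. a --provider flag)
        // and drives the common sequence around it.
        if p, ok := plugins["gce"]; ok {
            _ = p.ProvisionControlPlane()
        }
    }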
A
M
J
Something like maybe auto-scaling, maybe node repair, I dunno — things that are built on top of the API may be reference implementations, just kind of reference implementations, but that doesn't prevent a provider from actually augmenting that or, like, replacing it with a better implementation specific to the provider, right, and those might not be in that repo. I don't think we have a good sense of how that's gonna evolve into, like, more tooling around the API, and where it should live. Yeah.
O
L
O
N
N
L
A
E
Yes, I'm just wondering if, for example — you know, I have a cluster API implementation running on some cluster that is reachable, let's say, externally, but then it's managing, you know, one or more clusters that aren't reachable externally — and if users want to be able to reach those clusters' APIs, they might want to, you know, proxy through this one, sort of as a management cluster or management API, sort of like, you know, what we can do for individual services, let's say, running on a cluster.
A
And that would solve the problem of having — we have one management cluster that is externally accessible, and then we have a number of internally accessible clusters, and we need a route through the external one to the internal ones. Your question is whether the cluster API should support a proxy there, right? I mean, I think it's a good idea — I think it's the same idea. I don't know if a proxy is in scope for this working group, though.
P
E
A
I think that would probably be an exercise of how you're deploying your clusters, what your controller is doing, and how you want your proxy set up. If the proxy would live in the management cluster, then somehow you would have to ensure that either your cluster API controller is enforcing state there, or, if it's just any old Kubernetes primitives — like maybe you had a Deployment that would manage your proxies — Kubernetes could just manage it at that point.
E
F
E
F
F
One thing I was gonna suggest: the cluster object has a status for API endpoints — does it make sense to register a proxy endpoint there instead? Okay, that may be something worth exploring; I'm not sure exactly how that would work, but...
O
O
F
What I was saying is: you set up a proxy separately, and you put the proxy's endpoint into the cluster status API endpoint, so that you're talking to a proxy that is specified in the cluster's API endpoint, but it proxies to the internal one that nothing else can reach. Instead of going through that... I'm just suggesting a way to get rid of the slash-proxy resource and go about it a different way.
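A sketch of that suggestion: instead of a separate /proxy resource, the externally reachable proxy's address is written into the cluster object's status API-endpoint field, so any tooling that reads that field transparently goes through the proxy. The types below are simplified stand-ins, not the real Cluster API types.

    package main

    import "fmt"

    type APIEndpoint struct {
        Host string
        Port int
    }

    type ClusterStatus struct {
        APIEndpoints []APIEndpoint
    }

    func main() {
        // The internal cluster's real apiserver is not reachable from outside,
        // so the externally reachable proxy is published in its place.
        status := ClusterStatus{
            APIEndpoints: []APIEndpoint{{Host: "proxy.mgmt.example.com", Port: 443}},
        }
        fmt.Printf("clients connect to %s:%d, which forwards to the private apiserver\n",
            status.APIEndpoints[0].Host, status.APIEndpoints[0].Port)
    }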