From YouTube: Kubernetes SIG Cluster Lifecycle 20180620 - cluster api
Description
Link to document: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
B
Sorry, I noticed you rearranged the order. Okay, so Samsung has a need to provision to a number of different providers, and we've been looking at things like Kubicorn and also the clusterctl implementation as possible building blocks.
B
However, we also have some other provisioners which we don't think, or we're not aware of, anyone in the community being interested in writing, for instance for MAAS. So I have a couple of follow-up questions on how provisioners are structured in the wild. Like, Kubicorn has a bunch of provisioners under one repo, and we were kind of interested in being able to maybe pick and choose different provisioners.
C
Totally. So, I don't want to say the project is dead, but I'm definitely pretty busy at the office right now with traveling for conferences and everything, and then Marco, one of the other main maintainers--
A
And I have something later on the agenda to talk about that, because Dims is starting to try that, and I want to talk about that more. But I would say it seems like there are a number of people who are working on some sort of AWS support for the cluster API, but we don't really have a rallying place for that code to come together at the moment, since the Google implementation is in-tree and that's where the people at Google are working.
A
We also don't really have an example of "here's how you would set it up to be out of tree" and how that works and so forth. So that's definitely a problem that we need to solve, and maybe this is a good impetus for: let's set up a repo for AWS, start that process, and try to get that machine controller working, I think.
C
That's a great idea. I think Kubicorn has an example of what it's like to vendor the cluster API bits out of the repo, but I still think there's a lot more that needs to be solved there, and the controller is like: we can scale those, and that's all it can do. It doesn't even report status. So it's very primitive.
A
Yeah, it's a little later on the agenda, but we can just talk about that now. So PR 360, which I'll just move up, is related to how to get clusterctl to use a provisioner that's not in-tree, because there are sort of two problems here. One problem is: you vendor the cluster API code out, and then you build your machine controller and so forth and hook it up. But then how do you actually get clusterctl to use that? Right now it will only use things that are compiled into it, which is rather unfortunate.
A
So Dims has a PR to basically allow you to register external providers into clusterctl. Chris Rousey and I were talking about it yesterday, and the reason you have to do that is because of an interface we have right now: there are two things that clusterctl needs from a cluster, and ideally it wouldn't need either of those things in the way it gets them today; we'd have sort of a different way to get those two bits of information. One of them is: where is the cluster located?
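The interface in question isn't quoted in the meeting, but a minimal sketch of the shape being described might look like the following. The two method names are assumptions: `GetIP` is mentioned later in the discussion, while `GetKubeConfig` and the placeholder types are illustrative, not the real Cluster API definitions.

```go
package main

import "fmt"

// Cluster and Machine are stand-ins for the Cluster API types,
// not the real v1alpha1 structs.
type Cluster struct{ Name string }
type Machine struct{ Name string }

// ProviderDeployer sketches the two things clusterctl currently needs
// from a provider. Per the discussion: (a) this interface should never
// grow, and (b) ideally it goes away entirely.
type ProviderDeployer interface {
	// GetIP answers "where is the cluster located?"
	GetIP(c *Cluster, m *Machine) (string, error)
	// GetKubeConfig answers "how do I talk to it?" (name assumed)
	GetKubeConfig(c *Cluster, m *Machine) (string, error)
}

// fakeDeployer is a trivial implementation for illustration only.
type fakeDeployer struct{}

func (fakeDeployer) GetIP(c *Cluster, m *Machine) (string, error) {
	return "203.0.113.10", nil
}

func (fakeDeployer) GetKubeConfig(c *Cluster, m *Machine) (string, error) {
	return "apiVersion: v1\nkind: Config", nil
}

func main() {
	var d ProviderDeployer = fakeDeployer{}
	ip, _ := d.GetIP(&Cluster{Name: "test"}, nil)
	fmt.Println(ip)
}
```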
A
I don't think we yet have a super clear story about how to do this, so we should figure that out. But in the meantime, if we do want to start building the OpenStack and AWS implementations out of tree, a reasonable short-term workaround is probably to put your change in, you know, with some comments basically saying: (a) we are never going to expand this interface, and (b) we're going to try to get rid of it completely.
A
So I think there are some ideas for how to do the second part; I just don't think it's clear what the right way to do it is. We could use a secret, we could make it provider-specific via SSH; I think there are a number of different things people have said we could do, we just haven't built any of them and seen that they would work well.
D
And the other thing about my register/lookup mechanism was that we still have the register part happening in an init call, so we still have to put the package name somewhere in one of the Go files for us to be able to use it. So that is still one of the things that we need to figure out how to do better, right.
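The init-based registration being described can be sketched as follows. This is a generic illustration of the pattern, not the code from Dims's PR; the provider name and factory are made up, and in a real layout the `init()` would live in the provider's own package, pulled in with a blank import.

```go
package main

import (
	"fmt"
	"sync"
)

// A minimal provider registry of the kind under discussion. Providers
// register themselves from init(), which is exactly the limitation D
// raises: the consuming binary must still import the provider package
// (even if only for side effects) for registration to happen, e.g.
//   _ "example.com/some-provider" // hypothetical import path
var (
	mu        sync.Mutex
	providers = map[string]func() string{}
)

// Register adds a named provider factory to the registry.
func Register(name string, factory func() string) {
	mu.Lock()
	defer mu.Unlock()
	providers[name] = factory
}

// Get looks up a previously registered provider factory.
func Get(name string) (func() string, bool) {
	mu.Lock()
	defer mu.Unlock()
	f, ok := providers[name]
	return f, ok
}

func init() {
	// Runs automatically at program start, before main.
	Register("aws", func() string { return "aws machine actuator" })
}

func main() {
	if f, ok := Get("aws"); ok {
		fmt.Println(f())
	}
}
```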
D
Yeah, so at this point, if you ask me, what we have with the register/lookup is better than what we have right now. Plus it has the advantage that for the vSphere one we have an external data structure for the two methods, GetIP and the other one. So we should merge this and then take one more step forward, I think, yeah.
A
I mean, what it's basically saying is: I think this is a good intermediate solution to unblock people. I just want to make sure everybody understands that this is not a long-term solution. You know, we don't expect to ever make this interface any bigger, right; in fact, we want to shrink it and eventually get rid of it. Otherwise you have to say: oh hey, we have AWS, let me check something into the clusterctl repo; we have OpenStack, let me check something in. And that's not gonna be scalable, right.
D
It doesn't, but you know, if we do that, then what it will unblock is people here from having to review the code in the OpenStack one. The OpenStack code resides in a separate repository, and it can have its... you know, right now people have been modifying the code for a while, and there have been some attempts at reviewing it. The people who are doing OpenStack code should probably take over that piece and not worry everybody else here.
A
I mean, maybe it'd be worth talking to the steering committee about, or maybe just the SIG setting up some repos for the different implementations. I think some people worry a little bit about repo bloat, about having too many, but I had a chat with Tim a couple of weeks back about how repo bloat in some ways was a good thing, because every repo can be very focused and targeted and kind of do one thing well, right.
B
Okay, I have a question for later... yeah, go ahead. So, on the next topic, about sharing SSH scripts: this is related to breaking the repos out. Right now, when I look at the Google implementation, and I believe also the vSphere implementation and all the Kubicorn implementations, they all rely on bash and kubeadm.
B
Well, they use those two things in order to bootstrap nodes. Since we want to have a relatively consistent layer with which to build our additional services, like any applications or logging pipelines we deploy with these clusters, we'd like to make the Kubernetes deployment as uniform as possible, which means things like sharing those bootstrap scripts.
C
We took the opposite approach and said: actually, no, we're gonna give users all the power, and we're gonna have independent bootstrap scripts for every single different possible permutation of a cluster. And I'm wondering what you're trying to fix by binding things together and having one uniform way here.
B
That's a good point. So actually, I'm not trying to insist on the bootstrap scripts being the same between implementations, but I do want, as a service provider, to be able to choose which bootstrap scripts all of my users use. So maybe what I need is to be able to configure, say, the Google implementation to use my bootstrap scripts, and then configure some other implementation to use those same bootstrap scripts.
A
But if Samsung wants to say, "for our product we want them to be consistent," it would be nice if each of the providers had an easy way for them to override those scripts. Like, the Google implementation has a config map where they say: we're gonna use the Google code, but all we need to do is put this config map in, and then it will run from that config map. So we can put in our own script that we know works, and...
A
It just runs it. So that might be a good pattern for people to look at, to see if it's more generally applicable. If it seems like it works well, we could standardize it, and if it doesn't, then the implementations might diverge, but it would be good to make them all sort of pluggable in that sense. The other thing I'll point out: I don't think Justin is on the call, but kops does have a Go binary that they run on the nodes instead of a shell script, which is another thing we might want to look at moving towards in the future. The shell scripts are nice in that it's really easy for someone who's developing to see what's going on, but it's also a little bit harder to make them really reliable. Yeah.
C
Kubeadm might actually be the right place to sort of build this logic into, but that's a broader discussion for the SIG; I just wanted to throw it out there. And then the second one here is that, in general, the idea of using bootstrap scripts to pull information off the internet to bring up your Kubernetes cluster is, I think, a really dangerous pattern, as the internet can change. So, moving forward, I think the config map approach is solid, but I think we probably ought to encourage folks to start baking as much of this logic as possible into whatever image they're using before they even bring up a cluster, so that we're not mutating the filesystem at runtime when we're bringing up a cluster. Just my opinions here, for what they're worth.
A
Yeah, I mean, the more that you can sort of preload, the more reliable it is, and the faster it's gonna be, right. But that would also require people to have a pipeline for building images with the scripts all sort of pre-baked, and a lot of times the reason that there's a script is because things are parameterized, right. So you can't build an image that has a hard-coded IP for a single master, or a kubeadm join token, for instance; those have to be passed in somehow as parameters.
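The parameterization point above can be sketched with Go's `text/template`: the image carries the static tooling, and only the per-cluster values are injected at boot. The script body, field names, and flag choices here are illustrative assumptions, not taken from any real provider.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// A toy bootstrap script: everything static can be pre-baked into the
// image, but the master IP and join token must be rendered per cluster.
// The kubeadm invocation is a plausible sketch, not a vetted command line.
const bootstrapScript = `#!/bin/bash
set -euo pipefail
kubeadm join {{.MasterIP}}:6443 \
  --token {{.JoinToken}} \
  --discovery-token-unsafe-skip-ca-verification
`

// params holds the per-cluster values that cannot be hard-coded
// into an image.
type params struct {
	MasterIP  string
	JoinToken string
}

// render fills the template with the cluster-specific parameters.
func render(p params) (string, error) {
	t, err := template.New("bootstrap").Parse(bootstrapScript)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, p); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := render(params{
		MasterIP:  "10.0.0.1",
		JoinToken: "abcdef.0123456789abcdef",
	})
	fmt.Print(out)
}
```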
A
Kubicorn uses bash scripts, kops uses a Go binary, so there are pluses and minuses to the different approaches. We certainly don't want to mandate that everybody has to do it a particular way, but if we do start to find sort of best practices, we should bring those back to the meeting and try to adopt them somewhat consistently across the different implementations.
E
Just one note about kops: it's actually using a bash script as well, to download nodeup, and that's very nice because it has to compensate for failures, for example. In the past we had to do retries and all those kinds of really bad things. So I think no one will escape the bash script.
A
Yeah, I mean, I guess you could get past that if you did have that nodeup binary pre-loaded on the image, right. You could say: I have an image, it already has kubeadm installed, it already has the kubelet installed, it already has nodeup installed, and then all I need to put on that image is a couple of parameters that actually make those things take the right code paths based on my particular cluster. Yeah, because you know our bash scripts...
A
All right, so we've ventured a little ways away from the original topic here, but David, does that answer some of your original questions, or did you have sort of more follow-on? I think we started talking about Kubicorn a little bit. Did you want to talk about, like, the Platform9 SSH provisioner at all, or other provisioners?
A
But if you guys want to collaborate with them on that, then all of a sudden we have multiple maintainers across multiple companies, which is really nice and sort of much better if it's the SIG model. In that case we should, instead of having them write the code in platform9/whatever repo they wanted to create, put it in a SIG Cluster Lifecycle repo and have you guys start working on it together there.
A
That was my main worry about the design doc that I read: if you're relying on SSH-ing in to clean up a machine, without going through a reinstall cycle, then you don't have great guarantees on what state that machine is in for the next person that comes along to try to use it. I'd tell you: zero guarantees.
A
I was trying to be nice, and I said "no good guarantees," but I think, like you said, it might be great for getting stuff up and running quickly, and it might be a stepping stone to having something a little bit more production-ready for bare metal. So I think what I'm hearing here is: for both AWS and for the SSH one, we should figure out how to set up a repo to get you guys started on those now. So I'll take an action item to follow up on that.
E
So, it's about the pull request for adding a reference to the cluster in the machine specification. ... Yeah, okay. So pretty much the reason why we need something like this is that the actuator, for a lot of reasons, needs a reference to a cluster object, and the cluster object has some cluster-wide data. Especially for use cases where there are multiple clusters and machines in the same namespace, something like this should be added.
E
One approach to getting rid of this reference is to move more things to the machine; otherwise there's no other way. I mean, either we have the reference or we put updates on the machine. So yeah, I don't see any problem with the reference right now, unless anyone has any better suggestions about how to do it.
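The proposal being discussed can be sketched as an optional reference field on the machine spec, so the actuator can find its cluster even when several clusters share a namespace. All type and field names here are illustrative assumptions, not the actual Cluster API structs or the PR's wording.

```go
package main

import "fmt"

// ObjectReference is a stand-in for a Kubernetes-style reference
// to another object.
type ObjectReference struct {
	Namespace string
	Name      string
}

// MachineSpec sketches the proposed change: an explicit, initially
// optional, reference from a machine to its cluster (loosening a
// required field later is harder than tightening an optional one).
type MachineSpec struct {
	ClusterRef *ObjectReference
}

// clusterFor resolves which cluster a machine belongs to: honor an
// explicit reference, otherwise fall back to today's implicit
// "the one cluster in this namespace" behavior.
func clusterFor(m MachineSpec, defaultName string) string {
	if m.ClusterRef != nil {
		return m.ClusterRef.Name
	}
	return defaultName
}

func main() {
	m := MachineSpec{ClusterRef: &ObjectReference{Namespace: "ns", Name: "cluster-a"}}
	fmt.Println(clusterFor(m, "default"))
}
```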
A
I mean, I think it's certainly easier to move from optional to required later than the reverse. So we could certainly start with it being optional and see if there are any cases where people wouldn't want to use it, where we'd want to keep it optional, or whether we want to tighten the screws as we move towards alpha and beta and say: we've exercised the different use cases, and we think this should always be required.
A
It's not clear that it's the right API design, but for Services in Kubernetes, the IP addresses are in the spec instead of the status, because that allows you to request a specific address; if you don't request one, the controller will fill in what it assigned in the spec, which seems a little weird for the Kubernetes model. But we should definitely check with, you know...
A
I don't know who the right sort of API reviewer cabal would be, but sort of where the right place to put this field would be. Dims isn't on here, but I was gonna mention to him that I think it's fine to put this in status for now, as long as we're aware that, since the API is still alpha, it might make sense to move it to the spec later, or it might make sense to put it in both.
A
So the question is: if it's in the spec, do we have it in spec and status, or just in spec? Services just put it in spec, and the controller will actually update the spec, which is also kind of a weird pattern in Kubernetes; I don't think that's really followed anywhere else. And if it's in spec and status and a user doesn't request it, then now your spec and status are out of sync.
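The spec-versus-status tension can be shown with a toy reconcile step. The types are illustrative, not the real Cluster API structs; this mimics the Service-style behavior described above, where the controller writes back into the spec so spec and status cannot drift apart.

```go
package main

import "fmt"

// ClusterSpec holds desired state; APIEndpoint may be set by the user
// to request a specific address (the Service-like pattern).
type ClusterSpec struct {
	APIEndpoint string
}

// ClusterStatus holds observed state; APIEndpoint reports what the
// controller actually assigned.
type ClusterStatus struct {
	APIEndpoint string
}

// Cluster is a toy object with both halves.
type Cluster struct {
	Spec   ClusterSpec
	Status ClusterStatus
}

// reconcileEndpoint: honor a user-requested endpoint, otherwise assign
// one. Writing the assignment back into spec is the "weird pattern"
// noted in the discussion, but it keeps spec and status in sync.
func reconcileEndpoint(c *Cluster, assigned string) {
	if c.Spec.APIEndpoint == "" {
		c.Spec.APIEndpoint = assigned // controller mutating spec
	}
	c.Status.APIEndpoint = c.Spec.APIEndpoint
}

func main() {
	c := &Cluster{}
	reconcileEndpoint(c, "198.51.100.7")
	fmt.Println(c.Spec.APIEndpoint, c.Status.APIEndpoint)
}
```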
A
Okay, so: poke that PR and try to keep it moving forward. Next, as we were starting this meeting and I was looking for agenda items, because the agenda was pretty light, I pulled up the pull requests, and I noticed that there are quite a few pull requests that don't have an assignee and aren't marked as work in progress. I know, like, a lot of the ones from Googlers...
A
Well, people will just manually assign them to other people to review. I think some people that are on this call will get someone assigned because we discuss it during the call, but if people just send PRs, I think they might just sort of fly under the radar and not really end up with a reviewer, and not end up making progress. So I know Kubernetes has a way to auto-assign issues to people; it appears that we don't have that configured in our repository, and I don't know how to do that.
A
I'm sure I could figure it out, but if someone is interested in sort of taking that on, to figure out how we could set up an auto-assigner, that would be really awesome. I just want to make sure that the experience for people who are contributing code isn't that they send a PR and then it doesn't go anywhere because nobody happens to notice it, yeah.
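For context, the auto-assignment mechanism being referred to in Kubernetes repos is driven by an OWNERS file at the root of the repo (or a directory), which Prow uses to pick reviewers automatically. A minimal sketch, with placeholder usernames rather than the actual maintainer list:

```yaml
# OWNERS -- read by Prow to auto-assign reviewers and gate approvals.
# Usernames below are placeholders, not the real maintainers.
reviewers:
  - maintainer-a
  - maintainer-b
approvers:
  - maintainer-a
```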