From YouTube: Kubernetes SIG Cluster Lifecycle 20181114 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.nmlfae4ffj1s
Highlights:
- Discussed the proposal for common provisioning logic
A: Hello, and welcome to the Wednesday, November 14th edition of the Cluster API project meeting for SIG Cluster Lifecycle. We have a pretty short agenda today, so if folks want to add more things to the agenda, that would be great; but if not, let's go ahead and dive into our first topic, which is a follow-up from last week: sharing common provisioning logic in the upstream repo, so that provider repos don't have to duplicate it. I stuck a link to the doc in our meeting notes.
A: Marco, I think you also put up a PR here with the proposal, and I think there were some comments that were left right before the meeting. You said you've responded to most comments, but there are probably a couple of new ones we should dive into. So can you give us a quick, two-minute overview, for people who may have missed the last meeting, of what you're proposing and why?
B: We have started talking about contributing common provisioning scripts to the Cluster API, and so far we have written a proposal. What we want is to have common scripts for the steps that are usually the same between providers, such as installing Kubernetes, kubeadm and similar stuff. Most of it is the same; even where a provider is unique to some degree, it is largely the same with some differences. To make it easier for Cluster API implementers to get started, we want to create some mechanism to make shell scripts available to those who want to use them.

B: Those make it easy to get started, but everybody can still provide their own scripts: if the provided one doesn't work for them, they can provide their own, whatever they like. For now we have a proposal; we talked about it at the last meeting, and as we discussed there, I have created a PR with the proposal. I also ported the content from Google Docs to GitHub, so it is easy to follow what has been discussed so far.
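For context, a minimal sketch of the kind of shared install script being discussed, assuming a Debian-based image and the upstream apt packages; the pinned version and repo details are illustrative, not the proposal's actual code.

```bash
#!/usr/bin/env bash
# Hedged sketch of a provider-agnostic "install Kubernetes components" script,
# following the upstream kubeadm install docs of the time.
set -euo pipefail

KUBERNETES_VERSION="${KUBERNETES_VERSION:-1.12.2}"  # illustrative default

# Add the upstream Kubernetes apt repository.
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
  > /etc/apt/sources.list.d/kubernetes.list

# Install kubelet, kubeadm and kubectl at a pinned version, then hold them
# so package upgrades don't move the cluster underneath us.
apt-get update
apt-get install -y \
  "kubelet=${KUBERNETES_VERSION}-00" \
  "kubeadm=${KUBERNETES_VERSION}-00" \
  "kubectl=${KUBERNETES_VERSION}-00"
apt-mark hold kubelet kubeadm kubectl
```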
C: So we, the AWS provider, talked about this at the very beginning: we did not want to go this route, on purpose, because it's less idempotent and there's a bunch of issues that can crop up along the way. So we use image stamping, as in referencing canonical images for the versions that we have. We did this on purpose, because it can become super thorny from a number of different perspectives, and for exactly the reason that you mentioned.
C
People
keep
on
washing
rinsing
and
repeating
the
same
type
of
scripts
across
different
providers.
I,
don't
know
if,
if
there's
an
official
take
on
it,
but
I
think
from
a
simplicity
of
management
from
your
provider
perspective
and
from
your
API.
It
actually
greatly
simplifies
what
you
need
to
do
for
that
aspect.
I
don't
have
strong
opinions
on
on
making
a
set
of
hooks.
If
you
want
to
call
them
that
standard
or
common.
C: Like what we did for the OS: you can stamp whatever you want. We have the stamper available for people to use; they just need to specify the details of whatever image they want. Inside the AWS provider repo there exists a utility for them to create their stamps, and they can provide their configuration and deployment stuff. We just need to make sure the plumbing to kubeadm is there. I'll let Chuck and Vince talk in more detail on the logistics.
D
On
the
image
something
yep
and
how
I'll
monster
yeah,
so
we
I
mean
it's
like
you
said
you,
we,
we
make
it
available
and
stamp
out
the
images
that
we
use
a
link
right
now
we're
using
Sentosa
Labonte
in
a
bun
tube,
and
then
we
bump
that
for
the
version
of
kubernetes
we
want
to
use,
but
we
actually
do
in
in
AWS.
I
actually
do
use
the
cloud
or
cloud
a
bit
yeah.
We
use
equipment
scripts
to
actually
provision
the
cluster
or
install
the
cluster
and
then
also
run
the
joint
command.
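A rough sketch of that cloud-init flavor of bootstrap, assuming kubeadm and the kubelet are already baked into the image; the endpoint, token, and hash below are placeholders.

```bash
#!/bin/bash
# Hedged sketch of a cloud-init user-data script that joins a node to an
# existing cluster. All values are placeholders, not real credentials.
set -euo pipefail

API_ENDPOINT="10.0.0.10:6443"               # placeholder control plane endpoint
BOOTSTRAP_TOKEN="abcdef.0123456789abcdef"   # placeholder bootstrap token
CA_CERT_HASH="sha256:<hash-of-cluster-ca>"  # placeholder discovery hash

kubeadm join "${API_ENDPOINT}" \
  --token "${BOOTSTRAP_TOKEN}" \
  --discovery-token-ca-cert-hash "${CA_CERT_HASH}"
```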
B: A quick note about the image stamping tactic that we have talked about: I like how it works, and for Amazon it works great; it is a great way to skip some of these steps. But an important thing to mention is that some other cloud providers don't support it. For example, I work on the Cluster API provider for DigitalOcean, and they still don't have a custom image feature that allows you to easily share images. That may be a problem with that approach when people are getting started.
B
Images
may
be
hard,
so
I'm,
worried
and
I
guess
that
means
still
apply
to
some
smaller
cloud
providers.
I'm,
not
sure
how
that
works
for
your
crowd.
But
that
is
the
feature
that
here
it
depends
on
cluster
Cos,
Cob
provider
and
I
have
mentioned
it
a
little
bit
on
the
end
of
the
proposal,
but
for
now
I
think
that
some
basic
stuff
to
at
least
get
kubernetes
in
so
don't
have
to
take
care
of
about.
B
B
B
But
when
we
get
been
racing,
the
Koster
is
easy
for
one
to
meet
and
users
if
they
get
installed
tube,
ATM,
cubelet
and
sell
stuff,
and
that
is
same
for
all
call
providers,
and
this
is
a
part
that
we
can
definitely
get
in
Isis
without
any
changes
or
any
Toby
changes.
But
more
problem
we
have
is:
how
do
we
want
to
chant?
B
Do
a
cluster
initialization,
because
what
we
have
we
have
qubit
configuration,
which
a
lot
depends
on
Co,
provide
event
infrastructure
where
we
are
running
and
dosage
have
to
be
DM
configuration
file
and
share
to
maybe
a
minute
now
to
maybe
a
minute
adjust
that
common
meet
config
file
acid
in.
But
a
problem
is
how
to
build
that
configuration
file
and
how
to
build
cubed
configuration
now.
What
the
head
of
my
mind
is.
B
Should
we
stop
on
the
first
step
and
the
wall
to
the
cursor,
a
parameter
can
easily
provide
the
cube
code
file
and
queue
baby,
a
graphic
file
and
because
their
API
will
run
your
baby
and
for
them
or
that
they
can
choose
not
to
go
with
your
baby,
em
or
such
stuff
and
to
go
like
that
or
to
still
share
some
basic
compla
a
later
on
Hubert
and
cube
ABN,
and
this
is
the
first
thing
I
want
to
talk
about
today.
If
that's
okay,.
C: There were many things there; can we break the topics down into a couple of pieces? I think the first topic is how you match the input of the kubeadm configuration, because there are two parts: there's the kubelet configuration, and the overall kubeadm configuration. The way kubeadm is specified now, it's broken up into several different configurations for different phases of the lifecycle: whether you're initializing, or you're joining a node to a cluster.
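As a rough illustration of that per-phase split, here is what a minimal init-time configuration looks like in the kubeadm v1beta1 API that comes up later in this discussion; a joining node would use a separate JoinConfiguration instead. Values are placeholders.

```bash
# Hedged sketch: kubeadm's configuration is split into per-phase documents.
cat > /tmp/kubeadm.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external   # example of a provider-specific knob
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
networking:
  podSubnet: 192.168.0.0/16
EOF
kubeadm init --config /tmp/kubeadm.yaml
```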
C
You
know,
like
cetera,
just
ranges
and
stuff
depending
upon
how
you
want
to
do
that,
or
you
know
you
might
have
some
template
override
I
this.
This
gets
into
the
weird
state
space
of
yeah
mol
templating
that
I
don't
know.
If
there's
an
answer
that
isn't
fractal
I,
don't
know
if
I
a
standard
solution
that
we
would
want
to
recommend
other
than
like.
C
Because
well
at
least
the
way
we
do
it
stubbed
out,
if
you,
if
you,
if
you
don't
go
with
image
stamping,
then
it
becomes
a
little
bit
weird,
because
then
you're
gonna
have
to
specify
some
details,
and
that
also
becomes
a
little
bit
more
difficult
to
manage
this
sort
of.
Do
you
have
more
cones
there,
I
guess.
E: I linked to the node configuration that we use. So, for example, we do pass the node registration name as the host name here, in the user data, and it does have kubelet extra arguments. This is what Chuck was pointing to: you can pass all the arguments as a list here, and that's been working fine. This is a Go template; we can discuss whether this is the best solution or not, but it has been working fine for us.
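A sketch of what such a templated user-data fragment could look like; the template fields here ({{.NodeName}}, {{.KubeletExtraArgs}}) are hypothetical stand-ins, not the provider's actual types.

```bash
# Hedged sketch of a Go-templated user-data fragment. The {{...}} fields are
# hypothetical and would be rendered by the provider before the machine boots.
cat > /etc/default/kubelet <<EOF
KUBELET_EXTRA_ARGS=--hostname-override={{.NodeName}} {{.KubeletExtraArgs}}
EOF
systemctl restart kubelet
```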
C: The args flow through to eventually become a kubelet config file on disk that gets specified and run on startup. So the args feed through, and there are two parts to the args: there are cluster-wide settings that get pulled down from the kubeadm config, as well as local information about the kubelet, such as the CRI, that then gets overridden and written to local disk. This has evolved over a long, long time.
C: We use a portion of the kubelet configuration, and then we apply our own overrides on top of it. So we don't do the full dynamic component configuration, but we do use component configuration. It's not dynamic: we pull it down and we reset it, so it's pretty much heterogeneous per environment, really.
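For readers less familiar with kubeadm's kubelet handling, this is roughly how that two-part split lands on disk (paths per kubeadm's documented behavior at the time; file contents abbreviated):

```bash
# Cluster-wide kubelet settings, written by kubeadm from the cluster's
# kubelet-config ConfigMap:
cat /var/lib/kubelet/config.yaml
# apiVersion: kubelet.config.k8s.io/v1beta1
# kind: KubeletConfiguration
# clusterDNS: ["10.96.0.10"]
# ...

# Instance-local overrides (CRI socket, cgroup driver, and similar), written
# by kubeadm for this specific node:
cat /var/lib/kubelet/kubeadm-flags.env
# KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni ...
```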
C: I think I need to read the original proposal and go back and try to understand it. I've been too busy on 1.13 to get really stuck in, because we are tightly bound to the release cycle; we're switching kubeadm to GA this cycle, so it's been pretty difficult to keep track. I might circle back to that, but there's no guarantee I'll even have the bandwidth to do it.
C
The
know
the
the
the
configuration
file,
which
is
the
types
for
Covidien,
is
going
to
be
one
babel
on,
but
the
actual
all
the
command
line.
Our
girls
are
going
to
GA
we're
super
conservative
and
always
have
been
with
how
we
do
promotion
and
Covidien,
because
a
lot
of
folks
depend
upon
it.
So
we
don't
want
to
break
any
compatibility,
but
for
some
of
the
outfit
commands
in
the
alpha
phase
is
we've
switched.
B: OK. I don't have any more questions on this first point; I think the only open action item for me is to look at the kubeadm configuration and node configuration. Also, we have to see how we want to provide the kubeadm configuration file, and that is a little related to the next question: how do we want to store it?
C: Again, this gets really fractal with regard to this stuff. I think having some global... you can consider it almost like a set of lifecycle hooks.
A: And how much do we want to optimize, and write a whole bunch of code and templating, to try and share that logic? If I compare that with the scripts that we use for GCP, where we actually do all the installation pieces, those scripts are long and complex, because we are trying to install a bunch of things somewhat reliably. The actual scripts they have written, where they are using pre-stamped images which have everything already installed, are very short and sweet and provider-specific.
A
So
if
we
can
share
all
of
the
logic
where,
if
you
don't
have
the
pre
stamped
images,
get
an
image
to
be
the
point
where
it's
ready
to
be
joined
to
the
cluster
I.
Think
that's
a
pretty
big
value
when
we'd
agree
to
those
things
are
pretty
common
across
providers
and
then,
if
we
keep
the
150
lines
of
code,
we
have
to
duplicate
that
a
few
times,
because
they're
all
different
and
then
gets
rid
of
a
bunch
of
templating
and
sort
of
ugly
interfaces
between
declares
the
stack
I.
Don't
see
that
as
being
about.
C
Yeah,
that's
that's!
Basically
what
our
image
stamper
does
and
it's
it's
just
ansible
based
so
like
you
know
any
use
ansible
for
what
it's
good
for
don't
turn
into
plain
cluster,
but
I
use
it
for
for
idempotent
image
creation
and
that
it
works
really
well
for
that,
and
it
basically
allows
there's
enough
parameters
there
to
do
what
we
need
to
do
and
we'd
be
happy
to
push
that
some
other
common
location
for
people
to
use
it.
Reference.
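To make the shape of that concrete, a hypothetical invocation of such an Ansible-based stamper; the playbook and variable names are invented for illustration and are not the AWS provider's actual tooling.

```bash
# Hypothetical image-stamping run; "stamp-image.yml" and the -e variables
# are illustrative only.
ansible-playbook stamp-image.yml \
  -e base_os=ubuntu-18.04 \
  -e kubernetes_version=1.12.2
```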
C
Not
if
they're
open-source
license
we
only
stamp
currently
for
what's
a
code
for
CentOS
and
blue
two.
If
you
are
worrying
about
other
examples,
you
can
basically
pass
that
you
know
information
into
cloud
in
it
too
as
well,
and
your
cloud
in
it
script
would
be
a
little
more
complicated,
but
most
of
your
other
stuff
would
not
be
so
like
if
you're
doing
a
traditional
rel
type
of
things.
You
would
have
your
your
registry
information
for
how
you're
going
to
certify
your
distro.
B: Okay, I think most of my questions are answered, but another big problem we have with this proposal is how we want to store the scripts: in our code, or in the config maps that you mentioned, or using bundles? Do we want to do something like that, and what would that look like? And do we want to use templating, with just our Go templates? What would we prefer?
A
Yes,
this
was
the
one
that
at
the
end,
if
I
wanted
it
go
so
I'm,
pretty
leery
of
using
go
templates,
I
think
there
was.
We
initially
did
that
for
the
gtp
implementation
and
switched
over
to
using
config
maps,
because
I've
used
own
templates.
You
actually
have
to
recompile
and
Reaper
binaries
when
you
want
to
fix
your
script,
which
is
really
annoying,
and
we
found
this
especially
even
purchasing
like
development
velocity.
You
didn't
want
to
either
cross
trumpets.
A
You
can
tweak
your
config
map
if
those
things
are
working
and
then
you
can
do
that
config
max,
let's
get
Joe
next
time.
That's
really
convenient
one
downside
of
using
config
maps
as
if
they're,
basically
just
an
untyped
data
blog.
So
the
logical
next
step
after
config
Maps
is
to
look
at
using
something
I
can
see
already,
but
you
actually
have
a
better
type
information
in
there
and
the
reason
I
mentioned.
The
bundle
is
because
the
bundle
effectively
already
has
sort
of
proto
CRD.
For
this
use
case.
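A minimal sketch of the config-map approach being described, with the bootstrap script stored as an (untyped) blob of data; the names are illustrative.

```bash
# Hedged sketch: storing a bootstrap script in a ConfigMap so it can be
# tweaked without recompiling or republishing any binaries.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: machine-setup-scripts
  namespace: kube-system
data:
  install.sh: |
    #!/bin/bash
    echo "install kubeadm, kubelet, ..."
EOF

# The script can then be edited in place and picked up for the next machine:
kubectl -n kube-system edit configmap machine-setup-scripts
```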
A
I,
don't
think
it's
necessary
had
a
lot
of
eyes
on
it
yet,
but
if
I
think
Josh
dropped
a
link
to
it
this
morning
in
the
dock,
that's
not
better,
but
if
you
are
but
there's
a
part
of
the
bundle
that
they
see
is
going
config
version
which
has
a
place
to
stick.
You
know
initialization
scripts,
which
you
know
would
be
basically
be
a
place
where
we
can
say
you
know
in
a
code
you
get
have
been.
This
kubernetes
object
right.
A
It
could
be
the
bundle
charity
or
a
similar
CRD
and
the
codes
as
a
great
community
in
in
scripts.
Let
me
run
those
scripts
and
we
can
then
you
know,
take
the
common
in
scripts
and
then
these
are
specialization
for
cluster
formation.
Put
those
two
things
together
and
pass
that
to
the
actuator
code.
It's
sort
of
how
I
was
picturing,
it
I
think
I
think
avoiding
go
templates
will
be
really
nice
just
because
I
don't
want
to
have
to
build
new
binaries.
If
we
want
to
tweak
it
Network,
don't
let
it.
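Purely as a sketch of the shape being described, a hypothetical CRD instance holding init scripts that an actuator could consume; the kind and fields are invented for illustration and are not the actual bundle types.

```bash
# Hypothetical object combining common init scripts with a per-cluster
# specialization; every name here is invented.
cat <<'EOF' | kubectl apply -f -
apiVersion: example.k8s.io/v1alpha1
kind: NodeInitScripts
metadata:
  name: common-node-init
spec:
  scripts:
    - name: 00-install-kubernetes
      contents: |
        #!/bin/bash
        # shared install logic goes here
    - name: 10-provider-specific
      contents: |
        #!/bin/bash
        # per-provider specialization goes here
EOF
```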
B: Okay, that makes some sense, because Go templates require building, and maybe we want to move away from that so it is easier to maintain everything. So I will take a look into bundles; I'm still not sure how they work, but, as you said, maybe there are links somewhere in the proposal. I will try to find out a little bit more about that, and maybe update the proposal and the PR I put up.
A
Yes,
I
think
one
of
the
things
you
know
I
remember
this
morning
is
we've
been
talking
with
the
cue
baby
of
idiom
folks
about
possibly
having
them
adopt
parts
of
the
bundle
as
well,
because
it's
it's
you
know
it's
a
pretty
generic
sort
of
schema
for
how
to
define
what
should
be
running
your
cluster.
This
specific
part
of
the
bundle,
the
notice
figure
in
part,
is
I,
think
a
part
that
the
cube,
ATM
product
itself
probably
doesn't
care
about
because
they
don't
care
about
how
the
note
is
configured
right
thanks.
A
The
first
person
do
that
out
of
and
and
someone
to
run,
QAM
joint
and
it
leads
to
higher
level
tooling,
let's
close
our
API
to
configure
nodes
to
be
in
a
ready
state
and
the
new
config
is
basically
a
way
for
us
to
codify
what
the
clustering
guy
would
need
to
do
to
get
to
know.
It's
ready
for
cue
video
join.
A
The
other
part
is
there
are
other
parts
of
the
bundle
specification
and
talk
about
like
how
the
control
line
should
be
configured
which
different
you
know.
Binary
should
be
run
and
so
forth
and
I
think
that
part
is
less
relevant
to
this
discussion,
but
might
be
relevant
to
what
you've
mentioned
previously
about
how
we
pass
data
down
from
the
foster
API
in
the
cube
ATM.
B
Okay,
I
think
it
sounds
good
Tuggle.
You
can
put
all
this
stuff,
because
this
topic
is
still
a
very
important
role,
but
not
really
explaining
the
proposal
beside
musical
templates.
So
before
proceeding
with
these
purposes,
all
right,
you
take
a
look
into
that
and
try
to
provide
as
much
as
you
tear
as
possible
and
when
to
fight
them
alternate
away.
We
decide
go
templates
because
indeed,
maybe
that's
not
the
best
way
to
handle
all
this
I.
A: So if you put it out there as a "here's what we think it might look like," this would be a really great, concrete use case to validate whether that approach is correct and, if not, what should be fixed about it. I guess the main thing I propose is that we actually use a Kubernetes-style type for this, and the bundle could build off that type, even if it's not the type that's currently in the bundle.
A: I'd love for those things to come together, and I know that the kubeadm folks have started looking at the bundle, and I know that Justin has been really excited about trying to apply the bundle as well. So again, from a consistency point of view, as we're trying to drive consistency across different pieces of the cluster lifecycle stack: if we can start finding things like that, consistent ways to describe how we configure things, then it becomes a powerful tool we can take to people.
C: Right now the management of it is very fractal, and in fact it's almost unsustainable. I don't like doing it this way, and I don't like doing all the other things either, but unless we actually have a single canonical location for how we manage this stuff, it doesn't do any of our users or customers any good at all, right?
A
Yeah
I
think
that,
having
some
consistency
on
how
we,
you
know,
configure
control
planes
and
how
we
configure
nodes
and
how
we
eventually
configure
machines
and
clusters
and
all
these
different
pieces,
I
think
as
part
of
the
value
that
the
host
device
IO
state
can
bring
to
the
community,
because
if
we
can
start
driving
consistency
across
the
tools
that
are
part
of
ours
day
and
potentially
even
into
some
of
the
commercial
tools
as
well.
And
that
creates
a
more
sort
of
seamless,
consistent
user
experience.
Right,
which
I
think
is
part
of
our
goal.
B: If there aren't any other questions (we've used a lot of the meeting time on this), then if you have any other questions, feedback, or anything like that, you can leave a comment on the Google document or on the PR, and we will try to get back to you as soon as possible to answer the comments. I will also try to fold everything we talked about today into the proposal or, if needed, give an update.
A
And
and
I
just
want
to
make
sure
that
somebody
thank
you
for
doing.
This
is
great
they're,
trying
to
drive
consistency
here,
because
I
think
having
a
place,
you
know
the
end.
Maybe
we
should
also
need
cover
D
with
the
image
Stamper,
but
having
a
place
where
we
can
sort
of
consolidate
on
these
boots
driving
scripts,
a
few
room
that
we
have
to
maintain
the
more
reliable
we
can
make
them
the
better
test
that
they
will
be
Federer
like
I,
think
that
is.
That
is
a
great
goal
to
have
in
mind.
A: Yes, it's at the bottom of the Google doc linked from the notes, but I can also link it directly into the notes as well. I'll drop it in the chat real quick for you. Thanks.
A
All
right,
I
think.
Third,
one
I'll,
let
you
know
I
have
been
going
through
PRS
and
trying
to
get
things
merge
it's
always
easier
if
someone
is
already
ldcm
them.
So
if
you
find
people
to
review
your
pr's
and
give
them
a
first
batch,
that's
super
helpful
and
otherwise
I'm
doing
my
best
to
go
with
them.
We've
merged
a
number
in
the
last
couple
days
and
I
see
people
are
still
sending
more
this
morning,
I
think
so.
There's
also
it's
great
to
see
people
stepping
up
in
fixing
things
all
right
with
that.