From YouTube: Kubernetes SIG Cluster Lifecycle 20180314 - Cluster API
Description
Link to doc: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
10:04:15 From Jason DeTiberus : I'd like it if we could get rid of the prefixes in favor of tags, but I'm not willing to fight that battle myself :)
10:16:52 From ctracey : Should we be taking a cue from the current work to extract cloud providers from kubernetes itself…will we fall into the same trap if compiled directly into the cluster-api?
10:17:03 From Cluster Ops : ^ that is such a good question
A: Hello, and welcome to the Wednesday, March 14th Cluster API meeting, sponsored by SIG Cluster Lifecycle. First on the agenda today, I wanted to bring up the name of the new repository that we had set up. We had a few people comment on the name, and I mean, in general the new repository naming structure has been a bit controversial. Right now we are github.com/kubernetes-sigs/scl_cluster-api: a lot of weird delimiters and a lot of weird acronyms.
A: Cool. I think that is relatively easy to change at this point, if anybody does want to bring something else up, but once we start vendoring that code, changing the repository name will be a little bit trickier. Looks like a comment. Jason: "I'd like it if we could get rid of the prefixes in favor of tags, but I'm not really willing to fight that battle myself." I agree; I think the repository name should be descriptive of what the repository is, and not used as a way to denote who owns it.
A: Okay, so yeah, if anybody comes up with anything else, feel free to bring it up. I think the other thing for the repository is: we need to figure out what we want to track there. We had talked a little bit about moving the API there. Is that something we want to focus on, and do we want to assign an owner for that work?
A: We had asked for a new repository when we talked about separating the API away from the implementation, and the original reason we put the API work in kube-deploy anyway was because the repository process was frozen while the steering committee figured out the new process. Now that we have a new process, we were able to get a new repository, and this is kind of an open discussion about what we want to do with that and how we want to start tracking work there, if at all. Okay.
A: Another problem that we're going to run into is if we want to move issues. I know we put a lot of work into defining milestones in the kube-deploy repository; migrating those might also be a chore as well, which is why I think it might be helpful to have somebody own this. But it still begs the question of: do we even want to go through this whole process in general?
A: Okay, I think Robert might be trying to log in, so if he kicks me off of the existing account, we'll restart it as soon as we can, I think.
A: Just to catch you up real quick, Robert: we were just having a discussion about the new repository, and we're going to open up an issue in the kube-deploy issue log with a proposal for the scope of what should be in the new repo and what shouldn't move. I think the goal there was to have a bit of a concurrent long-form discussion instead of trying to hash it out here. Cool.
A: The work was internal to the project, and switching things out or changing field types should be a relatively trivial amount of work at this point, so the fact that the API is not stable isn't really going to cause too many problems. But I just wanted to get it in there so that we can start using it and start to develop opinions about what is and isn't working for us.
A: If anybody wants to look at my approach or talk more about it, I'm happy to share. The TL;DR is that I pretty much nested everything in providerConfig, both for machine sets and for the control plane / cluster definition itself. We'll start moving directives up into the higher-level API as needed and as the project matures.
C: Just wanted to bring this up really quick. One of the things I want to do, at least for the Google implementation of the controller, is to start giving it a config map for how it should generally operate, and maybe make it more configurable. In going through this exercise, I noticed how the code is architected.
C: Constructing a machine controller does not allow very much customization as you construct it, and one of the really bad things I think we have right now is that the actuators are linked into the machine controller library directly. I would like to change that so that when you construct a new machine controller, you pass in the actuator you want it to use, and you do that at the top level.
C: At the top level, like in package main, I can import a Google machine actuator and pass it into the library, or someone could import an AWS machine actuator and pass that into the library, without the library itself being directly dependent on anything related to Google or AWS. The problem with this is that the auto-generated code assumes that you can construct the machine controller in a certain way, and to make this change I would have to break the auto-generated code. I was wondering if, as a group, we're okay with that.
C: If you're trying to upgrade from, say, the 1.8 to the 1.9 version of the builder, or 1.9 to 1.10, you would have to regenerate these files and then go back and manually make this change again, because I think the API changed from 1.8 to 1.9.
C: As far as the cloud provider API (sorry, I was speaking in terms of the external cloud controller for the API server), from what I've read, they're more focused on monitoring machine status to know if a node's not coming back, creating actual load balancers, and (I forget the other two things) oh, doing routes and stuff like that. So there is some overlap, but our abstraction is currently at a higher level, where we're just saying "add a machine."
B: Well, to expand upon that question a bit more: I think some of the other work happening to extract cloud providers is to enable broader support for clouds that are not, quote-unquote, in-tree, so DigitalOcean being one of them, etc. Are we locking ourselves in by having to compile the actuators into the code itself?
C: Oh, I see. What I'm saying is that right now we are locked in that way, and what I'm suggesting is trying to get us out of that by allowing people to have a top-level main where they construct their own actuator from their own library and then pass it in to our controller library: "new controller, here's the actuator for you to use," and then run the controller. Each cloud provider could have something like a three-line main, reusing our cloud provider library but passing in their own actuator.
D: I mean, I think at a high level it's definitely easy to swap in a separate machine controller. I think what Chris is talking about is: okay, so now you want to build a new machine controller for something that's out of tree, like, say, DigitalOcean. How much work is that? If we can make the core controller part, the part that does a lot of the watching of the API and notifying their code that a machine should be created or deleted or updated, into reusable generic code, then all you have to do is write the actual glue to DigitalOcean and pass that actuator into the common controller, and that reduces a lot of the effort you need to put into writing that hook. You can still reimplement the whole custom controller yourself if you want to, but we'd like to be able to provide some common libraries to make it easier.
D: The actuator interface may be simple enough if it's just create, read, update, delete, so I kept it a simple sort of CRUD interface to the underlying platform, which is basically the abstraction that Terraform has, right? Terraform's plugins for cloud providers basically do just have a CRUD interface, and I think things like BOSH have a similar interface in their CPI as well. So that's a pretty standard way to address this.
A: Okay, I'm fine with a rename. I vendored in the code from the wrong directory to keep it going anyway, so I'm going to have to change it no matter what. So, yeah.
D: Yeah, so to restate that: the machine controller for GCP is under the cloud/google directory, and there's a place there for, you know, the Azure or DigitalOcean machine controllers, any of those you want to be in-tree. The gcp-deployer is sort of the prototype bootstrapping tool so that we can actually test and play with the Cluster API on GCP, and so I think the plan would be to basically, yeah, I guess the thinking is to replace that with the proposal that Jessica came up with for how to bootstrap clusters.