From YouTube: Kubernetes SIG Cluster Lifecycle 20180627 - Cluster API
A: Let's look over the action items from last week. The first action item was that we were supposed to pick out a naming convention for the providers. Chris opened an issue and pinged people, and that started a discussion. There are several proposals so far, and it looks like there is real momentum: people like the idea of using "cluster-api-provider" instead of alternatives like "actuator", "compute provider", or "cloud provider". So I guess I will wait on that.
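For concreteness, the convention being favored would produce repository names following this pattern (the specific provider names below are my illustration, not from the meeting):

```
cluster-api-provider-gcp
cluster-api-provider-aws
```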
B: I'll just jump in real quick. I think next meeting we should just get a formal thumbs-up on the name, and then we have to go to the broader SIG and figure out how this whole process of getting new repositories is going to work moving forward. Ideally we can pick that up on the next call, so if you want to assign an action item to me, that would be great.
C: Interesting. One of the questions raised in the issue is: what about providers which span clouds, or multiple implementations of the same provider? Do we have any concerns about occupying the kubernetes-sigs namespace with providers that may not be maintained long term?
B: I think we need to be kind of choosy about what gets a repo, and I don't think there's anything wrong with having more than one flavor of a kubernetes cluster per provider, or per use case.
A: A great example here is private and public topologies for AWS: how do you want your network configured? There are two completely different ways of doing that using the same provider.
B: In my mind, that would live in the same repository under the AWS controller, but I think we're just going to have to cross each one of these bridges one at a time. And instead of using a three-letter acronym that's relevant to the cloud, we can come up with a short name or some other name if needed for one-off use cases downstream. I think it's totally a pattern we can adhere to.
C: This next agenda item I think is super quick; I hesitated even adding it this week. I'm super swamped and haven't made as much progress on my skeleton or null providers; we've got people that have flown in from all over, so it's been pretty busy. But I just wanted to see if Daniel had any update on the SSH provider, and then I wanted to see where an appropriate place might be to document the different existing providers that are in progress. I can think of two places.
D: I'm working on the provider, but it's not quite in a state where I want to open up the repo; it's still under active development. But yeah, I'd like to work with you and anybody else on either a skeleton repo or maybe a kind of document. I have some notes of my own on blockers that I hit, things around the dependencies, etc.
C: I think that's great for documenting how to create providers. In terms of pointing people to existing providers, I would like to hear if anyone objects; it was hard for me to discover the in-progress implementations. I asked around, and it was only after my third time asking that I got this list of five. So okay, I'll take that action item.
D: Yeah, I submitted this PR as I was trying to use both a machine provider status and a cluster provider status. At the time I was looking at the way the GCP deployer implemented encoding and decoding, and my immediate takeaway was that I could do it the same way, but I would need to factor out the provider status into its own type. I think in hindsight there are probably alternatives.
D: We sort of talked about it on the PR, in a conversation between Rob and myself. If anybody objects, or could maybe point me to an alternative, I would appreciate it. But otherwise it seems like a reasonable thing to do, in the same way that provider config went into its own type.
D: Just to bring up, and I think this has been brought up on the Slack channel: right now the cluster API repo has dependencies on the 1.9 kubernetes release, and I'm wondering what our plan is to move them. As kubernetes releases come along, do we want to establish some regular plan for moving our dependencies? One motivation, at least, that I have: as I'm working on the SSH provider, I internally want to generate a kubeadm MasterConfiguration, so I have to have a dependency on the kubeadm API. In my case I would like to use kubeadm 1.10, so I have to generate this MasterConfiguration using the 1.10 type. But I can't, because the SSH provider also depends on cluster-api, which in turn is locked on 1.9.
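For reference, the object in question is kubeadm's v1alpha1 MasterConfiguration, the API version shipped with Kubernetes 1.10. A minimal sketch, with all field values invented for illustration:

```yaml
# kubeadm v1alpha1 MasterConfiguration (the 1.10-era API).
# Values below are illustrative, not from the meeting.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.10.4
api:
  advertiseAddress: 10.0.0.10
token: abcdef.0123456789abcdef
```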
A: I wrote the first version, so I can give some context: the cluster API code, the GCE provider, and all the prototypes were generated using apiserver-builder. And yes, apiserver-builder is not getting a lot of new updates, and probably won't in the future, so I think we need to make a decision here. I see a couple of options.
A: The first is for someone to update apiserver-builder to move its base to 1.10 or the latest kubernetes, and then we regenerate our code; that's a lot of work. The other option would be, maybe at a later date for the project, to get rid of apiserver-builder and use maybe kubebuilder instead. But that also requires a lot of boilerplate code, copy-pasted in a lot of places, so neither option is actually ideal.
A: I think for this particular configuration there may be a workaround, and we need a better way to do it, but the issue in general is, I think, a real concern. There may be other issues later, with the API needing to reference a later version of kubernetes for new functionality or whatever. So if we stick with 1.9, there will be issues sooner or later, I think.
A: Right, and that's a good idea. Oh, you already brought up kubebuilder? Yes, kubebuilder, I think, is the plan long term. We have had discussions inside Google saying that in the future, when CRDs are mature and provide all the functionality we actually need, we can move back to a CRD-based implementation. At that time we can definitely use kubebuilder, and it will be the more officially maintained tool going forward. The problem it doesn't solve is the base problem: it is still tied to a particular kubernetes version.
A: No, there's no particular usage; apiserver-builder was pinned, so right now you end up on kubernetes 1.9. Because of the way we depend on apiserver-builder to generate a lot of the controller code, that's where the dependency comes from. There's no particular release we need; we are just pinned to the kubernetes 1.9 release. Okay, thanks, all right.
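The pin being described would have looked something like the following dep constraint (a sketch, assuming the repo managed vendoring with Gopkg.toml at the time; the exact constraint lines are my illustration, not from the meeting):

```toml
# Illustrative Gopkg.toml fragment: code generated by apiserver-builder,
# with k8s.io dependencies held on the 1.9 line.
[[constraint]]
  name = "k8s.io/apimachinery"
  branch = "release-1.9"

[[constraint]]
  name = "k8s.io/client-go"
  version = "~6.0.0"  # client-go 6.x corresponds to Kubernetes 1.9
```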
G: I would like to propose a change to how we do this: you can work on the same feature, bug fixes, whatever you want, but any image changes and any rollout need to happen in a separate, isolated pull request. Maybe we can have a template for that, though I'm not sure how to enforce it. But this will just make the history cleaner.