From YouTube: Kubernetes SIG Cluster Lifecycle 20180509 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.xh5qa37052mc
Highlights:
- Recap from KubeCon
- Introducing Tencent
- Process for changing deployer images
- Cluster API Validation
- MachineDeployment PR
- Moving cloud provider code out of tree
- Validating ProviderConfig
A
Hello and welcome to the Wednesday, May 9th edition of the SIG Cluster Lifecycle Cluster API meeting. We have a pretty full agenda today, so we're going to dive in and start off with a recap from Kris Nova of the in-person Cluster API meeting that happened last week. Over to you.
B
So we had kind of an impromptu meeting that I thought was going to be a handful of folks, and it turned out to be probably 20 or 30 people at the end of KubeCon. We made it a point not to decide anything, but only to discuss open agenda items and create action items to bring back to this group. We ran it as an unconference. The first thing we talked about was the Cluster API and Ark backup, just discussing different scenarios for backing up a cluster with the Cluster API.
B
This was kind of a longer discussion for us, and we decided that it would make sense to take an action item to ping the multicluster folks, let them know that we're both working on cluster definitions, and start to figure out what that boundary looks like, as a group. I think those are the two big ones. We talked about bare metal a little bit, but nothing concrete came out of it other than that it's going to be interesting.
B
I think another issue that was probably important to bring up (can you guys hear me okay?): I forget who it was, but somebody had loosely committed to trying to help us get to a stable tag of the API in the new repository as well. I think we can probably dig into that as we go through the nitty-gritty of what you guys have been working on. I haven't been here the past couple weeks, so I'm still playing catch-up. I think that is the only update from KubeCon. Thank you.
B
Totally agree; I kind of just opened that up to get the ball rolling. My plan of attack there was to make the old directory a git submodule, so that anybody who's still vendoring kube-deploy could potentially check the submodule out and get at least a relatively close copy of the code, just to preserve a little bit of backwards compatibility.
A
Do you think this would be a good forum to ask people if they would prefer a git submodule versus straight deletion? I mean, for people who are vendoring, if they're updating their vendor dependencies, is it easier, or just as easy, to update to the new location for those dependencies as it is to deal with any changes that might come from the git submodule?
G
So my strawman for our process, until we have automation, is that if people are making changes that will require image updates, they should make sure that their contributions work with at least one provider. Then we should have some way to give a heads-up to the people who own the various providers to update images, and we should merge only Monday through Thursday, so that the deployer doesn't break and stay broken over the weekend. Does that seem reasonable? Or do we all understand the issue?
G
So, in the future, ideally we have the automation to run the integration tests to validate the deployers, as well as to update images when people check in changes that require an image bump. But we don't have that now, and it'd be great not to leave the deployers broken for long periods of time. So this is just some process for the interim, yeah.
C
What it does is basically check whether the cluster, the machines, and the system are working correctly: for example, whether the master components are already up and functioning. It's a little different from the e2e tests, because this is like a step before that. I believe the e2e tests are not working today anyway, and this will provide something: okay, now we have it or not, you should check that. So it's something which will help the developers, or anyone, to understand what is happening on the cluster and whether it's as expected.
B
I think, in general, this crosses over into the status-of-the-machine realm a little bit, which is: where do we draw the line between the piece of software that's going to be continually checking the status of our machines, ensuring that they're healthy, and this overall concept of a healthy cluster? So, for instance, we could have all of our machines registering okay while something is still going wrong in the cluster, and then this validation would fail.
C
So there could be something there, actually, because, for example, if we are creating a cluster and we want to wait until the apiserver is ready, and we'll have some components running, like the controller manager or machine controller, to make sure everything's working, then we will need some function here. And then I'm not sure whether the validation should also check whether those components are working as well.
B
I guess my question here is: are we introducing a new piece of server-side software, other than the controller, that's going to be responsible for protecting the health of our infrastructure? Or are we just going to depend only on the controller code, with the command-line tool being just a thin wrapper that checks status, I'd say?
C
Okay, so I would rather actually share code with what's already there; the functionality in the controller manager or machine controller already exists. So I think the question here is whether we should do it on the client side or on the server side. I think, if we can, I'd prefer to do most of the stuff on the server side, as much as possible, mostly because a lot of the things running there will also require some sort of validation functionality, so we can share the code there.
C
Well, then there will be some checks that, I'd say, cannot be done on the server side, like checking whether the controller manager itself is working. That has to happen on the client side, calling the APIs, and that part, I guess, should live on the client side. So, ideally, maybe the tool is a mix of both where possible. Yeah.
B
I mean, I think having the mix is important, and I think coming up a layer above the controller and checking the stuff below the controller is also important. I would be wary of introducing new code, independent of the controller, that does the same kind of validation that we'd also expect the controller to have. Yeah, I would almost rather see something like a section of the controller code that has a validation step. Yeah.
J
kops validate cluster is a little tool that basically checks a similar sort of thing: that the cluster is basically ready. We don't currently have a wait option; I think we should, but currently it just exits ready or not ready. I think people typically wrap it in a loop, and when they're using automated tooling to build clusters, they effectively use it to gate moving on in their deployment procedure or their update procedure. We also embed the logic in our rolling update.
J
Very similar, yes. What Dennis is describing is probably better, but yes. Currently, kops has a sort of less clear layering; it's not so consolidated, and validate cluster goes and, like, talks to the cloud provider as well. And it looks like this is implementable without having to do that, if the controller is publishing the appropriate status, whatever it is, which I think would be much better.
H
I guess I would say my concern here is that if you're pushing a lot of logic down to the client, it assumes that you're always using clusterctl, and that may not be the case. If I want to basically ensure, as the first use case, that the controller manager is up and operational, or doing the things that I would want it to do, I would almost prefer this as an additional process, or like a sidecar process.
J
And maybe the same code, if we wrote it in a modular way, could be used in other places. There is a sort of driving use case that pops up, by the way, which is the classic failure mode: you don't have quorum on your core components, and one of your machines, or an instance, say, is not available.
C
Yes, like that, yeah. What these do now, and I don't think anyone thinks it's wrong, is run kubectl get nodes, get componentstatuses, and get the system pods, yeah. Anyone can do that. So it's a matter of whether we want to make this easy; I guess the benefit is that it's actually so easy to use, because you run one command and it checks everything, and the readiness check includes all of those.
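Folding those per-object checks (nodes, component statuses, system pods) into a single pass that reports every problem at once could be sketched roughly as below. The Node and Component structs are hypothetical stand-ins for the real API objects a client would list:

```go
package main

import "fmt"

// Node and Component are hypothetical stand-ins for the objects that
// "kubectl get nodes" and "kubectl get componentstatuses" would return.
type Node struct {
	Name  string
	Ready bool
}

type Component struct {
	Name    string
	Healthy bool
}

// validate folds the individual checks into one pass and returns every
// problem found, so one command can report the whole picture at once.
func validate(nodes []Node, comps []Component) []string {
	var problems []string
	for _, n := range nodes {
		if !n.Ready {
			problems = append(problems, "node not ready: "+n.Name)
		}
	}
	for _, c := range comps {
		if !c.Healthy {
			problems = append(problems, "component unhealthy: "+c.Name)
		}
	}
	return problems
}

func main() {
	nodes := []Node{{"node-a", true}, {"node-b", false}}
	comps := []Component{{"controller-manager", true}, {"etcd-0", false}}
	for _, p := range validate(nodes, comps) {
		fmt.Println(p)
	}
}
```

The value, as the speaker notes, is less in any single check than in aggregating them behind one command with readable output.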
A
Maybe that's addressing Craig's concern about where the logic lives. If the logic is really just doing things you could all do by hand, but doing it in one command and sort of pretty-printing or formatting the output to make it easier for you to understand what might not be working, then it isn't adding additional logic that we should have on the server side. It's just sort of tying together server-side logic in a way that maybe makes it easier for humans to consume.
B
I mean, I think it's an interesting idea to bring up, and I think we should be talking about the status of the infrastructure and how we make sense of it; like Robbie said, how we evaluate that together. As far as implementation into the tool goes, I think we should just follow standard protocol and open up, you know, a small, concrete issue around what that would look like technically. Okay.
A
In
addition
to
that,
there
there's
the
stocks
that
as
I'm
presenting
that,
if
folks
are
interested
in
sort
of
the
scope
of
this
tool
and
what
its
response
oppose.
If
you
please
go,
take
a
look
at
that
and
add
comments,
because
if
we
open
a
small
issue,
it's
almost
certainly
gonna
link
to
the
doc,
where
there'll
be
lots
of
details
and
the
more
agreement
that
we
get
on
on
that
end
state
before
we
start
spending
time.
Writing,
oh
the
better
put
them
or
not
crashing
later.
L
Unfortunately, I don't think Kenny has a mic, so he wanted me to just talk a little bit about this. He is taking a first pass at the MachineDeployment implementation, and he would just like to put that out there and make sure everyone gets some eyes on it, so he gets a lot of good feedback on it.
B
I could demo it, I mean, like next week; we wanted to. But I think before we schedule it, I kind of also just wanted to paint the picture of why I did what I did and how this relates to the clusterctl tooling. Which was: I wrote a completely vanilla implementation from scratch, explicitly not using the common code in the repo as it stands today, in the hopes of proving and disproving different patterns along the way. So that's something I would want folks to be aware of.
L
When I moved the repos over, I initially didn't move any of the provider stuff over, and I got a little pushback that there wasn't really a plan around how we were going to manage that; the original task was just to move everything over to the new repo. As far as going forward with this, we have some...
L
We need to figure out how we want to support multiple repos and the testing scenario: like, how do we validate Cluster API changes end to end without a provider, or do we do it with the set of providers? There are a lot of issues regarding this, and we just haven't really talked about it, because we've been focused on getting the core stuff stable and working. Jess, do you want to add anything to that, or Chris?
A
Yeah, I think that's a good point. We had some similar conversations at the contributor meetup before KubeCon in Austin about moving things around, and some people, I think, came in looking with the assumption, of course you want to do X. And then, when we actually started asking everybody about it, there was a lot of pushback saying, actually, that's probably not a good idea.
A
We
probably
are
going
to
make
our
development
any
faster
or
easier
or
better,
so
just
sort
of
putting
it
out
there
is
like.
Maybe
we
came
in
with
the
assumption
that
you
know
obviously,
we'd
want
to
put
the
cloud
provider
code
out.
Is
that
actually
the
right
path
forward?
Do
people
have
strong
opinions
one
way
or
the
other,
or
you
know
good
rationale?
Maybe
that's
why
we
should
keep
it
in
there.
How
would
you
propose?
We
gather
that
input
I,
think
you
know
we
can
create
an
issue
and
ask
people
to
comment
on
it.
I
So this is more basic than the validations we talked about before: we just always ensure that what the provider submits to the API server is validly formed. And so, right now, if I want to use the API server image, I don't have a way to modify it to validate a specific cloud provider, and I'm not sure what would be a good pattern to support that.
L
So, as far as validating provider config embedded in Machine and Cluster objects: that's just a runtime.RawExtension, and I don't have a good story for how to validate that. But I believe a while back Robby made a change, and someone made a similar change in the Cluster object, to have it also be able to point to a different API endpoint, or API type, and you could register a CRD that has CRD-based validation.
L
You
went
that
way
if
you
really
wanted
and
then,
when
you
your
machine
controller,
sorry
when
you
go
to
create
that
API
type,
the
validation
could
fail
there
for
the
again
for
the
embedded.
The
best
story
I
can
come
up
with
is
when
your
machine
controller
sees
it
and
tries
to
parse
it
back
a
error
status
in
the
machine
status
somewhere,
saying
invalid
provider
config
with
validation
errors.
It's
not
a
great
story,
but
I'm
open
ideas
on
how
to
make
that
part.
Better.
L
I'm not sure how that could work, but if you register a CRD with the API server and it shows up in that API server's scheme, I guess there could be validation logic that would try to parse the API version and type out of the embedded config, see if it knows about it, and then run validation on that. You're welcome to try to prototype something like that, let me put it that way.
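The idea floated here, peeking at the embedded provider config just far enough to recover its API version and kind and then dispatching to a registered validator, could look roughly like this. Everything below is a hypothetical sketch: the registry, the example type names, and the use of plain JSON bytes in place of a real runtime.RawExtension are all assumptions for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// typeMeta pulls just apiVersion and kind out of an embedded provider
// config, ignoring all provider-specific fields.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

// validators maps "apiVersion/kind" to a validation function; registering
// one stands in for the CRD-based validation mentioned above.
var validators = map[string]func(raw []byte) error{}

// validateProviderConfig parses the type info out of the raw bytes and,
// if a validator is registered for that type, runs it.
func validateProviderConfig(raw []byte) error {
	var tm typeMeta
	if err := json.Unmarshal(raw, &tm); err != nil {
		return fmt.Errorf("invalid provider config: %v", err)
	}
	v, ok := validators[tm.APIVersion+"/"+tm.Kind]
	if !ok {
		// Unknown type: nothing registered, so we cannot validate it.
		return fmt.Errorf("no validator registered for %s/%s", tm.APIVersion, tm.Kind)
	}
	return v(raw)
}

func main() {
	// Hypothetical provider type and rule, purely for illustration.
	validators["example.dev/v1/MachineProviderConfig"] = func(raw []byte) error {
		var cfg struct {
			MachineType string `json:"machineType"`
		}
		if err := json.Unmarshal(raw, &cfg); err != nil {
			return err
		}
		if cfg.MachineType == "" {
			return fmt.Errorf("machineType must be set")
		}
		return nil
	}

	good := []byte(`{"apiVersion":"example.dev/v1","kind":"MachineProviderConfig","machineType":"n1-standard-2"}`)
	bad := []byte(`{"apiVersion":"example.dev/v1","kind":"MachineProviderConfig"}`)
	fmt.Println("good:", validateProviderConfig(good))
	fmt.Println("bad:", validateProviderConfig(bad))
}
```

In a controller, the error returned here would be what gets written back into the machine's status, as the fallback path described above.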