From YouTube: Kubernetes SIG Cluster Lifecycle 20180530 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.7s0qcdkp4xc8
Highlights:
- Base images moving from Alpine to Debian
- API cleanup, in particular removing the container runtime from the machine spec
- Proposal to add network addresses to the machine
- Plan for cluster-wide actuators
- Machine-to-Cluster references
- Renaming the Terraform provider to the vSphere provider
A
Hello and welcome to the May 30th, 2018 edition of the Cluster API breakout meeting for SIG Cluster Lifecycle. We have a relatively short agenda today, so if you have something you'd like to talk about, please go ahead and paste a link into the chat. To start off, I want to give an FYI: late last week I sent a PR to switch all the base images for the Cluster API from Alpine to Debian stretch.
A
This is part of a process that's happening across all of Kubernetes. There were a couple of questions on the PR, and the context is that you'll see this change happening across essentially all of the Kubernetes-associated repos. If you want more information, you can go talk to Jago, who is helping orchestrate the teams at Google to change the parts of the Kubernetes ecosystem that they own. That's the quick summary. Some people had questions on the PR; if anybody has any extra questions, I would direct you to Jago, because I don't have any more information. I was just told that we were doing the switch and that we should own that change for our components.
So, next on the agenda, I had an issue about API cleanup. Last week, Tim Hockin and Justin were visiting the Google office here in Seattle, and we did a deep dive on the Cluster API and walked through the API. Tim had some questions; in particular, he asked us why we had a container runtime version in the machine spec. That agreed with things I had heard from Dawn in the past, and it confirmed to me that the container runtime is really an implementation detail that is often very tightly tied to the underlying operating system, and that having it as a declarative input to how a machine should be configured doesn't necessarily make a lot of sense with the direction Kubernetes is headed.
A
We actually also had a long argument about whether the kubelet version should even be an input field, or whether the kubelet should be a result of the operating system you asked for. I think the kubelet field has a lot of value in terms of being able to say, "this is the version of Kubernetes that I want running on my node," whereas it's a lot less clear that there's value in doing that for the container runtime.
A
So,
given
that
discussion,
I've
got
a
PR
ready
to
send
out
to
pull
out
to
contain
a
runtime
from
the
spec
I
mean
that
the
PR
basically
confirms
the
fact
that
nobody's
actually
using
this,
we
sort
of
put
it
in
their
spec.
You
to
believe
which
is
from
what
we've
been
saying,
the
wrong
way
to
design
the
API.
We
should
make
the
API
surface
as
small
as
possible
and
then
extend
it
later.
If
we
find
that
we
need
to
so
I
think
we
should
take
it
out.
A
We can always put it back in later if we find that it's actually important. I think the main reason we put it in in the first place was to help enable end-to-end testing scenarios, because we do have cases today where we want to test a new version of Docker, or test rkt or something else, in e2e tests, and I don't think that's a compelling enough argument to make this part of the full API. So unless there are any objections, I'll send this PR and hope it goes in pretty quickly.
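For context, a rough sketch of the kind of change being discussed; the field names here are approximations of the machine's version info, not the exact upstream definitions:

```go
// Sketch only: the container runtime entry comes out of the machine's
// declarative version info, leaving the Kubernetes component versions
// as the only top-level inputs. The runtime choice becomes a detail of
// the OS image or of provider-specific config instead.
package v1alpha1

type MachineVersionInfo struct {
	// Kubelet is the semantic version of kubelet to run on the node.
	Kubelet string `json:"kubelet"`
	// ControlPlane is the control plane version; set only on masters.
	ControlPlane string `json:"controlPlane,omitempty"`

	// Removed: a container runtime name/version pair. Nothing consumes
	// it, and the runtime is tightly tied to the OS anyway.
}
```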
A
In that same vein, there are a couple of other changes we talked about making to the API surface, to clean it up, tidy it up, and make it more consistent with the rest of Kubernetes. There will be a couple more things coming in the future; I just haven't had time to write them up, so I don't want to bother people with them until I've had time to create the issues. Are there any questions, comments, or concerns? I know there were some questions on the PR itself.
C
I haven't read the conversation; maybe I should do that first. I just wanted to mention that, historically... I agree that the container runtime is fundamentally part of the operating system that Kubernetes runs on, but it's been a super important part. CoreOS in particular has upgraded Docker versions in ways that have broken us in the past, and so did Docker itself, before they had the long-term support policy that they have now.
A
Yes, that's a great question. In GKE we have the same thing: we actually pin a version of Docker that we know works, and there have been many, many times where a new version of Docker has come out and people using GKE say, "oh, there's this new version of Docker; why don't I have this great new feature?"
A
You have pre-baked node images, using something like Wardroom, or you've rolled your own node images, with a container runtime built in that you know is working, as opposed to just pulling from the head of Ubuntu or the head of Debian or whatever they have in their apt repositories. I think that's more likely to be effective. Each provider would say: you've told me you want kubelet 1.11 on an Ubuntu OS, and the result is a base image that satisfies those two input parameters and comes with a container runtime that works. I think that's fair.
A
I would imagine that in the provider specification you'd be able to specify the runtime in some cases, where we might say we do want to test a new version of Docker. But instead of having that be a top-level API construct, it's part of the provider config: we'd say that on this provider that we use for end-to-end testing, we might want to override the version of Docker, and we have a startup script.
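As a hedged illustration of that idea (the type and field names here are hypothetical, not any real provider's config), the override would live in provider-specific config rather than the core API:

```go
// Hypothetical provider config showing where a runtime override could
// live once it is no longer a top-level API construct. Only this
// provider's actuator would interpret it.
package providerconfig

type ExampleProviderConfig struct {
	Zone        string `json:"zone"`
	MachineType string `json:"machineType"`
	OSImage     string `json:"osImage"`

	// DockerVersion, when set (for example by an e2e job testing a new
	// Docker release), overrides the runtime baked into the node image.
	DockerVersion string `json:"dockerVersion,omitempty"`
}
```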
E
So you can't necessarily just follow a node reference to get the hostname that you need to work with. In some deployments it seems like you may actually need that information to get the node registered, depending on how you're using this. We're also working in a scenario where we have a root cluster and target clusters, and the node would obviously not exist in the root cluster, where the machine record is created.
E
So, in the plan for tooling that could be generic, it seems like adding those to the machine status might be universal enough that it could be a candidate to go up to the top level, and then tooling could use it there. That's pretty much it. I proposed that we could do it the same way, using the same structure that is used for the node addresses today.
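A minimal sketch of that proposal, assuming the core/v1 address types; the exact placement in the Machine API was still being decided here:

```go
// Sketch: surface the machine's network addresses in MachineStatus by
// reusing the Node API's address structure, so generic tooling can
// find a host to reach even when the Node object only exists in a
// separate target cluster.
package v1alpha1

import corev1 "k8s.io/api/core/v1"

type MachineStatus struct {
	// Addresses mirrors Node.Status.Addresses: a list of typed entries
	// such as InternalIP, ExternalIP, and Hostname.
	Addresses []corev1.NodeAddress `json:"addresses,omitempty"`
}
```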
A
Yeah, that may not be the best API design to copy. I know that Tim has mentioned a couple of times that there are some early Kubernetes APIs he would love to change; he's mentioned Endpoints, and I think he might have mentioned Services, as patterns that should not be carried forward. So I think, if we do want to put it in, we should not necessarily blindly copy the node status structure.
E
Potentially reaching the machine to get it registered, or if you need to take some action over SSH or something like that. That's not what we're personally doing, but it seems like it could be a possibility in the sort of general tooling that you might write around the Cluster API. I don't have specific examples, but it just seemed like reaching those machines could be very important for that tooling.
A
I mean, not necessarily, right? Because if you're talking about IP addresses and names, you're likely to have more than one, especially if you think about machines that have more than one NIC on them. You might say, "I want this IP address and this IP address," so you need to have an array. We don't have that issue for Services, because a Service is really the one place to get to something.
A
Do a PR as well, if that's useful. Actually, I was going to say: maybe first, in the issue, can you put what you think it would look like, maybe a couple of options, or link to a doc where people can comment? Then we can try to send that around and get some feedback on it, and maybe we'll come back next week and try to make a decision.
E
Okay, will do. Thanks.
G
No, sorry, I meant more the underpinnings for a cluster actuator. For instance, right now we have the create and delete methods for machines, but I don't think we have them for clusters yet. That's something we're intending to add as well, because if you're doing actuators, for instance for AWS, you're going to have things like security groups, etc., that are cluster-wide.
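A sketch of what that could look like, by analogy with the machine actuator that already existed; the cluster-side method names and the import path are assumptions, not a merged design:

```go
// The machine actuator gives providers per-machine hooks; the idea is
// an analogous cluster-level interface for cluster-wide resources such
// as AWS security groups.
package actuators

import clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"

// Roughly the existing per-machine interface.
type MachineActuator interface {
	Create(cluster *clusterv1.Cluster, machine *clusterv1.Machine) error
	Delete(cluster *clusterv1.Cluster, machine *clusterv1.Machine) error
	Update(cluster *clusterv1.Cluster, machine *clusterv1.Machine) error
	Exists(cluster *clusterv1.Cluster, machine *clusterv1.Machine) (bool, error)
}

// Proposed analogue: reconcile and tear down cluster-scoped
// infrastructure once per cluster rather than once per machine.
type ClusterActuator interface {
	Reconcile(cluster *clusterv1.Cluster) error
	Delete(cluster *clusterv1.Cluster) error
}
```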
G
So actually, I have a related question as well: have we done any work to determine how we're going to do machine references against a cluster? Right now it's just one-to-one, or one-to-many I should say, but we had a thread about this for a while, and I don't know where it landed: how do we map a machine to a specific cluster?
H
I'd love to discuss that a little bit, if we have time in the meeting today. The couple of proposals that floated around in Slack in the last week were to decide whether to just have an annotation, or sorry, a label, on a machine that links back to the name of the cluster, or whether it's better to actually have something in the spec that's a reference back to the cluster.
A
Yeah, I mean, nodes already have labels on them, right, because they carry the normal Kubernetes metadata. So you could have a cluster with a label selector that basically bridges between the two, so that we have a sort of loose reference between them. In some ways that sounds a little bit more like the Kubernetes style to me, rather than having strong references. I guess one question is: what is the use case for connecting these things tightly?
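The two shapes being weighed look roughly like this. Both forms are sketches (the label key and field name are illustrative), and neither had been adopted at this point:

```go
// Option 1: loose coupling via a well-known label on the machine,
// which a cluster-side label selector can match (the key shown is
// illustrative):
//
//	metadata:
//	  labels:
//	    cluster.k8s.io/cluster-name: my-cluster
//
// Option 2: strong coupling via an explicit reference in the spec,
// making the association a visible, user-supplied field.
package v1alpha1

import corev1 "k8s.io/api/core/v1"

type MachineSpec struct {
	// ClusterRef names the Cluster, in the same namespace, that this
	// Machine belongs to.
	ClusterRef corev1.LocalObjectReference `json:"clusterRef"`
}
```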
H
Even from a user experience perspective, I'd like to have it in the spec of the MachineSet, so it's explicit that a user needs to establish, when they create the MachineSet, what cluster that MachineSet is associated with, and then that information can trickle down to the machines it creates. I think having just a known label that we put on the machine makes it more difficult to convey to the end user that they need to supply that information.
G
Yeah, it's just basically being able to have one particular Cluster API instance that can manage multiple clusters, and where do you draw the line between which machines belong to which cluster? I don't really know. I think the labels would help for queryability, but at the same time I can't really think of a good use case where a machine should ever belong to more than one cluster at a time.
H
For one thing, with the owner references: in that scenario you have more of an ownership relationship, where a ReplicaSet owns a Pod. For a reference from machines to a cluster, I don't think you have that owner relationship. I don't think you want a cluster to just take over ownership of machines simply because the machines belong to that cluster, because the user is creating those two things discretely and separately.
H
It would be a weird user experience if you create a cluster, create a MachineSet, then delete the cluster, and it happens to delete your MachineSet as well. I do think there is an argument for an owner relationship between the cluster and the machines that get created, if you want to say that a machine can't really exist without both its MachineSet and its cluster.
H
In that scenario, the MachineSet controller would wait for the cluster that the MachineSet belongs to to actually exist, and at that point it would create the machines, and the machines would have owner references back to both the MachineSet and the cluster, and if either of those got deleted, the machines would be deleted as well.
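Assuming standard Kubernetes garbage-collection semantics, the wiring just described would look something like this sketch (the group/version string is illustrative):

```go
// The machine carries owner references to both its MachineSet and its
// Cluster, so deleting either owner cascades to the machine.
package example

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

func ownerRefs(cluster, machineSet metav1.Object) []metav1.OwnerReference {
	return []metav1.OwnerReference{
		{
			APIVersion: "cluster.k8s.io/v1alpha1",
			Kind:       "MachineSet",
			Name:       machineSet.GetName(),
			UID:        machineSet.GetUID(),
		},
		{
			APIVersion: "cluster.k8s.io/v1alpha1",
			Kind:       "Cluster",
			Name:       cluster.GetName(),
			UID:        cluster.GetUID(),
		},
	}
}
```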
H
And I think the owner reference aspect of it is kind of a separate issue. The issue that I want to tackle first is just agreeing on how we're going to associate a MachineSet with a cluster, and then we can move from that to how a machine actually gets associated with a cluster. Well, I guess we should probably just address it now: whether we want the machine to have an owner reference, or just a local object reference, or, again, a label.
G
Okay, can I backtrack for just one second and ask a question, because I have not been using the MachineSet type. Is the intention that you always have to associate the machine with the MachineSet before it can be actuated upon? Because I think that's not the case right now; I'm actually stitching together clusters and individual machines without a MachineSet.
A
Yeah, so I think the intent is kind of like Pods and ReplicaSets, right? You can create Pods in Kubernetes directly, or you can create a ReplicaSet, which creates Pods for you. So if you create a MachineSet, it will create machines for you, and in particular, if you want a bunch of machines that look the same, it's easier to create a MachineSet and change the scale factor than it is to stamp out all the individual machines yourself.
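The analogy in type form, as a sketch; the field names follow the ReplicaSet pattern and the stub types are approximations:

```go
// A MachineSet stamps out identical Machines the way a ReplicaSet
// stamps out Pods: scaling up by one is a single edit to Replicas.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type MachineSetSpec struct {
	// Replicas is the desired number of machines built from Template.
	Replicas *int32 `json:"replicas,omitempty"`
	// Selector matches the labels of the machines this set manages.
	Selector metav1.LabelSelector `json:"selector"`
	// Template describes the Machine objects to create.
	Template MachineTemplateSpec `json:"template"`
}

type MachineTemplateSpec struct {
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              MachineSpec `json:"spec,omitempty"`
}

type MachineSpec struct{} // elided for brevity
```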
G
Yeah, I guess I get that part, but would it ever be the case that we would go directly from machine to cluster? Say I want to scale up by one node and I don't care about a MachineSet, kind of like autoscaling. Would there be any association between the machine and the cluster directly, without a MachineSet?
G
All right, so the reason why I asked that is just because I think it changes what we were talking about a second ago, around who has the owner reference and how we tie these things together. In this case, if we intend to support a machine being associated directly with the cluster, then the machine already has to have a specific reference of some sort, whether a label or some other kind of reference back to the cluster itself.
H
So
let
me
rephrase
what
you're
saying
to
make
sure
that
I
understand
your
point.
If
we,
if
we
used
owner
references
on
the
machine
back
to
the
cluster,
then
that
is
not
sufficient,
because
you
have
a
scenario
where
a
machine
may
not
be
actually
owned
by
the
cluster.
But
you
still
need
a
reference
to
the
cluster
correct.
A
What Craig is saying is that the machine is part of the cluster, but in what you're describing, the MachineSet controller is responsible for basically attaching that machine to the cluster, and if you're just creating machines yourself, what is responsible for attaching them to the cluster?
H
Right, yeah, that's very much what I was saying: there would still need to be a local object reference, or a label, or something where you can get to a cluster from the machine. On the owner reference, I was thinking earlier that, if a cluster owns a machine, an owner reference from the machine to the cluster would be sufficient to establish that association between the machine and the cluster.
G
I would agree with that, and then I think the other complicating thing there is: do you actually actuate the creation of a new machine if you don't have a reference? Is there a use case, and I don't think there is, where you would want to create a one-off machine, independent of the cluster? I think we should just not support that; it would complicate things dramatically, so I don't think we should support that use case. Yeah.
A
I want to step back, because Jessica put something in chat, even though she hasn't jumped into the conversation: a link to an issue that Kris Rousey created a while back about the scoping of our API types, whether we should put them in namespaces or not. I think the consensus, and a lot of people on the call were in that discussion, was to use namespaces, because that would give you the boundary of having a cluster and its machines in a namespace.
A
Those were tied together loosely. I think what we're talking about now is that, even within a namespace, you'd create multiple clusters in the same namespace, which is sort of a departure from what had been agreed to on that issue back in January. So I just want to clarify: if your use case is now different from what we talked about before, I want to make sure that everybody agrees it's a valid use case before we change the API.
A
I'm worried that making the references required is maybe going to cut out other use cases. Right now, if you just look at the GCP deploy code that we have in there, you don't have the references and it works just fine, because the Cluster API is just for the cluster that it's sitting in, and not the sort of multi-tenant, more complicated scenario that you guys are describing.
G
No, the one thing about splitting by namespace is that it would also basically require additional checks so that you effectively could not create multiple clusters in the same namespace, right? It wouldn't make sense to kubectl apply cluster 1 and cluster 2 into the same namespace.
J
Thank you; I've been volunteered. So, I created this bug yesterday while I was working on a different bug, which is to move... actually, some context: we have the Terraform provider, which is the primary provider we use for our vSphere integration. As part of that provider config, one of the freeform text inputs we take is the username, password, and server fields on the machine, and that's duplicated for all of the machines. That's not great, because these are credentials and they're on every single object.
C
And so, if you look at AWS in particular, there's a key and a secret key, and it's not completely clear which one of those is the key and which one is the username, or which one's secret and which one's not, right? So we decided that it was best to actually let the cloud provider choose the names, which map directly to the concepts in that cloud provider, so that the user using that cloud provider has the cloud provider's documentation to fall back on.
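In sketch form (both structs are illustrative, not the actual provider types), the principle is that each config speaks its cloud's own vocabulary:

```go
// Credential fields named in each cloud's own terms, so the cloud's
// documentation applies directly and nothing is ambiguous.
package providerconfig

// vSphere users think in username/password/server terms.
type VsphereMachineProviderConfig struct {
	VsphereUser     string `json:"vsphereUser"`
	VspherePassword string `json:"vspherePassword"`
	VsphereServer   string `json:"vsphereServer"`
}

// AWS users think in access-key/secret-access-key terms; a generic
// "key"/"secret" pair would leave it unclear which is which.
type AWSMachineProviderConfig struct {
	AccessKeyID     string `json:"accessKeyId"`
	SecretAccessKey string `json:"secretAccessKey"`
}
```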
J
Now, I think one of the biggest concerns is that something like a Terraform provider can help with quicker prototyping. So if you want to add OpenShift, you just start using this, and then once you know it's working, maybe you can build something more specific. I think that's kind of the route I'm taking: we've now validated that Terraform can work for our use case on vSphere, but it turns out the provider config API is not perfect.
A
And it's faster to make it just vSphere-specific. Also, if we build two things on Terraform and then look for the similarities and try to pull out the common code, that's one approach; it's kind of the approach we take with the API, which is to put stuff in the provider config blob, and if we find commonality across different use cases, we bubble it up into first-class API objects.
A
So maybe, like you said, do a soft copy and really start tweaking the vSphere one to be specific. That leaves the existing Terraform one, which is sort of bare bones, and Dims can start with that one for OpenStack. Then, once those have both been fleshed out, if we see that there's a lot of code that should be shared between them, everybody can work independently and quickly, and then we combine and get the advantages of not having code duplication, once we've gotten through the very quick iteration and prototyping phase, right? Yeah.
A
This does also bring back up the issue, and I know Chris isn't on the call, of moving the cloud provider code out, since now we're basically going to be writing vSphere-specific cloud provider code. It would be really nice if we could think about the path forward there. Again, I'm hesitant to just slow things down by a whole bunch of time spent refactoring and moving stuff around instead of making code work, but let's think about the path forward there.