From YouTube: Kubernetes SIG Cluster Lifecycle 20180103 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.xmmltqf4u77o
Highlights:
- How do we expect to reuse common code across environments?
- Resource scoping - global or namespaced?
- Cluster bootstrapping
- How to pivot machine controllers or run multiple machine controllers for a cluster
- Encouraging new contributions
- 2018 roadmap
A: Hello and welcome to the January 3rd edition of the Cluster API breakout meeting for SIG Cluster Lifecycle. It's the first meeting of the new year. It looks like we have an agenda item from Charles Koch at the top of the meeting here. Charles, are you around to talk to that, or do you want me to read it for you?
B: Great — so, very briefly, I was just wondering if people had more input on how that pull request that I linked impacts how people trying to implement the Cluster API should go about doing that. Basically it moved a lot: it moved all of the GCP-specific code into a new folder in kube-deploy, along with the stuff that could normally have been considered common across implementations using the Cluster API, I think. So if somebody more knowledgeable could perhaps describe how you would go about it.
A: Sure, yes. This is something we talked about right before the holidays, and I wasn't sure if it had been done, but it looks like Jessica snuck that in before now. The point of the refactoring here was that the cluster-api directory should have only provider-agnostic, reusable code. That includes the API definition, which is still there. It includes tools we build on top of that API definition that don't have any vendor-specific ties, and it includes reusable libraries. So what we did in ripping out the GCP code was to take everything out that was GCE-specific, plus a lot of the dependencies, with the intent of moving the common code back into the shared directory as we find out that it is common. In effect, since we're in the same GitHub repository, we don't have to vendor the other directory, but we want to make it really easy to vendor, so that you could create your machine controller outside of the kube-deploy directory and very easily reuse both the API definitions and any reusable code that gets promoted back into the cluster-api directory. Does that sort of answer your question?
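As a rough illustration of the split being described — provider-agnostic definitions staying in the shared cluster-api directory, provider-specific code living in its own directory and reusing them — here is a minimal Go sketch. The type and function names are simplified stand-ins, not the actual kube-deploy definitions.

```go
// Illustrative stand-ins only, not the actual kube-deploy API definitions.
package main

import "fmt"

// The shared cluster-api directory keeps provider-agnostic definitions
// like this one...
type MachineSpec struct {
	Name         string
	ProviderName string // e.g. "gce"; interpreted by the provider's code
}

// ...while provider-specific code (like what was just moved into its own
// GCP directory) imports those definitions and acts on them.
func createGCEInstance(m MachineSpec) error {
	fmt.Printf("would call the GCE API to create machine %q\n", m.Name)
	return nil
}

func main() {
	_ = createGCEInstance(MachineSpec{Name: "node-1", ProviderName: "gce"})
}
```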
A: That PR description would have been useful. I think we're hopefully now at the tail end of the rapid iteration that we were trying to do as part of last quarter's development push, to get something that was actually functional and demoable. Now that we have a broader group of people who are interested in following along, helping make this project go, and building more implementations, I think we need to switch from our development cycle of "write code, self-commit" — just to make sure we're moving really fast — to hooking up the common Kubernetes tooling, where we can get the merge bot and the LGTM bot hooked up for our repo. So that should definitely be a next step, so that we're actually getting code reviews — because in this case this PR had no description.
A: Jessica and I had talked about it in person and had basically the conversation that I just told you, but she didn't translate any of that into the PR description — probably because she assumed we'd already talked about it. It goes to show, as we work on communicating with a larger group: I think if that had gone through code review, that question would have come up, and we'd have said we should put something akin to a release note in the description.
E: I'd say that today I did a spike implementation, and what I did was copy and paste the Cluster API GCP code into my code, so that I'd have a static version there. Then, if it changes upstream, I'm still close to the upstream and I can see what fits and what doesn't. So it's sort of a controlled but nice path for tracking upstream or suggesting changes back up.
A: And since you've done that and have been experimenting with it, that probably helps to identify the large chunks that are common and don't have any cloud-specific dependencies — things like watches against the API and so forth. Those would be really obvious candidates to pull out into common code, which would help make that next implementation easier to do.
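The kind of "common code" being described — a reconcile loop with no cloud-specific dependencies that hands individual machines to a provider hook — might look roughly like the sketch below. It is illustrative only: a real controller would use client-go watches against the Machine CRD rather than a channel, and the names here are invented.

```go
// Sketch only: the generic loop has no cloud-specific dependencies, so it
// is the kind of code that could move into the shared cluster-api
// directory, with only the provider hook living elsewhere.
package main

import "fmt"

type Machine struct{ Name string }

// ReconcileFunc is the single provider-specific hook the common loop needs.
type ReconcileFunc func(Machine) error

func runLoop(events <-chan Machine, reconcile ReconcileFunc) {
	for m := range events {
		if err := reconcile(m); err != nil {
			fmt.Printf("retrying machine %q later: %v\n", m.Name, err)
		}
	}
}

func main() {
	events := make(chan Machine, 1)
	events <- Machine{Name: "node-1"}
	close(events)
	runLoop(events, func(m Machine) error {
		fmt.Printf("provider-specific create/update for %q\n", m.Name)
		return nil
	})
}
```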
A: Although we can sort of take a stab and say these are the things that are really obvious, that would get reused — let's promote those now — and then we could try to go back and audit it later and ask: where are the dependencies for this code? And if there are really only dependencies in the GCP directory, we can pull them back out again.
A: So yeah, I would be fine with a PR that made a speculative move — we think these will be reused — even if we don't have a concrete implementation somewhere else yet. And then, if we build those other concrete implementations and find out they're not reused, we can pull them back out.

Okay, great. Thank you.
H: If we have time, I would be curious to hear discussion around — I believe it was a follow-on from last week, or the last meeting, which I was unable to attend — but Chris Rossi and I filed an issue around the subject of scoping for the resources: whether they should be global to the cluster or namespaced. We put forward three options. Actually, I'll just find the link and post it in chat.
H: Let me think here — I don't know that it would be the case that they would be combined, but the code that we're working on could potentially be used in both cases. Our primary goal was a cluster that manages other clusters, but we've seen some paths to use cases where a cluster managing itself may be useful for us as well. So the code that we're working on could potentially support both; I don't know that it would be together.
H: Yeah, well, that's a good question. I mean, our root cluster — we haven't really discussed how that would itself be managed. I was thinking it would probably be one of our pre-existing systems for installing and upgrading. Okay, that's something we need to think about, though, actually.
A: Sebastian, is that along the same lines of what you guys might do at Loodse? I know you guys are really interested in nodes and managing nodes. Do you think it would be more useful to store the definition of those nodes in the customer cluster itself — sort of local to that cluster — or would it make more sense to store them in a sort of management cluster, if you will, along with your control planes in it?
A: That's kind of how I imagined it working also — in a lot of cases the controller can live outside the cluster. This is one of the key differences between the Cluster API and other Kubernetes resources that Chris and I tried to present in our talk at KubeCon: pretty much all other Kubernetes resources are local to that cluster by definition, and the controller for those resources is inside the cluster — but not necessarily for the machines themselves.
H: Yeah, what we had envisioned was that we would be using some kind of external front-end web application to control all that. We have not discussed having the controllers outside and having the objects in the cluster — which, actually, I'm going to bring to the team, because I'm curious what they think. Okay.
K: Could somebody explain how the source and target work for the controller? Just to clarify where the source objects come from and how the controller remediates those in the target cluster — because that kind of seems like where the question would be. If the source and the controller are configurable, then it would feasibly allow you to run two controllers — one in the cluster and one external to the cluster — that could remediate the same target state.
A: So I think there are a couple of options. If your controller is outside the cluster, you can give the controller the definition of desired state, and then it can vivify the cluster and, as it does, save that into the cluster as sort of the record of that desired state going forward. Obviously you have an issue there when the request says "create my cluster" — and this is what the current code in kube-deploy does. It's actually even simpler than that: it creates a master machine, installs a CRD, and then runs a machine controller there. But I think as we expand our use cases to things like multiple masters, we'll probably run some version of that machine controller locally to reconcile that initial state, and then pivot the machine controller into the cluster afterwards.
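A minimal sketch of that bootstrap-then-pivot sequence, with the steps as described above. The helper names and types here are invented for illustration and are not actual kube-deploy functions.

```go
// Hypothetical outline only; these helpers are stand-ins for the real
// bootstrap and pivot logic.
package main

import "fmt"

type ClusterSpec struct{ Name string }
type Machine struct{ Name string }

func createMasterMachine(c ClusterSpec) Machine {
	// 1. Nothing exists yet, so the machine controller runs externally.
	fmt.Println("1. create the initial master machine")
	return Machine{Name: c.Name + "-master-0"}
}

func installMachineCRD(m Machine) {
	// 2. Record desired state in the new control plane itself.
	fmt.Println("2. install the Machine CRD on", m.Name)
}

func reconcileInitialState(c ClusterSpec) {
	// 3. Bring up remaining masters/nodes from the external controller.
	fmt.Println("3. reconcile remaining masters and nodes from outside")
}

func pivotMachineController(m Machine) {
	// 4. Optionally hand over to a controller running inside the cluster,
	//    or keep running it externally for the cluster's lifetime.
	fmt.Println("4. pivot the machine controller into the cluster")
}

func main() {
	spec := ClusterSpec{Name: "demo"}
	master := createMasterMachine(spec)
	installMachineCRD(master)
	reconcileInitialState(spec)
	pivotMachineController(master)
}
```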
A: Okay — and I think that would work just as well whether that machine controller is running locally on your laptop or on a server as a hosted service. You can just run it outside the cluster to sort of vivify the state and then, if you want, pivot it inside the cluster; or, if it's run as a service, you could keep it outside the cluster for the lifetime of the cluster and keep it running there.
K: Okay. For something like bootkube, where the point at which to pivot is kind of determined by the availability of the network socket — we don't really have that equivalent if the vivification is happening from a different machine. So would we want to put a lock somewhere in the Cluster API?
A: I think in general we need to make it safe to be running multiple controllers, right — in the same way you can run multiple schedulers. Not just multiple different schedulers, but the same scheduler binary twice in one cluster, or the same controller-manager binary twice in one cluster, with only one of them active. The machine controller will have to be smart enough to do the same thing, because the easiest approach is that you'd still run, say, three of them in your cluster and have them fail over, right?
A: I think the scheduler and the controller-manager are both more coarse-grained than that — it's basically just a lock on the binary: there is one scheduler binary that is in charge. It's not that they race to take a lock for each scheduling decision; one of them is in charge and the others are sort of hot standbys, watching all the things come by and not taking any action.
A
And
then,
if
the
primary
one
disappears,
one
of
the
standby
ones
will
race
to
grab
the
lock
and
then
it
will
start
taking
action
instead
of
just
observing,
and
so
that's
sort
of
the
simplest
way
to
do.
It
is
to
have
this
sort,
of
course,
global
ones.
One
takes
action,
the
others
don't
and
you
could
then
break
it
down
and
have
more
fine-grained
locking.
If
you
wanted
to
shard
better.
A
I
think
we
need
to
do
that
exciting
it.
It
needs
to
be
safe
to
run
multiple
instances
as
a
controller.
If
for
no
other
reason,
then,
if
we're
scheduling
the
controller
into
the
cluster,
there's
no
guarantee
that
more
than
one
won't
run
at
a
time
unless
we're
using
something
like
stateful
sets,
which
does
have
a
strong
guarantee
that
more
than
one
won't
run
at
a
time
and
I
don't
think
staple
sets.
Has
the
right
to
Randy's
object
to
describe
the
machine
controller.
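The coarse, whole-binary lock described above maps well to the leader-election helpers in client-go. Here is a hedged sketch of what that could look like for a machine controller; it uses the Lease-based lock from recent client-go releases, the lock name, namespace, and kubeconfig path are placeholders, and the locking code the project eventually adopts may well differ.

```go
// Sketch of a coarse leader-election lock using client-go; exact lock
// types and signatures have changed across client-go versions.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Build a client from the local kubeconfig (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	id, _ := os.Hostname()

	// A single Lease object acts as the coarse, whole-binary lock.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "machine-controller", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// We hold the lock: this replica reconciles machines.
			},
			OnStoppedLeading: func() {
				// Lost the lock: stop acting; a hot standby takes over.
			},
		},
	})
}
```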
A: So, I've had a couple of different people say that they're excited and want to contribute to either documentation or code, but aren't sure where to start. I think one thing we need to start doing is generating a backlog and a set of "needs work" or "new contributor"-labeled issues in the kube-deploy repository, and I think there have been a couple of things brought up at this meeting that would be perfect for that.
A: What Charles was talking about — lifting what we think is reusable code and sticking it into the cluster-api directory — I think that's something that would be pretty easy for someone new to look at and say: here's some code, it doesn't have any external dependencies, let's make a PR to move it over. And I think the thing that Tim just mentioned about adding locking code would be relatively straightforward for someone who's experienced with Kubernetes but maybe wants to start contributing to this project, to bring that forward.
A: I think there's probably another set of things around adding the common Kubernetes infrastructure into our project — the various bots we can configure, the submit queue, and those sorts of things. Those are all pretty good starter projects, I would say. Does anybody else have other things they think would be good, easy on-ramp projects?
A: Okay, I will try to keep thinking of them and start tagging those — and I guess documentation is the other obvious one, in terms of cleaning that up. If people are looking for small things to get started, then hopefully, maybe by the end of this week, we will have a list of issues that we can point people to, with a label they can search on, to find small things to bite off and start contributing.
A: Yes — we don't have any code-level OWNERS files like we've started to put into the main Kubernetes repo. We should definitely do that, and that way we don't have to give everybody write access to the entire git project. If we have OWNERS files and some of the common project automation set up, people should be able to use the bot to merge code.
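For reference, the OWNERS files mentioned here follow the convention already used in the main Kubernetes repo: a small per-directory YAML file listing reviewers and approvers that the merge automation consults. A minimal sketch with placeholder usernames:

```yaml
# OWNERS — per-directory, following the main Kubernetes repo convention.
# The usernames below are placeholders, not actual project maintainers.
reviewers:
  - contributor-a
  - contributor-b
approvers:
  - maintainer-a
```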
A: Like I was saying, I think that's one of the next things we want to get set up — it will help us scale contributions. And then, as with other OWNERS files, we need to make sure that the people in those files are actively reviewing PRs that come by. For now, if you send PRs, feel free to ping me on them, and I will either look at them or make sure someone does.
A: So, reading through the section about the Cluster API — let me link that doc in chat real quick in case people haven't seen a link to it. For the Cluster API, the strawman that he wrote down, which we should probably validate as part of this sub-working group, is that in 2018 we'd like to at least have a beta version of the Cluster API, broken into a control plane API section and a machines API section. 2018 is going to be a long year.
A: Hopefully we can get a final v1 version of the API before the end of the year. I think we have a very good chance, if not a hundred percent, of having a beta version, because the bar for beta is not too hard for us to reach, and ideally we'd get past that. On the implementation side, having machine controllers for various environments: he wrote down GCE, Terraform, maybe Docker; people have obviously mentioned AWS, vSphere, DigitalOcean, bare metal — I think Justin said he was tinkering with that.
A: So there are lots of different options for machine controllers that we'd like to build this year. I also want to switch our Kubernetes testing — specifically the kubeadm tests — over to using the Cluster API. To test kubeadm right now we're using the kubernetes-anywhere project, which is under minimal support, purely for the purpose of keeping the kubeadm tests working. So it would be great to be able to switch those tests over to use the Cluster API, since we want it to be generally useful.
A
That
would
also
allow
us
to
deprecated
and
stop
support
the
cube
up
code.
That's
in
the
main
communities,
repo!
That's
been
on
the
list
for
quite
a
while
to
get
that
code
out
of
the
main
repo,
either
by
moving
it
out
or
deleting
it,
and
so
this
will
allow
us
to
delete
it,
which
would
be
great
and
then
lastly,
he's
got
built,
something
that
actually
uses
the
cluster
API.
A: Right — I think a lot of systems have not yet implemented things like automatically repairing the masters or automatically repairing the nodes when we notice that they break. That's something we've had in GKE for a while, and it would be great to have it open source, in a reusable fashion that everybody can just get for free. So I think that's the other thing that should be on our roadmap; it's not in the doc at the moment.