From YouTube: Kubernetes Contributor Summit 2018 - Past, Present, and Future of SIG Cluster Lifecycle
Description: This session will cover the status of SIG Cluster Lifecycle's work this past year, what they're working on now, and what the plans are for 2019.
Presenter: Tim St. Clair, VMware
A: All right, so I'm standing between you and lunch, so I'll try to be as entertaining as possible. There are many representatives from SIG Cluster Lifecycle up here. We are a very large SIG with many subprojects underneath it, and we'll go into more detail a little bit later about some of the subprojects that have amalgamated underneath SIG Cluster Lifecycle. So today we're going to give a basic overview of what SIG Cluster Lifecycle is; I know there are a lot of new contributors here too.
A: So what is it that we actually do? There are many different sub-elements that make up the complete view of what SIG Cluster Lifecycle does, but to get a full grokking of all the details, we created a charter in 2018, and all of the SIG was involved; all the major subproject leads were involved in the creation of that charter. So I highly recommend going and reading the charter in detail to understand exactly what we work on and how it breaks down across the different subprojects.
A: So concretely, we basically create a series of different components. One of the things we've been striving to do, especially in recent years, is to make a composable model, because in the very beginning, in the early stages of the project, there were a lot of different competing installers that all did the same thing over and over and over again, and that was actually antithetical to the long-term objective of sustainability: a person would fix an issue in one component but then have to percolate those fixes to all the other components.
A: So now we've been focusing on having this layered, tiered model. At the base layer, for deploying your control plane, there's kubeadm. For configuring your components, we are working on a proposal, and Lukas will talk a little later about component config. For simplifying infrastructure deployment, there's the Cluster API, and there are also other installers like kops. For add-on management, Justin can talk about bundles, and for etcd management, Justin can also talk about etcdadm, and much, much more.
A: So it sounds neat and interesting, right? It sounds really cool to think about composable layers of what you want to do, but in reality it's really spaghetti, a ton of spaghetti. We try to decompose the problems, and then we're constantly reinventing the model, which is a good thing: Kubernetes is not a done project.
A
The
idea
or
the
moniker
that
occurred
in
around
2016
that
actually
led
to
the
formation
of
sig
cluster
lifecycle
is
that
clustering
was
hard
right
with
the
course
of
the
work
that
we
did
since
the
very
beginning
of
the
formation
of
the
sig,
as
well
as
some
of
the
objectives
of
the
sig
is
to
simplify
that,
to
make
it
easier
for
other
folks
to
consume
and
to
use,
and
hopefully
over
time
it
just
becomes
easier
and
easier
and
easier.
This
helps
to
sort
of
spread
the
love
of
kubernetes
far
and
wide
right.
A: So what happened in 2018? I'll talk a little bit about this, then I'll hand off to everybody else. In 2018, from the SIG perspective, we formalized the charter. One of the things that we focused on a lot, both in subprojects and in the SIG, is actually fixing the docs. So yes, I am shaming other SIGs on purpose here: we focused on the docs. It was really important to us, because our end-user consumers are the front-facing people who basically give feedback to the whole community.
A: Several other projects have joined the SIG; we've basically accreted a lot of other different projects, and we tried to restructure our meetings so that we had a venue and a clearinghouse for the whole SIG to actually have a conversation: to bring up new topics and ideas, to think about whether we should meld in new subprojects, and whether we are going to form other ideas and split things apart.
B: We want to unify the ecosystem, as Tim said, in the sense that if you're building this full solution, with infrastructure, with Kubernetes, with add-ons, with everything, you can use kubeadm as a toolbox. You maybe want to use small parts of it or all of it, but still get from the state where you have infrastructure, whatever VMs or machines, to the state where you have Kubernetes; that's what you want to use kubeadm for, with sane, reasonable defaults.
B: As the security folks here said earlier, the default security isn't that great in Kubernetes today, but in kubeadm we're pioneering these secure defaults, and eventually we'll probably get them into core. If you use kubeadm, you have a sane set of defaults that you can use, and they are maintained across minor versions. So the goal, now that kubeadm is GA, is for as many tools as possible, as we'll see here as well, to adopt kubeadm, to use it and consume it inside of your product.
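To make the toolbox idea concrete, here is a minimal sketch of driving kubeadm declaratively with a config file instead of flags, using the v1beta1 config API that the GA release accepts; the specific field values shown are illustrative choices, not anything prescribed in the talk.

```yaml
# Minimal kubeadm configuration sketch; values are illustrative.
# Applied with: kubeadm init --config kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0   # control-plane version to deploy
networking:
  podSubnet: 10.244.0.0/16   # pod CIDR handed to the CNI add-on
```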
C: Right, my name is Robert; I'm also one of the co-chairs of SIG Cluster Lifecycle. One of the things that we've done over the past year is we've stepped back from kubeadm, which is really geared towards creating a great getting-started experience for people who already have existing infrastructure, and looked at how we can abstract away the cloud-provider infrastructure.
C
So
one
of
the
things
that
Brian
pointed
out
in
the
overview
this
morning
was
we
want
to
have
consistent,
abstractions
across
underlying
infrastructure,
and
he
pointed
to
storage
as
sort
of
leading
the
way
here
and
in
our
our
sake,
what
we're
trying
to
do
is
figure
out
how
we
can
abstract
the
rest
of
sort
of
your
cluster
infrastructure,
so
that
interacting
with
the
cluster
can
become
more
consistent
across
different
providers.
So
last
year,
at
Cuba,
Austin,
Chris,
Nova
who's
here
in
the
audience
and
I
gave
a
talk.
C
Sort
of
introducing
the
cluster
API
and
the
cluster
API
is
is
really
describing.
How
to
you
know,
create
and
upgrade
and
manage
the
machines
of
your
cluster
so
dealing
with
the
virtual
or
physical
machines
of
a
cluster
and
also
things
like
network
allocations
across
your
cluster.
Over
the
past
year,
we've
done
things
like
migrated
from
CR
DS
to
a
great
API
servers
and
then
back
to
CR
DS,
which
is
a
little
bit
humorous.
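As a rough illustration of what describing machines declaratively looks like, here is a sketch of a Cluster API Machine object in the v1alpha1-era shape the project used around this time; the object name and the provider-specific stanza are assumptions for illustration, and the real types live in the cluster-api repository.

```yaml
# Sketch of a Cluster API Machine object (v1alpha1-era shape; illustrative).
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: worker-0
spec:
  versions:
    kubelet: v1.13.0             # desired kubelet version on this machine
  providerSpec:                  # provider-specific details (image, size, ...)
    value:
      machineType: n1-standard-2 # hypothetical provider field
```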
B: To add one more thing: as we have made these layers, first infrastructure, then Kubernetes, and then the add-ons, as we're building these layers, we have kubeadm in the Kubernetes layer, and we've made it, by default, work really well with the Cluster API. So if you add your machines via the CRD and have a cloud-specific controller that makes that happen, you should, or you could, use kubeadm to actually bootstrap the things on the machines.
D: I think one of the interesting things we've heard from user feedback is that users like the fact that the .0 release of kops is a production-suitable release, even if that means that we are six-ish months behind; we are working on getting a little bit closer on that. Once we finally get past 33, we really will start on the bundles and add-ons, on upgrades, on the Cluster API, and on adopting CRDs as well for kops generally.
E: Hi, I'm Antoine, one of the maintainers of Kubespray. So this year there was a lot of activity on Kubespray, and recently we moved closer to SIG Cluster Lifecycle: we joined the community SIG organization. What does that mean? More collaboration: say, on a technical decision, we will go to the SIG with a question about which path we should take. As a concrete example, for the next release we're going to use only kubeadm, you could say as a building block, to install the cluster and manage it. We're not using all of its features, but we start with certificates, static pods, etc., and we work more closely with the SIG in the same way. We receive a lot of contributions around network plugins; we have many, many contributions across different topics, and that's mainly Kubespray.
F: Hi, I'm Tom Strömberg, one of the maintainers for minikube. So this year we were really focused on improving the fidelity and reliability of minikube. One of the best things that we did was jump on the kubeadm train and retire localkube, which greatly simplified the minikube codebase. Also, on the theme of fidelity, we added load-balancer emulation; you might need to go look at the latest release, as of this morning, to actually use it.
A: Something to add to that is that this is a high-priority item. I know that Brian talked early on about wanting to improve testing and testability, and basically making the core of Kubernetes more solid over time. This is a key aspect of some of the efforts that we're going to rally the community around in the coming year.
A
So
again,
what's
coming
in
2019,
as
I
mentioned
earlier,
as
Brian
mentioned
earlier,
is
the
idea
of
TechNet
Baytown
versus
the
new
shiny
right
as
developers,
we
always
focus
on
the
new
shiny
right
because
always
fun
to
develop
those
things.
It
is
not
so
much
fun
to
actually
make
what
we
have
better
and
more
solid
I
reproducible
right.
It's
not
a
fun
thing
to
sort
of
pay
down
some
technical
debt
for
a
project
that
was
basically
never
designed
to
scale
at
this
pace
right.
D: Yes, so we, Google, open-sourced a specification for bundles, and we are currently open-sourcing our proposal for how we should do add-ons via operators; Jeff Johnson and I are talking about that on Thursday. The hope here is that the same benefits that we get from kubeadm will apply to cluster add-ons, which are the pieces, like the CNI provider, for example, that are managed alongside your cluster. So once we do the base layer, there will be homogeneity, if that's a word.
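To give a feel for the add-ons-via-operators idea, here is a purely hypothetical sketch of declaring a managed add-on as a Kubernetes object that an operator reconciles; none of these group, kind, or field names are taken from the actual proposal, which is what the Thursday talk covers.

```yaml
# Hypothetical illustration only: an add-on declared as an API object and
# reconciled by an operator alongside the cluster (not the real schema).
apiVersion: addons.example.com/v1alpha1
kind: Addon
metadata:
  name: coredns
spec:
  version: 1.2.6   # desired add-on version, upgraded with the cluster
  channel: stable  # hypothetical release channel for the bundle
```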
B: And it's also good to be clear here that this is only for things required to operate Kubernetes, not my random app that I want to deploy automatically when starting a Kubernetes cluster. It's going to be very scoped to what's needed, and to the most common use cases: I upgrade my cluster and I now need this new CNI provider, or this new CSI thing, or the next metrics-server version, things like that. And it's also about having some consistency between kops, Kubespray, minikube, and kubeadm.
D: And etcd. So this is, I think, another example of how the SIG is working well. kops was working, outside of kops itself, on a tool called etcd-manager, which is sort of our orchestration layer for etcd, particularly for Kubernetes, where we don't want to rely on a Kubernetes API being available.
D
So
that's
a
certain
team,
the
HDD
operator,
but
then
we
saw
a
project
from
platform,
9
called
Etsy
da
DM,
which
was
sort
of
a
CLI
management
layer
for
EDD,
similar
sort
of
goals,
sed
a
DM
being
more
manual
and
see
live,
driven,
Etsy
D
manager
that
the
cops
one
being
more
automated
and
we
combined
them
under
the
sink
cluster
lifecycle
umbrella.
We
decided
at
C
a
DM
was
a
better
name,
so
we
took
that
we
did
I
kept
for
it.
B: Yeah, and then, with all these pieces, with kubeadm, with the add-ons, with the Cluster API, how do you actually manage these things? Well, we do it as with any other Kubernetes API object. It's YAML, which is both good and bad, but it's at least better than using flags for configuring things.
B
So
the
way
we
want
to
go
here
is
move
from
using
flags
for
configuring,
the
component,
the
core
components
and,
for
example,
the
atoms
as
well
like
queue
proxy
and
core
DNS
and
stuff,
should
support
this
way
of
declarative
config,
and
then
we
also
have
loads
of
architectural,
like
called
structural
problems
inside
of
the
core.
So
basically,
all
the
components
implement
the
same
same
kind
of
features
in
a
slightly
different
way,
replicating
a
lot
of
code.
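As an example of the component-config direction, kube-proxy already accepts a versioned config object in place of flags; a minimal sketch, with fields abbreviated and values chosen for illustration, looks like this.

```yaml
# Sketch of kube-proxy's ComponentConfig object; values are illustrative.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs                  # proxy mode, instead of a --proxy-mode flag
clusterCIDR: 10.244.0.0/16  # instead of a --cluster-cidr flag
```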
B: Yeah, and now that kubeadm is GA, our config is v1beta1, and we don't expect any really large changes to it, so we're just going to go ahead and bump that to v1 in the coming year. We've also added support during the year for kubeadm join for the control plane: basically joining a node first, but then also starting the API server and similar tasks needed for a control-plane node.
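A sketch of what that control-plane join can look like with the v1beta1 config, under the assumption of a stacked control plane behind a load balancer; the endpoint and token here are placeholders, not values from the talk.

```yaml
# Sketch of joining an additional control-plane node (v1beta1; placeholders).
# Applied with: kubeadm join --config join.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "LOAD_BALANCER_DNS:6443"  # placeholder endpoint
    token: abcdef.0123456789abcdef               # placeholder bootstrap token
    unsafeSkipCAVerification: true               # demo only; pin the CA hash in practice
controlPlane: {}  # marks this join as a control-plane node (starts the API server etc.)
```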
B: So doing all that kind of automation in one box is beneficial for doing more of this unification, but also for security. As was said earlier here, there's this discussion of security profiles: can we integrate that in some way? And the default security features we already have incubating, can we merge them into the defaulting in core without introducing too many backwards-incompatible changes?
A: Not just related to kubeadm, but to all the subprojects underneath SIG Cluster Lifecycle, is better CI testing and release tooling. I think this has been mentioned several times; I'll just keep on reiterating it. How many people in the audience have had to do /retest? All of your hands should be up; if you've contributed to the Kubernetes project, you know, hands should be up now. And how many of you know that a lot of these tests that are flaking actually have nothing to do with test flakes at all? Very few, all right!
A
Well,
there's
a
couple
people
in
the
back.
Some
of
them
have
to
do
with
how
we
are
actually
executing
against
providers
right.
So
poor
Justin
gets
blamed
a
lot
for
cops
failures
and
then
we
and
most
of
the
time
it
has
nothing
to
do
with
cops
at
all.
It
has
to
do
with
the
fact
of
how
we
have
the
test
in
for
a
set
up
to
execute
against
taps.
So
what
are
we
gonna
do
to
try
and
mitigate
some
of
these
problems
that
exists?
We
are
going
to
throw
down.
A
We
are
going
to
have
folks
on
the
sig
actually
integrating
with
the
testing
folks
and
fixing
a
lot
of
the
core
problems
that
exist
in
part,
because
we
are
inextricably
linked
between
testing
and
release
process.
We
are
at
that
focal
point.
So
whenever,
whenever
a
version
upgrade
test
fails,
we
are
the
first
line
of
defense.
We
get
called
almost
immediately
right,
so
one
of
the
things
we're
going
to
do.
A
We've
talked
with
Ben
as
well
as
other
folks
about
trying
to
get
kind
as
one
of
the
being,
hopefully
in
the
future,
the
only
PR
blocking
job
to
do
intend
tests
and
move
all
of
the
other
infrastructure
provisioning
jobs
to
be
periodic.
So
you
will
get
signal.
It
will
be
at
a
less
granular
scale,
but
it'll
probably
be
far
less
flaky
to
as
well.
A
See
I
release
artifacts.
How
many
folks
know
that
there's
actually
like
split
versioning,
there's,
there's
a
split
brain
between
what's
tested
in
CI
and
what's
actually
released,
only
people
on
say,
clustered
lifecycle
and
release
know
these
things.
So
that
is
a
really
unfortunate
situation
and
it
has
nothing
to
do
actually
with
a
problem.
It
has
mostly
to
do
with
legacy
and
ownership.
A
So
again
we
are
gonna,
try
and
throw
down
this
cycle
and
try
to
fix
these
problems
because
we
we
ever
created
so
much
technical
debt
and
as
we
have
moved
things
to
GA,
we
need
to
be
able
to
sort
of
pay
down
that
debt.
So
I
want
this
to
be
a
theme
coming
off
from
Brian
as
well
as
other
SIG's
is
that
we
are
trying
to
pay
down
this
debt.
This
is
important
to
make
sure
that
we
have
a
solid
functioning
system
that
is
sustainable
and
maintainable
over
time
and
yeah.
Basically,
that's
about
it.
C: As Tim was saying, we want to spend a lot of time paying down debt and building test and release automation. In particular, one of the goals here, in parallel with moving to kind for release-blocking tests, is to finally deprecate and remove the cluster/ directory from the core kubernetes repository, which is something that's been there for a long time. We've long had plans to get rid of it, and it's really on the roadmap for 2019; we're starting to gain steam on some of these efforts.
C: Like with kubeadm: HA is something that we're starting to design, sort of in concert with the kubeadm folks, as many of the providers delegate to kubeadm as they're creating the control plane for their cluster. Another dimension is cluster-autoscaler integration. This is one of the promised features of the Cluster API: when you have a generic way to manage your cluster infrastructure, you start to get things for free, like cluster autoscaling, and maybe node auto-provisioning, across different infrastructure providers.
C
We've
been
having
a
lot
of
conversations
with
sig,
auto
scaling,
and
we
expect
in
2019
2019
those
discussions
to
come
to
fruition
s
to
actually
have
an
auto
scaler
implementation.
The
targets
a
cluster
API
that
we
can
use
across
multiple
providers
right
now,
the
the
bootstrapping
process
uses
mini
cube,
which
is
proved
to
not
be
great
in
a
number
different
scenarios,
and
we
have
a
number
of
people
that
are
investigating
alternatives
to
mini
cube
for
the
the
places
where
it
doesn't
work.
E
So
try
to
be
shot
on
cue
spray.
We
integrate
with
the
CI
to
get
a
senior
when
Cuba
DM.
There
is
a
new
release
or
just
a
measure
master.
We
need
to
know
if
you
spray
still
working,
and
so
we
look
to
this
senior.
So
we
know
there
is
an
issue
integrate
with
closer
API
and
one
of
the
feedback
is
performance
to
deploy
at
scale
on
hundredth
node,
so
we're
going
to
work
at
that
and
intimate
with
its
CG
at
the
M.
E
Also
as
soon
as
it's
available,
we
continue
to
work
also
to
integrate
with
we
support
cryo
as
cotton
energy
in
docker,
and
we
continue
to
work
on
adding
more
option.
This
is
part
of
the
project
to
become
possible,
so
everyone
can
choose
what
the
component
behind
the
scene
and
working
with
cluster
band
also
likely
integrates
everything
from
cluster
lifecycle
into
cubes
prior
projects.
A
So
folks
want
to
get
involved.
I
usually
asked
folks
join
the
mailing
list
or
the
standard
sequester's
our
cycle
channel.
There
are
so
many
sub
projects
we
can
use
that
for
routing
there
are.
There
are
different
channels
for
all
the
different
sub
projects,
so
if
even
individual
providers
for
given
sub
elements
of
a
sub-project
right
so
because
of
that
probably
go
to
the
main
sub
project
channel
and
just
ask
a
question
or
send
a
mail
to
the
mailing
list.
A
And
there's
also
more
conversations
going
on
today,
as
well
as
this
week.
If
you
want
to
know
more,
there's
a
Boff
this
afternoon
and
we're
going
to
talk
a
lot
about
the
roadmap
items
that
we've
been
currently
discussing
right
now,
as
well
as
there's
other
talks
this
week,
both
sig
related-
and
you
know
just
broader-
in
scope
pertaining
to
sig
cluster
lifecycle.
A
Last
but
not
least,
shoutouts.
This
sig
is
a
very,
very
large
I've,
been
in
awe
by
the
amount
of
people
and
contributions
and
effort
that
has
gone
into
it.
One
person
I,
don't
actually
have
on
this
list,
which
I
should
have
in
this
list,
is
DIMMs,
he's
done
a
ton
of
work
and
so
just
give
it
give
us
a
hand.
The
bins
there's
been
a
ton
of
folks
involved
in
all
of
this
to
get
things
out
the
door
to
get
them
on
time
to
get
them
tested
and
vetted.
B
One
last
thing:
I
forgot
to
mention
we're
also
starting
a
new
working
group
for
this
component
refactoring
thing,
so
we
haven't
even
decided
a
name
but
myself
Mike
Turin
and
Stephanie
Lansky
yeah
yeah
there.
Both
of
you
are
gonna,
start
this
off
the
cube
now
cube
con
and
at
the
beginning
of
next
year,.