From YouTube: SIG Cluster Lifecycle New Contributor Onboarding
Description
Greetings newcomers!
We've decided to hold our first sig-cluster-lifecycle new contributor session. The goal of the meeting will be to outline several of the sub-projects of the SIG and where they fit in the stack.
We will also outline the scope of the tools and some core first principles that the SIG adheres to. From there we will discuss how we operate and how you can engage and help contribute.
All skill levels welcome! You can find more about SIG Cluster Lifecycle here: https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle
A: Hello, today is Friday, April 12th, 2019. This is the first-ever SIG Cluster Lifecycle new contributor meeting. The agenda today is to give folks an overview of what SIG Cluster Lifecycle is and what we do, give you an idea of some of the projects that we work on, and also give you an idea of where you can contribute and how you can contribute. So I'm going to first start by sharing my screen and giving a little bit of history here.
A: There were a lot of tools that existed in the ecosystem around provisioning clusters, and there were vendors, some of which no longer even exist today, that were in this space as well. There was a common theme happening in the ecosystem: Kubernetes was hard to deploy and manage. Even with the tooling we have today it can still be a little bit difficult to do, and with every iteration and cycle inside of SIG Cluster Lifecycle we try to refine it and make it a little easier. What we were seeing in the ecosystem was that there were all these different tools essentially doing the same thing, and so we started SIG Cluster Lifecycle with the initial scope of creating a bootstrapping tool, which in this case started out as kubeadm. That way, instead of all of these other tools having to do the same thing over and over and over again, we had one tool that everyone could leverage and rely upon.
A: Hopefully folks can see this; I'll try and zoom in a little bit. What we kept on seeing over time was that there are sets of tools, and we wanted to be able to separate the responsibilities of those individual tools to allow folks to opt into using pieces of the puzzle. So we started out with kubeadm, but then there was a need and a desire to split out the functionality for doing etcd management. So there is a separate tool called etcdadm. It's still very nascent.
A: We haven't switched a lot of the code in kubeadm that does etcd management over to using etcdadm, but in the fullness of time we'd ideally like to do that.
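For readers following along, etcdadm mirrors kubeadm's init/join workflow. A minimal sketch of that workflow is below; the endpoint is made up for illustration, so check the etcdadm README for the exact commands and flags:

```shell
# etcdadm follows the kubeadm-style UX: "init" on the first machine,
# then "join" on each additional member. The endpoint is illustrative.
init_cmd="etcdadm init"
join_cmd="etcdadm join https://10.0.0.1:2379"
echo "On the first machine: ${init_cmd}"
echo "On each new machine:  ${join_cmd}"
```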
A: Hopefully folks can see this; if you can't, feel free to speak up and I'll try and zoom in even more. And then what happened recently is there's a desire for doing add-on management sort of outside of the core of Kubernetes, where we've always done it.
A
So
a
lot
of
the
things
we
do
inside
of
SiC
cluster
lifecycle
is
we.
You
know
we
follow
the
batteries
included
with
swappable
option.
So
that
way
we
have
a
default
configuration
that
we
use,
but
we
always
want
to
be
able
to
have
knobs
or
parameters
or
options
for
folks
to
be
able
to
change
what
they
need.
A
A
great
example
of
this
is
some
of
the
things
design
principles
that
we
baked
into
the
tools
like
one
of
the
key
things
that
we
added
in
cube
ATM,
is
that
even
for
bootstrapping,
there's
very
opinionated
ways
of
doing
it.
So
we
have
separate
phases
for
the
bootstrap
cycle
and
it's
basically
like
it's
a
linear,
dag
of
a
series
of
execution,
steps
that
go
in
order
and
a
person
if
they
want
to,
can
choose
to
use
kuba
diem
in
in
the
phases
model.
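The phases model above can be sketched as a fixed sequence of subcommands; the phase names below reflect the v1.14-era `kubeadm init phase` list and are illustrative, not exhaustive:

```shell
# kubeadm init runs a linear DAG of steps; each step is also exposed as
# "kubeadm init phase <name>" so installers can opt into individual pieces.
phases="preflight certs kubeconfig control-plane etcd upload-config mark-control-plane bootstrap-token addon"
for phase in $phases; do
  echo "kubeadm init phase ${phase}"
done
```

Running phases individually is the "swappable batteries" idea in practice: a higher-level installer can execute only the steps it does not already handle itself.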
A: What also kind of happens is that there are these weird questions around how you deploy, or how you consume the artifacts, versus the cluster lifecycle. There are things that are core, out of the box, and there are things that exist outside of the core, and I think add-ons are in this weird space: they're kind of right at the surface layer. So currently kubeadm is in core, but I think in the fullness of time we want to move kubeadm outside of the core. In order for us to do things like that, we want to promote other projects, such as having what we call a unified component config: a way for us to configure components using API types and the standard model inside of Kubernetes. That gives us well-defined versioning semantics, and it allows us to use the Kubernetes-style API for controlling the components of Kubernetes.
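As a concrete illustration of the versioned-configuration idea, kubeadm's own config file and the kubelet's component config are both Kubernetes-style API objects; the field values below are placeholders, and the exact schema depends on the release:

```yaml
# kubeadm's configuration is itself a versioned API type...
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
---
# ...and component config applies the same pattern to individual components,
# giving well-defined versioning semantics per component.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```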
A: Once we have that capability, we'll probably move kubeadm outside of the core. So I'll pause here for a second, because I want folks to ask questions; this is not just me doing a monologue. Does this model make sense? Are there questions about the pieces of the puzzle that exist inside of SIG Cluster Lifecycle, or about the main projects that are there?
A: Right, so the key reason why that is a hard requirement is that if people change the way components work and operate, and it's not a standard API, which has happened several times over the course of Kubernetes, we end up having a broken installation mechanism for a given release. We didn't always used to have strict policies around command-line flags; we have better policies in place now, but we have even tighter policies on the API.
A: But there are other logistical reasons why we want to wait to move it out. The federation of repositories outside of the main Kubernetes repo is like a half start that has been stuck in a limbo state for a long period of time, because how we do release versioning across a federated set of repositories has never actually been fully fleshed out. So if a person wants to get a Kubernetes, even for the core set of things, we don't actually have a release bundle today that isn't straight out of k/k; they'd have to go to all these different repositories to get a Kubernetes. The original promise was that if we had all these repos, we would have a way of combining the bundle together to get a core release for a given release cycle.
E: A question here, this is Michael, sorry. So basically the model, and I'm probably not, I'm sure I'm not, fully understanding it yet, but the idea is to basically hook, merge, connect all these separate pieces via a well-defined and well-maintained set of APIs, so that replacing one portion will guarantee that the rest don't break. Is that the idea?
A: So at this stage it's to be determined. Ideally, the promise of add-ons, or the idea of add-ons, is that instead of folks having to grab YAML manifests and bundle them into these other tools, we basically have a tool with a single manifest that manages them all for all deployments and manages the lifecycle of all these different things.
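To make that concrete, a declarative add-on object might look something like the following. This is a hypothetical sketch: the API group, kind, and fields are invented for illustration and are not a published API:

```yaml
# Hypothetical: one object per add-on, with the add-on manager reconciling
# the deployed state against the declared version.
apiVersion: addons.example.k8s.io/v1alpha1
kind: Addon
metadata:
  name: coredns
spec:
  version: "1.3.1"        # the vetted version for this release stream
  channel: stable-1.14    # illustrative: which release stream to track
```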
A: If you go to different tools today, such as kubespray or kops, for a given version of Kubernetes it's possible today to have different versions of add-ons, and it depends on where you are in the stack. Ideally, we don't want to have that fragmentation. In the fullness of time we would rather just rely on a single canonical source of truth, and that way GKE, kops, kubespray, etc., all reference the same version and do lifecycle management through the add-ons manager.
A: That's pretty common, right? I mean, that's what higher-level installers kind of do: they basically bundle all the other pieces they need for a Kubernetes cluster, and some of those things include core add-ons. That set of add-ons could be fundamental, or it could be a larger set, like some people want.
A: Big or small, that's the premise; we'll see how it rolls. Usually what happens with projects inside of SIG Cluster Lifecycle (of all these projects, only this one is GA) is that we go slow, and we go slow on purpose. We try to be pretty methodical with our releases and try to have backwards-compatibility support even for alpha things. That way we get feedback from the wild at regular intervals.
A: That lets us know whether we're trending in the right direction and whether it's generally useful. So I expect we'll probably have an alpha release this year for add-ons, and we'll try to get more feedback, and ideally we'll try to incorporate that into the other projects as well to get broader feedback.
G: So, thank you so much. I have an anecdote kind of related to the current situation with kubeadm in particular. I was building an implementation of Cluster API, and this turned out to be just kind of an implementation detail with kubeadm, but some of the functionality was only exposed in flags, and if you used those flags then you couldn't use the API. There's now a whitelist mechanism in kubeadm of flags that are allowed when you're also using the API.
G: And I'm saying the API is great and I love how quickly it's advanced, but I think it's kind of strange that there's a whitelist of flags that are allowed when you use the API. Why not allow all flags when the API is used? That's what I mean, yeah.
A: It was mostly distilled from feedback, and we didn't want a combinatorial explosion. We had pruned a bunch of the flags out before we even went to beta, because we had a combinatorial explosion of options that existed through what I would loosely call an API, a configuration file, because it's not really an API.
A: So the configuration file for kubeadm got massively pruned before it went to beta, because we didn't want to expose or manage all of these things, and we also envisioned that the component configuration for some of these things should be its own API. What was happening is that kubeadm was kind of subsuming a bunch of the component configuration for these other things, so instead we made it explicitly dumb.
A: Yeah, so I think in the fullness of time (I keep on saying that phrase over and over again) component config will be the way to configure most of the core Kubernetes components. There will likely also be some version of that configuration for the add-ons, too.
A: It's meant to be a reusable library. Michael Taufen and Lucas and Stefan have been working on this as a general-purpose library, as well as a set of APIs to be used by the components. It's meant to be composable, in part because other people want to use that same style of component configuration for their own components, whether those are custom controllers that they write or operators of their own; they want a unified way of configuring and managing them.
A: But again, much like kubeadm, we released something to get it out there and it's going to evolve over time. So the model we have today will likely take a while for us to settle, to be able to understand where the boundary lines of Cluster API are and where we want to stop things, because there are two separate pieces inside of Cluster API, the Machines and the cluster API, and clearly defining the boundary lines of where one begins, what responsibilities exist there, and where the other one ends is going to take time.
H: So I'm very new to Cluster API; I've kind of tinkered with it and just got as far as the end of the book, which is pretty useful, thank you. One thing I noticed in the versions part of the API: I think it's pretty specific about it; there's a version of the control plane, there's a version of the kubelet.
A: That sounds like a separate concern: the version of the individual components versus the supported deployment versions. Hopefully, in the fullness of time (again, I'm using that phrase again), this will improve; currently this is going to be a v1beta-something for its config, and then as we push on that over time we'll get it to actually be v1.
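For context, the v1alpha1 Machine object at the time carried those versions as separate fields, roughly like this (provider-specific fields omitted; check the cluster-api repository for the current schema):

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: controlplane-0
spec:
  versions:
    kubelet: "1.14.1"       # version of the kubelet on this machine
    controlPlane: "1.14.1"  # set only on control-plane machines
```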
A: What we were seeing (and I monologued a little bit about this) is that it was kind of commonplace for folks to have one cluster. They were using several different tools: kubespray or kops, or some other custom installer that used pieces of these puzzles, or they did it on their own, and that was great for installing and managing one cluster.
A: But then it became more difficult as they were trying to install and manage N clusters, and the pattern we were seeing in the community (we have stats on this, from feedback from the wild, from surveys that we've done) is that the average people managing clusters have between five and ten different clusters inside of their IT organization.
A: Some people have very interesting environments; the more automated you get, the more clusters you have a tendency to see. So that's the average, but people who are highly automated have on the order of hundreds to thousands of clusters, and once you start getting into that space your standard tools start to break down a little bit, because you need constructs that allow you to manage higher up the stack and do it in a more abstract way. That's exactly the problem Cluster API aims to solve.
A: Personally I think, and this is my personal opinion, that if you're trying to manage one cluster, using Cluster API is kind of overkill. Once you start getting between two and three and higher, then it starts to make a lot of sense, because you want to have a unified way of managing your clusters over time.
E: I think it makes total sense. The manual approach only works up to a point; I mean, we're seeing it. The more distributed you go, the more automation you get, the more stuff you put out there and the more containers you deploy into the clusters. Basically you can't keep up; I mean, how many people are you willing to hire?
I may have missed part of this, but I think, in other words, you can list multiple clusters and manage multiple clusters individually through, like, one pane of glass. But I think the real power of the Cluster API is that you get a real Kubernetes API, and you can use tooling like kustomize or helm or whatever tool you're using to actually manage multiple clusters modularly, or in bulk, rather than treating them as onesies and twosies. Yeah.
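A sketch of that "manage clusters in bulk" idea; the cluster and overlay names are made up for illustration:

```shell
# Because Cluster API objects are ordinary Kubernetes resources, standard
# tooling applies: template per-cluster variants with kustomize and apply
# them to the management cluster like any other manifest.
clusters="prod-us-east prod-eu-west staging"
for c in $clusters; do
  echo "kustomize build overlays/${c} | kubectl apply -f -"
done
```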
G: Yeah, of course, of course. If we consider what might be the next line under there, which would be the next order of magnitude or even beyond that, then Cluster API makes a lot of sense, and some of these issues that we encountered with kubeadm were kind of related to those limitations that you mentioned, of using kubeadm for five to ten clusters. So there's always the feature creep of considering extreme use cases, but I think we really should keep these extreme cases in mind. Yeah.
A: Well, I think what we were trying to do is solve problems for each layer of the stack, and that's kind of why I drew it this way: there's a domain of problems that each one is addressing, and each one is unique. Even though they currently do kind of bleed over into each other, in the fullness of time, as I mentioned, we want to have these clear boundary lines.
A: Good Lego blocks that you can build with: a person who wants to build with them can use these Lego blocks, or they can swap out this piece and put in another one. That's the basic premise. But doing this is actually really hard; it's a lot harder than just making a monolith, because with a monolith you can just assume all the responsibilities and then try to give you one layer.
H: I think the other risk of that approach is that it's pretty confusing for newcomers, which is why I think this talk has been pretty valuable for me. So maybe we can collaborate on some kind of simple roadmap, or a simple map-of-the-landscape page, because when you have these little components (it's a component-oriented approach) you need a little bit of hand-holding to get a mental model of those relationships.
A: That is true, and this is kind of why I wanted to have this, because I do realize that for the external observer there's a ton of implied context. A lot of the people who are working in this space, say Justin, myself, Jason, we've been doing this for years, so we have this model baked into our brains; we know how the whole system functions and how to stripe across horizontally into these different projects. But to the external observer it's opaque.
A: That's part of the reason why we wanted to have this talk. But having a unified roadmap is kind of hard, because the projects release at different cadences. Perhaps making this model more explicit in documentation, with pointers to where people are at with their own roadmaps, would be a potential approach. Yeah.
D: So it's not one machine equals infrastructure and configuration; but at the same time we want to make sure that we stay true to the project's underlying mission, which is managing the lifecycle of Kubernetes clusters. So we don't want to have these discussions about changes, or possible changes, to APIs without keeping that broader context in mind. But we definitely appreciate that, as in pretty much every software engineering effort, you do want to have separation of concerns, so that's what we're looking at doing.
A: So that sort of "create something new, then find out where the boundary lines are, then iterate and try to tear apart the model" is kind of how we've done things over time. etcdadm never really existed in the beginning, but it makes a ton of sense to tear those pieces apart, and the add-on management didn't really exist either, even though we've talked about it since, like, the beginning of the project. I can see people smiling; I mean, we've been talking about it for years, but it's actually coming to fruition.
A: So, image stamping: one of the things that we've seen over installations across any one of these utilities is that mutability becomes very difficult to manage if you have a cluster that mutates consistently over a long period of time. So imagine you had a v1.14 cluster; some of the basic concepts around that are things we baked into Cluster API.
A: You shouldn't be mutating on upgrade; if you are, you're kind of using Kubernetes in the wrong way, mutating the images when you have a new version of your application. The one thing we don't really support in Cluster API is canaries. I don't think we should, although we might want to have the idea of that; I do think that in the fullness of time we'll have some forms of strategies that are more sophisticated than just kind of rolling it out there.
J: One thing I want to add about our API, something that helped along the way, like separating the concerns, is an actual work stream that we're working on now: to define an extension mechanism. We're talking about separating concerns, and one thing is that we have this gap between what we want to achieve and how we want to achieve it. So it helped to define this generic extension mechanism that we see as a black box.
J: It pretty much says we have some actions that we want to define, for example, bootstrap a node, or join another cluster, and we define this as an action. Then we fill in the gap, for example, with a default implementation of kubeadm. So kubeadm will fulfill that purpose for now, and if in the future there's going to be another kubeadm, version three or something, we'll be able to have another box on the side that can grow on its own.
A: I think the one pattern that we kind of see, that we do inside of SIG Cluster Lifecycle, old school, is like a scatter-gather. We see this inside of providers today: you have P1, P2, P3, P4, and the original implementation of Cluster API. This also happened with a lot of the other tooling as well, even the very basic installation: there were dozens of installers, and then we distilled it down into a central kubeadm.
A: It's okay to have fragmentation at the start. Don't try to create abstractions that inherently become leaky; allow the ecosystem to sort of evolve, and then over time distill the patterns down into a common set that is usable by the broader ecosystem. If we tried to create the abstraction first, you'd inherently create these weird leaky abstractions. You see this in all the other tools; you see it in every configuration management tool that has ever been created.
A: So the goal is to learn from the lessons over time and only create the abstractions once we are concrete that this is what we want to do. Another central premise to all of these tools is that we're not creating abstraction layers for clouds; that is not what we're doing. Everything is related to Kubernetes management, and the abstractions are around clusters, clustering, and cluster utilities. So in this case the basic premise is around etcd management.
K: There's some underlying API, some lower layer, to provision your nodes, and right now that's done in the provider. I guess the question is whether there's more abstraction that can be made over there, and how do we address that. How do we make sure that the machine controller is completely separable from other providers, or can be split into multiple parts?
L: But the goal essentially is to define more tightly scoped interfaces around the different extension points, more so than we have today. Today we have this concept of just one monolithic provider, but we want to break that up a little bit more, so that it's a little more granular, for a few different reasons. One: we want to be able to have things like common config regardless of what cloud or hardware you're running on.
L: So we want to be able to break that out separately from the infrastructure providers. The other aspect is: if we start talking about being able to handle bare-metal environments, we need more granularity around which providers provide which components, because with the state of the world today you would basically have to build a custom monolithic provider for just about every bare-metal environment, because of the different combinations of infrastructure providers that are required to provision all the hardware.
L: That said, we also realize that Cluster API is a big thunk to kind of consume all at once. So if we want to let other infrastructure tools have a method of adopting Cluster API, we need to have a more formally staged on-ramping process to enable that, instead of "okay, here's this Cluster API thing, just rebase all of your stuff to work with this," which is not going to be feasible.
L: So we want to make kind of a staged adoption path, so that you can obviously start with Machines and then bring in some of the additional management pieces. What exactly that looks like we don't know yet, but it is something that we've identified the need to have.
L: There's work to do there, and this strays a little bit into speculation, obviously, because I don't want to step on the work stream that's going to define this out; I don't want to necessarily try to influence that in any way. But I do think we can get more granular than just what we have today, which is, like, "cluster" for the infrastructure components.
L: In my mind, a load balancer would be a common granular piece of abstraction; maybe network, so that you could interact with networking equipment (you know, if you're provisioning bare metal and you have to wire it up into the proper VLAN or whatever, something to enable that type of wiring), and then firewall is another one that I see, because I've seen plenty of bare-metal environments where those are three different distinct providers.
H: So the kubelet in there: you have a kubelet version string, a semantic version of the kubelet. Is that intended to be a logical version for the kubelet and all the other packages which interact with it, like CNI or the CRI or the container runtime or whatever? Or was that supposed to be specifically for the kubelet binary as released by the main Kubernetes project?
L: As of today, it really isn't defined how that behavior is supposed to be. I do think that is something that we potentially want to define better going into the future. I can say what we do with the AWS provider for Cluster API right now: we stick with just a single CRI implementation, which is containerd, and we basically just rev the dependencies when we rev the Kubernetes version. The selection that we do is basically to filter pre-existing AMIs with the AWS provider as well. So we could easily keep that same model, extend it to those other components and versions, and just do more granular filtering on the images that are there.
L: Some of the other providers don't use pre-baked images today; they'll basically just install the latest version of whatever their predefined opinionation is around those things at the time of bootstrapping. It's something I think we'll want to have more control over in the future, but it's kind of yet to be defined.
A: As I was saying, the question is interesting because it conflates two ideas. It's a single field, but there are multiple components, and ideally I would like to see some level of provenance built into things like add-on management that says: this is the 1.14 stream of vetted add-ons for a given release, and it's been branched, with conversions, so that folks can rely on it for a given release series, because that qualification currently happens upstream.
A: All right, so first things first: how do we work? We work in a very similar model that we've kind of developed across the group. It's not the same for every single subproject, but most of the projects kind of adhere to this: we basically create a milestone, and for things like kubeadm, or things that are core, we have the release milestone that we have to adhere to (unless I'm incorrect, we cannot avoid it), but for things that are outside of the core we also create a milestone.
A: We did that to at least set a marker in the sand for releasing. Then from there we go through a planning cycle or process at a high level; we're kind of doing this right now with Cluster API. What are the high-level objectives, the things we want to accomplish? We try to spec it out and scope it out first, and then sort of have a cut line, and that cut line will be the actual milestone.
A: You know, it's a color-coded extravaganza. The one thing that's interesting is that if you're going to have something in a milestone, assign a priority to it and assign a person to it, so that it's at least denoted that somebody is actively assigned to work on these things. And it's very much Athenian democracy, in that if you volunteer, if you make your case and you want to work on this item, go ahead and let us know, and we will try to federate the work out where possible.
A: Ideally there are certain people who have a legacy or history in certain areas, and they might be the shepherd, so to speak, for a given item, but we also actively solicit and welcome new contributors, because there are only so many core contributors and they are always overworked, to the point where, you know, I wish Lubomir would go to sleep some days. But the idea is that we assign a person, and they might have a shepherd. When they're working on something, we mark it as active. This denotes that, hey, even though you're assigned something, that doesn't mean you're going to be working on it; marking it active signals that we're going to try to get it into the milestone.
A: We do go through regular cycles of grooming; ideally we do it actively throughout the milestone. So during an individual milestone, as you see here, we mark an item as active once we're actually working on it. Then we do continuous triage: ideally, for every single meeting that we have, we go through the new items that come in and determine whether they should be in this milestone or the next milestone, and then over time it kind of becomes clear.
A: We do the initial planning, we sort things into bins, and we just keep on executing, but we don't want to inflate a milestone if at all possible; try to just cut a release and get right there. The nice feature of having sort of time-based releases is that you always have another release coming: just keep going, there's got to be a cut line somewhere, just keep iterating and give feedback. That's kind of the way SIG Cluster Lifecycle works.
A: Some projects have matured precisely by having these continuous releases; it allows us to get continuous feedback. The one thing that we do do is try to be conservative about when we do promotions. Being a little bit conservative allows us to get extra signal back, and then we distill down that signal to make sure that we're trending in the direction that actually makes the most sense for the community; otherwise we'd be pushing promotion too fast.
A: Last but not least: you earn your stripes in the community by doing, by chopping wood and carrying water. If you are active, if you help, if you do the hard work in the project, we will absolutely reward those folks who are doing the hard work and the labour of the project, and promote them to reviewers and approvers and get them into the OWNERS files. But honestly, this is a community-based project; we want to empower people and we want to see it grow over time.
A: So if you are active and you want to get engaged, the best way to start is to look at the labels on the repos: look for "help wanted" issues, and you'll actually see a bunch of them over time. Start by doing the things that are sort of easy to do and then earn your stripes over time. Once you kind of understand the code base, then start volunteering as part of the triage.
C: I just wanted to add, in terms of issues, that it's much better if you create a ticket first for the feature you want to work on or the bug you want to fix, because we want the visibility; everybody should get a chance to comment on it before you start the work. You can technically start working on something that turns out to be two thousand lines of code, and that can waste your time if some of the maintainers decide not to approve your change.
A: Ideally, because the tree is so large (if I were to show you the PR backlog that I have to get through today, it's on the order of hundreds, and if you look at somebody like Tim Hockin, his is an order of magnitude larger than mine), break down your PRs into quanta, even if it takes you longer to get from A to B.
A: Ideally, and this is a lesson learned on a project of this scale and magnitude, break them down into finite chunks, where each chunk is a composable piece of the puzzle. If it takes a little bit longer, that's fine, but that way people can see that you're making progress in the area, and it also eases the burden on the reviewers as well.
A: One thing I should do inside of our document is link to the "what makes a good PR" doc, so that folks who are new to SIG Cluster Lifecycle and who see our grooming and engagement doc get a link out to what makes a good PR; that doc actually spells all of this out for you. One thing you'll find is that there is actually explicit context written in the community repo; it's just that there's a lot of it, and sifting through it is not actually easy.
A: And again, everyone's really friendly. Feel free to go on the #sig-cluster-lifecycle channel if you have questions, comments, complaints, concerns; we can always try to help route you into the right area. And if you're kind of new to the community: welcome, it's a very interesting and exciting place to be. With that said, we're kind of out of time, so thanks, everybody. Thank you.