From YouTube: Kubernetes SIG Cluster Lifecycle 20181121 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.m14nsau0a37c
Highlights:
- Namespacing the machine annotation
- Renaming ProviderConfig to ProviderSpec
- Machine Phases
- Splitting machines into two yaml documents
- Excluding resources when pivoting a cluster
- How do addons / cluster bundles relate to the cluster API?
- What’s the current consensus on how detailed the Cluster kind should be?
- Release status
A
Hello, and welcome to the Wednesday, November 21st edition of the Cluster API subgroup meeting for SIG Cluster Lifecycle. We have a couple of things on the agenda today, so we'll just dive right in. The first is a PR that got opened about namespacing the machine annotation, number 593. It just adds a prefix to the existing annotation key, to make it clear that the annotation belongs to the Cluster API.
A
So those changes look fine to me, but I wanted to bring it up here, since it is sort of a change to the, quote, "API," if you will. I just want to make sure nobody had objections, or was depending on the annotation in a way that this would break, before we take the hold off and merge it. It looks like it was discussed during the AWS provider implementation meeting this morning, and I saw a couple of thumbs-up get added, so I just wanted to bring it up here as well and see if anyone objected before we remove the hold.
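As an illustration of the kind of change under discussion (the exact key names here are assumptions, not quoted from the PR), prefixing the annotation makes its ownership explicit:

```yaml
# Before: a bare annotation key, with no indication of who owns it
metadata:
  annotations:
    machine: default/my-machine            # assumed original key/value shape
---
# After: the same annotation, prefixed with the Cluster API group domain
metadata:
  annotations:
    cluster.k8s.io/machine: default/my-machine
```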
D
I could test these changes off my own repo, for example, but now I'm wondering, you know, if that's really necessary. If it is, then I will try to find time; I think I can find some time next week. But if it's not, then perhaps we can just make the change in the providers.
A
I had linked the ProviderConfig rename in your issue, saying it hasn't been renamed yet, and here's what you're looking for. But reading back through the notes, I think he did also mention, he suggested, that we wait until you actually have a chance to do a bit more testing, and there were some sort of broader conversations about how much we wanted people to do testing before merging things, in terms of ensuring compatibility. So I think, yes, David?
F
Sorry, that wasn't what I was going to say. What I was going to say is that the fact that people are running into problems makes me think this is ready to merge. I do agree that most providers check in their vendor directory, and so it probably won't break them, but there are providers that run dep unconditionally every time they build, and it will break them.
A
I think one of the reasons we want to get this in now, while we're still alpha and telling people that the API is unstable, is that we don't have to go through those extra hoops. Yes, once we sort of settle down and we're promising more around compatibility, then we're relying on conversion logic to ensure that compatibility if we need to make these other changes.
B
Also, it's good to mention that you still can. Say you release: so what we've done in kubeadm is that we've treated every alpha API as if it were beta, so we're kind of ahead of the curve. We have changed fields as we've gone through this year, but we've still made all those changes backwards-compatible, in the sense that a kubeadm that supports v1alpha2, for example, can still read v1alpha1 and internally convert directly.
A
One point: I'm looking at the release notes for 1.13, and it looked like kubeadm was claiming it's going to drop support for reading v1alpha3 in the following release. So right now you'll read v1alpha3, but starting with 1.14 you'll only read the beta, not alpha3. So you're just maintaining a small window of backwards compatibility, so you don't end up with lots and lots of code debt for all the old versions. It sounds like... right, right, so we have...
A
So I understand that; I'm just wondering, if you wanted to, like, drastically restructure the API: I guess what you're saying is you're not restructuring it in such a way that it's impossible to convert, right? In all cases it's possible to convert up to the new one, and maybe that's some fancy defaulting and ignoring of fields that no longer apply. So, so...
B
One thing we did, for example, was split the monolithic MasterConfiguration into an InitConfiguration in one YAML document and a ClusterConfiguration in another YAML document. So that is a super-breaking change, because nothing is the same, not even the kinds; now we have two YAML documents. But it's still convertible if you write some extra code, so I think most things are upgradeable.
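For reference, the split described above looks roughly like this in kubeadm's v1beta1 config, with the two kinds as separate YAML documents (field values here are placeholders, not recommendations):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration        # per-invocation, runtime-only settings
nodeRegistration:
  name: node-1                 # placeholder node name
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration     # persisted, cluster-wide settings
kubernetesVersion: v1.13.0
networking:
  podSubnet: 192.168.0.0/16    # placeholder pod CIDR
```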
A
I guess I'd put that under: that's a big restructuring of sort of where everything lives, but it's not actually changing your internal representation of the fields you expect to get, it's just where you get them from, and I think that's a pretty straightforward case to do conversion on. Yeah, right, so...
E
I created this PR for machine phases quite some time back, and I can already see there are a lot of comments on it, and I just want to do a quick check that what we understood out of this is correct. So, in general, in the proposal I had basically defined a type for the phases, and I guess the suggestion is that we instead keep it as an open string, so that it can evolve, and not type it out.
E
We have one approach where we have specific fields defined by us, where we can explicitly mention what the states or phases of the machine are that we control, including the transitions between these phases, where we can include draining and so on. And the other approach could be that we only provide the node reference, and we expect that from the node reference we can infer most of the states and phases at the higher level.
G
Yeah, so maybe I can say a little about that, because I commented a lot on this PR. My main concern was that it feels a lot like it's actually adding some kind of API into the status, and this is something I would really like to avoid, because it would have to evolve. And other than that, a lot of the information that we would put there can be derived from the existing node reference, and I would really like to avoid duplicating that.
G
We'd duplicate the heartbeat mechanism, which is basically something higher-level controllers need, and that can cause a lot of issues. And what some people brought up inside the pull request is that it would be very useful to have some way to put the status on the Machine object in order to display it in a UI or something like that, and yeah, that's where the idea of just making it untyped strings came from.
E
[inaudible] But for us, what we can probably do is keep it untyped: keep it open as a string, and then the status can evolve just that way. Would that be okay? I was just wondering; then I can make the necessary changes on the PR and keep it going.
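To make the untyped option under discussion concrete, here is a hypothetical sketch of how a free-form phase string could surface on a Machine's status (the kind and group follow Cluster API v1alpha1; the field name and values are assumptions from the discussion, not the merged API):

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: worker-0             # placeholder name
status:
  # An open string rather than a closed enum: providers and higher-level
  # controllers can introduce new values without an API change.
  phase: Provisioning
```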
E
Okay, so yeah, I'll take a look again. Those comments were from quite some time ago, and in between I was only involved here and there, but I will take a look again. I will make the changes that way: I will make it a string rather than having the typed phases as it is now. And if you see a need for having types at some point in the future, then you can introduce them anyway, right? It's there anyway, so then we can iterate on it.
F
Is it really true that we would have to duplicate health checks? Wouldn't this sort of be like conditions, where we have some process that periodically looks at the node health checks and just copies that information over into a different field in the status? So there's no actual logic to do health checks; there's just logic to copy the implicit data and make it explicit.
E
Yes. The idea is, I was of the opinion that we would basically be learning not only from the node, but in the future, if we see things coming from the node problem detector and so on, from there also we could see some kind of signal. And all that information, which is going to be used by higher-level machine controllers, could be put in one place, and then the pattern comes from there. That was the picture that I was envisioning.
G
So I will say something about what David said and then go on to the next point. Regarding just copying info over: that, in my opinion, glues the two objects together. First off, it won't happen when the controller is not up, for whatever reason. And second off, one big issue with this node health checking is that basically the whole object...
G
...which is, at the end of the day, stored in etcd, gets a new revision even if you just update a timestamp, and that creates a lot of issues. That is basically why the node heartbeat itself is currently in the process of being moved outside of the Node object, into a dedicated API group, if I recall correctly. And regarding using more sources than the node itself for the health checking: do you have an example of what such a source could be?
E
Yes, so, for example, in the case of NPD: we could have different kinds of signals coming from the NPD agents that are running on the machine. At the moment NPD also works in a way that it can directly put conditions on the node, so it could be coming from there. Or, for example, they divide problems into temporary problems and permanent problems, so the permanent problems will be represented...
E
...and then we could also aggregate that kind of events and somehow make them available on the machine, indicating that the given machine is facing some kind of temporary problem. And that could also be one of the pieces we were explicitly mentioning from outside, which could be, again...
E
So I guess, I had a call with Jason on the day when we had it approved, and I think the concern there was also along similar lines: that keeping it typed may be a blocker for evolving it further. But I'll check with him again, and anyway we'll take it up again once I make the changes; we can also have a quick look at it now. But in general, I just wanted to make sure that we should have a machine-phase kind of field available there. If that's possible, then we can...
A
The things I snuck in here were just issues that I found looking through the issue backlog that I thought would be worth discussing briefly. The first one Chuck opened, about splitting machines into multiple documents and adding more flags to clusterctl, and David responded that he dislikes how many options we already have to pass to clusterctl. Before I put my opinion out there: what do other people think about this? I'm not sure how many people actually look at the issue backlog.
B
What I said there is: you're going to need component config at some point. I don't know if you've seen my proposal, but I just linked it at the end of the doc. Basically, I ran a KubeCon talk on how to implement component config, and it's to a large extent what you already do; it's nothing magical, just this common set of conventions for reading config from disk. I think, if you're getting to the stage where flags are getting painful...
B
...you should, sooner rather than later, support just reading from a config: a clusterctl specification kind, or I don't know what you want to call it, but the inputs to clusterctl for creating a cluster. That's the first thing that pops up in my head. And with that in place, I think it does make sense to split those two, if you have this kind of ordering: first you need to create these N machines, and after that you need to create these M machines.
A
Yeah, I mean, I think if we step back and look at the bigger problem: we have like three command-line flags right now, which point to three different YAML files. I feel like we could consolidate those all into a single YAML file, separated with dashes, or just have the clusterctl tool read a directory and be able to parse and split it apart.
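A single file holding multiple YAML documents separated by `---`, as suggested above, might look like this (the kinds are from the Cluster API v1alpha1 group; the names are placeholders):

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: example-cluster        # placeholder
---
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: controlplane-0         # placeholder
---
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: workers                # placeholder
```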
H
I do like your idea of the directory. I feel like, I guess this gets into, when I think about the next item on the agenda, which is Lucas's one about add-ons: I feel like there will be a bucket of other things you want to install in the cluster, maybe as manageable by clusterctl.
H
I don't think we want to have the kube-dns manifest, or a DNS manifest and a networking manifest, as a series of flags. So I think a bucket where we just say "put all your stuff in the directory and off you go," and then, ideally, we put the user's machine deployment in that bucket. I would hope it wouldn't matter, and if it doesn't matter, maybe we could just put it in that bucket. Maybe we don't put the master in that same bucket; I don't know if there's any reason to delineate the master.
B
As I said, in kubeadm we had this monolithic struct with just key-value pairs, and we've now split this up in beta into distinct kinds: one that is only for runtime information, just used once, and one that is persisted in the cluster for more cluster-level config. We separate those with YAML documents, and you can also specify component configs for the proxy and the kubelet down the road in the same config file. And I don't know how much you've been thinking about kustomize...
B
...but if you have these tens of different add-ons you want to apply, and like fifteen different mutations and patches and stuff anyway, then instead of doing templating on your directory you can just pull them together using kustomize. kustomize will give you the separate YAML documents, and off you go to pipe it into clusterctl, yeah.
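A minimal sketch of what that could look like: a kustomization.yaml that pulls a directory of manifests and per-environment patches together (the file names are placeholders):

```yaml
# kustomization.yaml
resources:
  - cluster.yaml                    # placeholder manifest names
  - machines.yaml
  - addons/storage-class.yaml
patchesStrategicMerge:
  - patch-machine-labels.yaml       # placeholder per-environment patch
```

Running `kustomize build .` against such a directory emits the merged manifests as a single multi-document YAML stream on stdout, which is the "pipe it onward" flow described above.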
H
On the provider thing: yes, it isn't the full AWS SDK, but it is a surprisingly large subset. I guess we do have to, and I'm going to rattle on about this, we have to figure out what we would prefer. Like, I could write an S3 fetcher; the question is, do we want to maintain our own, versus vendoring the subset of the AWS SDK that does that, which is mainly...
B
Yes, and if you're running it, there needs to be setup and initialization done correctly, etcetera, etcetera. So I like the, instead of having, yeah: when I said the thing about flags, and doing component config from flags, I hadn't looked at the clusterctl create command. So when it's just other YAML files, it definitely makes sense to just support this mega YAML file that has all the kinds in different YAML documents, or this folder.
A
All right, I think that was just background noise. Prashanta, if there's something you wanted to say, you can unmute yourself. Again, I think in the interest of time we should move on to the next thing on the agenda, but it sounds like people have quite a few thoughts here, so I would encourage you to go and add some of those onto the linked issue, number 594, and I'll...
A
...try to do the same. Next I want to talk about 556, which was an issue that Jason opened about providing a way to exclude resources when pivoting a cluster. In particular, the AWS provider has a use case where they have a secret they want to put into the bootstrapping cluster, but not have it automatically pivoted into the target cluster. I feel like there have been at least one or two other cases where we've run across something similar.
A
I don't know if people have thoughts about how we might want to specify this. It's sort of related to the last conversation: we can separate things out into flags, because different flags or different configuration files are one way to signal to the tool which bucket these things fall into; we can do things like annotations to signal to the tool which bucket things fall into, with some reasonable defaults; and there are probably some other ways to tackle it, too. So I don't know if people have thoughts about that one.
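The annotation idea could look something like this: a purely hypothetical marker key (issue 556 does not define one) that the pivot step would check before copying a resource into the target cluster:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-credentials                # placeholder name
  annotations:
    # Hypothetical marker: a resource carrying this annotation would be
    # skipped when clusterctl pivots objects into the target cluster.
    cluster.k8s.io/exclude-from-pivot: "true"
type: Opaque
```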
K
Yeah, our use case is: if people run clusterctl on their machine with minikube, then they can use some explicit AWS credentials that we are feeding in as a secret, while on the target cluster we instead leverage IAM instance profiles. So we don't actually want the secret there anymore, because...
A
We don't have to solve it right now; I did just want to bring it up. As in, you know, here's an issue people haven't really commented too much on, so please take a look. It sounds like there's some consensus that this is a reasonable feature request, which I agree with; it's just a question of how we expose it to users, or how much we can hide it from users.
B
Okay, next one. Should I just read it out loud? (Yes, go ahead, and the next couple, since they're all from you.) Okay, yeah. So I've been out for a while, as you know, and I'm just thinking: okay, now I'm back and want to know what I would be doing with add-ons and the Cluster API, and how do they relate, especially when we start talking about the cluster bundles formal KEP proposal thing?
A
Yes, so right now it's kind of like what kubeadm does: after it pivots to the target cluster, it just kubectl-applies everything you've specified in that add-ons YAML file. That's fine for bootstrapping; it's not necessarily great for cluster management going forward, right. It was put in there specifically to get things like storage classes into the target cluster, which is something we were interested in getting set up, because that's not there by default. But yeah, it's not intended to be the long-term solution.
H
We can talk, given that the talk is public, about the idea: what we're doing in some places is operators, so we're going to use operators to manage that, and you could, yeah. So an operator would be a much better solution for that than the current add-on manager. That is what I can talk about so far, unless someone has more information from the summary, but yes.
H
I think it'd be nice to specify that information in the bundle. I'm guessing this would still be hard-coded logic, but it would pick up the information from the bundle, so that in theory you could change it. It gets a little weird when you think about where it's being applied: is it being applied on the cluster, or is it being applied by clusterctl? And if it's being applied on the cluster, who has permissions to change security groups? Was it security groups?
H
So that is a tricky update, particularly the changing of that is a tricky update, but I think if the list of security groups was somehow encoded in the bundle, that would be a better solution than what we currently have in kops, which just sort of says: oh, this is Calico, so it needs this manifest.
A
We sort of answered your question about add-ons and cluster bundles. I think the answer with cluster bundles is: it doesn't relate yet, because we're not using it in the Cluster API. Justin's experimenting with using them in kops, and I would love to start experimenting with using them in the Cluster API as well, because, in the similar sense of what we were talking about before, with lots of different YAML files and splitting across directories, the cluster bundle is an attempt to pull a lot of those pieces together.
A
We've also talked about the cluster bundle in relationship to the proposal to consolidate the bootstrapping scripts across providers: the cluster bundle has a facility to embed sort of a node-startup piece that we'd expect to run on nodes, so we've looked at integrating it there as well. So right now it is a sort of separate thing that we've open-sourced from the Google side; there's a guy named Josh at Google, who was here last week talking about it, and he's the main driver from our end.
A
And I think he would love to see it start to get some adoption across tools. I think the biggest benefit there is that the bundle itself is really just a schema for describing the different pieces, and each tool can put what they feel is most important into that schema. But if we have a common way to describe those things, that moves us towards more consistent logic across the board in terms of our different cluster-management tools. And then I think we answered your question about add-ons, which is: today...
H
I think there's one other piece which is great in the bundle, which might be useful for, I guess, two conversations ago. It's going through a lot of refactoring, but I believe the current metaphor is: a bundle is a set of components, and each component is essentially a manifest with some extra stuff. One of those extra pieces is that you can add some metadata to the components to say which control plane...
H
...it should go to: whether it's the target cluster or, whatever, the minikube one, maybe.
B
I'm just thinking from the context of the core community, right. So, say we install the bundle, and then everything else references the bundle, and things like that. So in the kubeadm config you could reference bundle sets, for example, and add bundle packages if you want, or reference other bundle packages from it, and on upgrade you can reference a new bundle. Just these kinds of things.
A
That was the hope Josh had for the bundle: that it could be a common way to describe these things, and we could use it at the different layers, to make it easier to (a) move your configuration between those different pieces of tooling, and (b) have the tooling communicate with each other more consistently, right.
B
And with kustomize you can also, if you have this kind of templating thing, or need this patch or anything, you can start with your directory for every environment, then feed it to kustomize, then through kubeadm or the Cluster API, and it will flow down to the desired state. Okay, cool, that's what I wanted to know.
B
So that also kind of answers the next question I had: how detailed should the Cluster kind be? With this we're saying that add-ons are completely separate; we're just thinking about the core. Still, it's not clear to me, are we going to... So now kubeadm has made greater-than-before compatibility guarantees on its configuration API: we've decided on an as-small-as-we-could-get-it field set for what is the desired state of the cluster that kubeadm cares about. So I was just thinking, like...
A
The main thing that I see being put in the Cluster object is the shared network configuration across the cluster. So you say: here is the range of addresses that I want to allocate for pods or services and so forth, and you have a single place to put that. There's nowhere really in a machine definition that you can put that information, and when you're allocating those things you need a place to pull them from. The other thing I've seen it used for, in some provider implementations...
A
...is if there are shared secrets or shared information that need to be used across all the nodes. For instance, not having to replicate that information in all of the nodes, but instead consolidating it into the Cluster object, so that whenever a new node needs to be created, users don't have to re-specify "oh, here are the credentials for that node." It's stored in one place, maybe with different rules about who can access it, but the node controller can still get that information when creating nodes.
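The shared network configuration described above lives on the Cluster object in the v1alpha1 API; a minimal sketch (the name and CIDR values are placeholders):

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: example-cluster                  # placeholder
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]     # placeholder pod CIDR
    services:
      cidrBlocks: ["10.96.0.0/12"]       # placeholder service CIDR
    serviceDomain: cluster.local
```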
A
What's the value of trying to synchronize that with the Kubernetes release? So we had a discussion about this yesterday, in the SIG meeting that you missed, where we were talking about release notes and release cadence, and Justin said that it was actually really useful for kops to be detached from the Kubernetes release process. Because then, when kops said "now we have support for 1.13," people actually trusted that that had been validated and were willing to use it, right? And if we are arbitrarily saying, well, Kubernetes cut 1.13...
B
Yeah, I'm not saying we should tie it to the Kubernetes release; I was just thinking, for marketing purposes, could we say, so, I'm writing the kubeadm-to-GA blog post, could we say that, well, we also have this more higher-level spec? So kubeadm is very small in scope in this case, and the cluster lifecycle group has been working on this cluster-level scope, and now it's perfect because the Cluster API has released their first alpha version: you can try it out, it has these basic functionalities.
H
I think that's absolutely the right positioning, kubeadm vs. Cluster API: you know, kubeadm is going GA but doesn't have some of the cluster-level concepts, and then point to the Cluster API. I think it makes a ton of sense. I don't know if we need it to be alpha by then; it would be like a fake alpha, right? We're like, yeah...
A
No, I mean, I think the thing about "try it out," though, is you can't just take the cluster-api repo and try it, right? You actually have to go to the provider implementation repo and try it there. So if we just tag cluster-api as alpha, that doesn't actually help users, because they're going to say: how do I use this? Oh, I follow the link to the AWS provider and use that. But oh, is it tagged alpha also? What's its release cadence?
B
True, but just from the perceived-stability, perceived-progress angle: we started this effort a year ago, and we now have a lot of community tools and implementations around it. Is there anything that blocks us from saying: this is alpha1, this is what we've got; we release 0.1.0 as the alpha1, and the next version is going to be alpha2 if, when we get there, we find a lot of breaking changes we need to make. And that is more...
A
And with that, it's 11:00; I think everyone should probably move along to their next appointment. Thanks for showing up. We actually had a lot more to discuss today than I thought we would, and this has been great. I won't be here next week; I will find someone else to chair the meeting, and I will see everyone else in two weeks.