From YouTube: Kubernetes SIG Cluster Lifecycle 20171129 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.ketwyhmq5g4
Highlights:
- Discussed whether the machine API should include optional references to configmaps vs. embedded strings
- Discussed docs for cloud providers who want to support Cluster API
- Worked through a Kubernetes Enhancement Proposal (KEP) for the machine API
- Discussed the status of the control plane API
A: Hello, and welcome to the November 29th edition of the Cluster API breakout meeting from SIG Cluster Lifecycle. Today we are going to start the meeting with Martin, who has an agenda item about the Machine API. Martin, as you're typing into the notes, could you sort of verbalize what you're talking about? Okay.
B: So, yeah — I'll finish the notes up later on. So currently, in the machine specification, the provider config for the cloud provider is a string, and that's both good and bad. With a string you can have a literal value and do something very quickly on the fly. However, in other cases it would also be nice to provide a reference to, say, a ConfigMap.
B: So you can reuse it and, I don't know, not apply it every single time. And I've been thinking that the provider config is very similar to the environment in the container spec in Kubernetes, where you can have both a literal value, such as a string, or you can have a reference to a ConfigMap or a Secret. So it is a possibility to add this choice to the spec, possibly in the Machine API and in the Cluster API, so we can have the flexibility of both.
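A minimal sketch of the two shapes Martin is describing, modeled on the container `env` pattern (`value` vs. `valueFrom`). All field names here (`providerConfig`, `valueFrom`, `configMapKeyRef`) are assumptions for illustration, not the actual Machine API of the time:

```python
# Illustrative only: field names are assumptions modeled on the container
# env pattern, not the real Machine API.

# Today: providerConfig is an opaque literal string.
machine_inline = {
    "kind": "Machine",
    "spec": {
        "providerConfig": '{"machineType": "n1-standard-2", "zone": "us-central1-a"}',
    },
}

# Proposed alternative: reference a ConfigMap instead, analogous to
# env.valueFrom.configMapKeyRef on a container.
machine_ref = {
    "kind": "Machine",
    "spec": {
        "providerConfig": {
            "valueFrom": {
                "configMapKeyRef": {"name": "gce-small", "key": "providerConfig"},
            },
        },
    },
}

def config_source(machine):
    """Return 'inline' for a literal string, 'reference' otherwise."""
    pc = machine["spec"]["providerConfig"]
    return "inline" if isinstance(pc, str) else "reference"

print(config_source(machine_inline))  # inline
print(config_source(machine_ref))     # reference
```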
A: So one thing that we've talked about: right now we just have Machines, and we are eventually moving, after we have Machines, to having sets of machines, and sets of machines are going to have to have templates so you can stamp out new items in a set. And that sounds also similar to what you're talking about — like, using a reference sounds an awful lot like creating sort of a template.
C: The thing about a reference is that it is, mm, mutable — or unversioned, or whatever you want to call that, right? We don't point to a particular version. You know what I'm saying — I'm trying to verbalize this. If we're pointing to a separate object, we aren't pointing to a particular version of that object, and so that object could change underneath us, and that introduces questions about how that reflects into, like, the pods.
D: ...on pod behavior. But if you do do a reference like this, and say the provider config specifies, like, a machine type — like a size or something — and you change the underlying ConfigMap, is the reconciler then supposed to go and just kill all the nodes and upsize them to the right machine type? I mean, it provides a backdoor to, like, a massive change that may be unintended.
C: This could also be something that we do at — we could... I like the use case, but maybe this is something that we do at the higher level. So a machine set or the machine deployment can have a reference, and then we define the semantics of changing it as being: yes, this is a horrific, like, redeploy. But when we create them, it gets stamped — the actual configuration gets stamped into the machine itself, maybe.
A: Right, so you're saying that if it's at a higher level, we can effectively copy the current value and serialize that into the string at the lower level, and then you know that it's immutable at that point going forward. And then, if you're changing it, you are purposely changing it — you're not indirectly changing it via a reference.
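The "copy and serialize at the higher level" semantics being described could be sketched like this — the higher-level object holds the reference, resolution happens once at stamp time, and the machine itself carries an immutable literal string. Every name in this sketch is hypothetical:

```python
import json

# Hypothetical ConfigMap store, keyed by name.
configmaps = {
    "gce-small": {"machineType": "n1-standard-2", "zone": "us-central1-a"},
}

def stamp_machine(template, store):
    """Resolve the template's ConfigMap reference once and embed the result
    as a literal string, so later edits to the ConfigMap don't ripple into
    already-created machines."""
    ref = template["spec"]["providerConfigRef"]
    resolved = json.dumps(store[ref["name"]], sort_keys=True)
    return {"kind": "Machine", "spec": {"providerConfig": resolved}}

template = {"spec": {"providerConfigRef": {"name": "gce-small"}}}
machine = stamp_machine(template, configmaps)

# Mutating the ConfigMap afterwards does not change the stamped machine.
configmaps["gce-small"]["machineType"] = "n1-standard-8"
print(machine["spec"]["providerConfig"])  # still shows n1-standard-2
```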
D: I'm just saying it's a discussion that we need to have — it's not certain that they will be immutable; it will be possible. But you guys also raise a good point: the Machine types are not the designed user interface. They're just a convenience type to help people, or to help test it right now, where the higher levels are supposed to be the touch points for users.
A: Was there anything else, Martin, you wanted to discuss about that? I think it's a really interesting idea. It might be worth experimenting with, but there are some people on the call who have some reservations about whether it's sort of the right approach at this level, or if it makes more sense at a higher level, I mean.
F: ...once the API is in some sort of stable state — some set of documentation for cloud providers who want to support the Cluster API. I'm not sure where that doc lives, but I'm a consumer of it, and I would like to have it, and I'd like it to be as good as it can be, so we can make this easy. That's all.
F: If there are no other volunteers — but I'm not coming from a position of strong knowledge of how all this stuff is supposed to work yet. I'm more coming at it like: happy to write down things as I discover them, but someone who really knows it is going to have to help me out with things. Which is to say, I'm happy to pair with someone, if someone else wants to work on it, yeah.
E: When we give a big talk — when Robert gives this talk and demos this — we want to have a really nice landing page, so that anyone who's interested in more information just has, like, a one-stop-shop jump-off page for everything they need to know: links to our open PRs for the API definitions and greater context of the project. And someone on the call brought up the fact that we also have a lot of tribal knowledge, especially with the prototype right now, of "oh, here's..."
E: ...how you, say, build the machine controller, how you debug — all of that should really be documented somewhere if we actually want to encourage new people to have any chance of succeeding at, you know, cloning a repo and hacking on it. So someone is definitely going to be picking that up from our side very soon, and it'll be awesome if we kind of divide it up.
E: From a non-documentation perspective — from a technical perspective — if you're actually trying to start using the Cluster API in another repository, are you vendoring kube-deploy? Are you reinventing the API? Do you have an idea what that looks like right now? Is there a way we can make that easier for you?
F: You're finished? Yep — I hear you. Again, I'm coming at it from the general perspective at Packet: I want to support as many of these sorts of deployment tools as can be made to work. So that's one instance of it, but Kubicorn and Kubespray down the list are all important. And then, with my work on ARM, I also want all of these things to not only provision for Intel x86 systems, but to work on ARM64 systems, and 32-bit ARM as well, if it makes sense, to the extent that it can be made to work. So my field of vision is pretty broad, and this looks like a good hat to hang things on — the place to hang a hat on to get stuff done, whatever the metaphor is. I just need to find someplace to attach to, to start making incremental improvements.
F: And I'm happy — we've gathered good working relationships with the Kubicorn team; we've got a guy working on that code from the Packet side. We've got someone working with Kubespray — Sam is working on that — and I latched on to the one that I mentioned, and we're just sort of, like, swarming at it to make sure that everyone supports Packet. So.
A: That sounds great. I think one of our goals with the Cluster API is to try and consolidate some of that work, so in the future you guys hopefully don't have to swarm against, you know, ten different tools. There's sort of a central place you can go, where if you can build Packet support underneath this API, then you automatically get support in kops, in Kubicorn, in Kubespray, et cetera, right? That's sort of one of the impetuses for this project.
A: So I had the next agenda item, which is to talk about planning for 1.10. Yesterday, during the general cluster lifecycle meeting, Jaice walked us through this new Kubernetes process for doing feature planning, which is called KEPs — the Kubernetes Enhancement Proposal process — which is a long-winded way of saying it's a slightly more detailed version of what we were doing in the past, following a standardized format that we can then convert to markdown and use to track issues across multiple releases.
A: So we're definitely going to want to start filling these things out for the Cluster API and put some stakes in the ground. It asks questions for things like, you know, who's working on it, how far do you think we're going to get in the next release cycle, and so forth. So if people think that's useful to do as a group, we can try to walk through one of those now.
C: Well, I was gonna say, like, should we do the Machine one? I think one of the things — I thought you said we did a very broad one. Maybe this is what it's supposed to be: we did a very broad, very loosely defined one, and I feel like we could do the Machines API one and say, like, well, how are we running on that, and then actually do that. You know, it's a suggestion, but, you know, yeah.
C: ...see how to do the implementation — what would it look like in kops, you know what I'm saying? Yeah, the one I would do — if I needed one for AWS, it would be with kops as the node provisioner, exactly.
A: Right, and I think if Kris was building it for AWS, you would probably do it inside of Kubicorn, which is why I kind of put both the same. So: dependencies on other SIGs. There's a parenthetical remark here about API review. This is not a core Kubernetes API, but it would probably be useful to have an API review pass by someone who is part of the API review team.
A: If such a thing exists — we were actually just talking about this in the hallway — is there a SIG API, or is there a set of API reviewers we should be talking to? I know there's an API Machinery SIG and there's an Architecture SIG. I don't know if there's an actual, like, set of API reviewers.
A: Exactly, and I think they don't want to be the people trying to approve what the API looks like. Okay, so we should sign up for a SIG Architecture slot sometime next quarter and make sure that they take a look at our API and make sure it looks good to them. So, who is planning on working on this? If we look at what we did yesterday, we signed up a number of folks from the SIG who said that they were gonna dedicate time to this effort over the next quarter.
A: It's kind of a misnomer, isn't it — we want high availability through all things. I don't think this requires any blocking tests, although I believe we do have the goal of starting to replace some of the kubeadm tests that run — like upgrades — to start using the Cluster API infrastructure, so we'll sort of effectively be used in blocking tests. And this is, in my mind, also our path to getting blocking e2es off of kube-up as well, and if we can do that, then we can...
A: And then I think the schedule after that is a little bit TBD. I mean, ideally we'd like beta not too long after that, and to be able to progress forward pretty quickly here. Risks — do people have suggestions? What risks are gonna cause us to not finish this in time for 1.10? It's gotta be something, when...
A: No — the fact that KubeCon is happening at about the same time as 1.9 is coincidental. I don't think the conference planners are planning a conference around a release; the releases are just scheduled to be about once a quarter right now. There's a discussion we might have next week at the contributor summit about whether that's the right release cadence. Assuming we follow past precedent, it's basically, like, halfway through the last month of the quarter, so it'd be like the middle of March, I think.
A: Now, they are sort of based around the Kubernetes release cadence, but as we have ecosystem projects that aren't actually released at the same cadence, or with the same release tools as the core of Kubernetes, we can set our own deadlines for these as well. I think part of the bit here is we want to be able to have roll-up status, so people can go look at the Kubernetes community repositories and see, like: where is the Cluster API right now? How far along is it? What is the status of it?
F: The only thing I can think of that might generate or consume external dependencies is how any of the cloud providers' APIs might evolve or change or otherwise need to be, you know, accounted for. I know that doesn't happen very often, but as you get more cloud providers involved, the chance that some of them are going to be young and not have quite as stable an API goes up. Great — other...
E: Well, I'll say — if this is an umbrella, blanket issue — that you've enumerated a lot of work there for supporting different cloud providers. If we just don't have enough bodies that actually sign up for that work, and we don't actually have, like, a one-to-one correspondence between the action items and people signed up to do the work, and who are also wholly committing to it...
A: Right — I mean, each of these bullets we put under issues for the milestone you could break down into a KEP themselves, right, and assign people to that, and you could kind of do this recursively ad nauseam. I'm hoping that's not the process that they're proposing, because that sounds kind of terrible.
A: ...it's reviewed, it's submitted, and then it's impossible to tell after the fact whether it was actually built, whether it was abandoned at proposal stage, or how far along it is if it is being built currently. At the same time, we have the features repository, which we're using for tracking release issues — like, what, as part of this Kubernetes release, are we announcing? What's in here? What stage is it at, right?
A: Is it alpha? Is it beta? I think the goal is to sort of combine those two things together. The feature issues have become somewhat cumbersome for being able to track what features are coming up in a Kubernetes release, and also for having sort of up-to-date documentation about what we're building and why, right? A lot of proposals are like, "I want to put this cool new technology into Kubernetes," right — but what is the end-user value for doing that?
E: I would need to go through, like, every single thing in this format. I am generally interested in what I want to see done in Q1, like, from a high level; I think going through each one of them as a group would be a little bit tedious. And do we — let's say we want to break this down into subtasks, and someone says, "I'm signing up for some piece of the support" — do we just want them to copy-paste this template and fill it in themselves?
A: It was a place where you could encourage new people to jump in and contribute. If you break down what you're working on into issues, and some of them aren't assigned or have, you know, the "help wanted" label on them, then people who go look at what the current things being worked on in Kubernetes are can find places to contribute.
A: So my opinion would be that we stop at this level — we don't go deeper on the KEPs. We use issues to track sub-items, we farm those out, and basically the people's names that get put at the top level are the roll-up people sort of shepherding the feature as a whole, and then there are the people that are working on it, right? I would also suggest that it's not useful for us to spend this meeting creating lots more of these. We can create a couple more of them.
C: Sorry — I would love to see... I don't think it necessarily is a KEP, or even something we document, but let's say I start to experiment with the next level up on the Machines API. I don't know whether anyone else is interested in playing around with it. I don't think we're gonna, like, commit to an API or anything, so I don't know whether we should therefore write a KEP, but it would be great to have, you know, some ideas about how this whole thing works when we get to the next level, right? Yeah.
A: So right now, as part of the talk Chris and I are giving, we have sort of a client-side machine set, which allows you to do things like scale up and down. And so I think the question that you're probably asking is: to experiment with the API, how do you define what the template is to stamp out machines? Because basically, if you look at ReplicaSets, it's like a pod template plus a count, and so presumably machine sets would look somewhat similar to that.
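The ReplicaSet analogy A draws can be sketched as: a machine set is a machine template plus a count, and a controller's job is to stamp out copies until the count is met. The `MachineSet` kind and its field names here are assumptions modeled on the ReplicaSet shape, not the eventual API:

```python
import copy

def reconcile_machine_set(machine_set):
    """Stamp out `replicas` machines from the embedded template, the way a
    ReplicaSet stamps out pods from its pod template."""
    spec = machine_set["spec"]
    machines = []
    for i in range(spec["replicas"]):
        m = copy.deepcopy(spec["template"])
        m["kind"] = "Machine"
        m["metadata"] = {"name": f'{machine_set["metadata"]["name"]}-{i}'}
        machines.append(m)
    return machines

machine_set = {
    "kind": "MachineSet",  # hypothetical kind
    "metadata": {"name": "workers"},
    "spec": {
        "replicas": 3,
        "template": {"spec": {"providerConfig": '{"machineType": "n1-standard-2"}'}},
    },
}

for m in reconcile_machine_set(machine_set):
    print(m["metadata"]["name"])  # workers-0, workers-1, workers-2
```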
C: Yeah, that's it — like, I'm hoping it will be simple. It's like, you know, we do that and it's great, yeah, right — like, trying to get that running somewhere and seeing how it feels, and it has a controller. And, I mean, kops did the same thing, right? We started with a CLI, and we switched to API machinery, and we're trying to move it to a controller and all this, and yeah, it seems like it's a fairly sensible pattern.
A: So I think the only reason we've been shying away from doing that thus far is we wanted to make sure we had sort of all the right fields in Machine before we create the template, because otherwise you're updating things in multiple places and it just causes sort of churn as you're doing that. If we think we're close, or close enough, on the Machine, I think it'll be pretty easy to take that next step and create the sets.
A: Which comes back to: if we can wrap up — like, we all agree that this is the first alpha version of the API — let's start building on top of and around it. That's why I'm sort of shooting for that as our first milestone. So I think once we agree, then it's pretty easy to layer this stuff on top. And...
E: Just not to go through the entire exercise again, but we've talked about the Machines API. I think there are more interesting questions, and possibly even more work, associated with the control plane definition. Chris had already started looking into — I won't say hunting down, but finding — all of the different components that haven't migrated to component config yet, and trying to figure out what forcing function to apply there.
E: So I don't know if that work is being tracked. I don't know if we created an exhaustive list of everyone who needs to convert, but that would be — I mean, we'll have a lot of external dependencies there. The current incarnation of the control plane level definition hinges a lot on being able to delegate component config to configure every individual, sort of, control plane component, and so I think there's a lot more interesting work there, but not going through the KEP.
D: I think Robert actually has a lot more context on the status of this than I do, because I talked with him. The reason I spoke up and took it on was he was not there that week, so I said I would take it on. But from what I gather, the only four components we really truly care about on the control plane side are kube-proxy, kube-dns, the controller manager, and the API server — and the scheduler, sorry, five — because those are the bare minimum components you need to stand up a cluster.
D: It is important, but it's also one of the most ready ones, and I think kube-proxy and kube-dns are pretty much there as well. The ones we'll have to lean on most are probably the API server — to at least get them to define a schema. The others, I think, have a preliminary schema defined, but we'll have to lean on them to make it, like, beta quality at least. Robert, again, I said you probably have a little more context on that than I do, so if you want to correct me — yeah.
A: So the scheduling SIG has actually started moving kube-scheduler over to component config of their own volition, which is great, so I think, you know, they definitely have a schema defined, if not actual code in place to be the right thing. The controller manager and the API server don't, and I agree — kube-proxy, I believe, is done; kube-dns, I'm not sure if they've done anything, but there's maybe less to do there also; and the kubelet, I think... I think the controller manager will be a little bit tricky, because there's a lot of stuff that's in flux with the separation of the cloud controller manager part out of the controller manager. So we definitely should sit down with the folks that are working on that — which I know at Google is Walter — and try to up the priority of component config, or at least define the schema, so that we can use that in our API, even if they haven't fully implemented that schema on their end.
C: I mean, kops has a — there was an initial component config, and kops took that one and added flag annotations to it. So if we do want something, we can have a look at that. But I agree, it would be better to get them to actually commit to an API, because it is uncomfortable having the kops component config, which is now different from — and presumably, it's unlikely to be identical to — whatever they come up with, right? Yeah.
A: That's a good point. Yeah, I think we've been encouraging people to put their component config definitions sort of next to their component, because they're not necessarily part of the full public API. But you have a good point that that might make them really difficult for us to import elsewhere, and...
D: At the very least, we should really enforce that they be a separate package — even if they are right next to the controller — that doesn't depend on any other part of the Kubernetes tree, so you could import those sub-packages without having to import the rest of the Kubernetes tree. But yeah, it's definitely not ideal.
C: Something I regret not doing in kops, and that may be useful here, is to split it out. Like, we have a cluster object and we have instance group objects, but that's it, and the cluster is this big amalgam of all these component config objects. I regret not splitting it out right away into a collection of component config manifests, and maybe a network manifest, and all these things — which, you know, is punting it, right, but it is saying, like, everything is an amalgam; we have add-ons when they come out.
A: You've mentioned this before, Justin — that it'd be nicer if there were individual, sort of separable configs. And we've gone back and forth on that a little bit with the Cluster API: is the Cluster API an object which has a bunch of things embedded inside of it, or is the Cluster API sort of a notional object, where it's really an assemblage of multiple different pieces that are all themselves individual objects?
C: I guess it may be semantics, in that we can have an array of typed objects in a Cluster API object, so, yeah, I guess maybe it doesn't matter. But I think conceptually, having it be a collection of objects that are, like, owned by the networking team, right, and owned by the scheduler team...
D: You know, the only reason I pulled the networking part out was because it's a part that is kind of highly dependent on, like, how you provision it — whether you need to map that to a certain CNI config, whether you map that to a provider — but there's always usually something special you need to do that's not captured in any of the current component configs, or not captured completely there, because some of those flags do actually affect the component configs, but they also affect things outside them.
D: And the unfortunate thing I discovered is that the service DNS has to be plumbed all the way into the systemd configuration that kubeadm installs for the kubelet, so they get the right kubelet flags. So it's something that affects kube-dns and the kubelet, which ideally should go into dynamic kubelet config when we get that, but...
E: Okay, so that's API definition. In terms of implementation, do we think we add different cloud providers for machines first and then tackle control planes second, or try to deliver them both at the same time? This might differ based on cloud provider — if we're really sure about GCP but not about the others, et cetera.
C: My thought for how I was gonna do it with kops was that we would still have a kops configuration, and the controller that spins up a machine would reference the kops configuration for that side of it. So it wouldn't need to be in — it wouldn't really be able to do much more than say, "I want a machine of a particular type."
A: Does anyone else have any other outstanding items? I think we can cover them in two minutes. I'll also note that it seems really unlikely we're gonna have this meeting next week, given KubeCon and the number of people attending KubeCon. I think, actually, my talk about the Cluster API is literally at the same time slot as this meeting, so I won't be there.