From YouTube: Kubernetes SIG Cluster Lifecycle 20180207 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.fr5wd1ldzmbj
Highlights:
- Discussion on CRDs vs API Aggregation
- Review of alpha milestone issues
- Overlap / integration with the Cluster Registry effort
- Moving machine & cluster API to different groups
- Documentation updates
- Improving test coverage
A
Hello everyone, and welcome to the Wednesday, February 7th edition of the SIG Cluster Lifecycle Cluster API breakout meeting. Today we have a nice discussion topic to start us off. It's something we've talked about a little bit before, but it has come back around in recent discussions inside of Google, and we wanted to talk about it in this forum as well: whether we should be using CRDs or API aggregation for the machine API and the cluster API. So, do you want to kick the discussion off?
B
So in the last week I have been busy migrating all the functionality from the current CRD implementation to API aggregation. It's mostly done, but I wanted to discuss it with the community and get opinions on which one is actually better in terms of deployment for the rest of the work.
Some of it is supported, and I think the API machinery team is working on improving CRDs to add these features, so some of the gaps will be closed soon, even though we are not there today. API aggregation, of course, gives you a fully functional API server, so it has all the features we need, and it gives additional flexibility for deployment, which means we can deploy the API server outside the cluster.

For some scenarios I think that could bring a nice advantage, and I'm sure it would be valuable for other deployments as well. Another thing to consider for CRDs is that you don't maintain your own API server, so you don't have to worry about releases and patching: every time the API server goes to the next version, if it has an issue it gets a fix, and you automatically benefit from that.

With API aggregation it's a bit like maintaining a branch: when an issue gets fixed in the main API server, we have to pull that into our extension API server to get the fix. So that's one of the downsides of using API aggregation. This is just to start the discussion; I would really like to see what you guys think, CRDs or API aggregation.
C
From our experience, the API server is the best approach, because there was a pull request several months ago, which I'm not sure was ever merged, to add sub-resources to custom resources: the status and scale sub-resources and so on, which would give us metadata generation as well. Yeah, that's right, there are no sub-resources for custom resources today; there was only the pull request.
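To make the sub-resource point concrete, here is a minimal sketch, with hypothetical field names rather than the actual cluster API types, of a Machine object split into spec and status. With the status sub-resource enabled, .status is only writable through its own endpoint and metadata.generation is bumped only on spec changes.

```go
// Hypothetical sketch of a CRD-backed Machine type using the standard
// apimachinery object conventions; the same Go type could also be served
// by an aggregated API server.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MachineSpec is the desired state recorded by the user or a higher-level
// controller (provider-specific details omitted; fields are illustrative).
type MachineSpec struct {
	ProviderConfig string `json:"providerConfig,omitempty"`
	KubeletVersion string `json:"kubeletVersion,omitempty"`
}

// MachineStatus is written only by the machine controller, ideally through
// a dedicated status sub-resource.
type MachineStatus struct {
	Ready bool `json:"ready"`
	// LastUpdated records when the controller last reconciled the machine.
	LastUpdated metav1.Time `json:"lastUpdated,omitempty"`
}

// Machine is the top-level API object.
type Machine struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MachineSpec   `json:"spec,omitempty"`
	Status MachineStatus `json:"status,omitempty"`
}
```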
A
All right, thanks, Martin. Just a little bit more background: this came up in a conversation with Eric Tune a couple of days ago. He has been one of the leaders in the stateful application space and in defining application logic, he's been working with a lot of other teams building in the Kubernetes ecosystem, and he's been encouraging them to use CRDs whenever they can. His reasons for doing that are that you don't have to fork and rebase code.
You can do validation and defaulting using webhooks, and he expects that CRDs will get things like multi-version support in either 1.10 or 1.11, and are likely to get metadata generation support and status sub-resource support, even though those things are not on the near-term roadmap. Things like strategic merge patch support are being discussed, but have not been committed to. And, if there's a lot of momentum around them, CRDs are likely to get a better development framework and tools.
So I think those are his reasons for trying to push people towards CRDs, which caught us a little bit by surprise, since both in this forum and internally we had been thinking that API aggregation was clearly the right way to go, and he seemed to be surprised that we had drawn that conclusion. So, before I go into any rebuttals of any of those points.
D
Robert, in the service catalog SIG we also had Eric come and give a brief talk on this, and he had some slides, probably the same slides you're talking about. Would it be helpful to put a link to the slides in the meeting notes, just so other people can have that? If you don't have a link, I can add it.
A
You
believe
that
be
great
I
have
a
set
of
slides,
but
I
can't
tell
who
they're
shared
with
or
if
they're
visible
outside
of
Google,
which
is
always
fun,
because
it's
kind
of
difficult,
sometimes
for
us
to
share
documents
outside
of
the
company.
So
if
you
have
a
link
to
slides,
if
you
could
paste
them
into
the
chat
or
stick
on
the
meeting
notes,
that
would
be
great
and
I
can
verify
those
the
same
same
slides
I
mean
yes,.
A
That's not entirely true. We were talking to Eric yesterday, and what he was saying was that you could still use CRDs outside the cluster by running a standard API server with all of the core resources turned off and just CRDs turned on. So you can still get the benefit of the separation of concerns, where you run the core API server for things like pods and replica sets.
Daniel Smith was also in that discussion, and he mentioned that they're thinking about building a sort of control plane in a box, where they basically have a main function that drops you into the API server with a whole bunch of flags set to give you that experience, with etcd running inside the same binary as well. So you would have a single deployment container that says: here is my API server.
A
Yes, right. I don't know if Chris is on the call; Chris and I were talking about this a little bit at our desks yesterday, and we had a couple of other thoughts here. One is that CRDs are working on multi-version support, but that might come in 1.10 or it might come in 1.11, and I'm not sure we want to wait six or nine months for those things to show up and block our progress on that.
Running a separate etcd I don't think is necessarily a problem in our case, because I think we actually want to have separate storage. You were saying that having consolidated storage gives you a lower memory footprint, but having separate storage gives you a separate failure domain. So if you blow up your etcd because you have a job that runs away creating pods and etcd OOMs, you still want to be able to see the machines in your cluster and still be able to repair the machines in your cluster.
A
Status and scale, okay, excellent. So those are probably the two most common ones being added, but if we did want to add other custom verbs for machines, I think that would probably be difficult to do with CRDs. It's not clear whether we want to or not, but that's something we've thrown out there as a possible idea.
So Martin, it sounds like the things you pointed out from your experience, the status and scale sub-resources and generated metadata, are all on the table to be added to CRDs to close those gaps. Are there any other reasons you have found that running a separate API server is beneficial?
C
Just having more or less the whole lifecycle: we might want different admission handling for those kinds of resources, so it gives us some flexibility in that domain, and also initializers. We use webhooks, what they call the remote admission plugins, mostly for better defaulting and better validation. So if you have fields x and y, you need to take care that the combination is satisfied.
A
So Eric, in his slides, says that starting in 1.something, I don't know which version, as of 1.9 I think, you can do complex validation using a validating webhook, where you can say that if field A is set, then field B must also be set. You can also do defaulting using a mutating webhook, where you can say that if field A is not set, then set field B to foo. That's on slide four of the slides Matthew linked in the chat.
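The validating-webhook rule described above ("if field A is set, field B must also be set") can be sketched roughly as follows. The object and field names (spec.a, spec.b) are purely illustrative, not the real machine types; the handler follows the admission.k8s.io/v1beta1 request/response shape.

```go
// Minimal sketch of a validating admission webhook enforcing a cross-field
// rule on a hypothetical custom resource.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"

	admissionv1beta1 "k8s.io/api/admission/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// machineSpec mirrors only the two hypothetical fields the rule cares about.
type machineSpec struct {
	A string `json:"a,omitempty"`
	B string `json:"b,omitempty"`
}

type machine struct {
	Spec machineSpec `json:"spec"`
}

func validate(w http.ResponseWriter, r *http.Request) {
	var review admissionv1beta1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	resp := &admissionv1beta1.AdmissionResponse{Allowed: true}
	if review.Request != nil {
		resp.UID = review.Request.UID
		var m machine
		if err := json.Unmarshal(review.Request.Object.Raw, &m); err == nil {
			// The cross-field rule: A set implies B set.
			if m.Spec.A != "" && m.Spec.B == "" {
				resp.Allowed = false
				resp.Result = &metav1.Status{
					Message: fmt.Sprintf("spec.b must be set when spec.a is %q", m.Spec.A),
				}
			}
		}
	}

	review.Response = resp
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/validate", validate)
	// TLS setup omitted for brevity; the API server requires HTTPS for webhooks.
	http.ListenAndServe(":8443", nil)
}
```

A mutating webhook for defaulting works the same way, except the response carries a JSON patch (for example, setting spec.b when it is empty) instead of an allow/deny verdict.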
C
I have to check them, but I don't know; it feels somewhat unnatural to run an extra webhook just for this use case, especially if you need to do some complex validation. I think there may be metadata or some other way to do it in the custom resource definition itself, but I have to check the document.
B
Okay, Chris is typing in the chat. I have one more point here: Eric mentioned that to use API aggregation you need to copy a thousand to five thousand lines of code from the sample API server, but in our current implementation we are using the apiserver-builder, which can generate most of that automatically. So I think it's not that bad; you don't need to rewrite or replicate all of it here.
A
Yeah
I
guess
the
only
thing
I'll
say
about
that
is.
If
that
works
well
today,
because
the
API
machinery
team
thought
they
were
gonna
to
promote
API
irrigation
and
they
decide
it's
not
worth
maintaining
the
API
server
builder,
because
no
one
is
using
it.
Then
that
may
become
a
burden
that
we
have
to
to
hold
ourselves
sure
I.
C
Sure. I mean, with our custom API server we simply went and took the sample API server implementation. It's very simple, and within a day or a week you can remove everything that you don't want and then hook it up with your API types, etcd, and the other small things. So it's very easily doable just with the sample API server.
F
Wow, okay, it only took me like two days to finally be able to talk on a Kubernetes call. I think it's worthwhile to share my story of doing this in the past, both internally and trying to code it into kops and kubicorn. The dependency hell, for lack of a better term, of trying to upgrade from one version of API machinery to another was a significant amount of work; it wasn't just a day for me.
We were using dep at the time: find the new version, recompile, and then that would introduce other dependency problems for other packages that needed to be manually resolved. So it was this whole exercise of adjusting constraints and figuring out how we were going to get the code to even compile again, because of the whole "I'm going to vendor API machinery, but then there's some other package that I'm also vendoring", and then you get into this whole recursive dependency problem.
A
Interesting. Can you describe briefly: you said you run a separate API server where you put these things. Does that sound like the architecture I mentioned earlier, where you'd run one API server with things like pods and replica sets and a separate API server with the machine definitions? Is what you're talking about like that second one, or is it more of a front proxy?
E
In our main cluster, where all the rest is running, we talk to that one, and we have a separate API server that talks to the user's API server, the customer's Kubernetes cluster, and gets all the information; we store it as CRDs and work from there. So we have something similar to the Gardener.
So we have something like a Kubernetes operator, but it runs on a separate setup and talks to the user's cluster, and inside that cluster all this information is stored. So we can talk to the user's API server, the Kubernetes cluster, to get the information, and use our API server to push updates and so on.
A
Interesting. Okay, so it sounds like for your use case, CRDs, assuming they get things like object versioning, the scale sub-resource, the status sub-resource, and generated metadata, the things that are all on the roadmap for CRDs, would be sufficient for the use cases you have right now, and a simpler deployment model as well.
D
We lean toward CRDs in some ways, mostly from the standpoint that we would not like our cluster API pod to have full access to write into the same etcd as the control plane, and we also don't want to run a separate etcd. So CRDs solve some of those requirements that we have. And personally, from doing API aggregation updates across Kube versions a number of times, I know it's a big pain and I relish not having to do that.
G
I have one basic question about this whole discussion. If we pick one solution and then decide to move to the other one, is that too hard? For example, if for now we just go ahead with a separate API server, given the features that are currently lacking in CRDs, and we decide to move to CRDs later?
C
I mean, it depends on which features you actually use. If you use only the standard API surface, as Robert said, without any custom sub-resources like reboot or whatever, then it would be easy to migrate from a custom API server to CRDs. But the problem is that you also set the requirement for your Kubernetes version to be, for example, 1.10 or 1.11, whenever those features land, so there's also a trade-off.
B
Maybe you need a bit of migration work, but it's mostly the same from the controller side: you're still watching the API server and reconciling. Even with API aggregation the controller is mostly getting data from the main API server through the aggregation layer, so it doesn't really see the difference.
G
Yeah, one suggestion there, in terms of timeline and getting something out in front of more people: we could go with the approach that we have right now, constraining the use of the external API servers, with the thought that we would migrate to CRDs once they have all the features. That's essentially following Eric Tune's advice, but not necessarily adopting it right now and not blocking on it specifically.
A
Yeah, I think that sounds great. That also allows us to make progress now with the features that we know exist, and doesn't give us a strong dependency on something that has no firm timeline. He says they might be in 1.10, and they'll probably be in 1.11 if they're not in 1.10; that doesn't give me high confidence that we have any sort of date when these things are all actually going to work. But it does sound like, once they are there,
A
It
shouldn't
be
terribly
difficult
to
switch
over
to
use
them
if,
if
it
makes
sense
to
switch
at
that
point-
or
you
know,
as
Matthew
said,
maybe
we
pick
both
and
say
the
controllers
are
the
same
and
it's
up
to
the
deployment,
how
we
want
to
host
it
right
and
give
people
the
flexibility,
of
course,
supporting
multiple
ways
to
do.
It
is
always
less
desirable,
maybe
make
sense
for
this
case.
It's.
C
So this was pretty much our experience: you expect something to be there and to behave at a certain level, for example that not everyone is able to modify the status of a resource. But that happens, and we, for example, made some very nasty hacks just to try to get around it, and that's what our controller looked like in its initial state.
A
We have a meeting coming up with Eric Tune where he's going to try to talk us into doing this, and I wanted to make sure we got feedback from the community so that we could push back with him. I think I'll also try to get him to come to this meeting next week, so we can have that discussion in this forum as well and not just internally at Google. Or maybe we'll do that instead of having an intro meeting, because I'm not sure it makes sense to have both.
Okay, so the next thing on the agenda was a bug scrub and a review of alpha and beta blocking issues. If you have been following the repository, Rodrigo and Karen created a whole bunch of issues over the last week or so, and they all got tagged with four different milestones. The effort here was to try and say: what are the things we actually think we need to do?
A
That
would
block
us
from
calling
the
cluster
API
alpha
and
what
are
the
things
would
block
us
from
calling
it
beta
and
sort
of
starting
to
group
those
and
categorize
those
I
went
through
a
bunch
of
them.
Last
night
and
relabeled
various
things
I
had
some
comments,
but
I
was
hoping
in
this
forum.
We
could
spend
a
couple
of
minutes
going
over
the
alpha
blocking
issues
list.
A
There
are
last
I
checked
about
20,
open
issues
and
just
to
skim
through
those
briefly
get
feedback
from
people
about
whether
these
things
should
be
alpha
blocking
or
not
and
see.
If
anything
large
is
missing
from
the
set
because
would
be
nice
to
have
sort
of
an
agreed
to
side
of
this
is
what
we
think
is
alpha,
so
we
can
start
burning
that
list
down
and
have
a
realistic
time
frame
of
when
we
think
we'll
be
done.
G
Sure. We have machines, but things like wrapping up the work on machine sets we haven't been able to do yet. I think that got some traction last week, which is great; we want to get that finalized in terms of the API, along with some of the implementation that I have going.
A
The
one
thing
I'll
say
there
is
a
number
of
designs
for
machines
and
machines
and
machine
deployments
have
involved
classes
and
there's
another
issue
farther
down
the
Alpha
list
or
529
about
adding
machine
classes
to
the
API.
This
also
ties
in,
in
my
mind,
to
the
architecture
diagram,
to
try
and
sort
of
get
everybody
on
the
same
page
about
here
are
the
core
resource
types
and
here's
how
they
all
fit
together
and
I.
Think
if
we
have
an
agreement
on
that,
then
getting
a
machine
set,
API
definition
emerged
will
become
very
trivial.
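As a rough illustration of the kind of definition being discussed, and not the types that were actually merged, a MachineSet could follow the ReplicaSet pattern: a replica count, a label selector, and a machine template. The sketch below reuses the hypothetical MachineSpec from the earlier sketch; all names and fields are illustrative.

```go
// Hypothetical sketch of a MachineSet type following the ReplicaSet pattern.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MachineTemplateSpec describes the machines created when the set scales up,
// analogous to the pod template in a ReplicaSet.
type MachineTemplateSpec struct {
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              MachineSpec `json:"spec,omitempty"`
}

// MachineSetSpec is the desired state: how many machines, which ones this
// set owns, and what they should look like.
type MachineSetSpec struct {
	Replicas *int32                `json:"replicas,omitempty"`
	Selector *metav1.LabelSelector `json:"selector,omitempty"`
	Template MachineTemplateSpec   `json:"template,omitempty"`
}

// MachineSetStatus is written by the controller as it reconciles.
type MachineSetStatus struct {
	Replicas      int32 `json:"replicas"`
	ReadyReplicas int32 `json:"readyReplicas,omitempty"`
}

// MachineSet ties the pieces together as a namespaced API object.
type MachineSet struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MachineSetSpec   `json:"spec,omitempty"`
	Status MachineSetStatus `json:"status,omitempty"`
}
```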
G
Outside of that, there's the machine lifecycle state machine. There was a discussion, I think a few of us drove it a couple of weeks back, that I think we should finalize: how we're going to handle the machine lifecycle, what kind of states there are, and what's reasonable for a first implementation.
I think those are the main ones. Looking at this list, I think there are some other implementation parts, usability issues we've run into around cluster creation for example, that we're not necessarily working on right now, at least not for alpha, and we're not blocking on any other provider implementation. I think we only have GCP, because that's the one we've been working on. We would of course like one of the other implementations for beta, but that's also up for debate.
G
You
know,
if
you
feel
I
know,
there's
some
different
different
partners
know
that
have
been
working
on
a
the
blasts
or
the
no
providers,
and
if
we
feel
like
that's
something
we
should
add
for
Alfaro
going
to
be
able
to
do
it,
you
know
for
alpha,
wouldn't
be
able
to
image.
You
had
that
to
release
as
well
I.
A
I guess the only other thing on the list I would like to call out is some of the automation being put in place on the repository. We've noticed over the last few days that right now it's really easy to merge things that break the build. We want to make sure that the PRs that get merged are verified by the bots: that they build, the unit tests pass, and so on. So getting some of that set up.
F
I don't know if we've talked about this yet, and if we have, somebody just jump in and tell me. We had an internal call yesterday, and a lot of folks had a lot of questions about the differences and the different concerns between the cluster registry project and the cluster API that we're working on here. Also, I joined SIG Docs yesterday, and the cluster registry implementation of the cluster API object is on its way to being merged into kubernetes.io.
H
So I've been looking into the cluster registry and how it can use the cluster API. I don't have notes or designs right now, because I'm still in the early stages, but I do have an issue in the repo. If you have any thoughts, put them in the issue, or I'm on Slack, reach out. But pretty much, I think what we are looking to do is to allow registration of any cluster to any registry after the cluster is created.
A
I just pasted something in the chat that formatted terribly. This is from the challenges and open questions section of the KEP. The KEP is based on a doc that Jacob Beecham had originally authored with me, where we discussed the scope of the cluster API and whether it was going to be a single-cluster or a multi-cluster API. Where we landed was that it made sense for the cluster API to be single-cluster, because the cluster registry was the multi-cluster view on top of your clusters.
A
So
the
way
we've
been
thinking
about
it
is
when
you
want
to
list
clusters,
go
to
the
cluster
registry
to
figure
out
where
your
clusters
are
and
how
to
get
to
them,
and
then,
when
you
want
to
interact
with
a
single
cluster,
you
do
that
through
the
cluster
API.
That
is
part
of
that
cluster.
So
we
don't
I.
Don't
we
haven't
spent
enough
time
with
the
multi
cluster
sig
to
actually
figure
out
how
the
API
is
lined
up
to
make
that
work
correctly,
but
I
think
conceptually.
That's
that's
what
we're
thinking
right
now,
I
think.
H
There is a proposal that Brian Grant brought up to rename the cluster registry to something like an API server registry, which I think makes a lot of sense. I think there is also discussion about where exactly the cluster definition for the cluster API lives: whether it lives on the cluster registry's cluster object or whether it lives in our cluster itself. It's an open question.
A
Yeah,
that's
his
point
right
because
their
cluster
registry
I
the
last
time
I
looked
at.
It
was
effectively
a
way
to
share
cube
config
files
without
off,
and
so
it
was
a
basically
a
way
to
say
like
here:
are
the
API
server
endpoints
that
I
know
about,
and
it
wasn't
actually
a
list
of
anything
describing
clusters
except
for
their
location.
So
what
one
option
would
be
is
we
could
say
we
do
want
it
to
actually
be
a
cluster
registry
and
we
want
to
make
them
use
our
cluster
struct
and
our
cluster
types.
A
And
when
you
list
clusters,
you
see
the
desired
state
of
all
your
clusters.
You
have
an
issue
there
about
synchronization
between,
like
where
is
the
canonical
state
for
those
clusters
stored,
but
there's
one
thing
that
we've
considered
I:
don't
think
we
proposed
that
to
them,
because
we
haven't
thought
through
what
that
ramifications
of
that
would
be.
A
So yeah, that's a great point, Chris. That's something we've maybe been putting off a little bit, because it's sort of at the fringes in some ways: we want to get the core thing working before we try to plug it in everywhere. But if we totally ignore it, it's not going to get plugged in properly later, right? So thank you for prodding us about that, and as I said, we should definitely get those conversations moving.
C
Not the hot topic right now, but with these discussions: the machine API and the cluster API that we have in our repository are both in the same group, and originally they were in different groups. Is it possible to separate them? Because those things can block each other, especially with the versioning.
G
A public service announcement: we've been improving the documentation. Right now we have a couple of contributing documents; we're going to restructure that and have a single one at the root, and make things a little more streamlined. But I believe we have all the steps needed for someone to start contributing the major tools to the codebase, and those have been tested by someone with a brand new desktop, like myself, and they seem to be working, which is great.
G
But
if
you
have
any
kind
of
suggestions
there,
let
us
know
I
have
the
link,
don't
you
chill
Asia
as
well,
the
restructuring
all
the
documentation
and
having
run
instructions,
etc.
We
we
made
some
progress
on
on
actually
how
to
contribute.
You
know
the
whole
API
and
I
think
around
the
know
did
a
good
job
there.
Don't
you
explain
what
is
needed?
You
know
for
someone
with
a
new
provider.
A
Going once, going twice, sold. All right, thanks everyone for coming. I will try to convince Eric to come join us next week; hopefully he doesn't have a conflict at this time. If not, we may need to schedule another time to chat with him as a community group, and if that's the case I'll post something in Slack and send an email out to the mailing list to figure out when people are available. But hopefully it'll be at this time next week. Thanks everyone for coming, and we'll see you next time. Take care.