A
Good morning, everybody. This is the first official Cluster API Provider Nested office hours. This meeting will be recorded and posted to YouTube later, so don't say anything you wouldn't want shared with the whole world. And, yeah, to kick this thing off: I don't think we have anybody that's new yet, at least not on this call that I can see, but everybody knows everybody already, right? Cool. Now, would somebody mind actually taking notes in this agenda?

A
You all should have access as long as you're part of SIG Cluster Lifecycle. I just want to make sure that everything gets documented as well. Please add yourself to the agenda, or sorry, to the attendee list, so we can document who's all here. And, cool, I'm going to share my screen and start off with the first thing that's on our list, which is kind of just an overview. I just wanted to get something out in place that we could talk about architecture with.

A
Let's see, can you all see Chrome? Yep? Cool, all right, great. So, let's see. The first thing I wanted to talk about was: I pushed a proof of concept. This is not to get merged; this is more just to talk about the architecture, so I wanted to throw it out there for everybody to look at. The idea here is it kind of follows what Fei and I have been talking about for a little bit, and Chao.

A
And that's a nested control plane. In essence, we have a NestedControlPlane type which implements the same functions as the VirtualCluster CR, so things like the cluster domain, the templateRef, the PKI expiration, pretty much everything that you already implemented, just pulled over into a control plane type. And the idea here is the control plane type allows us to have a controlPlaneRef from the NestedCluster resource that we'll actually be implementing.
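The pairing being described might look roughly like the following. This is a sketch only: the API groups, versions, and field names are assumptions for illustration, not the actual CAPN API, and in upstream Cluster API the control plane reference may live on the Cluster object rather than the infrastructure cluster.

```yaml
# Hypothetical sketch of the NestedControlPlane / NestedCluster pairing.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4   # assumed group/version
kind: NestedControlPlane
metadata:
  name: cluster-sample-control-plane
spec:
  clusterDomain: cluster.local    # carried over from the VirtualCluster CR
  pkiExpiration: 8760h            # likewise pulled from the existing spec
  templateRef:                    # points at the template that provisions components
    name: default-template
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4  # assumed group/version
kind: NestedCluster
metadata:
  name: cluster-sample
spec:
  controlPlaneRef:                # the reference described above
    kind: NestedControlPlane
    name: cluster-sample-control-plane
```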
A
So this is the implementer of the NestedCluster, and how it's all actually done from a controller perspective is using multiple controllers, and this is where I'm interested to hear everybody's thoughts.

A
But the idea is, it goes and creates... I'm going to actually find the nested control plane reconciler. So the NestedControlPlane reconciler, in essence, what it's doing is: it goes through, checks to see that there's a templateRef already added to it, and fails if there isn't. I just added a bunch of random stuff in here, but then it goes through and it will create resources as it needs to, and instead of doing everything implemented in a single controller...
A
We actually have multiple resources that implement different, separate functions. So I kind of broke it down into the NestedPKI, which I have a note in here that we can bikeshed about the naming; I don't like that it's called NestedPKI. But the idea is, it goes and manages all the certificate management, similar to how kubeadm does all of the certificate management and how VirtualCluster does it, but not in a single controller. And the idea there is, you can hot-swap that controller.

A
If you don't want to be using the default implementation, say, for instance, you have your own roots that you need to be generating all your certificates from, and so on and so forth. It would go and create another resource, which would be the NestedEtcd, and again we can bikeshed about all of the actual naming of this, but the idea is the NestedEtcd could be just like a shadow resource, so it implements pretty much nothing in the spec.
A
One more... there we go. So it pretty much doesn't implement anything in the spec; all it has is status fields. In the spec we can implement things as we find parameters that we want to pass in as optional things to change, but the status actually manages how to get the addresses, for example. And so now we can have independent implementations. Say, for instance, somebody wanted to go and use, what is it, there's a new etcd operator that you could go and implement.

A
You could implement this CR going and reaching out and creating whatever that etcd cluster is, and then it just has to add in the IP address, or the host, and the ports, so that the next downstream object can go and use those things in, say, for instance, the API server flags. This is kind of the architecture that we were talking about before. I wanted to throw this out there, kind of just open up the conversation about this, and then, yeah, that's about it.
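A minimal sketch of the "shadow resource" idea just described: the spec stays essentially empty, and whichever controller implements it only fills in status fields that downstream components (such as the API server flags) consume. All names here are illustrative assumptions, not the actual API.

```yaml
# Hypothetical shadow resource for etcd; spec is deliberately empty.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4   # assumed group/version
kind: NestedEtcd
metadata:
  name: cluster-sample-etcd
spec: {}                  # optional knobs can be added later as needs are found
status:
  ready: true             # gate the next phase checks before moving on
  addresses:              # what the downstream API server consumes
    - host: cluster-sample-etcd-0.cluster-sample
      port: 2379
```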
C
Yeah, I have a few comments. So first, Computer, the nested control plane, yep, yeah. By the way, I didn't have a chance to look at it yet.

A
No worries, I pushed it about five minutes ago, so I totally understand. So I want to just open the conversation up to any thoughts and feelings about it, and then, yeah, move forward from there.
A
Yeah, so there's a couple of things in here. By implementing each piece as a separate component, we're able to replace any of them that we need to. So, for example, if you didn't want to use the current built-in StatefulSet for etcd, you could go and implement a controller that goes and creates a full etcd cluster that it could actually run against, and that cluster can have its own lifecycle built around it.

A
So, things like: we could implement backup and recovery into that controller, for example. Same with the API server and controller manager. Right now, in the implementation, everything has to get updated all at once, and, as we all know, etcd, the way that it's implemented currently, is completely ephemeral.

A
So the second we start updating this, you lose your entire cluster state. And so the idea is, if we can separate all those components out, you'd be able to independently update the API server and then roll out a new version of the controller manager, and etcd doesn't even have to be touched. Same with, as we get into the long-term management of these clusters: the PKI is eventually going to expire, we're going to have to update those, and so this provides individual phases that we can update.
C
Yeah, so, okay. Can you go back to the NestedControlPlane, so just that CRD? Actually, yes, I think this is... let me think about it. So, workflow-wise, this should be the entry point?

A
You have to create them both at the same time, and the idea would be that clusterctl can actually do the scaffolding for that.
A
So, if you look at, if you do the Docker implementation for Cluster API, like their actual quick start, what it does is you end up running, I forget the exact command, but it's clusterctl config create with a provider flag or something like that, and it will generate all the manifest files. You can write that to a file, and then you can just kubectl apply it, and then everything gets created behind the scenes for you.
A
And so that's where it wouldn't fit with the Cluster API model, to actually use the other objects, like the health checking that's built into Cluster API.

A
So if you have a NestedCluster alone, it's not going to know how to go and connect into the control plane to do things like control plane health, or deploy add-on resources, like what you wanted to talk about after this, which is the ClusterResourceSet. And so you have to have the base level, which is the Cluster, and then the Cluster has a reference to something that goes and implements it. Okay.
A
Every cluster is implemented using, if it's not EKS or AKS, it's implemented using the kubeadm control plane provider, and so we're, in essence, creating a nested control plane provider.
C
Yeah, but so to me, I think it's just the loose-coupling versus tight-coupling thing, because if you put the ref in this spec, it's pretty much just loose coupling, because the ref can be anything, right? Exactly. Embedding the type in another CRD is tight coupling, because you said this component has to follow this CRD. This is my understanding; maybe I'm wrong. But are you saying that we could have others? So, even for the NestedCluster, we could probably have another type of control plane design, or, well...

A
This is the only one, technically, but in Cluster API they have the controlPlaneRef, or the control plane providers, which can be any different control plane provider, and so we're implementing ours, which does a nested implementation of it. So...
C
Okay, I think it's okay, we treat this as, you know, loose decoupling of these two things. So then that's fine. I just have a few comments. For example, in this design, I don't think this NestedControlPlane spec by definition has to map exactly to the VirtualCluster object, because, to my understanding, as I said in a previous meeting, I want to decouple this, so the only thing here is the control plane. So there's no API to tell how to use it, basically.
C
...needs to be removed, and let me see.

C
Find the namespace... yeah, okay. So what is the templateRef? Does it describe the, is it the ClusterVersion, the previous ClusterVersion CR, the templateRef?
A
Yeah, so the templateRef is an object reference that references a NestedTemplate, and a NestedTemplate looks like what we were originally talking about. There it is. It basically has components, and components have names, and so you name your components etcd, controller-manager, and apiserver, and it will provision those three resources for you. And that's what this, in essence, is: a mirror, or re-implementation, of the ClusterVersion CR.
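The template shape described here might be sketched like this. The group, version, and field names are assumptions for illustration, echoing the ClusterVersion CR it mirrors rather than reproducing the actual CAPN API.

```yaml
# Illustrative sketch of a NestedTemplate with the three named components.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4   # assumed group/version
kind: NestedTemplate
metadata:
  name: default-template
spec:
  kubernetesVersion: v1.19.4   # the "golden state" for one Kubernetes version
  components:                  # named components the reconciler provisions
    - name: etcd
    - name: controller-manager
    - name: apiserver
```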
A
That's a really good point. The idea here is it's supposed to be like the golden state for deploying a new Kubernetes cluster, so whatever is supposed to map to each version of Kubernetes, from an etcd perspective, or like the right state for this. And so the idea is it's a template that provisions the entire control plane.
A
Yeah, so I did a really loose version of that, where you basically can't version down, you can only version up, and you can only bump patch versions, like we all talked about a couple of weeks back before we had an official meeting. So basically you can update a patch version, which will roll out a new release of it, but you can't change the minor version, as it currently stands.
C
Okay, that's pretty cool. Okay, go back up to the... yeah, the control plane, yeah, okay. Even the PKI stuff: if we have a controller, should this be moved out, or what do you think?
A
That's
yeah,
that's
one
of
the
interesting
things
I
I
think
it
could
be
passed
through
and
I've
been
trying
to
figure
out
if
we
want
to
make
it
so
that
we
have
I've
been
struggling
a
little
bit
to
be
honest
with
the
crd
explosion
here,
because
the
amounts
of
crds
that
behind
the
scenes
are
actually
getting
created
is
a
little
bit
much
because
if
you
do,
if
the
way
that
it's
implemented
currently
is
it
creates
a
nested
pki
like
shadow
crd,
that
just
has
status
now
technically,
this
could
have
the
pki
expiration
on
it.
A
But
that
means
that
up
front.
You
need
to
create
that
cr
from
you,
you
as
a
human
need
to
create
that
cr
versus
this
orchestrating,
creating
that
cr
for
you.
If
that
makes
sense,.
C
I think this is a broader question, actually. If you remember, we talked about how to upgrade those components, right? You know, we can just let the individual operators manage the version on their own. We don't care about the version; we only collect them. We only have entry points to find all of them, and we have an entry point to do the upgrade. Basically, we don't implement it.

C
This is one option, right? Another option is, you know, we take full control: everything needs to go through one single entry point, and then our controller, our new controller, does all the upgrades for you, all right? Yeah, even if we don't do the upgrade, we can call out to the other operator to upgrade. But the question is: where is the entry point to do this?
C
This
is,
I
think
this
problem
can
be
combined,
for
instance,
but
I
think
we
discussed
before
right
if
we
want
to
remember
the
template
for
template
ref,
if
you,
if
you
want
to
change
the
your
new
version,
we
just
hard
code
to
give
another
make
that
each
template
is
immutable.
If
we
want
to
change
emulator,
we
just
change
the
template,
wrap
to
a
new.
A
Yeah, keeping it all in a single one makes it so that we can at least make the initial implementation much simpler, so we don't have to create as many CRs out of the box. Because if we do it so that PKI is managed through the NestedPKI (again, I don't agree with the name, but just use that for right now), if we do that, where we implement the parameters for PKI on there, unless we had... I guess we could just have sane defaults, but to do that...

A
...we'd generate each one of these CRs with the defaults, which is totally possible. At the end of the day, we're just scaffolding those CRs up front, versus letting this controller go and orchestrate creating everything for you. Interesting. Let me see, so another...
C
Should we put cert-manager in the template? So, you understand what I'm talking about: currently our template is only for the control plane, the master components, like etcd, the controller manager, and the API server. Should cert-manager be part of it?

A
Oh, oh yeah. It's an interesting idea, yeah, it's an interesting idea.
A
I
know
I've
talked
with
james
munley,
who
used
to
work
on
well
still
works
on
cert
manager,
about
that
it
would
be
interesting
to
see
if
we
could
use
that
to
do
to
do
all
the
generation
from
the
super
cluster
basically
run
it
outside
of
the
outside
of
the
nested
control
plane
and
use
that
to
do
csrs
against
to
create
for
this.
For
these
clusters,
yeah.
C
But
you
will
that
be,
maybe
it's
it's
too
much
or
I
don't
know
if
this
is
a
reasonable
request
or
not
or
because.
C
C
Right,
they
come
so
so
you
are,
we
are
assuming
we're
using
whatever
sort
of
managing
pseudomonas
is
in
supermaster,
you'd
creation
kind
of
thing.
I
don't
know
it
is
reasonable
to
consider
the
individual
certification
manager
for
each
tm
or
yeah
that
work.
I
think
it
still.
B
But is this safe, to disclose all the nested clusters' certificates in a central place?
B
Yeah, so everyone can find someone else's certificates. Or do we have a security measure to make sure that only a tenant can have its own certificates, but not everyone else's?
C
I think the only requirement is that tenants need to be able to distribute the certificates to their users. This is the only thing, and I don't think that's an out-of-the-box use case for now, but from a design perspective maybe we can think about where that would be useful or meaningful. At least for now, at a very early stage, I don't think people will do that right away, but who knows? If they want to generate certificates for their users, what should they do? Do they implement their own, or can we follow up?
A
I
think
that
is
a
good
call.
I
just
added
a
note
on
there.
We
can
bring
in
james
as
well,
for
this
there's
definitely
some
stuff
in
here
that
we
could
do
from
what
ciao
originally
had
implemented.
There's
a
couple
things
in
here
that
we
definitely
can't
implement
that
way.
A
If
I
drop
drop
down
in
here,
things
like
generating
the
the
cube
configs
and
things
like
that
for
the
controller
manager
and
the
api
server
we'd
have
to
implement
those
separately,
but
I
think
we
could
probably
at
least
generate
the
the
root
cas
and
the
api
servers
and
fcd
certificates
and
keys.
C
Yeah,
so
I
think
I
think
this
may
be
a
broader
question
for
encounter
class
api.
So
do
they
do
this
things
like
what
I
said.
Maybe
it
is
reasonable
for
them
because
they
actually
they
are
actually
using
class.
If
you
have
to
develop
a
kind
of
supermaster
kind
of
thing
you
think
about
that
way.
It
is
good
to
include
a
certain
manager,
but
for
us-
maybe
maybe
it's
too
much-
maybe
waste
too
much
resources,
but
maybe
next
video.
If
you
want
to
do
that
later,.
A
The management cluster that provisions these technically uses cert-manager out of the box to do things like set up the webhook TLS, but I don't believe it uses it for any of the actual downstream cluster creation.
C
Okay,
I
understand
it,
but
it
makes
sense
for
other
you
know
aws,
they
do
have
their
own
intel
certification
system
to
do
that,
you
spread
the
certificate,
so
maybe
right
for
the
manager.
But
anyway
for
us
we
republic,
the
only
thing
we
leverage
upstream
is
a
manager
or
we
do
this.
Everything
by
hand
like
like
exactly
in
the
screen:
yeah,
okay,
so
yeah
other
than
that.
C
So
for
for
lcd
part,
let
me
check
so,
but
at
the
ct
part
you
said
there
is
nothing
in
the
spec,
so
we
don't
need
to
specify
the
version
or
what
do
you
think.
A
I
think
we
definitely
can.
I
left
it
blank
for
right
now,
so
we
can
add
to
it
as
we
go,
I'm
just
trying
not
to
trying
to
create
like
the
lowest
the
the
simplest
implementation
first
pulling
over.
What
we,
what
we
implemented
with
with
stateful
sets
as
the
as
the
first
implementation
and
then
changing
from
there
is
the
idea.
C
Okay,
so
let
me
just
make
clear
so
the
lcd
part
for
now
you
you
said
you
know,
we
would
rather
make
the
flash
making
sense
that
if
people
want
to
hook
to
their
own
every
operator,
they
just
do
the
glue
code
by
themselves
and
plug
it
into
the
stack.
So
for
us,
do
we,
because
we
are
now
you
we
are
currently
using
you
know,
put
every
storage
in
the
memory
is
slowly.
Should
we
keep
that
away
or
we
need
to
have
some
humiliation
that
use
actual
storage.
A
And
anybody
else,
please
add
your
thoughts
in
here.
My
theory
is
that
we
should
do
the
first
implementation,
similar
to
the
way
that
we
have
it
structured
with
vc,
because
it
does
work
and
it
proves
out
the
model
now,
the
underlying
controller
that
implements
it.
The
idea
there
for
in
between
passing
between
each
one
of
these
so
say,
for
instance,
nested
pki
down
to
the
nested
xcd
to
the
nested
api
server
to
the
nested
controller
manager.
A
The
idea
is
to
move
from
one
phase
to
another
in
the
actual
provisioning
lifecycle,
implement
each
one
having
a
ready
function
and
the
ready
function
is
what
tells
it,
and
so
the
controller
actually
what's
actually
implementing
it,
isn't
quite
as
important
out
of
the
box,
because
the
idea
there
would
be
if
you
created
an
ncd
operator
that
was
full
functional,
because
we
shouldn't
be
implementing
that.
That's
something
that
other
folks
have
already
done,
and
we
should
just
leverage
open
source
tools
at
that
point.
So
what
we
would
have
is
you
could
re.
A
You
could
create
a
new
nested,
etcd
controller.
That
goes
and
creates
the
cr
to
go,
create
whatever
operators
behind
the
scenes
and
then
your
nets
at
cd
controller.
All
it
has
to
do,
is
update
the
status
fields
and
set
it
to
ready
and
set
the
host
names
and
ports
so
that
the
next
phase
can
move
on.
So
I'm
thinking
that,
like
lightest
weights,
don't
even
don't
even
worry
about
making
sure
that
there's,
there's
persistent
storage
in
here,
keeping
it
the
same
way
that
it's
implemented
with
in
memory.
A
Yeah, there is one that's started in here, but it has nothing implemented yet. The idea was that it's going to grab the template attributes from the NestedTemplate and be able to create whatever it needs. So, in that implementation, it's a StatefulSet: it's going to look for a StatefulSet, and then it's going to mutate its status.
D
I think it looks good. I'm thinking about: we're going to use Kubebuilder to implement all these three controllers, right? The etcd controller, the API server controller, and the controller manager controller, right?
A
They each have their own CR. That's something that I've been thinking about as well: whether we want to just update the status on a single CR, being the NestedControlPlane, or have an individual CR for each of the components and manage the status across those CRs, because that's totally feasible as well.
A
So
if
the
nested
control
plane
under
status,
besides
just
having
cluster
namespace
and
its
own
ready
field,
if
it
had
say,
for
instance,
etcd
hosts
and
it
managed
the
std
host
there,
and
then
we
had
something
like
etsy
ready
as
an
attribute.
We
could
technically
do
this
using
a
single
cr
and
have
multiple
controllers.
Reconciling
the
same
nested
control,
nested
control,
plane,
cr,
it's
kind
of
a.
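The single-CR alternative being described might carry a status like the following, with each controller owning its own slice. The field names are assumptions made up to match the discussion, not an actual API.

```yaml
# Sketch of one NestedControlPlane status reconciled by multiple controllers.
status:
  ready: false               # overall gate; true once every component is ready
  etcdReady: true            # owned by the etcd controller
  etcdHosts:
    - cluster-sample-etcd-0.cluster-sample:2379
  apiServerReady: false      # owned by the API server controller
  controllerManagerReady: false
```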
C
So, by the way, let me think about it. So, a question is: how does the NestedControlPlane know what the full list of sub-components that need to be ready is?
A
They
don't
so.
The
idea
was
that
it
was
going
to
create
out
of
the
box.
It
was
going
to
create
from
the
template,
always
create
xcd,
controller
manager
and
api
server,
just
every
every
one
of
those.
So
in
the
template,
the
component,
config
or
sorry.
The
component
add-ons
has
a
name
attribute,
always
name
them:
etcd,
controller
manager
and
api
server.
C
I
see
so
so
they
probably
have
to
so
you're
saying
that
they
have
to
check
individual
templates
back
to
see
everything,
at
least
in
the
10
minutes
back,
but
we
want
to
extend
it.
We
just
use
standard
templates
back.
It's
the
right
format,
saying
that
we
add
another
azone
controller.
We
check
that
okay.
A
Exactly
and
then
we
can
do
something
where
we,
where,
like
as
we,
build
out
those
things
and
realize.
Oh,
we
need
to
make
sure
that
we
support
this
attribute
as
well
say,
for
instance,
we
we
implement
some
crazy
scheduler
on
top
of
it.
That
I
don't
know,
does
something
weird
you
could
say
you
could
have
it
do
an
injectable
to
check
to
see
if
it's
defined
and
if
it's
defined
it
goes
and
deploys
it.
But
if
it's
not
it's
it,
it
doesn't
deploy
it
kind
of
thing.
C
Yeah,
but
I
see
lcd,
definitely
we
should
go
that
way,
because
the
actual
implementation
is
it's
very,
you
know,
user
specific,
but
for
the
controller
manager
and
the
apso
do
we
need
to
do
that
because
that
we
probably
have
to
introduce
another
cr,
so
we
probably
have
too
many
crs.
When
is
the
odds
you
represent?
A
I
think
I
see
some
benefits
of
doing
it
that
way
where,
if,
if
it
needs
to
get
updated,
it's
pretty
easy
to
touch
it
and
say
make
sure
that
you're
using
say,
for
instance,
when
we
do
an
update.
It's
easy
to
say
is
this
cr
in
the
right
in
in
the
right
states,
because
we
have
it
at
the
api
server
and
we're
not
going
to
update
the
api
server
and
the
controller
manager.
A
At
the
same
time,
for
example,
we
could
do
a
rolling
update
of
this
api
server,
cr
gets
touched
and
that
triggers
a
reconciliation
of
just
the
api
server
resource.
We
can
technically
do
that
with
with
a
single
resource,
a
single
overarching
control,
plane,
cr2.
C
That
I
understand,
but
my
question
is:
should
we
need
to?
Let
me
try
to
put
this
way.
Should
we
provide
the
exactly
solution
to
upgraded
api
server,
as
I
said
so,
basically,
you
have
you
have
a
template,
so
it's
easy.
We
actually
we
put
it
in.
We
do
it
in
the
way
that
unless
the
default,
you
can
do
it
upgrade.
Otherwise
you
let
somebody
whenever
you
use
your
dark
weight
right,
yep,
okay,
server,
maybe
another,
the
one
thing
that
you
can
do.
C
We just have one CR, which, say, serves as the CR, but we implement the actual controller, and the controller is responsible for changing the version. Let's...
A
I think that's the right way of doing it. It's exchangeable at that point still, so, as long as it still supports the ready attribute, somebody could turn off the... the idea is basically make it so it's pluggable, if somebody goes and implements a different way of managing control planes.
A
Planes
say,
for
instance,
because
there's
lots
of
lots
of
folks
out
there,
as
we've
seen
from
the
multi-tenancy
working
group
like
the
loft
folks
or
the
kiosk
folks,
they're,
all
implementing
different
ways
of
doing
even
arctos,
different
ways
of
doing
these
virtual
clusters
and
they're
using
things
like
k3s
to
provision
it
now.
Technically.
A
If
we
do
this
right,
we
could
support
those
implementations
being
wrapped
into
cap
n
as
long
as
we
make
it
so
that
if
you
turn
off
the
api
server
and
controller
manager
and
ncd
controllers-
and
you
write
your
own-
that
goes
and
provisions
whatever
you
have
at
the
end
of
the
day,
it's
just
a
spec.
A
C
...having a default, and we still need to provide a complete capability to upgrade the default, so that's what we do. And from the design perspective, I think it's whatever, because it's a templateRef. So what do we need? You need to tell me what the ready flag is and where to check the ready flag, and the centralized controller just keeps polling that ready flag. If every component is ready, we mark ourselves ready and tell...
D
So
it
sounds
like
if
we
have
some
other
other
providers
say
if
apple
have
a
have
a
pool
of
api
server
or
control
plan.
That
is
ready.
Then
we
can
use
this
api,
but
behind
this
api
we
are
actually
calling
apple,
maybe
open
api
or
something
and
then
provisioning
and
apple's
internal
controller
plan,
and
that
one
will
be
used
by
this
pattern.
Master
right.
If
we
do
the
api
server
design
correctly,.
A
Okay,
yeah
exactly
yeah
at
that
point
it
could
be,
and
it
would
be
the
same
thing
for
what
you,
what
you
all
are
currently
doing,
where
you're
calling
out
to
the
all
young
control
plane
to
go,
create
your
control
planes,
it
would
be.
You
could
re-implement
the
that
control
those
controllers
specifically
to
to
talk
to
whatever
public
api
or
or
internal
apis
yeah.
A
You could still implement a single controller, where, when one of these resources comes through, the single controller is listening for all three of the etcd, controller manager, and API server resources and just goes and does everything for you. Or it doesn't even have to listen for those things, if you don't want it to, at that point. Next...
C
Because it's not here (I know you haven't done that), but I think this part is kind of clear to me, at least to me. I don't know if you other guys have other comments, but I'd like to see the cluster spec. Okay, we still need to be very... or, we'd better document the exact workflow.
C
If
I
am
an
administrator,
what
is
the
executive
workflow
to
do
the
provision
you
said
we
need
to
create
a
qcr,
then
we
have
an
order
problem
or
what
do
we
expect
so
without
moving
controller,
I'm
clear.
So
if
I
create
a
control
panel
cr,
I
can
just
as
a
client,
I
think,
just
keep
holding
the
ready
state.
If
it
is
ready,
then
it
is
ready
how
about
the
next
cluster
cr,
what
are
as
a
client?
What
should
we
need
to
watch
about
that.
A
Gotcha
yeah,
the
the
implementation,
I
believe,
is
pretty
simple,
so
it
basically
implements
the
control
plane
reference
which
points
to
this.
This
control,
plane,
cr
and
then
the
status
fields.
It
has
the
conditions
and
then
it
has
a
ready
status
and
the
ready
status
is
what
tells
tells
cluster
api.
I
can
move
on
to
do
whatever
I
want.
A
So,
if
I
wanted
to
go
and
deploy
the
cluster
resource
set
into
this
new
created
control
plane,
it
has
an
end
point
that
it
that
sorry,
so
it
has
ready
conditions
and
a
control,
plane,
endpoint,
and
it
can
now
hit
the
control,
plane,
endpoint
and
go
and
create
whatever
other
like
add-ons
need
to
be
deployed
into
the
cluster.
I
see
so
basically.
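The client-visible contract being described, ready conditions plus an endpoint, might look roughly like this. The exact layout is an illustrative assumption; in upstream Cluster API the endpoint and conditions are split across the Cluster and control plane objects.

```yaml
# Sketch of what a client polls for before moving on to deploy add-ons.
status:
  ready: true                      # the gate Cluster API waits on
  controlPlaneEndpoint:            # where add-on deployment will connect
    host: cluster-sample-apiserver.default.svc
    port: 6443
  conditions:
    - type: Ready
      status: "True"
```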
D
So, yeah, I have a question: what about if we wanted to install some actual add-on, like CoreDNS or something like that? Would that happen in the Cluster CR or in the cluster control plane CR?
A
It does a reference to ConfigMaps, and those ConfigMaps can be any type of resource that you want to deploy into the cluster. So, say, for instance, you wanted to deploy CoreDNS; maybe you want to deploy OPA or Gatekeeper or any of those things.
A
You
add
those
into
the
super
cluster
as
a
configmap
and
then
when
you,
when
you
do
the
initial
bootstrapping
for
the
cluster,
so
say,
for
instance,
when
you
create
the
virtual
cluster
cr
and
the
virtual
control
plane
cr,
you
can
create
a
cluster
resource
set
cr
that
points
to
all
of
those
add-ons
that
you
want
to
deploy
into
your
cluster.
And
now,
when
the
virtual
cluster
cr
comes
to
a
ready
state
and
you
have
a
control,
plane
endpoint,
it
will
loop
through
the
resources
in
here
and
deploy
them
into
your
cluster.
For
you.
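A ClusterResourceSet along the lines described would look something like the following. ClusterResourceSet is a real (experimental, at the time) Cluster API resource; the label and ConfigMap names here are made up for illustration, and the API version may differ by release.

```yaml
# Add-ons stored as ConfigMaps in the management ("super") cluster get
# applied to matching workload clusters once they are ready.
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: cluster-sample-addons
spec:
  clusterSelector:               # which clusters receive these add-ons
    matchLabels:
      cluster-name: cluster-sample
  resources:
    - name: coredns-manifests    # ConfigMap holding the CoreDNS YAML
      kind: ConfigMap
    - name: gatekeeper-manifests
      kind: ConfigMap
```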
A
For the resource set? No, that actually gets implemented because it knows when the nested cluster is in a ready state and has a controlPlaneEndpoint; it can deploy those into that nested control plane, because it also knows where the kubeconfigs are. I understand that, but my...
C
...question is: who actually does the provisioning? Because this tells you what you provision, right, but not who does it.
A
Because,
oh,
oh
sorry,
we
jumped
a
little
bit
forward.
So
when
you
deploy
all
these
things,
you
have
to
deploy
the
cluster
api
controllers
as
well,
which
is
what
operates
those
things.
So
the
cluster
api
controllers
are
deployed
into
your
management
cluster,
and
then
you
have
a
provider
which
implements
our
implementation,
and
so
there's
the
there's
a
whole
set
of
controllers
that
will
manage
the
life
cycle
of
these
resources.
A
Now, that being said, because of the way that we built this, we can technically do this a little bit more funky if we don't like this implementation. I would say that we should go down this path, because I think it's going to be the most supportive of everything else that CAPI is doing. But if we don't like this, technically we do have a weird patch around ours, because of the NestedTemplate.
A
If you wanted to, if you need like a golden state for every single control plane, there's always that out of the box. I'd love to support everything that CAPI implements, just so we don't have to do it, because they're going to have health checks against those add-ons and things like that, and make sure that, if an add-on that you told it about in the ClusterResourceSet isn't working properly, it'll go and reapply it to the cluster and try to bring it back up to the right state, which I think is a really nice benefit that we don't have to implement.
C
I
see
I
see
I
I
think
I
think
this
has
a
has.
A
value
has
a
point
as
a
upstream
solution.
I
think
it
is
formalizing
the
way
that
how
do
the
provisioning
has
on
you
just
give
me
just
give
me
a
counter
play
ref
then
probably
do
the
rest
of
the
thing.
The
only
thing
is
that
spec
maybe
looks
a
little
bit
simple.
You
only
give
a
list,
as
you
probably
have,
can
talk
with
upstream
guys.
If
we
want
to
enhance
that
resource,
I
mean
how's
that
controller
like
yeah.
C
This
brings
out
my
my
very
initial
proposal
or
topic
to
you
guys
to
say,
because
I
will
internally
have
requirements.
Even
we
have
a
resource.
I
want
to
have
an
order
of
how
that
all
of
the
provisioning
of
resources,
maybe
the
the
right
way,
is
actually
point
to
the
resource
back.
You
know
the
class
of
resource
back
this
crd
right.
Maybe
this
one.
You
want
to
support
the
order
kind
of
thing
and
that
controller
handle
the
rest.
A
That's interesting, yeah, that's a really interesting use case. Is that so that you could have things like: you always deploy a cert-manager; then, after you have cert-manager up, you can deploy webhooks; and then, once you're able to deploy webhooks, you can deploy things like OPA into your cluster; and then, oh wait, you need to have CoreDNS first, because nothing's going to work without CoreDNS. Is that kind of the idea?
C
They have dependencies, so, sure. So either we change the order or they change the order, because if we don't change the order, every container needs to change its logic to wait for something else. It all depends on the implementation, right? So if you have some legacy controllers, they require the dependency; for the others, we as infra can support ordering. This is just a bonus.
A
Could do. I wonder if, so, I believe, and don't quote me on this, but I believe one of the main things where this original object came from, for CAPI, was, if you're familiar with the cluster add-ons group: the cluster add-ons group has been building operators that manage the life cycle for resources within your cluster.
A
So you can deploy a CoreDNS operator, which goes and deploys CoreDNS into your cluster, which is a little bit weird, because now you have an operator that's creating another deployment in your cluster. Same thing for OPA, or Gatekeeper, or whatever other components you could deploy into your cluster. I wonder if that would be the implementation, because I think what we can have here, in your implementation, is a CR that goes and creates that ordered set.
A
So, instead of having to manage the order in a physical list, you could have it so that when this deploys, the operator goes and checks to see if everything else is there, and doesn't deploy until it's in its proper state. It's basically telling it the desired state and then letting Kubernetes reconcile it.
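The reconcile-style ordering described here, where an operator re-checks prerequisites on every pass instead of walking a fixed list, can be sketched in plain Go. The dependency graph and the function below are illustrative inventions for this discussion, not part of any Cluster API code:

```go
package main

import "fmt"

// nextToDeploy returns the add-ons whose dependencies are all ready,
// mimicking an operator that re-checks prerequisites on each reconcile
// rather than following a hard-coded ordered list. The graph used in
// main (coredns -> cert-manager -> webhooks -> opa) is an assumption.
func nextToDeploy(deps map[string][]string, ready map[string]bool) []string {
	var out []string
	for addon, reqs := range deps {
		if ready[addon] {
			continue // already running, nothing to do
		}
		ok := true
		for _, r := range reqs {
			if !ready[r] {
				ok = false // a prerequisite is still missing
				break
			}
		}
		if ok {
			out = append(out, addon)
		}
	}
	return out
}

func main() {
	deps := map[string][]string{
		"coredns":      {},
		"cert-manager": {"coredns"},
		"webhooks":     {"cert-manager"},
		"opa":          {"webhooks"},
	}
	// First reconcile: only coredns has no unmet dependencies.
	fmt.Println(nextToDeploy(deps, map[string]bool{}))
	// Later reconcile: coredns is up, so cert-manager becomes deployable.
	fmt.Println(nextToDeploy(deps, map[string]bool{"coredns": true}))
}
```

Each reconcile converges one step closer to the desired state, which is exactly the "declare it and let Kubernetes reconcile" behavior described above.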
C
Yes, you can listen to everything, right. The other way, you just make each one a single element and you create a bunch of CR instances. That's definitely doable, but, I mean, from the user perspective, I feel it's more reasonable that you can just have a central place to kind of register everything, and somehow you tell them the order; so if they specify orders, the controller will honor the order. So I don't know, maybe. I don't know if you can.
A
There is a whole design doc on ClusterResourceSet, if I recall correctly, in here under experimental, yeah. So, if you're not familiar, CAPI has its own form of KEPs, and yeah, this was the KEP for ClusterResourceSets. I have not fully read through this, so I might be wrong in the way that it's structured, but the idea there would be: oh, if you're going to deploy Calico, you might need to do this, and you need to deploy it once, for example, into your cluster.
A
Right, yeah, I agree, I think it should be, and then, yeah, using this to deploy anything else we need to in the cluster.
C
Good, I think, yeah, okay. So we have about eight minutes. Should we list things to do, or?
A
Sure, yeah, let's do that.

A
Cool, yeah. How do you want to handle this?
A
It sounds like I should go and implement, or somebody on our side of things should implement, what the nested cluster looks like, so we can actually look at that. And definitely finish out the proof of concept, so that it can deploy a control plane into a cluster. That, I think, is one.
C
So I think, I think you guys have a little missing piece from this: the controller's CRD, yeah.
C
Naming, yeah, that's a good naming. We need to define the design and naming; I wish we can do it this week. Controller-wise, I think we can divide it: there's the nested control plane, that's the CR, that's the cluster, and that's the control plane. Maybe you guys can handle it, or you guys can handle that, including the CRD and the controller. And I think, for the vc-manager, I think we can take out the current three: the API server, the controller manager, and the etcd.
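As a starting point for that naming discussion, the three component CRs might be sketched as below; every group, version, and kind name here is a placeholder proposed for discussion, not a decided name:

```yaml
# Placeholder names only: none of these kinds, groups, or versions are
# finalized. The idea is one CR per control plane component, each owned
# by its own controller and referenced by a parent nested control plane.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: NestedEtcd
metadata:
  name: cluster-sample-etcd
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: NestedAPIServer
metadata:
  name: cluster-sample-apiserver
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: NestedControllerManager
metadata:
  name: cluster-sample-controller-manager
```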
C
That's three CRDs with the current default behavior; there's another patch here. Behavior-wise, it's very similar to the current cluster version and vc-manager, but I believe it is mostly a code restructuring, kind of a workflow restructure. Functionality-wise, I think, given how it was implemented, maybe, if you can get over this problem.
C
Yeah, I think these two things are pretty separate. From my perspective, you can do them in parallel; there's no strong dependency right away. As I said, you know, you guys, we are another provider: we just set a ready flag, and you guys are just consuming that ready. Yep, yep, cool, sounds good, yeah. Okay, then, yeah, please, Chris, you just start the issue. Let's talk about the naming in the issue.
A
Sounds good. I'll start something after, outside of this meeting, and then drop it into the CAPN Slack channel.
A
Cool, I'll make a couple of other issues about this, so we can track against actual GitHub issues, and then drop some links into that channel as well.
A
Thanks, everybody. See you next week, at 10 a.m. next Tuesday.