A: Cool. Good morning, everybody. This is the November 10th edition of the Cluster API Provider Nested office hours. Today we have a short agenda, and before we get started, I think there's only one new person here: would you like to introduce yourself?
A: Cool. The big agenda item today is that Charles put in a PR this morning for the updated design doc, so you're going to take it over. You should be able to screen share now.
C: Okay, sure, let me try to share.
C: Okay, great. So I'm going to go through the proposal I just submitted.
C: In this proposal we worked out how we are going to use all these nested CRs to create the actual components for the NCP. Before we start, I would like to explain some terms I'm going to use in this proposal.
C: All these terms need some further discussion. We wanted to add them to the glossary, but for now I'm just using them temporarily for this proposal. The first one is the super cluster, which is the underlying cluster that manages the physical nodes; the super cluster will run all the pods. The second one is the nested control plane (NCP).
C: The next three are the nested CRs we're going to talk about. The first one is NestedEtcd, which contains the etcd information required to create the nested etcd. The next two are NestedKAS, which is the nested kube-apiserver, and NestedKCM, which is the nested kube-controller-manager. Both of them contain the information required to create the kube-apiserver and the kube-controller-manager for the NCP. I also devised two new terms.
C: There are two user-facing CRs: the first one is the NestedControlPlane (NCP) and the second one is the NestedControlPlaneTemplate (NCPT). Users are not supposed to create the NestedEtcd, NestedKAS, and NestedKCM directly. The standard process will be: first, create a NestedControlPlaneTemplate; then a user can create a NestedControlPlane CR. One NCP CR needs to refer to an NCPT CR, and then the NCP controller will create the rest of the nested CRs: the NestedEtcd, NestedKAS, and NestedKCM.
C: The first requirement is that they have to be portable, so the CRDs should hold information that is required by different component providers. Sometimes a user may want to use the etcd operator or the etcd cluster operator to provision their etcd component. To this end, the CRDs need to hold enough information for different component providers.
C: The second requirement is that the CRDs need to be customizable. We should allow users to customize each component of the NCP: for example, to specify the image of the component, the component version, and also the command-line options, since sometimes a user may want to turn off some of the controllers inside the controller manager. The second goal is that we want to define a standard process of creating the nested components for the NCP, regardless of which component provider is specified by the user. We will also present two examples of creating the NCP. The first is native, which means we will create all the components on the cluster as pods. In the second example, we will show how we can still create the kube-apiserver and the KCM natively, but use a third-party provider to create the NestedEtcd component.
C: At the same time, end users also want to customize NCP components, for example by specifying their own command-line options. Therefore, we define a new struct named NestedComponentSpec, which contains the common information required by different providers as well as customized information specified by the end user. Each nested CR, that is the NestedEtcd, NestedKAS, and NestedKCM, will have this NestedComponentSpec. So what's inside the NestedComponentSpec? First, we have a CommonSpec that defines the version and the channel of this component.
C: The next field is the number of replicas. Say we are going to create the etcd natively; then we are going to use a StatefulSet to create the etcd, and we can specify how many instances we want to have inside the StatefulSet. The last field is the PatchSpec, which allows users to specify their own customized settings.
C: We are going to borrow the CommonSpec and the PatchSpec from the kubebuilder declarative pattern project. As we can see here, inside the CommonSpec we have the version and the channel, and inside the PatchSpec we have a RawExtension, which means the user can specify whatever valid YAML contents they want. Also, sometimes the component provider may not support this: say the user specifies that they want to use the etcd cluster operator, but that operator does not allow the user to specify the settings of the etcd; then the PatchSpec will just be ignored. And yes, each nested CR needs to contain the NestedComponentSpec.
A: Yeah, sorry for interrupting. In that regard, if somebody were to go through and, say, try to use the etcd cluster operator, could we technically support that, given this implementation? Are we expecting that? You're probably going to get to this later.
A: But are we expecting that, if we were to go and build, say, an operator using the kubebuilder declarative patterns setup, with the actual kubebuilder integration for it, we could make it so that the channels and the manifests built into it represent whatever operator actually implements it, so that we could still support those fields? Would that actually work?
A: No, no. What I'm trying to get at is that one of the things that's really interesting about the kubebuilder declarative patterns project is how the manifests are configured. They're configured in the channels directory, I think it's channels/stable, and then you can have all of your releases underneath there as nested directories. What's interesting there is that you basically have all of the raw YAML manifests to provision the control plane, or whatever component it's built for. So if it's CoreDNS, it's going to be the CoreDNS Deployment and the Service and all the RBAC for it, just raw manifest files on disk.
A: Now, if we wanted to hot swap these and say that etcd could be implemented by another operator, then because it's just the NestedEtcd, we could implement that NestedEtcd CR but have the channels set up so that the actual files that are mounted locally point to CRs creating other controllers, right? In essence, I believe that would work.
C: Okay, so my question is: for the manifests you mentioned, the channel can point to certain...
A: In essence, yeah. I'm just going to drop a link into the main Slack, and you can check it out after. I just dropped the CoreDNS implementation, for example: underneath, it has packages/coredns, and then under packages/coredns...
A: It has 1.13, 1.67, and 1.70, which are the available versions of that operator, and I think it would be the same for this. When we pack those manifests for the NestedKAS, for example, it's going to be the Deployment or the StatefulSet, the Service we create, or the Ingress we create, whatever the components are that we expose. Those would just be mounted in as files, already compiled in out of the box for the default implementation.
A: But you could always override them and mount in a new directory, so that when we did an implementation that says "use the etcd cluster operator", instead of using the Deployment manifests you just mount in a new CR, and then behind the scenes it should just be doing a kubectl apply. It does a little bit more magic than that, but in essence it's a kubectl apply, which means it takes whatever you supply. I think this is, in essence, very supportive of what you're proposing here.
E: Yeah, the problem is that both Charles and I were discussing the channel, and we are not very sure what exactly the channel means. It sounds like what you're saying is that the channel is actually a URL that points to the underlying raw manifests used to deploy whatever pod. We were thinking the same thing, because if you look at our proposal, we abstract everything; I mean, we abstract the pod spec into a few fields, right? So we still need to have a pod spec template somewhere.
E: So we were thinking either we hard-code it in the controller, because there are other fields in the pod spec for all the components, or, another way we thought of, even in the CommonSpec, in the channel...
E: ...we can point to whatever raw YAML file for this particular CR, the pod spec kind of thing we want to have. And you were saying that with that approach we probably can do more, because if you want to introduce a third party, whatever CR, you just put in another raw template, and because we have the kubectl apply anyway, we will, based on this raw manifest...
E: ...finally build the YAML for all the pods, or whatever objects we want to create, to create the component anyway. So yeah, that may be okay, but our current proposal is kind of more straightforward: you just say exactly which provider you want out of the providers that we support, and we write interface code, exactly as we did in the VC manager. So for a particular provider, we write the interface for it.
E: I'm thinking this way is more strict. The channel way is kind of flexible: you put anything in that channel and we just apply it. It's debatable; some people like that approach, the flexibility. But our current thinking is that I don't think there are too many providers at this moment, maybe just one or two.
A: I'm not sure the providers are what I'm worried about. I'm thinking more along the lines of being able to heavily customize each component if you needed to, the flexibility to, in essence, change whatever you want. But using the declarative pattern stuff is, in essence, going to give us that, because you already have the PatchSpec embedded in there, so you can just patch any CR, or any Deployment, or any manifest that you put in.
C: Okay, yeah. I also wanted to say something: we think different providers may want to use their own templates. If we just specify some very general templates, or very specific ones, maybe the etcd cluster operator doesn't want to use those templates, or they don't work for their scenarios.
C: So I'm not sure if we can, or should, open up fields like the channel to allow users to specify highly customized YAML files or configurations. I'm not sure.
C: We can discuss it later. Okay, so now we're going to talk about how the NCPT or NCP controller is going to generate the common NestedComponentSpec. I think an easier way is to add three new fields inside the NestedControlPlaneTemplateSpec, one for each nested component.
C: Yeah, from my perspective this makes it easier to get all this information. So first we will have something like a NestedEtcd field, which will hold all the information required to create the etcd; same for the KAS and KCM. We also allow users to specify the component provider they want to use.
C: So we defined three new types: the first is KASProvider, the second is EtcdProvider, and the third is KCMProvider. The following are some sample provider constants. If the user does not specify any providers, we are going to use the native providers.
C: The native providers are: the native KAS provider, the native etcd provider, and the native KCM provider. But users can add their own new providers, like the etcd cluster operator or the etcd operator.
C: Same as before, we want to allow the user to specify which provider they want to use, and I think having something inside the NestedControlPlaneSpec can make it easier for users to customize. So we can add three fields inside the NestedControlPlaneSpec to allow users to specify which etcd provider and which KAS or KCM provider they want to use.
C: Okay, so let's go through the bootstrap process, how we are going to create the components. Before we start creating the real components, we need the NCP controller to help us prepare some resources. The first one is the cluster namespace.
C: This namespace is going to store all the required information for creating the NCP components, like the nested CRs and some PKI ConfigMaps. The second one is the NestedKAS Service: we are going to generate the certificates and the kubeconfig for the NestedKAS, and when we generate them we need an IP address, or whatever network address is available, to bake into the certificates and the kubeconfig files. So we need to create this KAS Service in advance.
A: Do we need the actual kube-apiserver Service before we create the kube-apiserver?
A: In a way that, if we were to switch from the native provider to, say, for instance, the etcd cluster operator, that's going to handle its own PKI on its own. So what if we did it so that each component contained all of the logic it needed for its own thing?
A: And created the PKI, and then there was a shared set of, not labels, a shared set of names for those resources, for where it needed to sync between the two. So, for instance, when the KCM gets kicked off and starts to try to hit the kube-apiserver, it goes and generates the certificates based on the CA that it knows about from the KAS resource.
C: Yeah, I think we can, but at least we maybe need the NCP controller to generate the root CA files, because the etcd, KAS, and KCM all need to share the same CA files, right? All the certificates will be generated based on the same CA file.
A: That we can set as, you can supply an override CA for the entire setup, so that if you already have one, for example, if we were to use our standard root CA...
E: Maybe, but I think somewhere in the document we had better spell this out, so people think about it. They want flexibility, they want to have their own certificate manager for each individual component, but the problem is, if you use different roots for different components, communication between them would be very complicated.
C: Okay. So there exist dependencies between components: the API server cannot work without etcd, and the controller manager cannot work without the KAS. So when creating NCP components there is an ordering: we need to create the etcd first, then create the API server and the controller manager. Also, when we're creating the API server, we need to know the IP address, or a reachable network address, of the etcd servers. That means the NestedKAS CR needs to be able to reach the NestedEtcd CR.
C: Same for the NestedKCM CR, because when you start a controller manager you need to know where the API server is. To this end, I think each nested component CR needs to point back to the NCP CR, and once the nested CR reaches the NCP CR, it can use the NCP CR to find out which other nested CRs it wants to visit. So maybe we can add something like an etcd object reference for each nested CR inside the NestedControlPlane status.
C: Okay, so let's talk about how we are going to actually create the components. We allow the user to specify the component provider in the NCP, and if the user does not specify any component providers, we are going to use the native way to create them; we are going to implement this native way as the default method.
C: So let's go to the figures. Here is how we are going to create an NCP component natively. First, we assume there is a NestedControlPlaneTemplate somewhere on the cluster, and then a user creates the NestedControlPlane CR. The NestedControlPlane controller notices that the NCP CR is created; then it starts creating all the prerequisite resources: the cluster namespace and all the nested CRs. The first is the NestedEtcd CR.
C: The NestedEtcd controller notices that a NestedEtcd CR is created; then it starts creating the actual etcd workload. By default we are going to use a StatefulSet to hold all the etcd instances, and after all the etcd instances are ready, the NestedEtcd controller will mark the NestedEtcd CR as ready and also store the certificates into a ConfigMap.
C: In the last PR we used the NestedPKI CR, but as Chris suggested, maybe we just want to use that CR to pass around all the certificates and kubeconfigs, and if we do not have any controller logic, we will not have a controller for this CR. So maybe we can just use a ConfigMap to keep everything straightforward.
C: Okay, so once the NestedEtcd CR is ready, the NestedKAS controller will start creating the KAS workload. Same as before, it will mark the NestedKAS CR as ready once all the KAS instances are ready, and also store the certificates and kubeconfig files into the PKI ConfigMap.
C: And finally, the NestedKCM controller will start creating the KCM workload and mark the NestedKCM CR as ready. Oh, it does not need to store certificates; there are no certificates for the KCM. It just needs to read the PKI ConfigMap to get the kubeconfig file of the KAS and use it to connect to the KAS. And once the KCM is ready...
C: Next is the third-party example. There is again a new NestedEtcd CR, but this time the user wants to use a third-party operator to create the etcd workloads. So the NestedEtcd controller will create an EtcdCluster CR instead; then the etcd cluster operator will do the heavy lifting, will create the real etcd workloads, and it will mark the EtcdCluster CR as stable.
C: Once all the etcd instances are stable, same as in the native way, the NestedEtcd controller will mark the NestedEtcd CR as ready and store all the certificates into the PKI ConfigMap. Then, same as the native way, the KAS controller and KCM controller will create the KAS and KCM, and finally the NCP controller will mark the NestedControlPlane CR as ready.
C: So in this part, say a user wants to add a new provider; then it's the user's, or the provider's, responsibility to change the code of the NestedEtcd controller. For example, if we want to use the etcd cluster operator, the user needs to add the code for creating the EtcdCluster CR and for watching the lifecycle of the EtcdCluster CR.
C: Okay, so let's take a look at the definitions of the nested CRs, the CRDs. The first one is the NestedEtcd CRD. Inside the spec we have an object reference pointing back to the NestedControlPlane, and then we have the common NestedComponentSpec, as I defined at the beginning of this proposal, which holds the common information required by the etcd component and also some user-specified settings.
C: We also allow users to specify which provider they want to use to create the etcd. Inside the status, we want to add a new field called ProviderObject, because sometimes, when you're using a third-party provider, you may need to create a new CR to let the third-party provider start working; in the previous example, that is the EtcdCluster CR. So this ProviderObject will be used to refer to the third-party CR.
C: And the same as the NestedEtcd CR, in the KAS we have the same things: the control plane reference, the CommonSpec, and the provider, and inside the status we have the ProviderObject. For the NestedKCM CRD we will have the same fields. And yeah, that's all.
A: I don't think there are any questions, at least not from me. I think this all makes sense. I have a couple of pieces of feedback that I'll drop in on the PR, things like potentially embedding the NestedComponentSpec instead of making it an attribute, just to make the API a little bit slimmer, so that the CommonSpec is at the top level, like it is with most other cluster add-ons type projects.
D: I just feel that the second diagram you presented, using the third-party etcd cluster operator, is a little bit complicated: the diagram, the implementation, and the explanation. We probably can make it a little bit more straightforward.
D: For example, we can have a load balancer created just for our own CR, decoupled from whatever etcd cluster the customer wants to use, and use this load balancer to connect with them.
D: And this way you can scale the etcd up and down without rebooting your API server, because the API server uses the etcd client inside. With the etcd tightly coupled, you have to specify all the instance IP addresses in the API server's YAML file, which is not easy to change if we want to scale up the etcd instances. I'll take more of a look into the details and give you some feedback.
E: Okay, yeah. I think in production, whatever IP we provide to the API server for the etcd, it has to be either a load balancer or a cluster IP kind of thing. But, by the way, in terms of the complexity, do you think the figure looks complex, or the workflow looks complex?
A: I want to clarify real quick: I'm actually not suggesting getting rid of the two CRs. I'm recommending still keeping the two CRs, just doing them independently. Your workflow here is nearly identical to what I'm saying; it's the implementation of it that I was mentioning, because the implementation would be the NestedEtcd controller being able to change, internally, what the manifests are that it's using, which could just change from being a Deployment or a StatefulSet, or what have you, like the built-in implementation.
E: Okay, so then the difference is on the implementation side. Let me think about it: you are trying to avoid having people write provider-specific code. Is that what you're trying to achieve?
A: Not me personally; what we have here would be exactly what I'm talking about from the support perspective. There are a couple of changes that I would make, but I'll talk about those in the comments. Where we have the NestedControlPlane controller going and creating the NestedEtcd CR, and then it going and creating the other CRs, is exactly like what declarative patterns typically does, and it's the same thing as this. The way this is all set up would allow us to swap out the NestedEtcd CR, or at least the implementation controller for the NestedEtcd CR; the nested Kubernetes API server could be swapped, and the controller manager could be swapped the same exact way, backed by any custom CRs that you wanted, because you're just kubectl-applying behind the scenes. And then there are some tweaks, of course, that you're going to have to do to make sure that we report status back through to the original objects.
A: You could do it separately: you could just swap the controller implementation and not have to think about having everything in-tree.
A: Yeah, you write the controller logic. The CR is just the standard CR, that is, the NestedEtcd CR, and then you rewrite the controller implementation if you need a custom one. The built-in should be fine for people just playing with the project and getting off the ground.
A: Yes, and to answer your question: I'm saying that, instead of having to compile in a new implementation based on create, update, and delete, you could just re-implement a controller that listens for a NestedEtcd CR, and then you, the cluster admin, deploy that controller. That's my point. In our implementation, it would be something like: we would swap that out to, say, for instance, use the etcd cluster operator.
A: If that's what we go with, or if we have another one that we end up using, we would just go and write that controller for NestedEtcd. Instead of using the built-in, we just flag it off, in essence saying: I only want to use the NestedKAS controller and the NestedKCM controllers. And then, when the NestedControlPlane controller creates the NestedEtcd CR...
A: ...this custom controller kicks off, does the same exact logic, and we just have a standard, I'm going to call it an interface, but it's a standard interface implemented as the status fields on the NestedEtcd CR. Super easy; we don't have to think about anything then, because I think this supports that.
E: Yeah, that's exactly it. I was trying to make this strict, at least in the beginning; we want to make it strict. That's the reason: if we want to say we support this etcd operator, that at least means we agreed that the EtcdCluster CR has the fields that we need. If somebody has another etcd operator, we cannot grab the status in a recognized way.
A: No, 100%, great, yeah, absolutely. And one of the comments that I'll put on this is: I would actually say, get rid of all of the provider logic and just create it as a built-in thing, then make it so that you can flag providers on and off, and so that if somebody wants to build a custom one, it's not in-tree for us, because then we don't have to maintain it.
A: The thought process there is: the way that I'm almost viewing this is, if you look at the built-in way that all the Cluster API providers work today, they use the kubeadm control plane provider, KCP, and KCP is in essence just using kubeadm under the hood, and it provides all of the hooks that you can change with kubeadm, for any Cluster API provider. I want to treat this very similar to that: while we have the nested control plane, and the nested control plane uses all of those three components, the bottom half of your diagram, from two down, should be able to be implemented very similar to the behind-the-scenes of KCP. It could almost be pulled away, and you don't have to use the nested control plane provider, the nested control plane controller.
E: Makes sense. I totally agree; that makes it at least cleaner and simpler. Because what exactly we proposed is: we define all the CRDs, basically we define the interface, and the interactive workflow with the NCP controller. That is the part we need to implement anyway, because that is the central place that controls everything, the orchestration. But underneath that, we just define CRDs, and we have a default implementation for the CRDs, and if you want to plug in your own controller, you just follow the same CRD.
A: Now, I have one more potentially contentious thing; we do have 10 minutes left. The one thing that I'll add in as a comment here, and I'll document what I mean more in GitHub, but just to open the conversation: it just means that you're creating more CRs on your own, but you could almost rip out the template.
A: You could rip out the template and make it so that the standard default implementation, using the kubebuilder declarative patterns project for each one of those, provisions the default CR: if it gets created, it can provision the default implementation, which is just the built-in. It works out of the box, you don't have to configure anything different, and you don't need the template then, because the template, in essence, is already implemented in each one of the independent components.
A: Because of how you have it implemented, if you scroll up a little bit, you do a bi-directional association of the NCP and each one of the CRs. If you just basically make it so that you rely on the component having a reference back to an NCP, then we don't need the NCPT; it's just that, out of the box, every single time somebody creates one of these things...
A: ...you either need to create those three CRs, or the controller goes and creates those three CRs. I'm not sure what the best path is there.
E: Yeah, that's exactly it. So currently our thought is that the NCP controller has to create the real CRs, because with that idea we would need another step to create those CRs separately, and that doesn't make too much sense to me in terms of the workflow.
E: We need to support both; that's okay, that's the NCP design. From the underlying components' perspective, all we require is a place to find out the rest of the components, which is the NCP status. If we agree with that, then it's just a question of who populates it; as long as you have that, that's okay. As for the template, what I was thinking is: yeah, you can get rid of the template, but then you have to somehow guarantee that the CR versions you provided are a valid version combination.
A: Tell me more: what do you mean by that? The version compatibility of the components?
E: Yes. Because in the NCP controller you need to create the three CRs, right? In the three CRs you can ignore all the other fields, but at least you need to provide the version, which Kubernetes version you need to have, right? What if people say the combination is Kubernetes 1.18, but etcd is, what, two-point-something? These two things may not work together; it might not work at all. So we need to have a place to do the validation.
A: Yeah, no, that makes a lot of sense. I would almost say that could just be in the NCP: if you supplied into the NCP CR, into the spec, a Kubernetes version and an etcd version, then it uses the channels association within each one of those CRs to pull that specific object.
E
A
The NestedAPIServer just has to implement a channel that is 1.18, the etcd CR has to implement whatever version you're supplying, and then we can maintain a map for validation purposes: does this control plane version work with this etcd version? That's just something that's built into the controller.
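The in-memory validation map discussed here could look something like the following Go sketch. It is illustrative only: the function and type names and the version pairs in the table are assumptions, not anything defined in the proposal.

```go
package main

import "fmt"

// validEtcdVersions is a hypothetical compatibility table the NCP
// controller could hold in memory: for each Kubernetes minor version,
// the etcd minor versions known to work with it. The entries below are
// placeholders for illustration.
var validEtcdVersions = map[string][]string{
	"1.18": {"3.3", "3.4"},
	"1.19": {"3.4"},
	"1.20": {"3.4"},
}

// ValidateVersions rejects an NCP spec whose Kubernetes/etcd version
// combination is not listed in the compatibility map.
func ValidateVersions(kubernetesMinor, etcdMinor string) error {
	etcds, ok := validEtcdVersions[kubernetesMinor]
	if !ok {
		return fmt.Errorf("unknown Kubernetes version %q", kubernetesMinor)
	}
	for _, v := range etcds {
		if v == etcdMinor {
			return nil
		}
	}
	return fmt.Errorf("etcd %s is not a supported combination with Kubernetes %s",
		etcdMinor, kubernetesMinor)
}
```

As discussed below, the same table could instead be fetched from a URL rather than compiled into the controller.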
E
Yeah, I think you can get rid of the template, because as I understand it the template would just be a symbolized set of versions for each component, nothing more, because for everything else you can use defaults. People can simply not specify the replica count or resources and take the defaults, but the version is the one thing you have to provide, as long as you can maintain that validity map somewhere.
E
Either there is a URL pointing to the right combinations, or you build the logic somewhere in the NCP controller in memory; that's another topic. But my own concern is that we have to do the validation, and we don't rely on that template at all, because if you look at our design, we just got rid of the template, because we look at the actual objects.
A
If you wanted to change those, you'd be able to do what I said before, where you could supply the NestedEtcd CR and it just absorbs, or sets up, the association in the NCP controller. Then you could say: I want five replicas of etcd, three replicas of the controller manager, and three replicas of the kube-apiserver, and by supplying those CRs they get configured with the NCP. In essence, it's very similar to the way a ReplicaSet can get reassociated with a Deployment if they're ever disconnected.
E
The whole problem is that it'd be nice to have a declarative way to declare everything in one place. I understand that loose coupling gives recovery flexibility, but if I'm an operator and I cannot see all the replicas in one big picture, you know what I'm saying, I want to look at one place and know exactly how many replicas each component has. That's easier for us. Otherwise I have to type three commands to find out the replicas of all the components.
E
So maybe that's the reason I want to put a template somewhere: it gives you a place to specify exactly the information we expose, like the number of replicas. Anyway, if you get rid of the template, then you probably have to specify the replica count somewhere in the same spec.
A
Across each one of the components, yes. I think there are two ways here. If you go the really abstract route we're talking about, where each component implements its own fields, there are paths we could take, like building in the print columns when we actually set things up, for example for kubectl.
A
We could apply a category that says "this is a nested control plane component," so that when you do kubectl get nestedcontrolplanes it lists all of them, and we can add an additional printer column that shows the replicas, for example. If those are attributes we need to be able to share so that operators can see them, there are other ways we could do that, versus giving them a declarative CR up front that goes and does all of it.
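The category and printer-column idea could be sketched with kubebuilder markers like the following. The marker syntax is kubebuilder's; the type, the field names, and the category name are hypothetical, not taken from the proposal.

```go
package main

// Sketch: each nested component CRD joins a shared category and exposes
// its replica count as an additional printer column, so a single
// "kubectl get nestedcontrolplane" call returns every component with
// its replicas.

// +kubebuilder:resource:categories=nestedcontrolplane
// +kubebuilder:printcolumn:name="Replicas",type="integer",JSONPath=".spec.replicas"

// NestedAPIServerSpec is an illustrative spec for one such component.
type NestedAPIServerSpec struct {
	// Replicas is the desired number of kube-apiserver pods.
	Replicas int32 `json:"replicas,omitempty"`
	// Version is the Kubernetes version this component should run.
	Version string `json:"version,omitempty"`
}

// NestedAPIServer is the (abridged) component object the markers above
// would be attached to.
type NestedAPIServer struct {
	Spec NestedAPIServerSpec `json:"spec"`
}
```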
E
Another thing we could do is just put the replica information in the status. Yes, that's true. In the NCP controller you'd need to watch the raw CRs; if a raw CR changes, you update your status, and then you have a centralized place where you see all the changes, and the individual controllers just respect the raw CR changes to change the replicas. It's somewhat coupled, but it works. So we really have two design choices, right?
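The status-aggregation alternative could be sketched as follows: the NCP controller watches the three component CRs and mirrors their replica counts into one status object, giving operators a single place to look. The type and field names here are illustrative assumptions.

```go
package main

// ComponentStatus is a hypothetical per-component summary the NCP
// controller would copy out of each nested component CR.
type ComponentStatus struct {
	Replicas      int32 // desired replicas from the component's spec
	ReadyReplicas int32 // replicas currently ready
}

// NestedControlPlaneStatus aggregates all three components, so one
// "kubectl get" against the NCP shows the whole picture.
type NestedControlPlaneStatus struct {
	Etcd              ComponentStatus
	APIServer         ComponentStatus
	ControllerManager ComponentStatus
}

// AggregateStatus would run in the NCP reconcile loop whenever one of
// the watched component CRs changes.
func AggregateStatus(etcd, kas, kcm ComponentStatus) NestedControlPlaneStatus {
	return NestedControlPlaneStatus{
		Etcd:              etcd,
		APIServer:         kas,
		ControllerManager: kcm,
	}
}
```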
E
Either we configure everything in a single place, that's one design choice, and then the NCP spec needs to state exactly what every component looks like in terms of replicas, resources, whatever; that's one way to do it. The other way is that we leave all of that to the individual controllers, and the NCP is more of an orchestrator: it orchestrates the workflow, that's it.
E
We
don't
specify
exactly
detail,
declare
status
and
let
each
individual
component
handle
this
and
we
have.
A
state
has
reported
the
results
because
somehow,
because
maybe
we
want
to
okay,
we
want
to
do
something
is
like
you
know,
for
all
for
order.
Yeah,
I
I
yeah.
I
think
I
think
that's
that's
two
way
we
can
do
I'm
okay,
so
I
think
both
have
their
advantages.
E
The first one is more strict: I tell you that you have x number of replicas, and then in your controller that configuration is overridden so no one else can change it. If anyone wants to change the replicas, they have to change the NCP spec; that's the only entry point for changing the entire cluster configuration.
A
The way I'm looking at this is that I see a template as a one-way door: it'd be really hard to unwind in the long run. Versus if we go the other route, where we're more flexible up front and then add on the abstraction layers later, because you're right, a template will be really useful if we can add that component into it.
E
Yeah, I think as long as we agree that we have a central place to look at the whole picture, for example we just leverage the status, I'm fine, because all of this should just work; it's only the operational differences each approach provides. In the end, I don't think people will do such fine-grained customization of every CR in practice. I doubt that will be the case; more likely they'll all look the same.
A
Maybe; we'll see what happens there. Cool, okay, we are out of time, we're at 11 o'clock, so I'm going to stop the recording. Thank you so much for taking us through this whole design doc. Everybody on the phone, please make sure you go and actually review the PR, and let's get feedback written up on there so we can start moving forward. Next week will be canceled because of KubeCon; I hope everybody can actually attend.
E
I think, Chao, we can take Chris's comment about that service part and dig into it a little more, to see if we can move something into the NestedAPIServer controller to do the PKI thing, and put only the strictly necessary part into the etcd controller.
E
Is that okay? If we just build a service like that, it looks a little disjointed from a workflow perspective, but we need to think about it more to see whether that part can be optimized.
A
I was going to say: don't be afraid of potentially just reusing things. If you go to the Cluster API docs, there's a whole section on the PKI setup within the KCP implementation.
A
They use a shared set of names: the CA, for example, gets stored as a secret in the namespace, named with the cluster name followed by "-ca", so you can always use those names to reference it. No matter what creates those components, you can always pull them from every other CR. For example, if you need the root CA for the full setup, etcd could go grab that exact secret, and the NestedAPIServer could grab that same exact key secret.
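The shared-naming convention mentioned here can be sketched in a few lines of Go. The "cluster name plus dash plus purpose" pattern follows what's described above for Cluster API's KCP implementation; the constant set and function name below are illustrative assumptions, so verify the exact suffixes against the current Cluster API docs before depending on them.

```go
package main

import "fmt"

// Hypothetical purpose suffixes for the per-cluster certificate secrets,
// modeled on the "<cluster-name>-ca" pattern discussed above.
const (
	ClusterCA          = "ca"   // root CA for the control plane
	EtcdCA             = "etcd" // CA used for etcd peer/client certs
	ServiceAccountKeys = "sa"   // service account signing key pair
)

// SecretName returns the conventional secret name for a cluster
// certificate, so any controller that knows the cluster name can look
// the secret up without coordinating with whatever created it.
func SecretName(clusterName, purpose string) string {
	return fmt.Sprintf("%s-%s", clusterName, purpose)
}
```

With this convention, the etcd controller and the NestedAPIServer controller can each independently fetch, say, SecretName("my-cluster", ClusterCA) from the cluster's namespace.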
A
Cool, thank you so much, everybody. I'm going to stop the recording, and then we'll resume this in... actually, wait a second: three weeks from now is also Thanksgiving, so we'll resume this in December. We should do some stuff async over GitHub, and then, as we need to, we can always schedule an ad hoc meeting so we can go through this as we move forward.
E
Yeah, I think we need to get this proposal approved as soon as possible; then maybe Charles can do some scaffolding code for it.
A
Please, yeah, that'd be great, awesome. Also on that note, Chao, I'm working on some pull requests for Kubebuilder. There are two things for the actual implementation as we get to it: we're looking to be able to use the new v3 setup within Kubebuilder, which is still alpha.
A
It should be released around the end of next week: we're going to release a new version of controller-runtime, 0.7, and on top of that the new v3 plugins, which also give us a built-in component config setup and a bunch of other cool ways of doing this. And then there's also the plan for when we actually start to implement this.
A
Just to call this out completely: we're going to try to use the v1alpha4 APIs for Cluster API instead of v1alpha3, so that we can help spearhead driving that forward. But those are just extras.