#sig-cluster-lifecycle #capn #capi
B
All right folks, sorry for that mishap, we'll get started real quick. I'm going to start by demoing what I've been working on from the NCP side of things and the NestedCluster controller side of things. But anyways, just to kick off the actual recording: good morning, everybody. This is the May 4th CAPN office hours, and today we're going to go through a quick demo, and then I assume we'll probably talk a little bit about the roadmap, as long as my network doesn't drop.
B
Let's see. It is going to be an incredibly short demo, because it's just bringing up a control plane, applying a handful of manifests, and a cluster comes up, as we already saw from Chao last week. This is basically just the automation of the certificate generation and the kubeconfig generation, and hooking it up so that clusterctl will actually work as well. So let me share my screen.
B
You all should be able to see that now. Awesome. All right, so the quick demo here is: I put in a pull request just about five minutes ago. It's still a little bit work in progress; I've still got to write some tests for all the controllers that got written for all these functions. But in essence, if you want to go look at what actually got implemented, all of that code is up there. And to actually show this:
B
I'm running a bare-bones minikube cluster. If I do k get all, it's a bare-bones minikube cluster, but I'm also running all of CAPI deployed into it. I did not specifically omit any of the kubeadm bootstrap or kubeadm control plane components; they aren't being used, they're just automatically deployed when you do clusterctl init. In the long run those won't get deployed with our provider, because they're not necessary for us to actually run. So now we actually have this cluster running.
B
And so those are all just our categories for our CRDs that we can actually filter through. So if we go and take a look at what we're actually going to be deploying, we'll look at the config directory and go look at samples, and you'll see there's a handful of components in here. I'll move that aside for a little bit so I can see them all.
B
Okay, so in here we basically have the nested API server, the nested controller manager, the nested control plane, the nested etcd, the nested cluster, and an actual Cluster resource, that is, the CR for core CAPI. So if we actually go take a look at what these look like, there are some interesting things that I had to implement that I'm still working through how we'll do long term. But starting with the actual cluster, we basically just have:
B
So we basically just have the actual Cluster object. In here, right now, I'm specifying the host set to the actual name of the service that gets generated, so you'll notice that this cluster is called cluster-sample. The host for that controlPlaneEndpoint is the internal cluster service that it's going to be routing through, and that's because the kubeconfig gets generated using that controlPlaneEndpoint along with the certificates and the CAs that are generated through the nested control plane.
B
Now we have our two refs: we have our controlPlaneRef and our infrastructureRef, which actually associate all of the components together. So instead of using KCP, the KubeadmControlPlane, out of the box, we're using the NestedControlPlane, and instead of using another infrastructure cluster CR we're using our NestedCluster.
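[For reference, a Cluster manifest wired up that way would look roughly like the sketch below. The apiVersions, kinds, and object names are assumptions for illustration; only the cluster-sample name, the service-based controlPlaneEndpoint, and the two refs come from the demo.]

    # Sketch of the core CAPI Cluster object pointing at the nested provider objects.
    kubectl apply -f - <<'EOF'
    apiVersion: cluster.x-k8s.io/v1alpha4
    kind: Cluster
    metadata:
      name: cluster-sample
    spec:
      controlPlaneEndpoint:
        host: cluster-sample-apiserver   # internal Service fronting the nested API server
        port: 6443
      controlPlaneRef:
        apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
        kind: NestedControlPlane
        name: nestedcontrolplane-sample
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
        kind: NestedCluster
        name: nestedcluster-sample
    EOF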
B
So if we go and take a look at what that actually looks like, just to make sure everybody is on the same page: that NestedCluster is very, very basic, and actually I believe I should be able to get rid of the controlPlaneEndpoint that's in there right now; I think I need to be setting that later. But in essence it's very, very bare-bones. And the rest of them, I have... oh, sorry.
B
Let me go show the v1 control plane. So the last one is the actual control plane, which is very similar to what Chao demoed last week, where we basically have the nested etcd, the nested API server, and the nested controller manager all as references, so that we can set up all the associations for the owner references.
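[The NestedControlPlane that ties those three components together would look something like the sketch below; the spec field names and object names here are assumptions, not copied from the PR. The point is simply that it holds references to the nested etcd, API server, and controller manager objects.]

    # Sketch of the NestedControlPlane referencing the three nested component objects.
    kubectl apply -f - <<'EOF'
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
    kind: NestedControlPlane
    metadata:
      name: nestedcontrolplane-sample
    spec:
      # Field names below are assumptions for illustration.
      etcd:
        name: nestedetcd-sample
      apiserver:
        name: nestedapiserver-sample
      controllerManager:
        name: nestedcontrollermanager-sample
    EOF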
B
I've got this running in another tab, because I don't actually have it deployed as a container in the cluster right now; in essence this is just running as a Go process separately. I'm going to treat the right-hand side of this as the tenant control plane once it comes up, and the left-hand side will be the super cluster, in essence. So over here we're going to do k apply -f, and before I do that, let's do a watch.
B
So what we're going to do here is basically go to config/samples and just apply this whole directory, similar to what you would do after you do the clusterctl generation of all of these manifests in the long run. So we go and apply those, you'll see that they were all created, and, just like before in Chao's demo, all the pods are coming up, and now we have certificates that are generated outside of this flow.
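[Pieced together, the commands in that step are roughly the following; the watch invocation and the exact path are my reading of the narration rather than a verbatim capture.]

    # Left terminal: watch the pods come up in the management (super) cluster.
    watch kubectl get pods

    # Right terminal: apply the whole sample directory from the provider repo.
    kubectl apply -f config/samples/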
B
So I go over to the right-hand side again. I'll close out of this, because now we know that the pods at least came up, and we're just waiting for it to cycle through; it's going to go through a couple of transitions. This is the same exact thing that Chao demoed, where you'll see that the controller manager errored because it can't validly connect to the API server yet, same with the etcd instance. It's going to cycle for a second, and so will the API server.
B
So let's go look at what we actually generated to make this all happen. If I do k get secrets, you'll see we have a ton of components, and some of these are non-standard for CAPI because of the way that our control planes are brought up as pods. If I could filter this... I guess I could have sorted this, but basically what ends up happening is that all of the CAs get generated first. So we have the actual cluster CA.
B
We have the etcd CA, we have the proxy CA, and then we have the actual service account key for the control plane to come up. Those are just using the standard functions for generating any of those CAs that KCP actually uses. And then, once we have all those components, we go through, and in each one of the nested component controllers we call and use that CA, similar to what kubeadm would be doing on a physical host, typically through user data or through cloud-init.
B
Rather, it's going to go and generate the certs for each one of the components, and this is where our solution again is going to have to differ a little bit: because it's deployed as a StatefulSet, we end up with one single client cert for all of the nested components of a given kind. So for the controller manager, all three pods, or five pods,
B
however many of them you want to deploy, are all going to have the same exact client certs right now. But we basically go through and we generate an API server client, we generate an etcd client, a health client, a kubeconfig for it (well, that actually is already generated),
B
and we generate a kubelet client that we'll be able to use later from the vn-agent side of things, and then we generate the proxy clients, so you can reference those. And so now, if I do k get po: this got brought up a bit ago and it was actually already running, but you'll see the cycling that it did. And now, because of how I actually deployed this, one big change that I made from the way that it was set up by Chao is
B
that these now all have a very common naming. You'll notice that cluster-sample, which is actually what the CAPI Cluster object was called, is what everything is prefixed with. So down here we have cluster-sample-apiserver, cluster-sample-controller-manager, and so on and so forth, and it's the same thing for all the services as well, which makes it a little bit easier for us to set up all of the certificates and to kind of filter through these objects at a per-cluster level.
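[As a concrete example of that per-cluster filtering, a plain name-prefix grep works; whether the controllers also stamp a cluster-name label on these objects is something I have not verified, so that is left out here.]

    # Everything belonging to this tenant control plane shares the cluster-sample prefix.
    kubectl get pods,services,secrets | grep cluster-sample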
B
We can go do a port-forward, and so I'm just going to port-forward to cluster-sample... and actually, sorry, before I do that, we're going to use clusterctl. I'm going to go and basically run clusterctl get kubeconfig cluster-sample, which is going to go reach out and grab this kubeconfig, and then I'm going to pipe that to a local file that's just called kubeconfig, so we can actually take a look at what gets generated for it.
B
So we'll do that. You'll notice it pops out some debugging, and if we cat the kubeconfig, you'll now see that this is the actual kubeconfig that's generated for this local cluster. You'll see what I was talking about earlier: it references that cluster-sample-apiserver on 6443, which is generated based on the controlPlaneEndpoint. And now, if I port-forward this, I can come over here and say k --kubeconfig kubeconfig get svc, and you'll see that this cluster is now addressable.
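[The flow just demonstrated, pieced together; the local port mapping and the hosts-file note are my assumptions, the transcript only mentions the 6443 endpoint.]

    # Pull the generated kubeconfig for the tenant cluster out of the management cluster.
    clusterctl get kubeconfig cluster-sample > kubeconfig
    cat kubeconfig    # the server field points at cluster-sample-apiserver:6443

    # Forward the in-cluster API server Service to the local machine
    # (assumes cluster-sample-apiserver resolves locally, e.g. via an /etc/hosts entry).
    kubectl port-forward svc/cluster-sample-apiserver 6443:6443 &

    # Talk to the tenant control plane through the generated kubeconfig.
    kubectl --kubeconfig ./kubeconfig get svc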
B
So those are all in a terminating state, and now the cluster can come up and come down without needing any extra components generated, or anything generated out of tree, or out of Kubernetes at least. Also, because all of those components use a common naming scheme, we can technically replace them, and all the controllers will check to see if each one of them exists before creating it.
B
So if you went and generated a CA, or had, say, a root CA that all of your nested clusters are supposed to use, you could supply that in this secret, and then also generate all of your own certs off of it and supply those to the cluster as well, and it's just not going to recreate them if they're already there. We do a lookup against any of those common names before generating, so you can supply any of your own certificates.
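[For example, pre-creating the cluster CA before applying the manifests would look something like this. The cluster-sample-ca name and the tls.crt/tls.key data keys follow the standard CAPI certificate-secret convention; I'm assuming this provider keeps that convention.]

    # Bring your own root CA: create the secret the controllers would otherwise generate.
    # The controllers look each secret up by name and skip generation if it already exists.
    kubectl create secret generic cluster-sample-ca \
      --from-file=tls.crt=ca.crt \
      --from-file=tls.key=ca.key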
B
For example, if you wanted to use cert-manager to generate these up front and then go from there, you'd be fully able to do that. So that's the quickest demo of that code base. I'm still writing some tests for this, and there's a pull request up if you want to take a look at it, but that's pretty much where we are with the nested control plane.
C
Yeah, that sounds good. So, a few questions. Let me see: what exactly will the NestedCluster controller do?
B
Yeah, so it kind of just coordinates between it and the actual Cluster resource; it's like the overarching object that is owned by us for how it interfaces. I'm sure there are other functions that I'm missing that it should be doing, but as of right now, that's all it's doing.
C
Yep. My understanding is that, for everything you demoed today, most of the orchestration happens in the NCP controller, right?
B
Yeah, so the only thing that the actual NestedCluster controller is doing, and I can pop open the code real quick to show it, is making sure that the status is reflected back onto the Cluster resource for CAPI.
B
I believe that should show Chrome and show what we're looking at. So, in essence, what the NestedCluster controller is doing is: it gets the owning cluster, so the actual CAPI Cluster CR, and it's going to make sure that it has the controlPlaneRef, and then once it has a controlPlaneRef,
B
it's going to go and update the status. If it's not already ready, and the NCP is set to ready and set to initialized, it'll just go and update it so that the cluster is now accessible to other components. And, I believe (and Scott, I think you'd probably have more insight into this), the way that this is going to function downstream is with things like the ClusterResourceSet.
B
So the thing that can deploy add-ons, for example, is going to check to see if each cluster is ready using that CAPI Cluster CR, and so, as long as we set those resources, this cluster is accessible to other components to start operating on it. We'd be able to deploy OLM into this cluster using the ClusterResourceSet, for example.
D
Exactly, yeah. Once the cluster's control plane is considered up, based off of the status fields being moved over (initialized, I believe, is the one that it works off of), at that moment the ClusterResourceSet will attempt to apply the manifests.
D
If it doesn't succeed in applying the manifests, it will try again, so "apply once" really means until it succeeds to apply, basically. And once the control plane of the workload cluster, the nested cluster, accepts it, then the ClusterResourceSet stops doing anything, and it's up to the workload cluster to actually provision the resources.
D
But, and this is a newer thing, if you make a change to the ConfigMap or Secret in which the YAML manifests are located, so if you wanted to update OLM for example, that change is considered as if it's a new resource and it gets applied again. So you can also upgrade the injected add-ons later on, just by updating the CRS source in the supervisor cluster, the management cluster. Sweet.
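[A minimal ClusterResourceSet wired up for that OLM example might look like the sketch below; the addons API version, the label selector, and the ConfigMap name are assumptions for illustration.]

    # Sketch of a ClusterResourceSet that injects add-on manifests (e.g. OLM)
    # into matching workload clusters once their control plane is initialized.
    kubectl apply -f - <<'EOF'
    apiVersion: addons.cluster.x-k8s.io/v1alpha4
    kind: ClusterResourceSet
    metadata:
      name: olm-addon
    spec:
      strategy: ApplyOnce
      clusterSelector:
        matchLabels:
          addons: olm            # the Cluster object would need a matching label
      resources:
        - kind: ConfigMap
          name: olm-manifests    # ConfigMap holding the raw YAML to apply
    EOF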
B
Nope, this actually works out of the box. You can check it out in the pull request, and I could share my screen as well, but in essence what we did is pull in a lot of the functions that are used by KCP, the kubeadm control plane provider, which is the reference implementation for control planes within CAPI. So we pulled in a lot of the functions that they use to generate certificates and reuse those. Now, that doesn't have a hard dependency on cert-manager, but you can override it with any other certificate management you want, as long as you generate the secrets properly. Okay.
D
Yeah, KCP allows that: if you pre-create the secrets with the right names in the right namespace, which is currently the way it's done if you want to bring your own CA, that's the way to do it today as well, and that's fully supported in the functions that they export in KCP.
B
Yeah. The code path in there, if you want to check out that PR: basically there's a function called LookupOrGenerate, and LookupOrGenerate goes and loops through a list of certificate resources and checks to see if each one exists in the cluster. It just uses the actual Cluster CR name as the prefix, and then whatever the purpose of the certificate is, so whether that's ca, etcd, or controller manager, or any of the other secrets that I showed you that were generated by the controllers.
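[So for the demo cluster, the generated secrets end up with names along these lines. The suffix list below is my reconstruction from what was shown on screen plus the standard CAPI naming convention, not a copy from the repo.]

    # CAs and keys generated first by the nested control plane controller:
    #   cluster-sample-ca       cluster root CA
    #   cluster-sample-etcd     etcd CA
    #   cluster-sample-proxy    front-proxy CA
    #   cluster-sample-sa       service account signing key
    # plus the per-component client cert secrets generated afterwards
    # (API server client, etcd client, kubelet client, proxy client, ...).
    kubectl get secrets | grep '^cluster-sample-'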
C
Yeah, because, since Powell is not here, but I looked at his report last week about this project, he said he discussed with you that, in order to use the kubeadm control plane controller, you need to change the syncer, but I don't understand.
B
That is a separate topic that I wanted to add into this. I don't want to have that conversation without Chao, though, because there's a thought experiment here that I think we should eventually have, but it can go in parallel with this. I think there are ways that we could leverage KCP a little bit more, which would benefit us by not having to duplicate all of this work that we've already done.
B
And so I wanted to have a conversation of, okay, if we were to take this, there is some change involved. I'll quickly explain what I sent to Chao, which is basically: if we were to take five steps back and look at what KCP actually does for us, it generates a bunch of certificates for us, it generates the kubeconfig, it generates the RSA key for the service account tokens, so on and so forth.
B
Then, where it actually leaves us off: what typically happens is that, paired with Machines within the cluster, it will go and configure cloud-init scripts that get provisioned onto VMs, for example. So the user data is going to have a cloud-init script.
B
That goes and provisions the actual control plane on each one of the nodes. And so there's a potential here that we could leverage some of this and write a custom machine controller, which is a standard thing for every provider to do, where we would be able to leverage kubeadm from the actual spec and definition side of things, as long as we could then, in a machine controller, have...
B
This is where it gets a little bit weird. We could basically do something where we would fake machines into the virtual clusters, and this is where it requires syncer changes; there are so many pieces here that I think I have to write this up. But in essence, what we would do is, instead of having a bunch of controllers managing all of the components, KCP would go and be the definition of it. We'd have a machine controller, and you'd have to deploy a Machine CR.
B
That Machine CR would then reference a virtual node within a control plane, or within a virtual cluster, and those would have to have a mapping, and that's where it gets really crazy. Once you've done that, you can set up a control plane node, which is where the control plane components would get deployed. Now, we're obviously talking about a virtual cluster, so there isn't a single virtual node that is actually running your control plane components, because they could be spread anywhere, but we could...
B
We could fake that, because we have full control over the node object, so we could create a control plane node provider, for example. It drops a node in there, and that node ref gets synced as the node association for any control plane pods, which we would then have to populate in the nested clusters. That's where it also gets weird: now, in the kube-system namespace,
B
you would have a faked pod, or potentially a real pod, but it's synced in a weird way. That would make it so that KCP could actually have all of its functions. And where I'm going with that is what KCP needs, which I learned about, that makes it kind of difficult for us to use out of the box: if you use KCP, it does health checks, and it does a health check, for example, against etcd, and to do a health check against etcd,
B
it does a kubectl exec. It uses its admin credentials to kubectl exec into what is typically a static pod on a physical host, and it goes through that loop. Now, if we could basically fake that, by putting the control plane pods in the kube-system namespace and still having them accessible, we in theory could leverage a lot of what KCP has already done.
B
So there are some weird things in there, and it would need a big exploration, and it's a parallel effort for sure and doesn't need to derail this.
B
Exactly, and there are some benefits there. I mean, if we did that, we'd be centering around the kubeadm spec; we'd have the ClusterConfiguration spec that they already have defined. It's the same thing that's in kind clusters, for example, and so there are some alignment pieces that are nice.
B
There are other pieces as well that I think make this difficult. Like, currently kubeadm only outputs cloud-init scripts. Technically, in the cloud-init scripts, it goes and generates static pods, and so I want to find out if there are some hooks within kubeadm where we could go and output the pod templates, the pod specs, the static pod templates.
C
So yeah, I think the real benefit is that you can reuse all of KCP's code around the management of the certificates. I believe they have something to handle the cert update, say, the rollout of certs.
B
Which I brought over, because, I mean, in essence KCP was the inspiration for what the whole nested control plane looks like, so I brought over a lot of it. It's just that we now have to maintain it ourselves. So if you go look at how I'm generating certificates, the LookupOrGenerate functions behind the scenes will check to see if anything has expired, and at a six-month lifecycle it'll automatically refresh them.
B
Out of the box the certificates are valid for a year, and then after that year, or after six months in KCP, it expires them and regenerates them. I use those same exact functions, so in theory we're getting the same pieces out of it; it's just that we have to maintain that ourselves now.
B
It's a separate, completely parallel thing for sure, and could be something where, at the end of the day, we're just bringing up control planes. This is all just interfaces for bringing up these control planes, and it shouldn't stop us from doing what we're doing now.
C
Okay, let me see, another thing is about the sidecar. Do you use a sidecar to manage the lifecycle of the certificates?
B
We're not actually doing that; that's where the nested component controllers are doing it. So this is where I was exploring the paths around how we could generate client certs for each one of the pods that gets created. Now, the caveat that I mentioned early on was that the nested control plane generates the CAs, then the nested component controllers create their client certs.
B
Because we set replicas on an orchestration object like a Deployment or a StatefulSet, it's using the same exact certs for all n replicas of it. So if anything were to happen, the identities are always going to be the exact same; there's no ownership level of the identities behind those certs. And so the conversation I was having with Chao about that was: okay,
B
are there ways that we could get around this, where we could have each pod come up with its own? The only way that I could come up with to do this would be, similar to how cloud-init works for kubeadm: in a sidecar, generate the certs every single time the pod boots. Every time the control plane, the API server or etcd, boots, grab the CA, generate the cert, and then put it in the filesystem where it needs to be, which saves on secrets, but...
C
Yeah, yeah, okay, I see. So I think your requirement is: if you have three copies of the control plane for redundancy, to prepare for failover, all three currently have the same cert, and you want each copy to have a different cert.
B
I mean, it's not a hard-pressed thing. It's just the way that default things work within CAPI: they automatically have different certs because they're automatically generated on a VM boot, for example. So it's just a different implementation, and I'm not saying that ours is technically wrong; our actual implementation of this is going to be a little bit different because of how we manage certificates, yeah.
C
Now I understand what you're talking about, because in the original one, every control plane, every new instance, will pretty much start on a new VM anyway, so every time they have a new API server they will create new certs. But we are using pods, so they are coming from the same stack, from the same StatefulSet. Interesting. Okay, now I got it, yeah.
B
I'm thinking about upgrade strategies here as well, like how is this going to affect things long term, especially because they're not versioned client certs.
B
So if you update one, it's going to have to refresh in all of the components, every single one of those components. Kubernetes is good about that, I'll give it that, at actually watching certificate directories and auto-refreshing for you and usually clearing clients; it's relatively good at it. So I'm not super worried about it, and these things are not immutable secrets right now, which would cause...
B
I mean, that would cause a huge problem, if all of these client certs were immutable secrets.
C
Yeah, I actually never thought about it; I don't even know how that works. You have three etcd copies, you have three API server copies, and across all six pods we have six certs, but only certain mappings will work, right? So every time there is a failover, you need to tell the others.
B
Yeah, luckily a lot of these components handle that. I mean, in controller-runtime we have a cert watcher package, and I know this is implemented in core Kubernetes as well, where it basically just does an fs watch on whatever cert directories you specify, and it does a refresh: it goes and clears all the clients and refreshes them for you. It's just more a matter of there always being the potential for something to go wrong.
B
Whereas with VMs you have a whole new deployment entirely and you don't have to think about those kinds of functions; it's just something that we have to deal with. And, like, a sidecar would solve this; a sidecar treats this very similarly, and yeah, a sidecar would solve this to some degree. If you refresh things, then we need to make sure that the sidecar re-kicks and does things, so that sidecar would actually have to be like a full-on process.
B
Are you running pod-based control planes, or are they VMs? Definitely? Okay, cool.
C
That's my... I recall that, probably, yeah, but I need to double check. Maybe they have exactly one control plane per VM, that's also possible; you have three VMs. Or one big VM, but you put all three copies in one big VM, that's also possible. I need to double check, but they're probably separate VMs; that makes more sense, right? If you have three copies in one VM... I don't know.
B
Yeah, I mean, I would assume that in a typical CAPI deployment, or a typical kubeadm deployment, whatever have you, in all of these deployments you're going to have something where etcd is probably going to be completely separate; you're going to have, like, an external...
B
Well, although that's a feature that's still somewhat being developed within CAPI right now. But with external etcd, I'm assuming you all have an external etcd cluster, and then maybe your control plane components are just the scheduler, controller manager, and API server that you're actually working with, or not?
B
Yeah, we didn't... yeah, in vc-manager we just kind of went where the path led, which is just, all right, use all the same certs for everything, and realistically we're not really spreading things out a super ton, so probably...
B
It's more about the process that's run in that sidecar, and then doing the lifecycle management of the sidecar is probably going to be the more difficult piece, rather than running an fs watch and doing some certificate management there. But it's also, is that the right path? There are so many other pieces that it introduces, which is: now the identity, or service account, for that pod
B
has to be able to read the CAs for the cluster, because now it needs to be able to do lots more things. Whereas right now these control planes are pretty isolated: the service account doesn't need to have any credentials to the super cluster to be able to do anything, which is really nice from a security perspective. So, yeah, cool. All right, we are at 10:50. That's pretty much the demo and where it is; the code base is up there.
B
I put a WIP tag, a work-in-progress label, on it so that it doesn't get merged right now, since I'm still writing tests. There's a little to-do checklist in that PR. I would love reviews if you want to take a peek at it and see what's in there and how it's all implemented, but it's pretty simple at the end of the day.
C
Okay, okay, that's good. So it seems like, at least once your PR is in, we at least have an MVP version, or you can cut a release candidate to work with.
B
Yep, yeah. It would be great if we can do a release of it, and then we can start to really integrate. For the next step, I'll actually go after this and file a couple of issues, because once we do this, once we have a release of it as, you know, an rc1 kind of thing,
B
we can go and start integrating this more with clusterctl and see what that true implementation is, and make it so that you could just bring up a kind cluster and then do clusterctl init with this as a provider, have it deployed, and start using it immediately. So there are some really nice things for that next step to move this forward.
C
That's very good, good. I like this integration with the same tool, the same organization; it's mostly just tagging things that are already there.
B
Yeah, the less we have to write, the better. So, cool. Do we have anything else we want to bring up, or anything like that?
B
Cool, should we call it then? Maybe next week... I'll put an agenda together for next week so we can basically start to look at what we need to work on next, and we can chat over Slack.