Description
#sig-cluster-lifecycle #capn #capi
A: All right, good morning, everybody. This is the, I don't know what day it is, May 18th 2021 CAPN office hours. We've got a couple of things on the agenda, and I think I want to spend as much time as we possibly can at the end of this doing some backlog and milestone management, if there aren't other topics. If there are, please make sure you add them to the agenda so we can cover those things at some point as well.

A: Vince will join and we'll talk about KCP, and not the kcp that we talked about last week, but specifically CAPI's KCP, the KubeadmControlPlane.

A: I think that'll be an interesting conversation. Charles and I have been chatting a little bit about that, and I've had some one-off conversations with Vince, and I kind of wanted to see if we can solidify a direction so that we could do a proof of concept there, just to have another piece in place within CAPN, in case we ever want to switch towards using more kubeadm-based workflows.

A: As for introductions, actually we're all the same people that we always are, so there are no new introductions. The first PSA that I wanted to jump into is... let me share my screen.
A
We
did
not
actually
close
our
milestone,
so
it's
basically
just
looking
like
it's
been
open
for
a
hot
minute,
and
I
want
to
see
if
I
want
to
make
sure
that
everybody's
clear
with
me
going
and
closing
this
come
on.
Zoom
handles
go
away.
Oh
there
we
go
so
under
milestones.
Let
me
go
find
that
milestones.
We
when
we
were
originally
doing
the
planning
for
this
stuff.
A
We
basically
created
that
1.0
we've
been
100
complete
for
the
last
week
and
a
half
or
so
I
wanted
to
make
sure
before
I
close
this,
that
everybody
feels
comfortable
with
that
and
feels
like
they
can
at
least
use
it.
Charles,
I
know
you
got
you
did
that
the
docs
today
for
actually
being
able
to
provision
that
that
might
have
made
sense
to
actually
have
that
in
this
1.0,
but
at
least
we
can
close
it
outside
of
it.
If
you
want
yeah,
I.
B
Think
I'll
just
close
it
outside
outside
of
it,
I
I
will
make
it.
I
will
revise
it
based
on
your
comments
today,
yeah.
A
Cool
yeah,
so
then
I
guess
we
can
put
that
in
if
it
isn't
already
in
here
it
is
okay,
so
that
can
be
in
the
1.x
release.
So
we'll
close
that
out
with
that
cool,
does
anybody
have
any
concerns
about
me
actually
closing
this,
so
that
it,
you
know,
looks
like
we
aren't
18
days
past
due.
A
Let's
put
that
on:
let's,
let's
file
an
issue
for
that,
let's
see
if
we
can
do
that,
that'd
be,
I
think,
that's
a
good
idea,
because
I
think
we
are
getting
to
that
point
where
we
would
be
able
to
create
create
the
cluster
cuddle
integration
with
this.
A
So
we
could
at
least
deploy
those
control
planes,
but
let's
file
an
issue
and
see
if
we
can
get
to
it
see
if
we
can
have
somebody
work
on
that,
because
I
haven't
actually
done
any
of
the
the
release,
proud
jobs
and
those
will
take
a
little
bit
of
time
just
to
make
sure
that
everything's
all
set
up
properly.
So
we
need
to
spend
some
time
on
it.
A: True, true, we can definitely do a release without it. It's more that we need to set up all the Prow jobs to actually do the releases, because that's where all that GCloud stuff actually gets initialized and pushed over there. I don't know what the timeline is on doing those kinds of things. Like, do we need to get a project set up?

A: Who do we have to work with from SIG Cluster Lifecycle to make sure, because those are all going to be owned by the GCloud account that all of the CAPI providers actually get rolled up under. So there's, like...
C: Okay, so, because I haven't made a release for a SIG project: for other projects it seems to be simple, you just click in GitHub, make a release, and that's it. So because this is a SIG project, we have more steps we need to follow? I'm just curious.

D: I haven't worked on the releases, so I'm not sure. Okay.
A: Yeah, yeah, cool, all right. Well then, I'm going to close this just so we're at least done with one milestone, and then I want to go back over here and see where we're at. Okay, so, cool, we're at least done with that. At the end of this we'll jump into doing issue management and get some things filed, so that we can fill out that 1.x release, put it together, and actually set a timeline for when we want to achieve it.
A: So, if you're not familiar with this, there's a working group, a naming working group, that got created, and we're in the process of switching a lot of the repositories over from using the master name for branches to using main. A couple of the CAPI providers, specifically AWS, I saw, have already gone through this process, but there's a handful of instructions here. One of the gates that we have to pass to be able to close this is to talk about it and make sure that everybody knows it will happen, so I wanted to talk about it last week, but we just didn't get to it. In essence, this week I'm going to spend a little bit of time and push up a new tracking branch for us, so we'll be tracking against main.

A: Instead of master, for all of these things. It shouldn't change anything, because I already set up the Prow jobs that we have in place to work off of both master and main, so it'll just...

A: It should just flip over relatively simply, and this is just to align all the repositories with the proper path. So I basically need to bring it up with everybody on this call, make sure it's been announced, and then I'm going to write a quick little email to send out to the SIG Cluster Lifecycle mailing list.

A: If I recall correctly, there's basically this "send comms" step that you have to do a couple of times, to tell folks that might have seen the project that this change is happening. So I'm just being pedantic about the steps that need to be done for this, if you're interested.
E: Either of those links... master, project... sorry, are we going to remove the master branch?
A: Yeah, yeah, I think that's one of the steps in here. So we need to rename the default branch to main, set the remote tracking, the remote HEAD, to track main, and then, yeah, we're going to have to go clean up where I still have master in the Prow jobs. And then I don't think we have anything from... oh no, we do actually use the milestone applier.

A: So I do need to change that Prow job as well. So this week that'll happen, and I'll send out messages in both the mailing list, because that's part of the docs for actually doing the branch rename, and also a note in the CAPN Slack channel, so we can all know when to switch our tracking branches.
A: This is going to happen for all the SIG projects over time, and I believe it's eventually going to happen on Kubernetes too, I just don't know what the timeline is. There's a whole doc that the working group naming, or the naming working group, wrote up for how to go about this and how to reason about it, and only a few projects have done it so far. But the idea is that everybody should be centering around main.

A: Yeah, so we've got to follow the proper naming for all things so that we can be inclusive to everybody, which I think we should; I think it's a good idea to get behind. And if we can get in front of it, then we'll know early if we find anything or run into any issues.
A: Cool, all right. Now that Vince is here, Charles and I wanted to basically talk with you. I'm going to stop sharing my screen so I can see everybody's faces. So, before we jump into all the backlog grooming: Vince, you and I have chatted about this, and we chatted in Slack just before this, just to give you the primer. In essence, Charles and I have had some one-off conversations, similar to the ones that you and I have had, about where we are with the project.
A: So, to give you a quick update, we basically just closed out the 0.1 milestone. We didn't actually make a release, because we haven't set up any of the release Prow jobs, so I'll talk to you about that a little bit later. But we basically have it so that you can create control planes. We have a video from two weeks ago that actually shows the control planes getting brought up as well, and I sent you the link to the current docs for how to set up CAPN.
A: That being said, as you and I have chatted about, we have a control plane provider that we implemented ourselves and now have to maintain that whole code base. Which is nice, because we have control over it, and we can figure out what we want to do in the long term and make sure that it fits everything needed to make nested control planes work.
A
But
you
all
have
already
written
a
a
very,
very
thorough
control,
plane
provider,
that's
based
on
cube
adm,
and
so
now
we
kind
of
have
two
different
implementations
within
this
structure,
where,
if
somebody
were
to
be
a
heavy
user,
say,
for
instance,
scott,
because
he
does
this
already
works
with
aws
and
works
with
google
and
works
with
a
bunch
of
other.
A
bunch
of
other
providers.
Comes
over
to
cap
n.
It's
a
completely
different
implementation
and
the
api
is
different,
and
so
we
wanted
to.
A
We
wanted
to
explore
a
little
bit
more
from
a
proof
of
concept
perspective
what
it
would
be
like
if
we
could
utilize
kcp
to
actually
do
the,
maybe
not
provisioning
or
the
setup
in
essence.
A: So I think the things that you and I have chatted about for how this would actually work, or the things that KCP actually needs in order to work, are: it needs to actually have machines, which is something we don't currently have within the nested control plane. Correct?

D: So KCP takes an infrastructure ref that you could use, so you could just have some sort of shim there to then... yeah, but that's going to be tricky. So what were you thinking? Like, how would you want to do this?
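For context, this is roughly the shape of the infrastructure reference being discussed: the KubeadmControlPlane spec points at an infrastructure machine template, and a nested "shim" resource would have to stand in for it. The template kind and all names below are illustrative rather than an existing CAPN type, and the exact field layout differs between CAPI API versions (newer versions nest the reference under machineTemplate.infrastructureRef):

apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: nested-cp
spec:
  replicas: 3
  version: v1.20.6
  # KCP creates Machines from this template; a nested provider would need
  # some shim infrastructure template to satisfy the reference
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: NestedMachineTemplate   # hypothetical shim resource
    name: nested-cp-machines
  kubeadmConfigSpec:
    clusterConfiguration: {}
    initConfiguration: {}
    joinConfiguration: {}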
A: Well, I don't know if it's the best idea, but the way that you and I have chatted about it before is: if we were to take what we have from the nested cluster side of things, where we implemented nested machines, and nested machines didn't actually represent physical machines, but we kind of mocked that, then we could potentially utilize what KCP is doing now. So it would mean implementing a little bit more on the infrastructure provider side.

A: We'd just kind of shelve the nested control plane code base for right now if we were to go down this path, and it would be following the same model as what we have with the Docker provider, in essence. The in-tree Docker provider, correct me if I'm wrong, doesn't really use any of the cloud-init or any of those pieces; it just tries to model itself based on whatever kubeadm config gets generated for that control plane. Correct?
D
Yeah
it
will,
it
won't
use
that
directly,
but
it
will
use
that
information
to
then
create
those
machines.
A
And
so
I
think
in
essence,
we're
trying
we're
that's
kind
of
the
path
that
that
I
think
could
potentially
work
doing
a
very
similar
thing,
but
in
our
machine
provider
we
do
some
hacky
things
to
basically
generate
pod,
manifest
from
that
some
some
questions
in
there
is
like
is
there
ways
that
we
can
get,
which
I
have
never
found
and
I
haven't.
I
don't
think
I've
dug
deep
enough
into
this.
It's
like
is
there
ways
that
we
could
find.
A
Sorry,
all
my
all
my
my
desk
just
shook.
Is
there
ways
that
we
could
get
the
pod
the
static
pod
templates
that
come
out
of
cube
adm
that
are
typically
put
on
the
actual
disk
for
a
physical
machine?
And
can
we
get
that
pod
template
and
use
that
as
the
pod
spec
within
a
deployment
or
a
stateful
set
to
to
generate
these
things
without
having
to
actually
have
a
physical
machine
that
runs
the
imperative
or
like
the
the
cubedium
campaign,
commands
on
them?
A
Because
because
I
want
to
leverage
as
much
as
we
can
out
of
cube
adm
to
make
this
thing
a
little
bit
more
stable.
In
essence,.
D: Yeah, so I don't think you can do that specifically, mostly because you would need to run kubeadm, and we don't vendor it, because it's in kubernetes/kubernetes, which is a dependency issue, since we'd be pulling in multiple versions of it. Now, that said, the kubeadm config: if you use KCP, it means you'll use CABPK as the bootstrap provider, and you could use that kubeadm config to then generate your pods as you best wish, pretty much.
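As a point of reference, the kubeadm config being talked about here is the standard ClusterConfiguration that the kubeadm bootstrap and control plane types carry; a controller could read fields like these to render static-pod-style manifests instead of running kubeadm on a host. A minimal sketch with placeholder values:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: nested-cp.default.svc:6443   # placeholder endpoint
certificatesDir: /etc/kubernetes/pki
etcd:
  local:
    dataDir: /var/lib/etcd
apiServer:
  extraArgs:
    # extra flags like this end up on the kube-apiserver static pod that
    # kubeadm would normally write to /etc/kubernetes/manifests on a node
    audit-log-maxage: "30"
controllerManager:
  extraArgs: {}
scheduler:
  extraArgs: {}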
D: So we could probably reuse, as in copy, some of that code for now, and then use this as a forcing function as well, to add to the use case: like, hey, maybe we want to have a staging repo or something like that.
A: That's an interesting idea. So the idea there would basically be: start to copy some of that code out, and start to use this as a "hey, we're trying to leverage these packages, can we start to talk about what it would take so that more of kubeadm is not just vendored in Kubernetes itself". Okay. And so, from a CAPI side of things, we don't vendor any of kubeadm; we've basically copied over a lot of whatever we're actually working with. Okay, all right, that makes sense. And so, just to fully draw out the story and make sure that I'm on the same page as well (Charles, if you have any questions, make sure you bring those up): from a bootstrap provider side of things...
A: That's going to go and write all of the cloud-init scripts, which are then going to be instrumented by a machine controller, which is what actually drops them into wherever they need to be to actually run the script. And so, if we were to take that same sort of model, we would just omit the actual cloud-init scripts, take the kubeadm config from KCP, and try to generate manifests in that control plane provider. Okay, all right. So we would be able to generate the manifests.
C: Now, besides the code, you know, leveraging the same code structure, what are the other benefits? Does KCP handle all the upgrade story and all the upgrade workflows that we could leverage? Because in the first place I was thinking, you know, functionality-wise... I mean, unless we can leverage a lot of workflow that we are missing. If it's just trying to leverage some code for, let's say, certificates...
C: ...we can somehow just copy that piece of code in the first place. Yeah, unless KCP can solve some problem that is tough for us for now, or that we don't have a real solution for, like upgrades, like all the version management that kubeadm supports with a very good workflow.
D: So I know that that's not what you folks do, but the other thing is: kubeadm does a stacked control plane, so it would run the API server, the controller manager, and the etcd pod on one node, and it expects to exec into that node and actually find those pods as well.

D: I don't know how we would want to support that, truthfully. It just uses a client to get those pods, and I believe those pods are named in a certain way, so we could probably use that convention to name those pods in a certain way in the management cluster and it would just connect to itself. But it seems like a hack, so we'd need to test this out.
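The convention referred to here is the standard mirror-pod naming for static pods: the component name suffixed with the node name, in kube-system. A minimal sketch of what a "shadow" control plane pod would have to look like for those client lookups to find it (node and image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  # kubeadm finds control plane pods by convention: <component>-<nodeName>,
  # e.g. etcd-nested-cp-0, kube-controller-manager-nested-cp-0
  name: kube-apiserver-nested-cp-0
  namespace: kube-system
  labels:
    component: kube-apiserver
    tier: control-plane
spec:
  nodeName: nested-cp-0   # must match the (virtual) node name kubeadm sees
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.20.6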
A: Yeah, yeah. So the benefits that I see are, like, the already drawn-out story for upgrades and lifecycle management, and all the certificate management that you already do. We pretty much copied that over, which is fine; in essence we just re-implemented it and used a lot of those same packages. The one thing that it doesn't do, that we still have to figure out within our machine controller, is client certificates, which is where we have to do something I would call hacky.

A: It's basically that every single control plane, every individual component of the control plane, shares the same certs for all of them. So if you have five replicas of the controller manager or five replicas of the API server, it uses the same client certs for all five of them; there are no specific identities between them that we can deal with. So we still have to...

A: We still have that problem within this world, because everything is pods and we can't just automatically run those imperative commands to generate those certs from the CAs. The other benefit, I mean, what Vince is calling out there, I think, is really nice and something we could bring over: it's doing that to do health checks, correct? Like, it's health-checking against etcd.

A: So if something goes wrong in etcd, it'll go and try to help reconcile itself back to the proper state it should be in within the cluster. And so, in our world, if something went wrong with our API servers or our etcd, we could, in essence, have KCP help re-kick those and make sure they're in a proper state for any of the nested control planes.
C: Is the etcd using physical storage, I mean, or are we just using in-memory storage in the first place? Because I think in CAPN most likely you will switch the etcd controller to your own implementation, to use the physical storage, right?
A: Yeah, yeah. In theirs, they're going to be using the physical disk to actually do it, which we eventually need to be able to support. I think for right now our implementations are okay, because we're only doing this while we're still in development, but eventually I think we need to be supporting that as well: expecting that you have some PV provisioner or some CSI provider within your cluster that you can actually use to do it. Yeah.
A: Yeah. Now, one of the hacky ways, and this is what Charles and I were talking about as well, is that we could in theory, because we control both sides of this... we can translate a node in the workload cluster into the nested cluster relatively easily. It's hacky, so I don't want to call it out as, like...

A: ...this is the perfect way of doing it; it would be hacky. But we could easily do a bind mount and create, like, shadow pods or static pods within our nested control planes that map to the same pods within the actual super cluster. So we could go and register the pods with whatever names kubeadm actually needs them to have, yeah.
A
We
could
do
that
and
then
we
could
do
within
at
least
in
the
in
the
nested
control
plane,
because
nothing's
going
to
reconcile
those
out
of
it.
If
we
don't
want
them
to
and
then
we
could
also
make
it
so
that
from
the
when
we
fully
integrate
virtual
cluster
into
this,
we
can
make
some
label
or
something
on
those
pods
that
like
doesn't
sink
them
down
to
the
suit,
doesn't
try
to
sink
them
down
back
to
the
super
cluster.
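A minimal sketch of that idea, assuming a made-up label that the syncer would check so these back-populated shadow pods are left alone rather than synced back down to the super cluster (the label key is purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: etcd-nested-cp-0
  namespace: kube-system
  labels:
    component: etcd
    tier: control-plane
    # hypothetical marker: this pod only mirrors an object that already
    # exists in the super cluster, so the syncer should skip it
    capn.cluster.x-k8s.io/shadow-pod: "true"
spec:
  nodeName: nested-cp-0
  containers:
  - name: etcd
    image: k8s.gcr.io/etcd:3.4.13-0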
A
So
it's
not
reduplicating
that
effort
and
we
could
also
do
syncing
of
the
node
object
to
that
to
represent
the
same
exact
naming
of
those
and
because
of
the
an
agent
you'd
still
have
the
exact
flow.
As
long
as
those
mapped
somehow
in
the
actual
connections
we
could
map
it
so
that
we
could
do
the
exact
to
an
etcd
pod
that
was
in
the
super
cluster.
But
in
your
cl
in
your
tenant
control
plane,
you
can't
actually
do
anything
against
that.
A
Example,
I
believe
I
mean
crack
me
over
wrong.
I
like
it's
hacky
and
it
might
it,
but
it
might
work
and
it
would
create
like
a
a
full
control
plane,
like
experience
like
more
like
a
full
like
an
actual
an
actual
cluster.
A
This
is
more
just
on
the
actual,
the
nested
side
of
things.
It
doesn't
actually
have
anything
to
do
with
multi-cluster
as
of
right
now,
although.
A
In
the
long
run
from
the
side
of
like
you
could
run
these
control
planes
across
or
you
could
you
could
do
the
same
sort
of
thing
that
that
phase
already
working
on
from
the
experimental,
scheduler
side
of
things
and
still
leverage
these
things?
It's
just
your
control
plane
host,
would
show
up
in
your
meta
cluster
in
that
world.
But
this
is
really
just
how
to
bring
up
nested
control
planes
and
more
align
with
with
cappy.
A: Yeah, there are a lot of things that brings in that are different from how it's currently implemented. It would just mean that in, like, the kube-system namespace you'd have some pods that aren't actually deployed in your cluster. And I think, in theory, we could even do this where we create the kube-system namespace prior to the control plane getting brought up and deploy those into kube-system.

A: But we just have some controls that say you can't mutate these objects that got back-populated up into your cluster.
C
I
know
for
static
party,
it
will
do
this
way,
but
in
our
implementation
we
would
want
it
the
same.
We
would
do
the
same,
so
we
put
that.
A: We wouldn't have a machine mapped to a physical node, and the physical node wouldn't actually be able to be deleted in a way where it would actually do anything. We would just have to build those controls that say: okay, if you delete this, go delete the pods that were supposed to be on it. I mean, I guess that could work. Yeah, you'd need to link them all together somehow. Does that...

A: Does it expect that, if you do delete a machine, it knows about the pods that are supposed to be on there and will expect that those specific pods got deleted? It wouldn't be that we could just delete one set of the pods, not really caring which ones they are?
D
No
because
the
names
all
have
to
match
with
the
provider
id
and
all
those
fancy
things
like,
there's
a
naming
convention
that
needs
to
be
followed.
D
So
they
expect
those
plots
to
have
certain
names,
and
then
it
also
does
the
linking
between
depth
machine,
slash,
kubernetes,
node
and
the
scd
member,
because,
for
example,
sometimes
it
can
happen
that
the
sd
member
is
unhealthy
and
the
apo
the
controller
manager
isn't
because
it
connects
with
another
ltd
or
vice
versa.
D
So,
when
we
detect
that
we
make
sure
that
we
still
have
water
before
doing
any
operations
and
then
remediation
kicks
in,
we
delete
the
machine.
The
whole
chain
of
pods
gets
deleted
and
we
create
a
new
one
or
you
can
set
it
up
to
create
a
new
one
first
and
then
delete
the
old
one,
which
is
the
default.
I
believe.
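To illustrate the linkage being described: KCP correlates each Machine with its Node through the provider ID (and the node ref), and from the node name it finds the matching control plane pods and etcd member. A sketch of the pieces that have to line up, with an illustrative provider ID scheme:

# Machine created by KCP (names and providerID scheme are illustrative)
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: nested-cp-0
spec:
  providerID: nested://nested-cp-0
status:
  nodeRef:
    kind: Node
    name: nested-cp-0
---
# Node reported by the workload cluster for that machine; kubeadm's static
# pods on it are then named etcd-nested-cp-0, kube-apiserver-nested-cp-0, etc.
apiVersion: v1
kind: Node
metadata:
  name: nested-cp-0
spec:
  providerID: nested://nested-cp-0   # must match the Machine's providerID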
A: Okay, that's interesting. So we would, in essence, basically... so we really wouldn't be able to leverage Deployments or StatefulSets if we did this; we'd have to be deploying static pods ourselves. But we would set up the naming properly to map to machines, because the naming that would come out of Deployments or StatefulSets wouldn't actually work.

A: If you did change something, it has that naming scheme: API server one is on host one, and controller manager one is on that same host, and we just need to make sure that that stack gets torn down.

A: Those are really interesting, in my opinion, but the machine controller would end up having to be pretty fancy, in essence, to be able to go implement these things. Yeah.
C
My
my
worry
is
is
because
that
our
assumptions
are
completely
different
right.
It's
part
based
another.
Is
you
know
no
machine
based
yeah,
I'm
just
don't.
I
don't
know
what
ending,
if
you
ending
have.
What
is
the
user
experience?
So
I
still
I
yeah
I'm
listening,
but
this
is
again.
It's
not
gonna
for
cloud
base
is
different
right.
You
change
providers,
so
nothing
just
for
default
yeah.
We
still
need
to
figure
out
all
the
potential
problems.
If
you
expose
of
the
pot.
A: Now, yeah, that's why they have their own etcd, I mean the EKS control plane provider. What about, so, like, the AWS provider, not the EKS provider specifically, or any of the others?

A: The ones that are just physically on hosts, or even the vSphere users of that. Do they see any concerns about having the control plane pods showing up in the cluster, or is it just that you can't really operate on them: you can't change the API server flags, you can't do much to them other than exec'ing in to health check? I guess exec'ing in would provide a lot more control than we might want.
C: Yeah, for example, for that pod, you can still read all the configuration, like how many resources you give to that pod, like the requests and reservations. I don't know, from some cloud providers' perspective they may want to hide that information; they don't want to fully expose how many resources they give a control plane. I may just be wrong.

C: I think people... I don't know. So, if I'm a cloud provider, I would like to hide it; I don't want to tell people how many resources I use for this, especially if they have some kind of charging model. You know what I'm saying: if you're charging money for the control plane, but you show all the configuration and all the resources you use, maybe that's not a very good idea. But for our, you know, upstream people, okay...
A
Yeah,
we
can
also
just
not
sync
what
the
resources
look
like,
because
we
control
the
pods
we'll
just
make
fake
completely
fake
pods,
we'll
make
it
look
like
they
use
so
much
memory
and
cpu
will
just
charge
up
the
wazoo
forever
for
whatever
customers
using
your
control
plate,
you
can
do
whatever
you
want.
You
know
just
fake
the
entire
look,
so
that
you
can,
you
know,
make
more
money
off
of
people
just
kidding.
Okay,
now,
vince
specifically
like
do
people.
A
Do
people
see
problems
with
this
or
from
a
platform
side
of
things
when
people
are
building
on
any
of
these
providers.
They're
kind
of
like
it's
awesome
that
you
can
see
the
actual
control
plane.
Components
like
it
ends
up
being
like
a
kind
cluster.
Where
you
see
everything
that's
deployed
that
is
necessary
for
the
cluster
to
run
within
the
cluster,
which
I
think
is
cool
from
an
observability
side
of
things
like
I
can.
I
can
investigate.
D
No
not
really,
especially
because,
like
you
could
with
doorback
like
you,
could
just
like
not
give
anybody
permission
to
fun
to
do
anything
on
cube
system
right
for
that
that
matter,
and
if
someone
has
the
ability
to
do
so
like
you
have
handling
permissions
and
you
go
and
mock
it
with
the
with
any
of
those
resources.
I
mean
we
cannot
stop
that
like.
D
But
if
you
do
have
permissions
right
like
we
can't
stop
that,
but
at
the
same
time
like
exposing
the
control
in
the
like
truthfully,
the
back
control
is
that
it's
in
cluster
api,
it's
not
on
the
user
side.
While
you
can
change
things
out
of
bend
like
they
can
either
get
reconciliated
or
they
will
be
overwritten.
The
next
time,
like
a
rollout,
happens.
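A minimal sketch of the RBAC angle being described: tenants can be granted read-only access to the control plane pods in kube-system without the pods/exec subresource, so they can look at the components but not exec into or mutate them (the role name is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: control-plane-viewer   # illustrative name
  namespace: kube-system
rules:
# read-only access to the pods and their logs
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
# no rule grants "create" on "pods/exec", so kubectl exec into the
# control plane pods is denied for subjects bound only to this role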
A: Oh, I hadn't even considered that. So yeah, if somebody were to have RBAC permissions and did exec into the API server and somehow changed API server flags, in theory that thing is going to get re-reconciled again to be back at the right state, based on whatever CAPI provided. So you won't really... you don't...

A: Yeah, one or many, but yeah, we'd have to create some sort of fake, yeah, virtual node that we deal with, that would represent that same exact naming scheme as well.

A: Yep, and then we need to make sure that that machine, that virtual node object, is also doing all the binds to make sure that the right pods show up on that host. And then, as long as the kubelet endpoint within there is set properly, we'd at least still be able to do logs and exec and all that stuff. I mean, I want to say it's relatively easy, but it's really... there aren't that many barriers, potentially, to being able to implement something like this.
A: Yeah, yeah. Definitely, although there are some...

F: Subtle things in there, let me go ahead. Yeah, you know, there's kind of a coupling in terms of the API server; you can implement it this way. I think we did some optimization to make sure that, you know, one agent can connect to multiple kubelets, but the original design was one-to-one, one agent to one kubelet, and...
A: Yeah, yeah. If you basically did it so that the control plane hosts were only reached through a single endpoint, it would be easy to do the traffic management of where to send things. Actually, no, behind the scenes it would just go through the API server, the actual super cluster's API server, to do those execs and logs.

A: So even if all the rest of the in-cluster nodes were done with the standard one-to-one kubelet-to-vNode setup, we could have the control plane nodes not done that way, so that we could genericize the implementation. Even if you went through it all, say, for instance, you had three control plane nodes, the faked control plane nodes, those would all have the same...
C: That brings up a kind of interesting thing: if you do it that way, if you expose the control plane pods in the VC, that means the tenant can somehow audit everything that happens. So...

C: ...which doesn't exist for now. For now it's, you know, more like a managed-service kind of thing, so all the maintenance work we delegate to the provider, because the provider only gives you an interface for operating against the API server. But this way, I'm kind of exposing more.

C: If you want to audit it yourself, you can do that. That's probably a kind of new behavior, yeah.
A: I wonder if there's... I mean, I guess these two designs technically aren't, they don't have to be independent. We don't have to pick one single route for this, because it sounds like that could be implemented as an additional piece that says: okay, you can have a nested cluster that's more like a traditional cloud provider's solution, where you can't see anything and it's completely hidden.

A: You have to manage it from the management cluster, and you don't ever show any of this stuff; you just give people a Kubernetes endpoint and they go do whatever they want. Then there's the other side, which is: okay, we can also add on top of this and say, if we went down the path of KCP, we could have control planes in the cluster. And those are just like... AWS has the AWS provider and AWS has the EKS provider; you get both routes. Yes.
B
Yeah,
I
just
have
a
question,
so
it
sounds
like
we.
The
problem
is,
the
pod
will
be
exposed
to
the
end
user
right
to
the
nested
cluster
and
the
user.
So
is
that
possible
because
the
pod
is
going
to
be
created
on
a
super
cluster.
So
is
that
possible?
We
just
not
synchronize
this
static
part
back
to
the
tenant
cluster
and
then
the
user
will
not
be
able
to
see
all
these
parts
right
and
I
think,
could
be
admin.
Controller
control.
Plan
controller
does
not
need
to
access
the
tenant
master.
A: I think that's not correct; Vince, just correct me if I'm wrong there. KCP needs to have a machine mapped to a node within the tenant control plane, and so it has to have a physical node that shows up in the tenant control plane, which means the pod has to show up in there.

A: So even if, from the management cluster side of things, you could see the pod and you could exec and do all of the health checking against it, that wouldn't actually suffice, because KCP would just try to bounce the control plane multiple times, since it couldn't exec through its own tenant API server.
A: As well... interesting. So that might be something we could leverage, because etcd might be a little bit more scary to show in a control plane. I might be more okay, I mean, I'm making a bunch of assumptions here, but it might be more secure to show what the API server and controller manager are doing, but hide etcd.

A: If we wanted to, we could treat that as external, even though it's just deployed as a pod in the control plane, and still do health checks against the API server, which is a potential path as well.
D: I was trying to think of alternatives. I was wondering if we could create, like, a client that connects to a cluster, but that cluster is just a bunch of resources, so, like, another shim, I guess. But then you would need to route those requests to the right pods.

A: Through the API server to etcd as a static pod. So what we really need to make sure of there is the connection: that the API server registers the pods that we want to be able to connect to, and everything else should be relatively fine, because nodes don't actually matter as much as getting to the pod based on some specific naming sequence or scheme.
D
Yeah,
I
was
yeah,
I
don't
know,
I
don't
know
I
was
thinking
like.
Maybe
there
is
a
way
to
create
a
fake
node
in
the
management
and
then
just
run
your
radium
on
it,
but
that
also
doesn't
solve
the
problem
because,
like
you
have
this,
your
network
is
split
right
because,
like
you
have
to
work
our
notes
on
some
somewhere
else,
which
complicates
a
little
bit.
A
I
think
this
is
still
doable.
I
mean
like
it
like,
if
we're
okay,
with
some
of
the
caveats
there
about
pods
showing
up
in
control
planes.
I
think
this
is
relatively
doable
with
a
couple
custom
sinkers
and
we
could
write
a
out
of
tree
out
of
vc
custom
sinker
that
you
deploy
specifically
for
this,
so
that,
if
folks
didn't
want
to
actually
do
it,
we
could
just
sorry
I'm
removing
somebody
if
we.
A
If
we
didn't
want
them
to
actually
it
like,
if
you
didn't
want
to
deploy
this,
you
could
deploy
it
separately
and
not
use
this
specific
syncer,
because,
just
like
you
have
fey,
you
wrote
the
the
custom
syncer
and
internally
we
have
our
own
custom
syncer.
So
we
can
support
crds
and
things
like
that.
We
could
easily
do
that
kind
of
function
and
not
overwrite
anything.
That's
in
virtual
cluster
not
enforce
any
changes
there,
except
for
somehow
supporting
some
flag
to
allow
us
to
backport
specific
resources.
D
We'll
need
to
think
about
it,
and
maybe
this
is
a
it's
kind
of
it's
kind
of
a
lot
to
right
now
to
take
a
kcp
and
make
it
like
non-cuban
demo
aware
because,
like
it
will
have
those
expectations
and
the
code
is
really
tied
up
together,
although
truthfully
like,
if
we
think
about
it
like
it,
would
be
really
really
great.
If
the
api
of
a
control
plane
provider
were
actually
separate
and
then
you
would
just
implement
those
bits,
but
that's
like
a
whole
like
whole
thing
that
needs
to
happen.
A: Yeah, all right, this is going to require some more work, and we should definitely confer over Slack on this, because we're almost at the end of the hour, and I want to make sure to give folks at least five minutes back. We didn't get to milestone planning; can we do that over Slack? Do you all agree that we can just basically do some filing of issues? I'm thinking of things... sorry to derail this conversation for just a second.
A
We
can
go
back
to
it
as
well,
but
we
can
do
things
like
filing
some
issues
for
how
to
integrate
cap
n
and
vc
I'll
get
an
issue
filed
for
doing
an
actual
release.
Vince,
I'm
probably
going
to
poke
you
about
that.
Just
to
make
sure
that
I
have
all
the
right
things
like
the
the
open,
open
questions
I
have
there
like
do.
I
need
to
create
a
project
within
google
cloud?
How
do
I
get
access
to
those
kind
of
things?
A
How
does
proud
push
those
I've
never
done
any
of
those
things,
so
I'll
probably
poke
you
pretty
soon
about
that,
but
yeah.
I'm
thinking
I'm
thinking.
If
we
can
file
some
issues
and
then
start
triaging
those
so
how
to
get
virtual
cluster
actually
integrated
so
meaning
we
probably
need
to
have
a
way
to
supply
it.
Out
of
a
non-virtual
cluster
provided
cube
config
to
virtual
cluster,
like
we
probably
want
to
add
to
the
virtual
cluster
cr,
an
actual
cube,
config
field
or
some
something
like
that.
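Roughly what the proposed addition might look like on the VirtualCluster CR: a field (or secret reference) carrying a kubeconfig provisioned outside of virtual cluster, for example by CAPN/KCP. The new field name below is purely hypothetical, since designing it is exactly the issue to be filed, and the existing API group and fields shown are approximate:

apiVersion: tenancy.x-k8s.io/v1alpha1
kind: VirtualCluster
metadata:
  name: tenant-a
spec:
  clusterVersionName: cv-sample
  # hypothetical field: point the syncer at an externally provided control
  # plane instead of one that virtual cluster generates itself
  externalKubeconfigSecret:
    name: tenant-a-admin-kubeconfig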
C: That's okay, because it's kind of a provider thing. We already have two providers: we have a cloud provider and we have a native provider, so we just need to switch the cloud provider to a new version, or whatever, because whatever we do, we need to have the kubeconfig; you want to get a kubeconfig from somewhere. If you don't want to change the spec, you can always hack your way, I mean a hacky way: you put the kubeconfig in an annotation somewhere. Yeah, that's something that we actually did internally to work around some problems.
A: Cool, all right. So yeah, if we can just do that over Slack throughout the rest of the day or something like that, I think it'll be awesome to get some of those things filed, and then we can plan out what the 1.x actual release is going to be. And then I can work through getting the main branch up and work through getting the Prow jobs, so we can get our first initial release created, and then we start moving this thing forward a little bit more.
C: Reasonable, cool, yeah. I think for the KCP stuff, I think it's pretty interesting. We haven't sorted through more about this, so I see the benefit and I see the kind of drawbacks. So maybe, if someone feels it's very interesting and it doesn't really take too much work, they can try to do some PoC and give it a try. I mean, don't worry about the syncer; I think the syncer is the last problem. The major problem is, first, the workflow and what kind of problem we solve, and...
A: Yeah. Charles, I know you had some interest in this; you and I can continue talking as well, if you wanted to run with that.

A: Cool, all right, I'm going to call it then, and give everybody a little bit of time back, and we'll chat over Slack for the rest of the day.