Description
#sig-cluster-lifecycle #capn #capi
A: Good morning, everybody. This is February 23rd, and this is the Cluster API Provider Nested office hours, where we talk about the progress we're making on building out the Cluster API implementation for nested control planes in Kubernetes clusters. We have a pretty... well, we have an empty agenda, so we might be able to end early unless somebody has things they want to bring up and talk about.
B: Yeah, I'm just curious. I think Chao's etcd controller has been reviewed and is being checked. Why is that?
A: Yeah, I think there are a couple of changes on there. There was feedback that I checked in on this morning, and it looks like there's still some unchanged or unfinished work.
A: Yeah, we were chatting a little bit last week about the CommonSpec and channel changes. I think the big change still in there is that we still have some of the PKI generation in that controller. We can merge that, I guess, and just pull it out in the next PR, the change that moves it over to the NCP, if we want to go forward.
A: Yeah, so if you go check out the Cluster API Nested Slack channel and scroll up a couple of messages from Thursday: we were talking about whether we wanted to move forward with the current setup or go fully into the kubebuilder declarative pattern and use the version and channel fields that are defined in the CommonSpec.
A: Yeah, I think the question here was: do we want to go down the path of using all of the built-in reconcilers from the kubebuilder declarative pattern? If you haven't looked at the inner workings of how that project is set up: it basically writes the reconcile function for you, so you don't even have to implement it. You just supply it with that CommonSpec, and it goes and looks up which manifest it's supposed to grab.
A: So in the current etcd PR we have everything hard-coded as Go types to create the StatefulSet and all the components. Well, sorry, it decodes all of that from a YAML file in that controller. If we went the full kubebuilder-declarative-pattern route, it would automatically create a lot of those resources for you, based on what's supplied in the channels URL, or channels pointer, in essence.
A: Yeah, so that points to a YAML file or a custom... well, it's all YAML, but it can be kustomize-driven or just raw Kubernetes manifests.
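For reference, the "channels pointer" being described follows the kubebuilder-declarative-pattern addon layout, where a channel file maps a channel name to versioned manifest packages. A minimal sketch of that layout (the component name and version here are illustrative, not taken from the actual CAPN repo):

```yaml
# channels/stable -- the channel file the declarative reconciler resolves.
manifests:
- version: 0.1.0
# The raw (or kustomize-driven) manifests would then live under e.g.:
#   channels/packages/nestedcontrolplane/0.1.0/manifest.yaml
```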
A: So I think the plan was to move forward with this, and then we can iterate, or we could change the implementation as we go forward with the rest of the controllers. But I want to make sure... yeah, I can ping Chao, or if you want to ping Chao, we can check in on those last changes and see if that's been done. The other one, actually, that is good to call out, if you can get...
A: If you have a chance, or if anybody has a chance, can you just go through that other PR, the NCP and NC doc? The nested controller and nested control plane doc is kind of...
A: Yeah, no, that makes perfect sense. Okay! Well, that's also a good call-out. So Judong, Vince, Koscheck: if any of you have a chance and can go give this a read-through, the nested control plane and nested cluster proposal, that'd be fantastic. Just get some extra feedback in there, and if it's good, just give it a thumbs up.
A: Cool, moving over: so, the CAPN goals and the high-level user stories.
A: Those are basically taken care of by the NCP and NC doc that I have up for the proposal, so those will get closed out. That's technically also what the one above it is, which is the base architecture; those are just to gather people's requirements in separate spaces for user stories and goals. So those bottom ones, three, four, five, and six, should get taken care of by that PR.
A: There's an open issue about updating the design resources. So, Vince, you had mentioned that we had some fancy stuff within CAPI to basically generate some of those. I moved over and used that to generate the diagrams for the NC and NCP doc, so at least the Makefiles are in there now, and that can get done at any time. I don't think it's a pressing issue; that's just version-controlling those.
B: One thing, Chris. If your design is merged, then basically we have made an agreement that the PKI stuff will be done in the centralized controller, not with individual components taking care of their own PKI. So Chao's component design needs to be changed, because that doc says that each component maintains its own.
A: Yeah, I think there's another call-out that you had commented in here; just to call this out (oops): you had added that we should update this, since it still mentions the CA being created by each component. Yeah. So.
A: It sounds like we basically can, yeah. So we've got 18, which is going and updating the design, the actual diagrams, to use PlantUML. Then we have the base scaffolding, which I believe... no, that didn't actually get done, because we don't have all of them in there. That's right, okay, cool. All right, then we have the create-controller issue, which is what Chao's been working on. I started this work and then skipped off to go...
A: ...do the NCP work. So once we get the NCP work in, we can actually start that in parallel and basically create the controller for that. So I'm actually going to make another issue, that is, create the controllers for the NestedCluster and NestedControlPlane; I'll make two separate issues.
A: Pretty much anything in here can be picked up, as long as it's been documented in the design docs, is what I would say. So what we don't have: I think there are three issues in here that we need to make, because we need one for the controller manager, one for the control plane, and one for the cluster controller. So...
A: I'm just going to link out to this one, and then I'll go fill out the information on these so I don't have to type it on here. I'll do all that outside of this call, so as not to waste all your time.
A: Sounds good. Real quick, just going back up to the top, or rather to the middle: the Admiralty one, yeah. This thing, the feedback that we were getting from K Fox. I think you had...
A: Yeah, so the idea: K Fox has brought up whether we could bring this into CAPN, to basically use that multi-cluster controller at all, or leverage any of the bits that they've done for multi-cluster scheduling, similar to what you're implementing. I'm going to assign this to you, and then I'll let you handle it, because I think you're doing something similar already in VC, so...
F: Just one more request: we usually tend to ask for it to be the other way around, so that non-CNCF projects integrate with us, and we don't use them.
F: A good call, because there's a term of ownership: it has to be owned by the CNCF, to make sure that we don't make decisions for users that they wouldn't want.
C: On Vince's suggestion: yeah, I'm really interested in the virtual kubelet, so I'll take a look as well, and if I need help I'll try that. I will take a look at this item.
F: Okay, I guess it's in Sandbox still, so we'll definitely wait for Incubating first, usually. And yeah, I mean, if that's useful for you all, I guess you could start looking at it.
A: No, I'm still going to assign it. Okay, yeah, cool. Yeah, I'm going to assign it to you, Fei, if you want to handle it, if you want to just triage that. It sounds like we're probably just going to push this off, then, and just say we're working on other things for right now, and as we get virtual-kubelet into Incubating, we can evaluate how it fits into this world.
A: Oh, okay. Okay, true, yeah, because it should just register itself as a kubelet, so it shouldn't be anything special for us. It should just be them integrating, like Vince was saying. Cool, all right. So, the other ones: we have the etcd and nested API server. I'm just going to assign this to myself.
A: Yeah, I'm just waiting for that to get merged before I go and make some changes and then have to recycle through that. Cool. So we've got the NestedCluster and NestedControlPlane types, and we have the controller manager.
A: All right, does anybody want to pick up the control plane or cluster types and controllers, if you've got cycles? If you don't, that's fine.
C: I can take a look first and let you know after that. Okay, all right, which ones? Yeah, the create-cluster-controller and the control plane controllers. Okay, I will take a look first and see how much work it is and how much I can get involved on that.
A: Well, I'll add more into the issue after this meeting, just so you don't have to see me type.
A: Cool. Do we have more issues that we want to get filed in here? I know I've been doing... I know, Fei, you've been doing a bunch of stuff in VC land still, and I'm doing a couple more things, and I think Wei is also doing a couple of things in there. So we've let this sit for a hot second. I'm not sure if there are other issues that we should file as of right now, or if we can just move forward with these. Yeah.
A: I don't know, and I'm not sure... yeah, I'm not sure what we should be working back towards, or from.
B: And then, I mean, I suggest that we discuss this and make some milestones. I mean, it's just planning, sure, but otherwise it is kind of hard to track the progress, and that makes sense.
B: If you need those... I mean, we also need to think of how to move the VC stuff to this repo. Everything that cannot be done in parallel, I think we have to do sequentially, because it's about the repo organization, how we agree on that, because we may have two different CRD API groups mixed together. I mean, maybe somewhat later; it has to be temporary, but that has to be done until, you know, we have some concrete NCP ready already.
A: For sure, that makes sense. Real quick, Vince: from the CAPI side of things, managing milestones and all of that, are there any common practices that I should be following, or we should be following, for things like the actual milestone that you're tracking against? Can it be something like just v0.0.1, or should it be just "MVP"? Are there any patterns that we should be following, or is it loose? The MVP we did was zero one.
F: Zero one zero, just because, if it's 0.1.0-alpha.1, that's what I would target. And then usually we create two milestones: one for the .0 release and one for the .x release. So everything in the .0 release is release-blocking, and the .x release is what's going to be in a patch version later, but it cannot be breaking changes.
F: Which is kind of like a catch-all for everything that's not for right now, like where you need a proposal or something like that. And then every cycle we kind of reconcile all the open issues to see if we want to put everything in a new...
A: ...in a new release. Okay, so we're going to have a "next" one as well, so three. Okay. I didn't set any dates, as you can probably already tell on these.
A: So do I want to work back towards, or do we want to work towards, some specific milestone? Is there anything that you're trying to hit time-wise, Fei, or...
A: Sure, no, no, of course. I mean, we're looking at... so we're at February 23rd. Do we want to try and get these controllers done and an MVP in place by the end of March, end of Q1 of this year? Do we think that's reasonable? That's reasonable, at least to me.
A: So we would get these controllers done by the end of March, and then for end of April we'd set the next milestone, which is integrating VC into it. That's what you're saying? Okay, I like that.
F: Is that a 1.0 for the end of April, though?
A: I wouldn't consider it a 1.0. I think we would still be... that would just be integration. Well, correct me if I'm wrong, Fei, but I would assume that would just be integrating what we have in VC, so that you can leverage these nested control planes with VC, but still not a full 1.0 release. I think we'd still need some cycles on that.
B: One point: what's the criteria here for 1.0? I mean, I have actually never had any project that I was working on reach 1.0; we just iterate on zero versions, 0.1, 0.2, 0.3, and so on. I don't know. I have some criteria for 1.0 on other projects, but I don't know, for CAPN, what does 1.0 mean? It has...
F: ...for now. But yeah, so the 0.1.0 release is usually expected to be something that works; it would be the first release. And if you need multiple steps during the next couple of months, you could also release beta tags, so pre-release versions in GitHub releases.
B: I think, yeah, there are a couple of things, right? Since this repo is about CAPN, if you look at the theme, it has to be... I mean, as long as it works, you reach a milestone. This repo doesn't necessarily, because of the naming issues... although it is combined with VC, I don't think this repo is 100% representative of VC, although we may represent this repo as VC. So I can see that, you know...
B: 0.1.2, .3, .4, that's okay, but whether it has to be 0.2 is my opinion. So if you think, you know, once we bring the VC stuff in, you make it 0.2, that's also fine. But I'm assuming every bump of that release number, like to 0.2, means you have some innovation on the CAPN stuff. This is my understanding.
B: Cool, yeah. So yeah, that's my understanding: because of the naming, the .2 has to be something new for the CAPN architecture, either architecture or functionality. But for all the, you know, additional stuff, even for the VC stuff, we can add to the small minor version number, if you track it.
A: Yeah, that's perfect. Then that's what would fit into our 1.0. It's just that you won't be able to schedule any pods from it in the nested cluster just yet, but that'll come quickly, yeah. Cool, all right. So I'm not going to set a milestone... well, actually, I'll set this 1.x release as that April one. Actually, no, I'm going to leave that timeline blank for right now, and we can just put stuff into it. I'll...
A: ...add this as "integrating VC with CAPN", and then we won't set a timeline, because we'll kind of wait until we get that 1.0 in place, and then we'll set a timeline once we have that fully closed.
A: Yeah, I've got it due March 31st.
A: I mean, there's one of those programmer... not virtues, but rules, that basically says it's going to be never-ending work if we don't have an actual timeline or a release that we're working towards. So, all right, do we have any other housekeeping items that we want to handle while we're here? I guess we can go through and throw some of these into those milestones.
A: All right, so then everything is basically representing that for right now, and then we can get the milestone plug-in added, and then we can actually use those commands. Cool, that sounds good. I like this; we have an actual plan to move forward on, a little more concrete.
A: Anybody have anything else that they want to bring up? The agenda's still... yeah.
D: Do you want to discuss the syncer stuff, if you want, or...
A: Sure. Do you want to do, like, the open PRs? I mean, the PR that you are working on, right? Yeah, we can talk a little bit more about that.
A: Yeah, your most recent feedback was basically about the flow that I'm missing, so yeah. Let me give you the background. I think you...
A: I think you fully understand what I'm going to be implementing, but in essence I have another pull request that I haven't finished, which adds a webhook that basically just handles creates. Creates go through and, following this same exact flow, it just goes and adds the adoptable annotation when it creates the resource, and it expects that when the syncer picks it up (I won't walk through the code) it goes and checks to see if that annotation exists.
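As a rough sketch of the create-webhook plus syncer check being described here (plain dicts standing in for Kubernetes objects; the annotation key and helper names are hypothetical, the real ones live in the PR under discussion):

```python
# Sketch: the webhook tags tenant-created objects as adoptable at create
# time, and the syncer later skips the should-delete mark for such objects
# instead of treating them as orphans.

ADOPTABLE_ANNOTATION = "capn.example.io/adoptable"  # hypothetical key

def mark_adoptable(obj: dict) -> dict:
    """What the create webhook would do: stamp the annotation on creates."""
    obj.setdefault("metadata", {}).setdefault("annotations", {})[
        ADOPTABLE_ANNOTATION
    ] = "true"
    return obj

def is_adoptable(obj: dict) -> bool:
    annotations = obj.get("metadata", {}).get("annotations") or {}
    return annotations.get(ADOPTABLE_ANNOTATION) == "true"

def should_delete(p_service: dict, has_tenant_counterpart: bool) -> bool:
    """Syncer side: orphans are deleted unless they are awaiting adoption."""
    if has_tenant_counterpart:
        return False
    return not is_adoptable(p_service)
```

The failure mode raised just below falls straight out of this sketch: if tenant-side admission rejects the object after the pService was created, the pService has no tenant counterpart but carries the adoptable annotation, so `should_delete` returns `False` and the orphan just sits there.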
A: Now, the flow that you're talking about, which isn't covered: we create the service, we change the service spec, the vService... then the vService is created. What...
A: So I check to see if it's an adoptable service; we go through here. Let me actually open up this full file. So I did add that after you left that feedback. There is still a workflow in here that is bad, in my opinion; well, there are two parts that are potentially conflicting.
A: So once we get the list of services from the controller, we go through and check to see if each should be deleted, and if it's not an adoptable service, we add the should-delete. But if it's an adoptable service, we don't add that. Now, the one place where that breaks down is the admission lifecycle in the tenant control plane: say the tenant went and added in something that validates...
A: We would have created the service in the super cluster, admission at the tenant control plane will fail, meaning we still have the orphaned service in the super cluster, but it looks like it's adoptable, and so it just sits there. That's a case that I definitely haven't handled yet, and I'm still trying to figure out exactly how to handle it properly. Yeah.
B: So it's okay, I mean, but you have to stop this behavior. I'm okay if you come up with saying: just don't create it, leave it as is, and let people figure out how to resolve it. I mean, or, if the pService is deleted, you need to delete the vService, because you need to notify the tenant.
A: What downstream could that affect, if a controller was looking for those? Basically, in essence, what could happen in their tenant control plane that they wouldn't expect, if that were to be the case? And I've been trying to figure out if there's a way that we could do it otherwise, where, if the pService is created or deleted in the super cluster, we still have some allocation of that...
A: But I don't think that's possible, specifically for the cluster IP, because that's the only thing that we're trying to deal with there: making sure that the cluster IP stays the same. And it's almost like...
A: If it was already... it would just get reallocated to something else, or it could potentially get reallocated. Because what I was thinking there is: if there was a flow where, if it's an adoptable service and the pService is gone, we go and actually try to recreate it with the cluster IP that was originally assigned to it. But that could have been reallocated, so that's impossible.
A: Sure, no, I'm sure it will. I'm sure it will; just an accidental delete on a service in the super cluster would just tank that cluster from actually operating properly. So maybe what should be done, then, is... are you thinking that I could implement a deletion check within the super cluster, to check whether that virtual cluster exists, or the virtual cluster service exists?
A: I mean, I could, but that's just more work for the super cluster, to then try to triage back and forth.
A: Oh, that's an interesting idea; I hadn't even thought about that. Yeah, okay, interesting. So we could basically have... if it's been created by the tenant control plane through this adopting phase, I mean, you...
A: Okay, that's an interesting idea. Yeah, that seems like it would actually solve that problem, because then, in essence, we wouldn't be able to delete that from the super cluster. The only thing that we still wouldn't be handling is if the entire super control plane went out and was unrecoverable from etcd, yeah.
A: The only problem there is if that doesn't catch... if they actually delete it in the VC. So there's the phase there where I kind of do have to handle that in the syncer. Potentially I can still put this under the same feature flag, then, because if you delete... like, if I wait on a controller to check... I guess I can have an outside controller that's listening for the services in the tenant control plane as well, because...
B: I was mentioning the PR, which is, I think... let me speak. So it is possible that people delete the vService and create a service with the same name immediately. So then who can clear the old one?
A: Yeah, that's a good point. I had originally basically set it up to do find-or-create, in essence: it would go and, if it found one that already existed, it could leverage that and just take it over and re-adopt it. But there are probably some issues there from a security standpoint, where, if you adopted a service that was associated with other... well, no, it would just get re-reconciled at the end of the day.
C: Chris, do you think it's possible to get rid of the current VC syncer service part and create a totally new one, with your webhook combined with the super cluster service creation? We've got a mix with the current VC syncer service synchronization.
C: We always have kind of a loop: either you create that one, or the VC syncer recreates or re-deletes, this kind of thing. So why not just get rid of the current VC syncer service synchronization method totally? Then everything goes through your super cluster plus webhook for the service.
C: If you have root privilege and can control the super cluster, the user has to bear with that. This is by design: you cannot compete with the admin in the super cluster.
C: Okay, so this is, this is the kind of...
B: This is different. It's not saying that, I mean, we create a hole for the tenant to use the virtual resource, the super resource; no, not that way. It's like a proxy: you know, the syncer proxies everybody, and now everybody is as root. And it's a very strict way, because we carefully control everything in the syncer to make sure there's no abuse.
A: Yeah, so your flow, the suggestion that you had, where basically pretty much everything that's implemented here stays; the main addition that we would add is in the checks on a deletion at the super cluster.
A: We go and delete, I mean, in essence, delete and recreate the service against the virtual cluster's control plane, from what the object currently was in there; just basically remove the cluster IP and let it re-allocate itself, because then it just goes through the admission lifecycle again. So the syncer gets a request... sorry: a request comes in to the tenant control plane for a new service, it goes and creates it in the super cluster, life goes on.
A: Somebody goes into the super cluster, or something happens in the super cluster, and that service gets deleted. The syncer notices the deletion on the super cluster side of things, grabs the object, the vService, from the tenant control plane, deletes the vService, and then recreates it with that same exact object, without a cluster IP assigned to it and without resource versions and all of the other metadata. In essence, it would go through the admission lifecycle and create the pService.
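The delete-and-recreate flow just described can be sketched like this (plain dicts, with field names following the core/v1 Service schema; the helper itself is a hypothetical illustration, not the syncer's actual code):

```python
# Sketch: when the pService is deleted out from under the vService, rebuild
# the vService from the existing object, stripping the server-assigned fields
# so the recreate goes back through admission and gets a fresh clusterIP.

SERVER_ASSIGNED_METADATA = ("resourceVersion", "uid", "creationTimestamp")

def prepare_for_recreate(v_service: dict) -> dict:
    recreated = {
        "apiVersion": v_service["apiVersion"],
        "kind": v_service["kind"],
        "metadata": {
            key: value
            for key, value in v_service.get("metadata", {}).items()
            if key not in SERVER_ASSIGNED_METADATA
        },
        "spec": dict(v_service.get("spec", {})),
    }
    # Drop the stale clusterIP so the apiserver re-allocates one on create.
    recreated["spec"].pop("clusterIP", None)
    return recreated
```

Note the caveat raised earlier in the discussion: the original clusterIP cannot be preserved across the recreate, since it may already have been re-allocated in the super cluster.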
C: Yeah, this is why I was a little bit... I have another option, Chris: I use NAT, to NAT the virtual cluster service using the super cluster IP. So I just do a NAT on the port side, so that we can translate locally from the virtual cluster service to the real IP, which is allocated by the super cluster.
C: Why... yeah, the kube-proxy stuff has a limitation, because not every user uses kube-proxy; some users use a different network stack for Kubernetes. So just changing kube-proxy is only one solution; we would like to have a more generic approach than that.
B: But I don't have a strong opinion. If you have good prototyping, that's nice; I don't have a strong objection to that. But people will always have their own opinion about having another sidecar container.
B: That's the reason you probably have to put in... you need a container, so you kind of need to do something: you need to wait until, you know, the entire rule set is up correctly. But it's a bit tricky, because, Chris, you know, we do that because we leverage sidecar containers, so then you can do that. So, Wei, if you want to do that, which means you have root privilege, your pod, your container, can change the host network's iptables. So, yeah, it may not be true.
B: It depends, but I can see some limitations there, because I think people are even talking about getting rid of the pod's, you know, entire privilege for operating on the network stack.
B: If you joined the multi-tenancy working group, they were talking about reducing the capabilities, the policy for the capabilities. I think you'd get rid of a lot of the ability to operate on the networking stack. If you open that up, that's okay, but I mean, it's very scenario-specific. So I agree with, you know, Chris's intention to try to resolve it in a kind of way that doesn't change that part that's hard to change, yeah.
C: You can, yeah. I already did that, already did that; it works. But we just don't want to go that direction, because we need a privilege change, and also Chris's seems a more kind of clean version. So...
B: Yeah, I think, for the first version, just make sure everything works and kind of resolve these corner cases, and try to change as little as possible; at least people can give it a try to see what's going on. This is a hard problem. I mean, we can point out a shortcut, with some additional conditions, to resolve this problem; but if you don't have that shortcut, I mean, for sure...
A: Yeah, there are definitely some weird corner cases here to resolve, for sure. I super appreciate your help navigating these things and calling them out, because I had missed that virtual cluster one, and I'm surprised it hasn't bitten us yet; at least in my testing, it's mostly just that timing issue.
A: Okay, so I'll get the other change in here and feature-flag it as well, just like everything else that's in here, so that it doesn't affect anything that you're doing. And then, cool, that makes sense. How's the scheduler stuff going, just out of curiosity, to check in? We only have seven minutes left, yeah.
B: Yeah, I think we can have a working demo this week, actually.
C: Awesome. So is this your code in the experimental, in the multi-tenancy repo? Oh, this part. Okay, awesome.
B: The work was, you know, maintaining the cache; the scheduling part itself is pretty simple, but at least for the simple, I mean very common, user it can cover 80% of use cases, in my opinion. Yes: you create a namespace with a given quota, and the quota will be spread to multiple super clusters, and each super cluster will have a syncer that just syncs the particular pods assigned to it.
A: Yeah, I mean, if you have a working demo, I would love to check that out next week, if you're willing to present on that.
B: Yeah, yeah. I mean, a working demo in the sense that... I think there are probably one or two changes left, so I assume we'll have it working at some level this week. And next week we definitely should have a working demo, but I've prepared zero documents, so... some documents first, basically, yeah.
C: We'd want to see some of your explanations about the code, and your basic ideas.
A: Cool, yeah, I'm excited to see that. Whenever you're ready, just make sure you add it to the agenda, so we can actually have it scheduled out. You may even take, I mean, take multiple slots, and my...