#sig-cluster-lifecycle #capn #capi
A
All right, good morning, everybody. This is the February 9th Cluster API Provider Nested office hours. As always, this is recorded and posted to YouTube, so don't say anything you wouldn't want to be published to the whole world. Then we can get started. Fei, I added your item to the top, so you can actually just get started on that.
B
Yeah,
just
I
want
to
give
you
guys
a
heads
up,
because
I
think
waiting
was
telling
us.
It
is
not
very
convenient
to
add
a
new
thinker
which
requires
some
code
code
changes
for
the
existing
code.
So
we
probably
we
face
the
same
problem
internally,
so
yeah.
So
I
we
decided
to
you
know
refactor
most
of
the
thinker
code,
for
we
have
done
a
few
steps.
First,
you
know
moving
all
the
moving
order,
the
multi-cluster
reconciler
framework
outside
of
sync
and
put
to
the
util
so
which
means
any
other.
B
Whatever controller you want to implement in the multi-cluster reconciler style, you can leverage the framework, which is no longer tied to the syncer. This also facilitated my scheduler prototype, because my scheduler also needs this kind of multi-cluster controller style. That's the first step. The second step: we're trying to remove all the hardcoded types in the mccontroller.
B
Now it's reflection-based type matching, so there are no hardcoded types anymore in the mccontroller; everything goes through reflection to do the conversion and the type matching. That's the second step. The third step is we slightly changed the registration steps. You guys still remember that in the resource package we have a registration.go: you have to add your code to the registration if you add a new resource controller. So we slightly changed that approach.
B
Basically,
we
move
the
resolution
to
the
init
to
the
unit.
You
need
function
of
each
each
resource
module
so
every
time.
So
then
we
have
a
centralized
place
to
import
every
resource.
You
are
wanting
to
register
so
in
that
way
to
adding
a
new
crd
pretty
much,
you
only
need
to
add
one
line
in
the
resource
import.
That's
it
and
no
existing
code
need
to
be
changed.
So
just
add
one
line
tell
me
which
which
module
you
want
to
import.
Then
then,
then,
you
can
just
add
your
code.
B
Your
separate
directory
just
follow
our
kernel
style.
Editing
need
functions
to
register
yours,
your
crd
and
the
implemented
controller.
The
interface,
I
think
the
change
to
the
existing
code
should
be
minimal,
minimal.
Now
yeah,
that's
that's!
That's
what
what
happens
currently
yeah.
C
Hi, so I have a few questions. I think I've totally followed the progress of this code change, because at the same time I'm also doing this internally. So one question is: are these the final steps, or are you still planning to do more changes?
B
For
the
crd,
this
is
probably
we
will
do
a
few
other
minor
changes,
but
not
in
the
interface
layer,
a
property
interface
in
the
sense
that
we
may
rename
the
the
the
mc
controller.
There
is
again
method
and
and-
and
there
was
another
method
called
get
by
type,
so
these
two
methods
may
cause
confusion,
because
if
you
look
at
the
current
code
after
refactoring,
sometimes
you
will
see
getting
a
getaway
time
use
the
interchangeable,
not
interchangeably,
but
you
can
see
in
one
file.
B
Sometimes
I
could
get
sometimes
I
could
get
by
type.
So
there
are
some
inheritance
in
the
sense
that
get
your
type
you
are
trying
to
get.
It
is
the
type
that
you
registered
to
the
mccontroller,
so
this
is
very
implicit,
so
I've
been
changing
the
method,
the
name
of
get
to
something
else,
be
more
more
explicit,
more
yeah.
C
This
is,
this
is
the
so
I
can
finally
rely
on
this.
This
is
maybe
the
stabilize
the
code
right
if
you
just
get
involved.
Okay,
so
another
question:
that's
in
our
implementation
that
the
the
vc
sinker
is
a
stand-alone
module
which
is
single
service
when
a
single
process
and
our
crd
syncer
is
another
process.
So
the
changeur
made
is
a
designed
to
have
both
crd
and
the
standard
syncer
in
the
same
process.
Correct,
yes,
yeah,
because
so.
B
C
Okay, this is totally fine. Another question: normally for a CRD we have a lot of customized CRD client sets, informers, and listers. All these things can be generated based on the CRD types, and we use that generated code to maintain this client, the CRD client connections, the reconcile. So I can see that with this new change, I still need to rely on this client, right? All this customization.
C
Definitely
so,
in
this
case,
our
code,
the
register,
so
the
scheme
itself,
I
need
to
add
all
your
scheme.
Sometimes
there
still
have
a
little
bit
complication
to
add
the
scheme.
I
think
so
it's
not
one
line
change
for
sure.
I
still
need
to
maintain
an
addition.
So,
on
the
scheme
side,
I
have
to
add
this
new
types
in
so
that
they
can
go
picking,
result
types
and
also
on
the
crd
con
sinker
side.
I
have
reconstructed
this
customized
client.
C
I
cannot
use
a
client
goal
client,
because
this
is
just
a
standard
client
I
have
to
construct.
So
I
think
this
is
the
same
thing
for
your
internal
kind
of
things,
correct.
B
Our
purpose
is
the
first
thing
note,
so
we
don't
want
any
code
changes
to
the
framework
code
so
that
that's
the
point
that's
this,
which
is
inevitable.
You
have
to
add
a
new
quantity.
You
know
tried
client
client
for
the
crd.
You
cannot
avoid
that.
The
per
the
reason
is
that
we
don't
want
to
change
the
framework
code
for
every
new
crd,
because
that
code
is
supposed
to
be
stable
and
reliable.
B
So
that's
the
main
reason
to
do
the
refactoring,
but
I
I
agree,
but
for
the
schema
part
I
I
think
I
need
to
double
check.
So
I
would
I
would
hope
you
know
the
schema
resolution
can
be
done.
B
C
We can do it in the initialize function, that's no problem. But for the testing code: currently you provide the fake client for standard resources. That's fine when you do the namespace or the pod. But for a CRD, sometimes you need both clients, the standard client as well as the CRD client, in order to really do your test. Right now our test framework only gives two sets of clients: the super cluster standard client and the virtual cluster client.
C
So in order to make this work, we still need to maintain a private test framework, so that we can manage all four types of clients: super cluster CRD client, super cluster standard client, virtual cluster client, and virtual cluster CRD client. I don't have any plan to modify that; it seems complicated, this part. It cannot be very generic, because it's really a test framework for my CRD together with the standard client.
B
For
now
I
would
say
maybe
test
framework
you
have
to
duplicate
duplicate
it
yeah.
C
This
is
what
I
did
it's
not
hard
for
me
to
if,
unless
you
have
a
plan
to
change
it,
I'll
wait
for
your
change,
but
otherwise
this
will
be
the
a
little
bit
customized.
B
It's
kind
of
yeah,
I
don't.
Let
me
not
discuss
internally.
First,
I
can
give
you
answer
maybe
tomorrow,
but
my
current
thought
is
probably
we
won't
change.
We
we
we
won't.
We
won't
change
that
framework
because
so
it
is,
let
me
think
about
it.
It's
probably
doable.
C
B
C
If you don't have any change, I will add one more change, so that you can update the client on the fly with a little bit of a small change there.
B
Yeah,
so
so
we
can
do
this,
so
I
think
whatever
experience
you
have
so
far
in
terms
of
testing
or
or
adding
a
schema,
I
think
I
think
we
can
come
up
with
either
document,
probably
not,
if
not
formally,
you
know,
code
changes
but
based
on
current
code.
I
think
I
think
if
you
agree
that
you
know
our
thinker,
you
know
framework
firmware.
Firmware
is
good
enough
is
kind
of
stable,
then
adding
crd
is.
We
can
write
a
kind
of
tutorial
or
documents
about
how
to
address
the
idea.
B
I
I
would
have
I
I
would
use
pack
that
most
of
the
things
just
additive,
not
you
know
changing.
So
it
sounds
interesting
to
new
people,
because
this
this
kind
of
kind
of
kind
of
the
couple,
not
not
a
community
couple
but
99
percent
of
the
couple
with
the
framework.
So
can
you
do
you
want
to
drive
that?
I
think
I
I
will
more
than
welcome
if
we
want
to
do
that.
C
Okay,
okay:
I
can
talk
to
chris
and
we'll
find
some
time
to
work
on
that
with
your
folks
as
well.
Yeah,
you
just
be.
B
Just
just
just
just
share
your
experience
with
document
people,
less
people
can
just
follow
your
experience.
Okay,
okay,
yeah
yeah,
my
yeah,
that's
my
part.
A
Sorry,
just
writing
up
that
last
note:
cool
you're
gonna
outline
a
a
document
about
how
to
actually
use
the
vc
syncer
framework
for
handling
crds
is
what
is
the
in
essence,.
A
Sounds
good
yeah,
all
right,
yeah
you
and
I
can
talk
about
that
way.
Yeah
cool!
I
also
skipped
over
something
we
do
have
one
new
person
kosher.
Do
you
want
to
introduce
yourself
as
well.
D
A
Cool, glad to have you. And then the only update I have is: I still haven't actually published the updates to the ncpdoc. I'm working on that this afternoon, and I should be able to get something up and running. I also haven't had a chance to review the nested etcd PR.
E
Yeah, I just removed some of the code for generating the PKI, the certificates, because you said that part will be done in the nested control plane controller. Yeah, there's not much I wanted to say today.
A
No,
no,
we
had
the
whole
conversation
last
week.
I
haven't
gone
and
updated
it
since
our
since
our
long
discussion
to
add
certificate
management,
con
cube,
config
management
and
maintaining
the
owner
references
between
all
of
the
nested
objects,
so
I'm
still
working
on
writing
up
the
that
information.
A
Okay,
if
you,
if
you
go
on
the
actual
agenda
last
week,
the
action
items
were
for
me
to
do
that
and
I
didn't
get
it.
A
Yeah,
so
I'm
just
behind
over
here
cool
all
right.
Does
anybody
else
want
to
bring
anything
up,
otherwise
we're
at
the
end
of
our
agenda.
F
Yeah
question
for
the
testing
client
you
mentioned
like,
so
is
this
client
copies?
This
is
not
controller
on
time.
This
right
from
my
understanding
what
you
mentioned
before.
F
Got
it
just
destroy
like
a
two
cents
on
that,
like
the
the
generic
line
code,
like
we
had
issues
between
versions
because
like
they
bring,
they
might
bring
in
the
braking
changes.
So
if
you
can
use
the
test
and
for
controller
runtime,
it's
like
much
more
solid
than
like
what
I
would
expect
like
the
client
go,
but
like
just
two
cents
like
I'm,
not
trying
to
be
prescriptive
here
for
sure
yeah.
B
We
I
see,
I
totally
agree
with
you,
because
we
saw
that
the
compatibility
using
other
project
when
we
upgrade
kubernetes
on
116.
He
was
the
generated
triangle
called
brakes.
I
know
that
yeah,
that's
I
I'm
not
exactly.
Experts
on
you
know,
controller
runtime
based,
but
I
I
think
I
can
talk
with
our,
but
this
is
mostly
for
way
actually,
because
you
guys
you
are
trying
to
use
generally
crd
clients,
so
I
I
I
need.
B
I
I'm
not
I'm
not
exactly
sure
internally
what
kind
of
code
generator
they
use
for
the
client
for
the
cld
client,
but
I
know
just
one
project,
but
not
this
project.
They
use
a
client
logo
list.
I'm
assuming
waiting,
also
use
the
crango
based
kind
of
gold-based
client
right.
You
are
using.
B
A
Yeah
we
maintain
off
of
the
just
the
standard
code,
generator
project,
the
actual
clients
and
the
client
set
listers
and
informers
yeah
for
those
there's
no.
A
Technically
that
we
couldn't
switch
that
out
for
something
from
for
controller
runtime
based
and
be
able
to
use
a
little
bit
more
of
the
dynamic
clients.
We
could
talk
about
that
and
see
if
that
would
be,
if
that,
if
that
would
help
any
of
these
cases,
okay,.
B
It
depends
whether
you
are
hitting
the
comparability
issues
we
do
in
the
past.
F
Okay,
one
other
benefit
of
that
is
like
because
I've
heard
like
we
need
duplicate
code.
That's
like
what
kind
of
what
kind
of
client
go
like
generation
code
like
actually
brings
in,
but
the
dynamic
client.
F
So
a
lot
of
people
are
scared
because,
like
oh,
it's
dynamic
like
there
isn't
type
safety
we
actually
made
like
sure
like.
There
is
no
more
runtime
objects
or
like
interfaces
like
all
over
the
place
in
control
around
time
itself,
so
it's
more
stronger
than
it
was
for
sure.
There
isn't
like
a
that
amount
type
safety
that,
like
you,
would
get
from
a
generic
client
better
wait
for
go
2.0
for
that,
but
yeah
I'm
happy
to
help
like
so
like
if
you
like,
our
blog
so
like.
F
If
you
would
like
some
to
go
through
some
examples.
Happy.
F
A
You can also just point to the controller-runtime examples directory; those are decent, to show at least just connecting to a client and not having to set up the whole entire manager and so on and so forth. Because I doubt... it would be a big effort in the vc-syncer to replace the way that it manages controllers, but being able to just use the package...
C
So is this a linked list, and we put all these new clients into this linked list, right? This is what we discussed before, right?
A
We
did
discuss
this
at
one
point.
This
is
when
you
brought
up
the
the
unstructured
unstructured
clients.
This
is
what
we
were
talking
about,
adding,
instead
of
using
unstructured,
unstructured
yeah.
This
is
exactly
that.
B
C
It's
much
easier
if
we
have
this
kind
of
dynamic
client
so
that
we
can
maintain
even
lin
kind
of
a
code
base.
F
Yeah
and
on
the
testing
side,
so
test
m
is
a
like
a
whole
test
environment
in
controller
runtime
that
actually
spins
up
locally
an
api
server
controller
manager.
I
believe
that's
it
so
an
atd
as
binaries
and
like
it
linked
them
all
together.
While
the
test
is
running
so
you
can
also
spin
up
more
in
fact
like
when
you
were
talking
about
like
oh,
I
I
have
a
need
for
like
multiple
like
test
environments.
F
We
have
the
same
need
in
cluster
api
because
there
is
like
a
concept
of
management
cluster
and
then
a
workload
cluster.
F
So
what
we
do
like
in
some
places
is
like
we
spin
up
two
testings
to
kind
of
like
have
two
kind
of
aps
servers
and
ncd,
and
that
helps
a
lot
because,
like
you,
don't
need
fake
generation,
because
the
fake,
the
fake
client
is
actually
really
buggy
and
it
actually
doesn't
sometimes
like
it,
doesn't
cover
all
the
expectations
that
the
aps
server
does.
Yes,
yeah.
One
example
is
like
field
indexes
like
they
don't
support
that
and
or
like
it
didn't
support
resource
versions
like
until
like
not
long
ago.
F
So
it's
like
still
a
lot
behind
so
like
we
actually
like,
are
trying
not
to
use
like
the
fake
client
anymore,
because
it's
not
ideal
to
have
this
super
fake.
You
know
environment
like
it
actually
doesn't
test
anything.
B
But
the
only
thing
that,
based
on
my
past
experience
with
the
controller
runtime
test
framework,
the
problem
is
at
least
I
hate
the
problem.
So
the
part,
the
part
number
used
by
the
edcd
can
be
used
by
others,
and
the
test
framework
cannot
be
started.
I
know
it's
it's
because
my.
F
Random,
it's
random.
Now
so,
like
you
could,
like
I've,
I've
been
able
to
run
like
10
different
test
environments
on
my
machine
locally,
because
if
I
do
all
the
tests
in
parallel,
it
will
try
to
spin
up
all
at
the
same
time,.
B
F
We improved a lot on these things, but yeah, you 100 percent can do it; you can spin up multiple test environments.
F
Actually, we are deleting namespaces today. I'm quite sure that, as is true for Kubernetes in general, you need to delete all the objects in the namespace first and then delete the namespace.
A
Unless the controller removes it when it gets a deletion, but it should still operate, at least. But if you have a finalizer on an object and you're testing one controller but not the other one, and it doesn't actually call the function to go remove the finalizer, then yeah, it will block. But if you're on the same object, if you're on the same controller and the same object, it should be able to run.
F
Yeah,
I'm
quite
sure
that,
like
we
actually
are
deleting
namespaces
these
days,
actually
like
one
thing
that
we
did
for
some
tests
is
to
run
different
iterations
of
the
tests
in
different
namespaces,
so
they're
all
contained
and
they
don't
reuse
the
same
stuff.
All
the
time
generate
name
also
works
great
because
you
know
if
you
want
a
randomness,
and
I
cannot
rely
on
the
same
names
for
tests
yeah
that
these
are.
These
are
all
things
that
like
have
been
useful,
but
we're
just
linked
extensively
like
really
really
expensive
views
on
it.
F
So,
okay,
I'm
gonna,
grab
a
demo
for
next
week
so
that
we
can
walk
through
it
and.
B
Yeah
yeah,
I
think
I
think
my
my
my
biggest
concern
was
the
pot,
so
if
the
product
number
is
not
randomly
generated,
if
there's
a
collision
and
that's
I'll
double
check
but
100
sure
that,
like
now,
you
can
run
multiple.
A
Awesome,
I'm
gonna
throw
one
more
thing
out
there
I
might
bring
next
week
or
the
following
week.
We've
been
doing
something
internally,
so
you
know
we
had
all
those
conversations
about
cluster
ips
in
services
so
faye.
I
wanted
to
kind
of
bring
this
up
again
as
well
as
a
we.
We
potentially
have
a
path
forward
where
that
we're
gonna
use
for
right
now
right
now,
it's
all
completely
it's.
It's
all
changed
in
the
vc
syncer,
so
I
was
gonna.
A
I
was
gonna,
potentially
put
together
a
pull
request
that
has
it
as
a
feature
gate,
but
I'll
bring
a
little
bit
more
information
about
what
what
it
is
and
how
it
works,
maybe
to
the
next
next
week,
if
not
the
following
week
about
how
it's
actually
being
done,
but
in
essence
we
had
to
change
a
couple
things
like
where
services,
so
we
end
up
having
a
web
hook
that
implements
the
creation
of
services
in
the
super
cluster.
A
I
think
I
talked
to
you
a
little
bit
about
this,
but
it
goes
and
creates
those
things.
So
it's
instead
of
instead
of
the
vc
synchro
path
of
being
being
backgrounded.
It's
it's
done
as
a
proxy
for
just
that
resource
and
then
the
vc
syncer.
We
had
to
change
a
couple
pieces
so
that
we
could
get
the
kubernetes
service
to
get
the
actual
ip
of
the
or
the
cluster
ip
of
the
api
server
service,
so
that,
basically,
we
just
do
a
little
bit
of
like
flipping
in
there.
A
So,
instead
of
looking
for
the
kubernetes
service,
we
grab
that
service
and
add
that
cluster
ip
and
then
you
had
brought
up
the
issue
around
uids
and
how
the
vc
syncer
is
expecting
data
to
be
in
the
object
or
else
it
goes
and
tries
to
clean
up
like
the
patroller,
goes
and
tries
to
clean
up
any
services
that
have
mismatched
uids
and
so
there's
some
weirdness
in
there.
Where,
if
you
use-
and
I
think
this
is
what
you're
calling
out.
A
But
if
you
use
a
web
hook,
you
don't
have
the
uid
of
the
object
when
it
gets
when
it
gets
added
into
into
the
super
cluster.
And
so
you
don't
have
a
uid,
so
the
vc
syncer.
I
I've
made
some
like
hacky
changes,
and
so
I'm
interested
to
hear
your
thoughts
on
how
we
could
potentially
do
this
in
a
secure
way
where
it
would
go
and
grab
after
the
fact.
A
If it's a pre-created service, it kind of adopts it, similar to how a deployment can adopt replica sets: it just basically adds the UID and then allows it to sync from there on out. So...
A
Oh, interesting. So if the super cluster service gets deleted, yeah, it's out of sync and you can't go and update the tenant.
A
That's true, that's a very good callout. Cool, all right. I'll noodle a little bit on that and then, like I said, I'll bring...
A
Yeah, no, that's a really good point. I'll continue to noodle on that and see if we can bring up something.
B
Yeah
I'll
be
there
yeah
all
the
rest
is
just
implementation
issues.
I
I
believe
you
can
work
around
it,
but
this
is
fundamentally
yeah
just
because
if
you
have
some
solution
to
that,
I'm
open
to
that
as
well.
Cool.
A
Sounds
good
all
right
everybody,
we
can
call
it
and
give
back
30
minutes.
Okay,
see
you
bye,
cool
thanks!
Everybody!
Thank
you,
see
ya.