Description
#sig-cluster-lifecycle #capn #capi
A
Good morning, everybody. Today is February 16th, and this is the Cluster API Provider Nested office hours. I'm going to trip up on trying to say that every single time, but anyways: this is a recorded meeting and will be posted to YouTube, so don't say anything you don't want posted up there. We have a pretty short agenda so far, so please make sure you add your name to the Google Doc. We have two items on there, and we don't have any new folks, so no introductions or anything like that, and I don't have any PSAs to call out. So we can just jump into this. Vince, if you want to take over and start talking about CAPI dev environments or test environments.
B
All right. So we briefly talked last time about envtest: what's so great about it, what's not so great about it. And I guess there were a few questions about how we go from having nothing to actually having a test environment, and also how we get to a place where we could spin up multiple test environments if we wanted to. controller-runtime actually adds a lot of these things; they're built into controller-runtime.
B
This is the CAPI main branch, and right now we're using controller-runtime v0.8.2, which is one of the latest releases. I'm going to walk through how we set up these controllers and how we test them. There's also a sort of super-type that we have here, this test environment, which is built on top of controller-runtime.
B
So, in case you're not familiar with how CAPI is structured (though I think everybody here is kind of familiar with it):
B
We have a management cluster and a workload cluster, and a lot of times we need to test both. There are a couple of ways to go about this. For example, you could pretend that the same control plane that you're connected to is both a workload and a management cluster, because that is a supported use case. Or you could create two different test environments, each with their own control plane and etcd, and then treat one as the management cluster and one as the workload cluster for the controllers.
B
What we're actually doing here: the first thing you might see is that we define a test environment and say, okay, I want to create a new test environment. What this does is reuse envtest from controller-runtime to say, hey, I want to spin up a new test environment. What envtest does under the hood is actually pretty nice: there is a fetch-external-binaries shell script.
B
Kubebuilder has published kube-apiserver and etcd binaries for both Darwin and Linux, I believe, so far.
B
I'm not sure if there are ARM builds or anything like that yet, but we can add support for it, because we just have to cross-build them. What this script does is download these two binaries locally, into a temporary folder if you don't already have them, for the version that you're specifying. So, for example, here we're testing the minimum version of the management and workload cluster that we want to support; in this case it's 1.19.2 for the management cluster. So this will download those binaries locally and make sure that when you create a new test environment, it will spin those binaries up in a process group. Any questions so far?
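As a rough sketch of what such a fetch script does (the exact script lives in the repo; the bucket URL below is the kubebuilder-tools one that kubebuilder has historically published binaries to, and the version, OS, and arch values are illustrative):

```shell
#!/bin/sh
# Sketch: build the download URL for the envtest control-plane binaries
# (kube-apiserver, etcd) the way the fetch script does, then unpack
# them into a temporary folder. Version/OS/arch here are illustrative.
K8S_VERSION="1.19.2"
GOOS="linux"
GOARCH="amd64"
URL="https://storage.googleapis.com/kubebuilder-tools/kubebuilder-tools-${K8S_VERSION}-${GOOS}-${GOARCH}.tar.gz"
DEST="${TMPDIR:-/tmp}/kubebuilder-tools-${K8S_VERSION}"

echo "would download: ${URL} -> ${DEST}"
# In the real script this is roughly:
#   mkdir -p "${DEST}" && curl -sL "${URL}" | tar -xz -C "${DEST}"
# envtest then picks the binaries up, e.g. via KUBEBUILDER_ASSETS.
```

The download is commented out here so the sketch stays side-effect free; the point is the URL scheme, which is versioned per Kubernetes release and per OS/arch.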
B
So under the hood, the test environment is actually quite complicated in how it ties everything together, especially because it also supports webhooks, which is really nice. It will spin up the API server, it will spin up the etcd that you can see here; it will find those binaries first and then it will try to spin them up.
B
It also configures things like the rest config, so that you can do a lot of requests per second, have a high burst, etc.
B
But the really nice thing is that you can actually configure everything in the control plane. There's a whole section here about the API server: you can set the secure port if you want to, you can set the path where to find (or define) the binaries, or you can define the arguments. If you have another etcd, you could potentially configure it to use that; we don't do that.
B
We just spin up the two processes. It has happened sometimes that we have found errors in the API server itself, and one nice thing about envtest is that you can say "attach control plane output", and then everything will go to standard output as needed.
B
You could also use an existing cluster if you wanted to. This somewhat defeats the purpose of having segregated environments, but if you have that use case, that's great as well. You can also install all the CRDs: you can give the test environment the CRD objects directly and say, I want all these CRDs to be installed, or you can use the install options, where you give it paths. This would usually be set to the config directory that you have in here (the CRD bases), so that when envtest starts, it will just go and find those CRDs and install them for you, and you don't have to do anything else. You can also clean them up after you use them, etc.
B
So there is a lot of good support around it. And if you have webhooks (this is also why we have a structure that wraps the envtest Environment struct): we install our own webhooks, and we want to make sure that all the webhooks are registered and up and running for those CRDs. You'll see here there's a custom function for how to install webhooks. We don't have to go through it, but it's pretty much saying: I'm going to go find all the webhooks in the config directory and register them.
B
This is probably something that we can add to controller-runtime itself in the future. But what's so good about having webhooks in here is that you get conversion, you get validation, you get defaulting, all in envtest. So when you have this integration test up and running and you're creating an object, it will actually go through the same code path that a kubectl apply would go through, which is really powerful if you want to test user behavior as well.
B
Once all of this is done, we usually return the struct, which wraps the manager, client, and config. Then in a suite, we just create the test environment, we register all the reconcilers in here, and then we start the manager, as if we were starting a main.go file, and then we wait for it to be elected.
B
We don't actually use leader election here (this method should really be called something else), but this waits for all the reconcilers to be up and all the caches to have started. So it's pretty important to wait for that to happen before continuing.
A
Yeah, I'm going to throw out one question, because this is fascinating, and some of this is stuff that I haven't actually checked out yet. Most of the tests that I've ever seen for controller-runtime and kubebuilder have been just linked off of docs, and they were pointing to, like, the Databricks Azure one, which has a similar but very different implementation in terms of some of these gotchas that you found.
A
This is just a complete blanket statement: is any of this ever going to end up in the kubebuilder or controller-runtime docs a little bit? Because this is so much more advanced than what we get out of what I've typically seen, and I'm stoked by it.
B
Yeah. I mean, we have been using it for two-plus years at this point, so we're pretty far ahead in terms of what our requirements have been and how we want to use it. In terms of docs: yes, I hope these things actually land. The goal is to build them in here, but to bring up as much as possible into controller-runtime, because then all the providers can reuse it.
B
I would love it if these things were actually a little bit easier, so that you don't even have to register things; you know, you could auto-discover reconcilers in the same package, and things like that. That's sort of a really advanced scenario we'll have to think about. Yeah.
A
Yeah, because this gets so screwy when you do the multi-group scaffolding: you end up with multiple suite tests, and they can't run in parallel because of the way that envtest is set up, at least in base kubebuilder. So I've run into a ton of gotchas around that. And this is...
A
Uh-huh, we'll talk about it later. I've been meaning to finally go file some issues to try and fix some of that stuff, where some of the ways that you can scaffold a project will screw you in the long run for being able to do some of these things, and a lot of it points to making your suite test like a main.go.
A
It's
almost
like
test
the
individual
controllers
that
are
in
here,
especially
when
you
go
multi-group
test
the
individual
controllers
in
here,
but
then
it
really
screws
up
like
massive
full
test.
Suite
runs
anyways.
B
Yeah, we did the complete opposite. We were like: well, we want to run all the controllers in a package, and maybe even other packages' controllers if possible, because the system has to be tested together. It's really uncommon that I could test, for example, the machine controller without the cluster controller, unless I start making the test behave like the cluster controller, which is another option. But truthfully, in production environments they will have to behave together, and the reconciliation will have to behave together. And, you know, cool, thanks for that.
B
So
in
terms
of
like,
for
example,
so
one
of
the
change
that
we
have
made
in
cluster
api
versus
as
a
builder,
you
might
have
noticed,
like
a
huge
builder,
has
so,
for
example.
Let
me
take-
I
think
this
one
still
has
it.
B
Has
this
before
sweet
and
after
sweet,
usually
so
gingka
is
not
bad,
but
one
thing
that
we
didn't
like
in
copy
is
that,
like
you,
you
lose
like
the
ability
to
run
single
tests.
If
you
wanted
to
so
as
an
example
like
if
I
go
into
machine
delete
policy,
if
I
click
this
run
test
in
vs
code,
I
cannot
do
that
with
with
gingko.
B
But
if
I
I
can
do
that
in
here
and
if
I
click
it
like,
it's
not
gonna
break,
but
it
would
run
the
test
environment
first,
as
you
can
see,
and
then
we
actually
run
only
that
test
and
that
test
only
rather
than
like,
try
to
spin
up
the
whole
so
like
yeah,
like
it
just
ran
test
machine
to
be
deleted,
but
instead
like,
if
I
go
into
the
other
package,
we
haven't
been
able
to
do
all
of
them
yet
so,
for
example,
this
is
still
on
the
old
one.
B
The only way that I can run this is to actually go and type that out for the whole package, I guess, or I could try to do file tests or something. But really, you want to be able to just say: I want to test just this one condition. So we went kind of the opposite direction of this, and we're trying to move all the other suites as well to use TestMain, because this TestMain, you know, works well.
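With TestMain in place, a single test can also be selected from the command line (the package path and test name here are illustrative):

```shell
# Run (or debug) exactly one test: TestMain still brings the test
# environment up once, then -run filters which Test* functions execute.
PKG="./controllers"
TESTNAME="TestMachineToBeDeleted"
CMD="go test ${PKG} -run ${TESTNAME} -v"
echo "${CMD}"
# A Ginkgo suite instead needs something like -ginkgo.focus, which an
# IDE's "run test" button doesn't generate for you.
```

This is exactly the invocation an IDE's per-test "run" and "debug" buttons produce, which is why the TestMain style composes with them and the Ginkgo style does not.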
B
So
I
can
just
run
one
test
or
debug
one
test
if
I
want
to
so
the
one
other
thing
that,
like
we
have
been
doing
here,
so
it's
like
if
I
get
the
machine
deployment
controller,
for
example,
so
like
as
you
can
see
like,
but
it's
like
still
some
mix
of
things
in
here
but
like
which
is
fine.
We
one
thing
that
we
have
been
doing
is
the
namespace
segregation
of
like
these
tests,
and
this
is
sure
all
things
that,
like
are
gonna,
be
documented
at
some
point.
B
So, after each test is run (and there is a bunch of tests in here), we create and delete the namespace, so that the next test is going to behave separately in its own namespace. If this is using a fixed name, it should really use a generateName, but that's a topic for a different time. Whenever we can, we use generateName instead of a fixed name, to make sure that if you have a test running with -count=2, for example, which the go tooling supports, and you want to run that test twice, you don't rely on naming.
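The generateName idea looks like this on a Namespace manifest (the name prefix is illustrative); the API server appends a random suffix, so two runs of the same test never collide:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  # No fixed name: the API server fills in metadata.name as
  # "md-test-" plus a random suffix.
  generateName: md-test-
```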
B
So that's the reproducibility aspect of it. The other nice thing about envtest is something that we have found that you folks probably want to use as well: how can I test things that I don't know anything about? I mentioned we have the concept of providers, and these providers have contracts. So how do we go about testing these contracts? The thing about envtest that's actually really useful is that you can inject CRDs on the fly.
B
So, for example, here you can see we have an InfrastructureCluster CRD. This CRD doesn't exist anywhere except in tests. We create these CRDs in tests, and we register them as well when envtest comes up, so that you can do pretty much the same thing. And there's a whole hack, which I'm trying to find, that you can do to say:
B
I
want
to
register
these
crds
and
the
crd
is
like
we
pretty
much
disable
the
open
api
spec
so
that,
like
it's
just
say,
like
plain
object
like
you
can
go
anything
you
can
go
in
here
and
it
will
also
preserve
with
the
fields
because,
like
newer
version
of
the
api
server,
actually
won't,
let
you
preserve
the
fields
and
unless
you
opt
in
into
into
that
behavior,
so
yeah
we
try
to.
We
try
to
kind
of
like
make
sure
that
that's
in
place
any
questions
before
I
move
forward.
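A sketch of such a test-only CRD (the group and kind names are illustrative, not CAPI's actual test types): the schema is just a plain object, with x-kubernetes-preserve-unknown-fields set so that an apiextensions/v1 API server keeps fields the schema does not declare:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testclusters.infrastructure.example.io
spec:
  group: infrastructure.example.io
  scope: Namespaced
  names:
    kind: TestCluster
    listKind: TestClusterList
    plural: testclusters
    singular: testcluster
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # Without this opt-in, an apiextensions/v1 API server prunes
          # any field the schema does not declare.
          x-kubernetes-preserve-unknown-fields: true
```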
C
Vince, Charles, I have some questions. It's really cool; I didn't realize controller-runtime provides this cool testing environment.
C
I
think
this
testing
environment
can
actually
be
used
for
adding
any
controller
that
developed,
developed
based
on
control,
runtime
or
any
other
like
client
go
controller.
Is
that
right.
B
This
would
so
like
this
probably
work
best
with
controllers
that
are
part
of
controller
runtime,
so
like
that
you
would
create
with
controller
runtime.
I
wouldn't
see
why,
like
it,
wouldn't
work
with
client,
go
plain
informers
and
like
just
like
more
generic
controllers,
but
I
have
not
tested
that.
So
I
I'm
not
able
to
speak.
C
Okay,
I
see-
and
I
and
I
noticed
that
we
only
have
kubi
api
server
and
etc
in
a
controller,
a
control
plan.
So
if
I
wanted
to
test
something
related
to
deployment
or
stable
set,
is
that
possible.
B
You
need
the
controller
manager,
if
I
remember
correctly,
for
those
so
probably
so
like
you
could
create
it,
but
it
won't
be.
It
won't
actually
like
spin
up
like
a
deployment
or
it's
like
it
doesn't
have
any
nodes
underneath.
C
Okay,
so
is
that
is
that
in
the
roadmap
like
in
the
future,
maybe
we
will
add
the
controller
controller
manager
into
the
test
framework
and
we
can
do
some
more
cool
testing.
B
I
I
have
not
seen
like
an
ask
for
a
thing
until
the
runtime.
It
would
be
probably
hard
because
this
yeah,
I'm
not
sure
if
because
like
there,
might
be
no
cri
and
cni
like
a
local
use
like
like
it
gets
like
exponentially
complicated
rather
than
just
like,
run
the
api
server
in
ncd.
B
But
what
you're
saying
is
actually
like.
So
these
are
like
usually
folks,
like
call
these
integration
tests
right
and
we
usually
do
end-to-end
tests
with
kind.
So
we
have
a.
B
Like
you
could
spin
up
like
a
a
kind
based
cluster
with
a
cluster
api,
so
you
could
just
have
like
a
test
cluster
in
there
and
then
create
a
deployments,
and
you
know
like
everything
you
want
to
create
like
on
that:
okay,
okay,
so.
B
The
foundations,
the
english
api
to
spin
up
like
a
a
kind
cluster
and
then
you
can
test
for
other
things,
but
I
see
but
didn't
control
the
runtime.
I
would
see
see
if
really
yeah.
It
wouldn't
be
super
durable.
B
So, for example, if you look, Cluster API has so many controllers, and some of them have to work really closely with each other. For example, here, the machine and cluster controllers: a machine cannot be created without a cluster, but the cluster has to be reconciled, and some fields have to be in place for the machine.

C
I see. Okay, that makes sense, yeah.

B
So those kinds of behaviors are testable in here. Other behaviors, like, for example: hey, I want to check the status of a node in a cluster. We would either mock it or create a fake node, which, you know, won't be fully representative.
C
Yeah,
I
see
yeah
that
makes
sense
yeah
because
sometimes
when
I
write
the
controller,
sometimes
I
wanted
to
reconcile
some
other
workload
like
deployment,
stable,
set
or
even
parts.
So
I'm
thinking
about
yeah,
we
can
add
some
fake
or
smoke
objecting
to
the
api
server.
Yeah
yeah.
That's
that's
all
my
question.
Thank
you.
A
We've
actually
done
something
very
similar
to
that
for
a
couple
controllers
that
that
I've
been
on
testing
where
we've
gone
through
and
basically
tried
to.
We
were
trying
to
do
reconciliation
based
on
specific
states
that,
like
a
deployment
was
going
through
and
if
it
got
into
a
bad
state,
we
wanted
to
handle
it
in
a
specific
way,
and
so
we
just
mocked
exactly
the
deployment.
A
This
is
fantastic
thanks
for
taking
on
that
tour.
I'm
gonna
ask
a
couple
more
questions,
so
the
test
main
setup
that
you
actually
did
and
not
using
ginkgo
for
all
that
for
gingo
yeah,
whatever
it's
called
that
project
for
this.
Do
you
see
that
permeating
outside
of
outside
a
cluster
api?
I
mean
being
that
you,
you
are
on
you
head
up
cue
builder
and
control
controller
runtime
as
well.
Is
that
something
that
we
would
be
changing
the
community
as
well
to
do
or
exploring
from
that
side.
B
So
I
have
not
contributed
to
book
directly
but
yeah
control
runtime.
We
can
draw
the
examples
like
how
to
do
so,
rewriting
all
the
tests
like
I'm
not
a
fan
of,
but
we
can
probably
say
like
hey,
like
maybe
new
tests,
we
can
write
with
like
the
new,
a
new
test
mean
or
something
but
yeah
like
those
who
are
pain
and
like
going
forward
that
we
actually
don't
allow
liking
caffeine.
We
don't
allow
new
tests
to
be
written
with
gingko.
A
Okay,
cool
and
for
the
setup
of
that
is
that
just
is
that
just
get
called
does
test
main
just
get
called
for
every
single
test
run,
or
are
you
calling
it
from
somewhere
else
that
that
wasn't?
Actually
that
I
didn't
see-
or
I
might
have
missed,
that.
B
Destiny
is
actually
called
from
go
directly
when
you
do
go
test,
so
it's
the
first
function
that
actually
gets
called
when
a
package
after
init
the
package.
When
you,
when
you
run
tests,
it
will
run
testament
first.
A
The suite setup, or the BeforeSuite call, but not Ginkgo's; that's what you're getting at. No, it's built into Go, which is just why it's super nice. Yeah, okay, cool. That was fantastic. In the middle of this, because it was so good, I called a couple more people in from our team, so you might have seen a couple more people jump in.
A
They
might
have
missed
a
little
bit
of
this,
but
we
can
give
you
we
can
give
you
some
primers
on
on
or
some
notes
about
what
vince
was
talking
about.
Gabby
and
etia.
D
Yep, looks good. So, this is about the multi-tenancy syncer.
D
So
basically
I
had
a
brief
discussion
with
chris
yesterday
and
and
the
day
before,
so
we
would
like
to
have
a
sample
code
implement
in
current
multi-tenancy
sinker
code
so
that
everyone
can
use
this
as
a
base
to
synchronize
some
annotated
crds
from
super
cluster
to
the
virtual
cluster
that
tenant
created.
D
So
this
document
is
try
to
give
a
high
level
of
view
what
we
want
to
do
and
try
to
get
some
feedback
from
the
community
that
if
this
is
a
good
approach
or
not,
and
then
we
will
do
some
implementation
after
that,
so
the
main
objective,
I
think
for
this
document-
is
try
to
install
the
crd
from
super
cluster
to
the
virtual
cluster
when
the
virtual
cluster
is
started
and
also
we
will
dynamically
build
the
scheme
and
see
our
clients
when
the
synchronization
started,
and
we
will
also
construct
example,
cr
single
code
on
the
side
of
this
current
virtual
thinker
code.
D
Basically,
we
want
to
reuse
the
entire
multi-tenancy
single
code
as
a
library
or
infrastructure,
and
we
build
our
proprietary
code
on
the
sideline
in
a
proprietary
ripple
and
what
we
want
to
reuse
is
most
of
the
component
in
the
current
virtual
cluster
multi-tenancy
thinker
code
and
one
only
one
module
will
be
introduced
will
be
an
optional
module
that,
based
on
the
configuration
option,
that
this
module
is
in
the
listener
package
that
monitor
the
super
cluster
crd
creation
and
this
crd
need
to
be
have
a
annotation
called
a
tenancy
super
public.
D
This
can
be
a
we
can
still
discuss
on
this
annotation,
but
once
this
crd
has
this
annotation,
this
optional
crd
syncer
will
try
to
load
the
crd
into
the
newly
created
tenant
virtual
cluster
and
for
all
the
virtual
cluster.
We
will
install
the
crd
without
any
any
any
other
options,
and
this
will
be
also
automatically
install.
D
This
new
crd
scheme
into
the
scheme,
which
is
currently
exist
in
the
multi-tenancy
code,
so
with
this
two
options
that
all
the
crd
has
been
installed
and
there
then
user
can
start
to
play
with
the
synchronization
code.
D
The
resource
type
need
to
be
registered
into
the
plugin
so
that
this
code
can
be
start
with
the
new
resource,
type
controllers
and
start
to
reconcile
all
the
objects.
So
this
is
a
basically
is
this
stretcher,
but
we
think
that
all
this
need
to
be
done
at
the
compile
time.
We
do
not
want
to
support
the
dynamically
installation,
like
crd,
when
dynamically
installed.
We
also
don't
want
to
support
a
customized
resource
thinker
when
this
already
running
and
user
want
to
inject
into
a
new
thinker.
While
it
is
running,
I
think.
D
Currently
we
have
quite
some
issue
need
to
be
fixed
before
we
can
do
fully
dynamic
load
of
this
new
synchronization
modules,
and
so
this
is
a
basically.
D
We
were
mostly
using
the
controller
runtime
to
do
all
this
client
sets
and
as
well
as
the
testing-
and
I
think
basically
the
idea
is-
is
like
that.
A
Structure,
we'll
definitely
want
to
get
phase
eyes
on
this
as
well
yeah,
if
you
have
any
feedback
as
well,
that'd
be
useful.
D
Yeah,
so
the
basic
idea,
I
think,
is
just
put
this
two
code
base
separate
instead
of
fully
integrated
into
it,
and
I'm
going
to
implement
this
either
in
this
directory
example,
or
what
pha
is
doing
with
experiment.
D
Exam
experiment
directory
like
to
put
this
as
a
new
main
and
order
temple
resource
thinker,
a
custom
resource
thinker
in
this
directory
and
make
sure
we
have
at
least
a
template,
or
example,
to
teach
user
how
we
can
implement
customize
the
resource
thinker.
On
top
of
that.
D
Yeah,
so
currently,
I
think
I
will
put.
I
hope
I
can
put
into
an
example
or
another
option
is
put
experiment
that
I
have
a
like
the
new
code
face.
Doing
we
have
a
main,
like
is
a
scheduler
code.
We
have
a
main.
We
have
a
package
within
this
experiment
or
with
example,
code,
so
that
we
can
build
build
this
series
thinker
code,
but
to
refer
all
this
main
component
to
the
current
multi
multi-tenancy
sinker
code.
C
I
see
I
see
yeah,
I
don't
have
any
any
any
thoughts
right
now,
but
I
think
it
should
be
better
if
we
can
that
way
to
take
a
look
of
this.
This
proposal.
D
Okay,
our
ping
pay
to
make
sure
he
reviewed
it
and
are
going
to
implement
example
somewhere
in
that.
A
Cool
thanks
for
showing
us,
okay,
I've
also
cc'd
faye
on
that
on
the
issue.
Okay,
so
I
assume
he'll
get
a
ping
from
that.
A
All righty. Well, we can cut it about 20 minutes early and give everybody back some time. Thanks again, everybody. This will be posted to YouTube after this. We'll see y'all next week.