Description
#sig-cluster-lifecycle
#capn
#capi
A
All right, good morning, everybody. This is the Cluster API Provider Nested office hours that happens at 10 a.m. Pacific Standard Time every Tuesday.

A
Today we don't have a long agenda, but just to let you know up front: this will be recorded and posted to YouTube later today, so don't say anything you wouldn't want shared with the whole world. We can get started. I will start off with... I don't think I have any PSAs. We do have a couple of new folks; if you want to introduce yourself, now is your chance. You can just unmute, tell us who you are, and tell us what you're interested in.
B
Hi everyone, my name is Dan. I'm a member of the multi-tenancy working group, and I've been interested in the virtual cluster project, so I decided to check out this working group as well. I work on operators as part of the Operator Framework at Red Hat. I'm super interested in the project and would like to know more about where you guys are at and what kind of issues you're attacking.
A
Awesome, cool. I think that would be it. I don't have any PSAs for the whole group. Fei, I don't know if you do, or if anybody wants to bring anything up before we jump into the couple of agenda items.
C
A
All right. Also, because there are only two agenda items: if you want to drop something in there to talk about, please do; you should all have access to edit that doc. Then, moving on to the actual agenda items. Wait, did you have time to prepare something on syncing CRDs that you wanted to bring up and talk about?
C
Sure, let me share my screen... yeah, sure, share this one. Okay, can you see it? No? Still the same thing, still empty? Sorry, you know how Zoom is with this.

A
It's still the same Zoom web page.
C
Cool, now we've got it. Okay, let me just do this; sorry for the small screen. So basically, today I'm going to talk about the CRD syncer we implemented, how we would like to make it more advanced, and what we want to do with the dynamic package we are looking at to enhance the maintainability of this CRD syncer.

C
So, first of all, this is the CRD syncer we implemented. If you look at this diagram, it's very similar to the standard multi-tenant VC syncer. We tried to reuse as much as possible of the components built by the multi-tenant project: the manager, the register, the mccontroller, the listener, the cluster. For all these gray-colored components, we either copy-pasted the code into our directory or used the standard package from the multi-tenant project.
C
I will not go into detail, because everyone knows this structure very well. The problem we have, as I mentioned, and the reason we had to copy-paste all the code into our directory to make this work, is that for each CRD, if we are not using the dynamic package, we have to create another... let me turn off this one. Okay: we have to use a scheme to type the client, so we have to build a scheme and let each component use it.

C
So the key thing is that this mccontroller relies heavily on a pre-built scheme to work, and the mccontroller, as well as cluster.go, all rely on this scheme, which we modified to add our CRD schemes into. So it imports a special scheme instead of using the standard one, and because of this change, all these other files (cluster.go, the listener, and the manager), all these packages that rely on the mccontroller, need to be modified.
C
Just the import part, nothing else, and that is a secondary dependency. So you can see that everything in yellow we had to copy-paste into our code base, which is very inconvenient and very hard to maintain. In the source directory we have two or three CRDs; each CRD has a subdirectory, and we put everything the syncer needs into it.

C
So this is the main problem with this CRD syncer: everything depends on the new scheme, and we have to import a special scheme. As a consequence, there are a lot of components we cannot import directly into our code base; we have to modify the import part a little bit to fit the big picture.
C
So of course the big problem is that it's duplicated and hard to maintain, and most of the duplication is just because of the imported CRD scheme and nothing else. I had committed a PR to have the rest config imported directly by the old components. This builds on another change that is already in the multi-tenant project, doesn't require anything additional, and reduces quite a lot of the code, since we need to regenerate the rest config for CRDs, and testing also requires some specific fake clients instead of the standard client.
C
So this is the problem we are facing, and this is why we would like to use the dynamic client and the unstructured object: to reduce the dependency on the schemes, so that we can leverage the existing multi-tenant component code, use the standard dynamic client, and use generic code, instead of having to implement each scheme as a hard-coded one imported by the mccontroller. So this is a very high-level view of how this dynamic client works.

C
The basic idea of the dynamic client is that it uses a GVR (a group, version, and resource) to identify the client, make it searchable, and keep the client as simple as possible in this list of clients. This is the creation of the dynamic client, and this is the Deployment, and you can see that if we generate this key from a GVR, we are able to find the Deployment resource. The GVR is a key, and with it we can handle any CRDs.
C
We do a RESTful API call and do any creation or deletion we want, because we already have this dynamic client generated from the GVR, and to get this client back we can always use the GVR, and then use it to do all the operations, like listing and looking up all the information in it. So this gives a standard way, on top of the typed clients that exist for core v1, to have this implemented, and we can dynamically add a GVR instead of relying on the scheme.
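A minimal sketch of the GVR-keyed lookup idea described above. The `GroupVersionResource` struct mirrors the shape of client-go's `schema.GroupVersionResource`, and `Resource` imitates how `dynamicClient.Resource(gvr)` hands back a per-resource client; everything here is a locally defined stand-in so the sketch runs without a cluster, not the real client-go API.

```go
package main

import "fmt"

// GroupVersionResource mirrors the shape of Kubernetes'
// schema.GroupVersionResource; defined locally so the sketch is
// self-contained.
type GroupVersionResource struct {
	Group    string
	Version  string
	Resource string
}

// resourceClient stands in for the per-resource client the dynamic
// client hands back (dynamic.NamespaceableResourceInterface in client-go).
type resourceClient struct {
	gvr GroupVersionResource
}

// registry is the "list of clients" keyed by GVR: any component can ask
// for a client by GVR without compiling a typed scheme in.
type registry map[GroupVersionResource]*resourceClient

// Resource returns (lazily creating) the client for a GVR, the way
// dynamicClient.Resource(gvr) does.
func (r registry) Resource(gvr GroupVersionResource) *resourceClient {
	if c, ok := r[gvr]; ok {
		return c
	}
	c := &resourceClient{gvr: gvr}
	r[gvr] = c
	return c
}

func main() {
	reg := registry{}
	deployGVR := GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	c1 := reg.Resource(deployGVR) // created on first use
	c2 := reg.Resource(deployGVR) // same client found again by GVR
	fmt.Println(c1 == c2)         // true: the GVR is the search key
}
```

The point is that the map key carries all the identity the syncer needs, so no per-CRD scheme has to be compiled in.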
C
So this is the unstructured object, which is built here; it uses this structure to compose a new GVK.

C
In order to get the GVK, we take this standard form in YAML and use these utilities to convert it, and this is why we don't use a hard-coded Go type. We can also find the resource using the GVR, which is this format; the GVR is group, version, and resource, and it composes the address for the API request, which is this address here. Everything is based on the GVR.
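A small sketch of what "the GVR composes the address for the API request" means. The URL layout below matches how the Kubernetes API server lays out resources (core-group resources under `/api`, everything else under `/apis/<group>`), but the `apiPath` helper itself is illustrative, not a client-go function.

```go
package main

import "fmt"

// apiPath composes the request path from a GVR plus an optional
// namespace, following the Kubernetes REST URL layout.
func apiPath(group, version, resource, namespace string) string {
	prefix := "/apis/" + group
	if group == "" { // the legacy core group has no group segment
		prefix = "/api"
	}
	if namespace == "" { // cluster-scoped resource
		return fmt.Sprintf("%s/%s/%s", prefix, version, resource)
	}
	return fmt.Sprintf("%s/%s/namespaces/%s/%s", prefix, version, namespace, resource)
}

func main() {
	// Namespaced resource in a named group:
	fmt.Println(apiPath("apps", "v1", "deployments", "default"))
	// → /apis/apps/v1/namespaces/default/deployments

	// Cluster-scoped resource in the core group:
	fmt.Println(apiPath("", "v1", "nodes", ""))
	// → /api/v1/nodes
}
```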
C
So, given this new direction, what we really want to do is this: the objective here is not to dynamically create a client on the fly, but to reduce the dependency as much as possible, so that all the standard components in the multi-tenant project can be imported entirely as packages by CRD syncers, if someone wants to implement one.

C
If you look at this, the main purpose is to reduce the dependency, and in order to do that, we have to add dynamic client support in the mccontroller, in the manager, and in cluster.go.
C
The first thing is that we have to find a component that is less dependent on all the others, and make that component read all the existing controllers, including the standard controllers as well as the CRD controllers. I think the best place is the syncer.go file, which currently exists in the multi-tenant project.

C
We can call these customized syncers. What this syncer does is that at build time we compose the entire list of controllers in the code base, including the CRD controllers as well as the standard controllers (as you know, this is in the register.go file in the current multi-tenant project), we build that, and when the server starts it will read the config.

C
It then starts this custom syncer, and this syncer will get the mapped list of controllers and have the manager start them one by one. So this is something we can achieve with customized syncers, and this custom syncer is only imported by the syncer server, so that all the rest of the packages, like the manager, the mccontroller, and the cluster, can be reused by a third-party developer of a CRD controller and imported directly.
C
So this is one thing: with a slight change, we can make most of the components reusable. Secondly, we have to put the dynamic client change into the code of the mccontroller, on top of the current client manipulation code. For example, if we cannot find a type, we could add some map method that uses the GVR to search for the client, et cetera. This should be similar to what exists now, and we can do the client manipulation the same way.

C
We can also enable the controller to access CRDs using the unstructured object; that's something else we can do to make this available in the mccontroller.
A
Yeah, thanks for walking us through what's going on there with syncing CRDs down to the super cluster. Cool. Just to kind of start the conversation off: I think a lot of this could probably be done with controller-runtime, still using typed clients and not actually having to use unstructured, because that's a little bit painful for a lot of people to actually implement.
A
But it seems like, if you go back to your last slide, the one before this, we could probably use what we're doing with controller-runtime: it actually uses dynamic clients behind the scenes, but still presents typed clients, and it's very similar in structure. It's the same way we're going to be building out all the controllers that we want for CAPN, in terms of using physical clients and not having to actually go and create the unstructured resources. But I think that makes a lot of sense.
A
In terms of what we're doing with the CRDs that we're actually syncing, we're currently implementing it the same way: we're basically just forking and then adding in our own resources. Is the goal here really, long term, to be able to set up all clients using the actual dynamic client? Or do you still think that we should be using the typed clients, or the full typed clients, within the core syncer?
C
My thinking is that we can do a mixed, hybrid mode: in the current multi-tenant standard components, we still support the typed client, but on top of that, if we cannot find the type, or if a user wants to use a GVR to do the operation, then we switch to the dynamic client. This way, the component will be much more flexible and can support more types. Most of the code here in the multi-tenant project just starts controllers, and for some of the typed operations there is already code to do this with the client.
A
Are you able to share the same informers between the two? If we don't switch them, can we still share the same informer between both the dynamic and the typed clients?

C
Yeah, the cache should be the same, because the dynamic client is just a way of looking up the REST client for a CRD. You still need to create the CRD in the first place, and the CRD is still created in the API server. So once it's created in the API server, the informer and everything else should be the same.
C
Only the client part, which currently relies on the mccontroller or cluster.go to get the client or get the types, needs to support the dynamic client; all the informer caches should stay the same, because the CR itself does not change. Only the way we look up the client changes: now the client is in the list maintained by the dynamic client library.
D
Yeah, so currently the way the cache works is: every time you add a new resource, you call watch-for-resource-type. There is a method in mccontroller which registers the resource to the cache, and we wait for the object to be synced before we start the syncing process, right? So as long as you do it the same way... I haven't tried it, but that's very good to know. If we have this unstructured object, does the current informer cache support unstructured? How does that work?
C
So everything still starts from compile time; nothing can be injected dynamically, despite what the name of this dynamic client says. We just want to add the dynamic client library into the current mccontroller, so we still have to fix the set of controllers we want to support at build time, at compile time.

D
So you still need to know the CRD schema exactly, and then you need to register the CRD, otherwise the cache won't work.
C
Yes, this is the whole purpose. The whole purpose is not to be dynamic, but to make sure that once the mccontroller supports the dynamic client, we can still build it, and all of the mccontroller can be imported directly from the community and reused by our CRD syncer. That is the key part.

C
Sorry, so this is the key benefit of it; it's not to be dynamic. You see, this is the situation now: everything needs to be copy-pasted into our code directory, and it's very hard to maintain. If we can import all of these components directly as a package, then the user only needs to maintain this one part, which should be fine, and this part is their user logic anyway, so they can put it in their own directory. This is what makes the CRD syncer work.
D
Sure, sure, now I understand. So yeah, I think that's a good idea, but let me try to find out if there is an alternative way to get the same benefit, the same convenience.

C
Yeah, this is what we did. Everything for the CRD (the cache, the informer, the lister) is all generated, either using client-gen or using the code generator, but that part I'm not including in this presentation. Of course, we can generate this code.
D
Yeah, yeah, but another thing is: for everything in the mccontroller that needs to be changed to support a new CRD type, is it possible to use a code gen for that? That's my question.

C
Then you have to build it the way it's done in the multi-tenant project, yeah.
D
But my point is: in most cases people would need those built-in types, so unless you have a case where you only want to sync CRDs... For that purpose, the import is good. Other than that, if people need the built-in types anyway, the most convenient way is that they will, you know, git clone the whole repo, and they get the default functionality of syncing all the built-in types; then, if they want to add more CRDs,

D
they can use the codegen tools to generate the code to support the CRD. It depends which case has more use cases. I'm guessing the case where people need everything would be more common than the one where they only need CRDs. That's my point. So that leaves the case where, for example, they don't need anything in the current resource directory.
D
That's one thing, but on the other hand, if people do need everything anyway, they probably won't go with the import path; they will just grab the whole repo and add something, exactly like what you do, because I'm guessing you guys need both the regular object syncer and the CRD syncers, right?
C
Sorry, yeah, we can... yeah, this is another option; we can do that. Now, we unfortunately did it this way. We can combine them in the near future, but we just wanted to keep this standard part as close to the community version as possible while we experiment with the CRD syncer as a separate syncer.
C
Yeah, even so, we would prefer to just use the package directly without any change. That way, when we do our build, it can always get the latest and test against it. Unfortunately, because we have to change the code, as you mentioned, we can probably later use a code generator to automate it and make it work, but overall we still copy our entire code in, do the modification, and then import all the packages into the CRD syncer.
F
Yes, you can have controllers that act on arbitrary types, but ultimately, what those controllers do will be reading a resource from one cluster, decoding it into some kind of structure, applying transformations to it, depending on which direction it's going, and then updating or writing and persisting it into the other side. With the dynamic client, we're moving everything to unstructured, and we're now talking about code-generating things to make that easier.

F
I wonder, though: like you said, at compile time we do have type information available, because the CRD syncer, the separate binary, is importing this library and is, you know, in some way instantiating it, calling something like a NewSyncer function.
F
At that point, if we were to pass in a reference to something that implements runtime.Object, something of the kind that you want to sync, be that a CRD or an inbuilt type, whatever, you know, something implementing TypeMeta from Kubernetes,

F
there's enough information there for us to actually not need the dynamic client as such. We could then be using the controller-runtime client, which could decode directly into that type, and then actually go and call your mutation functions and whatever else by passing that along, and having the mutation functions take something like a runtime.Object. That's typically the pattern that I've seen with this sort of thing. There are actually a couple of cases in the core code base which are a little bit different, because they don't use,
F
obviously, controller-runtime; but there are a few cases of constructing controllers dynamically like this, and I wonder if that would simplify things, because that same package could then be used by the main, what do we call it, Cluster API Provider Nested or virtual cluster project, in order to construct all of those core inbuilt things too. That way we only end out with one underlying thing, and if you were to compile these all together, they'd all share a controller-runtime client too.
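A minimal sketch of the pattern James describes: a syncer constructed from a typed prototype object, with mutation functions that take an object interface. The `Object` interface here is a locally defined stand-in for Kubernetes' `runtime.Object` (the real one lives in `k8s.io/apimachinery`), and `NewSyncer`, `Pod`, and the tenant prefix are illustrative names, not project API.

```go
package main

import "fmt"

// Object stands in for runtime.Object: anything that can report its
// kind and be deep-copied so the syncer can mutate a private copy.
type Object interface {
	GetKind() string
	DeepCopy() Object
}

// Pod is an illustrative typed resource implementing Object.
type Pod struct {
	Name string
}

func (p *Pod) GetKind() string  { return "Pod" }
func (p *Pod) DeepCopy() Object { c := *p; return &c }

// Syncer is constructed from a typed example object, so compile-time
// type information travels with it instead of a runtime scheme lookup.
type Syncer struct {
	prototype Object
	mutate    func(Object) // transformation applied before persisting
}

// NewSyncer wires a prototype object and a mutation function together.
func NewSyncer(proto Object, mutate func(Object)) *Syncer {
	return &Syncer{prototype: proto, mutate: mutate}
}

// Sync deep-copies the incoming object and applies the mutation,
// returning what would be written to the super cluster.
func (s *Syncer) Sync(in Object) Object {
	out := in.DeepCopy()
	s.mutate(out)
	return out
}

func main() {
	s := NewSyncer(&Pod{}, func(o Object) {
		if p, ok := o.(*Pod); ok {
			p.Name = "tenant-a-" + p.Name // e.g. prefix with tenant namespace
		}
	})
	synced := s.Sync(&Pod{Name: "web"})
	fmt.Println(synced.(*Pod).Name) // tenant-a-web
}
```

With controller-runtime, the same shape falls out naturally because `client.Object` already carries the type information the mutation functions need.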
C
Yeah, so I think this is another direction. We would still need to change the current mccontroller code a little bit, because it is already hardcoded: when they get something, they go to the scheme, so if the scheme does not exist, they will fail.

C
So if you want to use a full reference kind of approach... I haven't looked in this direction much, so thanks for bringing it up, but we would still need to make the mccontroller and cluster.go, these existing components, handle this reference. We would still need to do some modification on some of these files to let them support a reference, and that is another approach. I think this is maybe achievable.

C
I haven't looked at this one; it's a good point. So why not just use unstructured? What's the difference between the two, the benefit? Is this reference approach much easier, or is it because...
F
I wouldn't recommend it. So, the unstructured approach does work, but you are basically disregarding all the type information that you have about that object. It is officially supported, in that it's in the core code base, but it's typically used for things where you don't have any type information, even at compile time, and are dynamically establishing watches. So if you were building something where you wanted to, at runtime, configure it to sync from some new API type, the unstructured client can be used there.
F
Because, like you noted, you only need the API version and the kind, and you can start watching and establish an informer cache and everything, and it will decode into unstructured, because that's dynamically registered at runtime. So you don't have any of that type information. But what we're basically making here is something that makes it easy to create new syncer components for different kinds of API groups or whatever else; we've got the core ones here, and then for whatever CRD types we have, we do have that type information.

F
Things like: unstructured is a map[string]interface{}, so you're having to cast your structs all the time; you're having to do quite a lot of work to modify or mutate any data, as opposed to just working with those standard corev1.Pod types that we could be using. Okay, yeah.
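A small sketch of why mutating unstructured data is noisy: every step through the `map[string]interface{}` needs a type assertion. The `nestedString` helper below mimics what apimachinery's `unstructured.NestedString` does internally; the object literal is illustrative.

```go
package main

import "fmt"

// nestedString walks a map[string]interface{} along the given fields,
// type-asserting at every level, and returns the string at the leaf.
// This is the kind of casting that the unstructured helpers hide.
func nestedString(obj map[string]interface{}, fields ...string) (string, bool) {
	var cur interface{} = obj
	for _, f := range fields {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return "", false
		}
		cur, ok = m[f]
		if !ok {
			return "", false
		}
	}
	s, ok := cur.(string)
	return s, ok
}

func main() {
	// An unstructured-style representation of a Service.
	svc := map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Service",
		"spec": map[string]interface{}{
			"clusterIP": "10.0.0.1",
		},
	}
	ip, ok := nestedString(svc, "spec", "clusterIP")
	fmt.Println(ip, ok) // 10.0.0.1 true
}
```

Compare that with a typed client, where the same read is just `svc.Spec.ClusterIP` and the compiler checks the field exists.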
C
But I should ask Chris and Fei, or other folks: even though in this talk I'm not thinking about dynamically injecting new types, in the long run, do we hope that while the syncer is running, if a user somehow has a CRD they want to sync (for now, we have to stop the syncer and rebuild it), there is the possibility to inject it while the VC syncer, or whatever the future syncer is named, is running, and have it sync the CRDs?

C
Is this something we are looking at? If we are, I think the unstructured approach can help on this front, to really do the dynamic kind of injecting, I think.
F
When that is a requirement, I think it does make sense, but I also think that's quite a complex problem domain, because as soon as you need to apply mutations to those objects too, it becomes very complicated, and from an isolation perspective, allowing dynamic configuration of bubbling up those resources is potentially quite problematic. And bear in mind, we already have an approach now where we can have different syncers: you can have multiple different syncer binaries and components running.

F
That could actually be almost a separate implementation that we look at further down the line, I wonder, instead of hindering and making it harder to work with the existing package today, for what we're talking about.
D
Yeah, because the problem is that our syncer is not just directly copying unmodified objects; we have so many mutations, and I would expect that even for CRDs you probably have some mutations taking place. If you do it that way, I really don't think dynamically injecting code makes it simpler. So, to me, there are a few things. I think we definitely should support CRDs, and my current thinking is that a better way is to have some...
D
...you know, codegen, to have people generate all the required files as scaffolding code. Then they can add their logic to mutate the objects they sync, and I wish they could share one binary. The reason is that, based on my experience (it depends on the CRD type, but since CRDs are so flexible), it's highly possible that

D
when you do the mutation, you need to reference other objects, like Pods. For example, if you are syncing a controller CR, you probably need some Pod information. If you do that, you have to populate the whole cache with the Pods, and that's probably a big waste, because you cannot share the same informer cache with the default upstream syncer. So for that purpose, maybe combining them in one binary can save some resources.
D
That is my thinking. The separate approach has its own benefits, like easy maintenance; there are clear separations, easy code maintenance, something like that. But for me, I think, for the long run, if I want to enhance the syncer, I would like to enhance it in this way, so it's easy for a lot of people to address their ideas. But at least for now, don't dynamically inject things.
C
Okay, so now we have three approaches: one is unstructured; another is, James, you mentioned using the reference, the runtime.Object-style lookup; and the third one is, Fei, you mentioned trying to use some code gen to make this work, correct? So we have three possible approaches. I haven't looked at the second and third ones, so I will take a look when I have time and see which is easier.

C
If you have any ideas about which tool you want to use, just share them with me, so I can take a look.
A
If somebody needed or wanted to, they could fork this and add their own into it, but they wouldn't have to go and change everything in the mccontroller. But not to over-rotate on this, because we did spend like 40 minutes: can we open an issue on one of those two repos, and then start a design doc where we talk about this a little bit deeper? Okay, that makes sense.
A
Yep, cool, all right, moving on a little bit. I wanted to bring up real quick (it'll be a two-minute conversation): I brought up the idea of using the cluster-addons declarative patterns project to the cluster-addons group right before this meeting. They didn't have any major feedback in terms of concerns about doing it. There was some feedback about whether supporting the entire control plane in one

A
operator, versus doing individual components, could work: whether we were to have, say, for instance, a single controller for the API server and controller manager together, instead of having those separate. I think it's an idea. There are concerns that we wouldn't be able to orchestrate both sets of those add-ons in a single controller, based on the way that the libraries are set up, which might mean that, out of the box, we do want to continue to keep them separated.
A
Specifically around upgrades: being able to say, only deploy the new API server binaries, but don't add the new controller manager binaries (pods, for example), and keeping those separated. The biggest issue with that is making sure that when one is updated, we either wait, or decide what the orchestration layer is for that state machine between the two. That's going to be the biggest thing, and I think that can all end up in the NCP controller for orchestrating all the resources.

A
So I don't think it's going to be a huge issue for us, but they're all supportive, and if we need any changes, we can go work with anybody in that group as we start to build out those controllers. Then, in the last 15 minutes, I want to talk about two other things. James, you have an item on the agenda; do you want to talk about that real quick? Yeah?
A
If you want to go first, I'll just use the time at the end if you've got some. I wanted to not end this meeting without talking about next steps; that's the main thing. So, thanks for putting in that first scaffolding PR and getting that all set up. I assume you're picking up the nested etcd, because you're still assigned to that.

A
Cool, all right. And then, once that comes through: I'm still working on trying to get time set aside to actually work on the NCP controller doc; apologies for not getting to that. What's next: we still need to do the nested API server and nested controller manager. Since we have everybody here, and we do have base scaffolding set up for some of these things, is anybody interested in assigning themselves to one of those controllers?
A
If anybody has time this week, those issues are open on the CAPN project; you're more than welcome, just go assign them to yourself. If not, we'll try and figure out who, from our side of things, can pick things up.

D
Yeah, I think probably not this week, but we can definitely pick one, I guess, over... yeah. At least we will pick up one, no problem.
A
Sounds good, cool. And yeah, we should definitely pick up something from the Apple side of things as well. I don't know if anybody on the call wants to call that out now, or we can self-organize internally outside of this. And then, James, I can now give you 15 minutes to talk about what you brought up.

F
Cool. Do you want to provide any background on, like, what brought this up?
A
Yeah, so a little bit of background. We have internally been trying to work through how we're going to do admission control, basically admission webhooks, and so we've been having a bunch of conversations about how to navigate from a control plane API to the webhook, to the actual webhook endpoints, in essence. And so this sparked a conversation of, okay: the network stack that is deployed as a tenant control plane is actually somewhat different from the actual network stack.

A
I brought this up a long time ago, and we're finally trying to look through different ways of approaching it, and so James brought up an idea around doing some syncing, or sorry, proxying instead of syncing. So that's the background.
F
Okay, yeah. So it was kind of off the back of the fact that we're dealing with situations where we're having data defaulted by the tenant API server, which then gets persisted to the tenant etcd and then needs syncing over to the control plane one, and now, because that field is either immutable, or can't be set by the client, or is allocated, we end up with issues there.

F
I guess it just prompted me to think a little bit about the differences between syncing and proxying with this sort of model. Right now, we are effectively using the tenant API server as an intermediate data store between the super cluster and the client. So client data goes, you know, to the tenant
F
one, gets persisted; the standard defaulting and any other kind of operations that we're used to in kube-apiserver happen; and then it gets moved on. And it then occurred to me as well that things like resource version collisions and stuff like that aren't going to be honored as a result, because we are buffering it elsewhere. They are technically different objects, which we're then mutating just enough to be able to submit them and apply a patch to the super cluster.

F
So it started to make me think as well, when you said the IPs are, you know, different, they're allocated by two different things, and the network stacks are sort of different: that, in a way, what we're trying to do for users is present a different view of the super cluster to those users. So the definition of "list all namespaces" is kind of different.
F
It's
like
the
data
is
ultimately
the
same,
but
when
you
say
all
namespaces,
you
actually
mean
all
name
spaces
with
this
particular
prefix
or
something
along
those
lines.
So,
in
a
way
it's
it's
almost
like
a
data
transformation
thing
you're,
presenting
a
different
view
of
that
data
to
the
client,
and
if
you
take
that
to
like
and
continue
with
it,
the
differences
are
quite
stark
in
terms
of
like
the
model
and
the
problems
that
clients
will
face.
F
So if we change the model: when a client submits a resource into a namespace, say one named "default", then instead of persisting to an etcd in the middle, the API server that they're talking to, rather than persisting that itself and then having a syncer move it across, could actually apply those same transformations that we run today, those mutations, before responding to the request, and then submit and forward that request on to the upstream super cluster.
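A minimal sketch of the create path being proposed: the tenant-facing API server rewrites the request, forwards it to the super cluster, and rewrites the response before returning it. The forwarding is stubbed out here (a real proxy would call the super cluster's API server via client-go or raw HTTP), and the prefix scheme is an assumption:

```go
package main

import "fmt"

// obj is a stand-in for a namespaced Kubernetes object's metadata.
type obj struct {
	Namespace string
	Name      string
}

// superClient abstracts the call to the upstream super cluster; in a real
// proxy this would be a network request, here it is stubbed for the sketch.
type superClient func(obj) obj

// proxyCreate rewrites the tenant's namespace into the super-cluster name,
// forwards the create (blocking until the super cluster has persisted it),
// then rewrites the result back into the tenant's view.
func proxyCreate(o obj, tenant string, create superClient) obj {
	o.Namespace = tenant + "-" + o.Namespace // tenant view -> super view
	result := create(o)                      // blocks until super accepts it
	// super view -> tenant view: strip the "<tenant>-" prefix again.
	result.Namespace = result.Namespace[len(tenant)+1:]
	return result
}

func main() {
	// Pretend the super cluster simply accepts and echoes the object.
	create := func(o obj) obj { return o }
	out := proxyCreate(obj{Namespace: "default", Name: "web"}, "tenant-a", create)
	fmt.Printf("%+v\n", out)
}
```

The key property is that the create on the tenant API server only succeeds once the super cluster has actually stored the object, so there is a single persisted copy rather than a buffered intermediate one.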
F
That way you basically block: a create operation on the tenant API server will block until that has actually been submitted to the super cluster, and then the resource that's returned from the create operation by the super cluster can be mutated back into a representation suitable for the tenant clients, so basically removing those prefixes from the namespace name. Well, there's actually not any resetting or overwriting to do, but basically transforming that response and passing it back on. So the idea was basically: instead of having syncing,
F
you would have a custom API server that, for the resources that we sync today, instead of persisting those into etcd, would transform them and proxy them through. And then all the resources that today we don't sync, because we don't need to sync them (you know, the custom resource definitions, the cluster-scoped things), would still be persisted into a tenant etcd.
F
They still get their scoped view, but the API server is effectively brokering this communication, and it reminds me in a way of conversions between API versions. It's not the same thing, but it's kind of manipulating it to present a different view, so the clients can all interoperate. And yeah, it would mean that we don't need to do things like handling spec.clusterIP on services being allocated by two different things, because the API server would basically be deferring all of that, and it's a similar thing with node ports.
F
It's basically the same with everything, even defaulting functions. Right now we effectively run defaulting twice: once in the tenant API server, at whatever Kubernetes version that's using, and then that goes and gets submitted on. And in fact it gets even more muddied, because it gets defaulted on the tenant one and then it gets submitted with those default values set explicitly.
F
So the super cluster isn't actually getting a view of the user request; it's getting a view of the user request after defaulting has run at a particular Kubernetes version. And I worry that, as the project matures and we see more deployments of this, you actually get drift between the super cluster version and the tenant cluster version.
F
So it's definitely quite a large topic, and I'm obviously aware that it's quite a fundamental shift in the architecture. But I thought it would be worth bringing up, to kind of bounce some ideas around as to other ways that this could be done, and yeah, to sort of start thinking about pros and cons.
D
Yeah, I can tell you my immediate thought about this. So intentionally you want to make the source of truth the super cluster instead of the tenant for some objects, because...
F
D
That is my biggest concern, because this problem is hard to resolve architecture-wise. If you change the source of truth, architecturally you just make it a, you know, write-through cache kind of thing, which seems okay and indeed solves some problems, like the cluster IP problem. So my biggest worry is complexity for the user, because how do you tell the user that, oh, this part of the resource is not owned by you, it's owned by the super cluster, but that type of resource is owned by you?
F
I don't see it as a change in ownership, personally. It's kind of, this is true as it is today, with the way we're syncing things: you're updating objects and then, you know, they're getting synced. You could write a controller in your tenant that manipulates Service objects, for example; that problem still exists now, that tenant controllers, as well as super cluster controllers, could be manipulating those same Service objects. In fact, the situation today is kind of worse, because the optimistic concurrency guarantees that Kubernetes and its API server provide are lost.
F
We can't now rely on resource version to prevent us stomping on each other. So by switching to something like this, I actually think it aligns it closer to, I think it makes it basically the same as, how Kubernetes operates today, like when you're talking to a regular API server, because all of that sort of conflict over who's manipulating what field, or whatever else, is handled by the regular optimistic concurrency bits. The ownership thing, I don't quite see what the difference is.
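The optimistic-concurrency point can be illustrated with a toy model of the check a single API server performs: an update carrying a stale resourceVersion is rejected, which is exactly the guarantee that gets lost once two copies of the object live in two etcds. This is not Kubernetes' actual storage code, just a sketch of the semantics:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

var errConflict = errors.New("409 Conflict: resourceVersion is stale")

// stored models one object held in a single source of truth.
type stored struct {
	ResourceVersion string
	Data            string
}

// update applies the write only if the caller read the latest version,
// mirroring the API server's optimistic-concurrency check.
func (s *stored) update(rv, data string) error {
	if rv != s.ResourceVersion {
		return errConflict // a concurrent writer got there first
	}
	n, _ := strconv.Atoi(s.ResourceVersion)
	s.ResourceVersion = strconv.Itoa(n + 1)
	s.Data = data
	return nil
}

func main() {
	s := &stored{ResourceVersion: "1", Data: "a"}
	fmt.Println(s.update("1", "b")) // first writer wins
	fmt.Println(s.update("1", "c")) // stale read: rejected with a conflict
}
```

With the syncer model there are two such stores per object, so two writers can each pass their local check and the syncer then has to merge; with a single store behind a proxy, the ordinary conflict error does the job.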
F
Basically, there is no concept of ownership in this, really. We've just got clients interoperating and working together on different types. So yeah, I wouldn't say the ownership has shifted at all. It's that we're no longer having an intermediate store, and then syncing that representation in that store, and, like, resolving these conflicts.
D
So if you change the design, like, you know, the client just copies everything from the super, because everything is created in the super, which means everything changes in the super. So, for example, the spec we will copy back to the client.
F
D
So, but how about... oh, so you are saying that if tenants change those things, it will be directly...
F
It's not even replicated; that's the only place it would be persisted, basically. Right now I'm saying: instead of having two sources of truth, which is kind of what we have (we have two sources of truth for every pod, for every service, because we persist it into the tenant's etcd), tenant clients apply changes to that version of the resource, and then the syncer has to resolve conflicts and merge these back and forth.
F
Instead of that pattern, I'm basically saying that the clients basically get to interact with the super cluster's copy, or, like, version, of the resource. But because we obviously want to present it as a, like, scoped thing, the API server has to do things like rewriting namespace names, to remove prefixes or add prefixes depending on which direction it's going. So that tenant API server basically becomes a lot more...
A
Instead of talking more about it here, since we're at two minutes until the end of this, we can continue this conversation async over Slack and meet together next week as well, and see if we can talk a little bit more about it as we continue to kind of think deeper into this. I'm glad that we got this at least out in the open, to talk about, to start thinking about, because it is gonna...
A
It is gonna make it difficult for a tenant control plane to operate, to use the standard functions to do things like deploying, say, for instance, cert-manager, and even getting the right cluster IPs to associate with the workload or the certificate that it's generating. All those kinds of things we're gonna need to kind of work through.
A
D
Through and talking about it. So all the difficulties, all the complexity, is hidden by the syncer or whatever mechanism we use; you know, everything provided to the user doesn't change. So yeah, I need to think about it more. Will all the, you know, controller work, the informer caches, that kind of thing, work if you're missing that part of things in the etcd? That's my concern.
D
F
So if the super cluster goes down, then you wouldn't be able to, you know, do a pod create operation. No, you would get an error with that. But I think that's a true representation: right now that resource would be persisted into the tenant etcd and then it would sit around and wait, whereas here, in, like, true Kubernetes fashion, if the API server is down, they should get a 503 error or something, right? We kind of lie, because we've got a write-back cache.
G
F
G
E
Ah, pods and scheduling-related things, yeah, yeah. I was trying to think about this from another perspective, like: what if we could, you know, have, let's just say, the CRI layer actually plug into Kubernetes and create pods in place of containers? So you'd have a plug-in, and then you could run something like kind with a different CRI that spins up pods instead of other things. But sorry, I digress; this is more of a multi-tenancy thing than a CAPN one.
F
Yeah, for fear of dragging this out too long, I might message you about that instead, because I can see Chris sitting there, like, wanting to call it; it's two minutes past, no?
C
Yeah, I'm okay, but from my side I have totally reversed the thinking from James's proposal. James's proposal is to put everything into the super cluster and put the tenant API server in as a proxy, but my thinking is really to make the source of truth for everything the tenant clusters, and to even put a scheduler in the tenant API server, so that the tenant can really access the pods it created: put everything in the tenant cluster and make the super cluster a kind of supporting role. So it's the reverse direction.
C
I think that really distributes the load relatively nicely, because right now we have a bottleneck problem at the etcd and API server of the super cluster. So if we can move more roles to the tenant cluster, this will be a better way of distributing. So I'm looking at this direction, and probably we can talk more, but I think if we can put the scheduler really into the tenant cluster, and make the tenant API server a real one, we can probably see even more things.
C
There are some examples in the source code, in the examples, so what I'm interested in is to look at the network part, that is, how we can make it work. I think it's still achievable, given the different projects around. So if we can make this directional move, we can probably even save some of the work.
F
I think, from my understanding, some of the goals of virtual cluster were to do exactly the opposite of that, for workloads at least, which was deferring scheduling, like utilizing the super cluster as the scheduling domain, so not exposing things like node objects to the tenant cluster, or even, like, the cluster topology, really.
C
No, we only expose some of the nodes to the tenant cluster. This is why the super cluster has a supporting role here: to manage how many nodes can be exposed to this tenant cluster and how many to another tenant cluster. So this is a supporting role played by the super cluster, deciding how much you can do. Once the tenant has this amount of cluster nodes available, he can do whatever he wants, and if he wants to, he can dynamically add more nodes.
D
So, but to me the model, okay, number one, as James said, you know, the number one reason we want to come up with this thing is to share the resources and use a single super scheduler to increase the node utilization. So in a way, in your approach, to me, it only works if you have dedicated nodes assigned to the tenants and you expose those dedicated nodes to your tenant API server, I think.
D
C
D
We
have
a
really
controller
on
the
bottom.
The
problem
is,
if
you
want
to
have
an
arbitrator
to
you
know,
to
do
the
judified
to
see
which
one
is
valid,
because
you
have
concurrent
placement
right,
you
need
to
justify
which
one
is
valid,
then
that
role,
if
you
think
supermaster,
need
to
play
that
role,
they
need
to
know
all
the
you
know
scheduled
current
distribution.
They
need
to
have,
you
know,
plugs
to
see
two
schedulers
the
whole
cache
of
the
tube
scheduler
and
their
own
caches.
They
need
to
have
a
global
view
or
the
object.
D
C
Yeah, so this is why the job is distributed across the two levels. One level, on the tenant cluster, just locally optimizes whatever they want given the resources, but at the super cluster level, its job is just to make sure you divide it up nicely, so that the tenant clusters don't conflict with each other.
D
No, the dividing thing has to be decided in advance; otherwise you have to change the tenant scheduler so that, when they make a scheduling decision, they need to call another global arbitrator to see if my schedule is conflicting with others. So you may have a locking kind of thing on top of the global cache. This is one way.
C
D
It's kind of, you know, counter to my, you know, my intention, because I want nodes to be shared. I want the tenants to, you know, the resource utilization to be getting higher and higher. Otherwise this approach of ours has no difference compared with the multi-cluster approach, right, which can have dedicated clusters.
F
A
Cool, all right, I'm going to call this right now, and we can continue this async on the CAPN channel on Slack, and then let's get a doc written as well, so that we can continue to move this forward separately. Sound good, everybody? Yeah, James, that's a good idea; I'll think about it.
A
Go ahead, James. Sorry. All right, cool, all right! Thanks, everybody, for joining. This will be posted to YouTube a little bit later, and then we'll resync next week on Tuesday.