From YouTube: WG-Multitenancy BI-Weekly Meeting for 20221018
A: Now we are officially online, so let's kick things off. Thank you, Jim and Tima, for having us join this meeting; so glad to be here. My name is Kaishu, and I lead the Kubernetes team at ByteDance. Since January, when Charles delivered the first session to this group, we have made a lot of progress. If you look at the CNCF landscape right now, KubeZoo is already officially on the landscape.
A: That's great news for everybody. We have also made some other updates, like fixing a lot of issues reported by the community. This time we want to present an update, mainly on the cluster resource quota implementation. Even though it's still in tech preview right now, we will soon be publishing it to the open source community. That's why we prepared this discussion with the team: to seek feedback before we finalize the publication shortly.
A: One quick question, if I may. You mentioned that this is in the CNCF landscape. Is KubeZoo a sandbox project or an incubating project?

Right now, I think it's an incubating project. Yeah, fantastic.
B: Yeah, thanks again, and thanks everyone for joining. It's a great pleasure to meet you guys in this working group meeting. This is Lincoln from ByteDance. I'm here together with Kaishu and Ishan from our team to give a status update for KubeZoo. We are going to present the incremental changes since Charles presented early this January. Okay, so here is the agenda I prepared for this session.
B: First of all, we are going to give a quick recap of KubeZoo, which I think will be very helpful for folks who are new to it. Second, there are a few pieces of news to share with you. Firstly, KubeZoo was open sourced in late July and was just included in the CNCF landscape, I think this month. We also improved the Kubernetes compatibility of KubeZoo; so far it supports Kubernetes up to 1.24. The team has been working very hard, running a lot of performance testing and doing a lot of bug fixing. And the third item is a new feature: we're going to share the new cluster-level resource quota management feature, even though it's still something like a tech preview feature.
B: That's why you folks still cannot access it from the KubeZoo GitHub repo, but it'll be generally available very soon, so please stay tuned. And in the last part of the session, we're going to have a quick demo of this new feature.
B: First of all, let's have a quick recap of KubeZoo. KubeZoo is a lightweight gateway service for Kubernetes multi-tenancy. It presents a new tenancy model for Kubernetes. Currently there are three main tenancy models in Kubernetes: namespace as a service, control plane as a service, and cluster as a service.
B: Kubernetes API as a service, presented by KubeZoo, is designed to sit between namespace as a service and control plane as a service, which enables KubeZoo to be very lightweight and very fast, and to provide view-level isolation. As described in this diagram, simply speaking, KubeZoo runs its own API server, along with a distributed key-value store as its metadata storage, which serves the API requests coming from inside all tenants.
B: Meanwhile, it also serves as an API gateway with protocol conversion, which submits API requests, with the proper conversion, to the backend API server.
B: In terms of tenant management, we introduced a Tenant CRD in the API and a tenant controller within the KubeZoo server to manage all the tenant instances. The tenant controller is responsible for signing certificates and generating the kubeconfig file. This is how KubeZoo gives each tenant an isolated view of the cluster.
B: Now that we have refreshed our memory about KubeZoo, let's take a look at the new feature, which is cluster-level resource quota management. Currently, the Kubernetes built-in resource quota is namespace scoped, so it is insufficient to support tenancy models that go beyond a single namespace. That being said, we need a resource quota at the tenant level, which supports resource quota management spanning multiple namespaces.
B: That's why we need cluster-level resource quota management, a.k.a. the cluster resource quota. It is a cluster-scoped CRD, and it supports cross-namespace resource quota management. The status of the cluster resource quota is where we persist the aggregated resource usage across each of the namespaces managed by the tenant.
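As a rough sketch, such a cluster resource quota object might look like the following. This is only illustrative: the feature was still in tech preview at the time of this talk, so the API group, version, and field names below are assumptions, not the published KubeZoo API.

```yaml
# Hypothetical ClusterResourceQuota; group/version and field names
# are illustrative, not taken from the published KubeZoo API.
apiVersion: quota.kubezoo.io/v1alpha1
kind: ClusterResourceQuota
metadata:
  name: tenant-quota          # cluster-scoped, so no namespace
spec:
  # Select every namespace belonging to one tenant by label.
  namespaceSelector:
    matchLabels:
      tenant: my-tenant       # hypothetical tenant label
  hard:
    cpu: "2"
    memory: 2Gi
status:
  # Aggregated usage across all managed namespaces, synced
  # asynchronously by the cluster resource quota controller.
  used:
    cpu: "0"
    memory: "0"
```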
B: Here we are going to give a little bit more information about the key components of the cluster resource quota feature. The same as the Kubernetes built-in resource quota, the cluster resource quota also has two main components: the admission component and the controller. As we're already familiar with from the built-in resource quota, the admission component is responsible for both quota admissibility evaluation and usage updating, while the controller reconciles quota updates and recalculates the actual resource usage.
B: So basically the controller serves as the source of truth in terms of resource quota. The cluster resource quota controller keeps watching for the creation of its instances, and then creates one managed resource quota instance for each of the namespaces with a matching label, as specified in the spec in the form of a namespace selector.
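For each namespace matched by the selector, the controller would then stamp out an ordinary built-in ResourceQuota, roughly like this (a sketch; the object name and marker label are assumptions):

```yaml
# Managed, built-in ResourceQuota created per matching namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: managed-tenant-quota             # hypothetical naming scheme
  namespace: my-tenant-default
  labels:
    kubezoo.io/managed-by: tenant-quota  # illustrative marker label
spec:
  hard:
    cpu: "2"      # each namespace inherits the same hard limit
    memory: 2Gi   # as the parent cluster resource quota
```

Note that each managed quota carries the full tenant-wide limit; the cross-namespace enforcement happens in the cluster resource quota admission webhook, not here.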
B: So basically it's just a label selector. After creation, the cluster resource quota controller will continue watching all of these instances, keep syncing and aggregating quota usage from all the managed resource quotas, and persist the aggregated usage in the status of the cluster resource quota. In the admission part, different from the in-tree admission of the built-in resource quota, the cluster resource quota just leverages an admission webhook, and it is only responsible for admitting admissible resource requests.
B: It does not update any resource usage the way the built-in resource quota admission does. The design here is to rely on both the admission and the controller of the built-in resource quota to further evaluate the admissibility of the resource request and update the usage quantity.
B: Here's a diagram of the architecture overview of Kubernetes with the cluster resource quota. It also shows the control flow of how the cluster resource quota works with the built-in resource quota when a pod is created. Let's assume that a cluster resource quota instance has been created for one tenant, and all the managed resource quota instances have been created in each of its namespaces. Then a pod is created by the tenant.
B: The request will first go through the out-of-tree admission webhook of the cluster resource quota, which will evaluate the resource admissibility based on the quota and the current usage, and will either deny or allow it. If the request is allowed by the admission webhook, it will go through the in-tree resource quota admission, which will admit it and update the resource quota usage; that part is just the built-in logic.
B: After that, the resource quota controller will reconcile the quota and recalculate the usage to provide the source of truth; this step is also just built-in logic. And after the resource quota has completed that work, the cluster resource quota controller will keep watching all the managed resource quotas and keep syncing and aggregating the usage of each resource quota under all the managed namespaces.
B: There are some advantages and disadvantages to this kind of design. The advantage is that it effectively avoids contention in usage calculation and updates between the cluster resource quota and the built-in resource quota. Compare that to a design where the cluster resource quota simply inherits what the built-in resource quota admission does, including the usage update: there would be contention in usage calculation and updating. That being said, this design also simplifies the cluster resource quota admission part, so the admission is simple enough to just do the admissibility evaluation and either admit or reject. In terms of the disadvantages, overselling resources is possible, because of the asynchronous model, where we rely on a watch of the CRD updates.
B: Because we rely on the usage update, which is aggregated and synced into the status of the cluster resource quota, and since it's asynchronous, if some ongoing request has not yet been persisted or updated into the status of the cluster resource quota, and before that some other admissible request comes in, there can be resource quota overselling.
B: So that's a known issue here. We can have some alerts, or we can introduce an extra allocated field to proactively prevent the overselling, but I think it's just a trade-off between either overselling or underselling.
B: So much for the cluster resource quota itself. Now let's take a look at how KubeZoo leverages this cluster resource quota feature. Inside the tenant spec, we add a new field for specifying the cluster resource quota.
B: Whenever a tenant instance is created, the tenant controller will create a cluster resource quota instance in the KubeZoo API server, and the cluster resource quota controller will take over the rest. As we mentioned in the previous slides, it will create one managed resource quota in each of the managed namespaces under this tenant. Similar reconciliation handling will apply to the update and delete events for the cluster resource quota spec in the tenant.
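A tenant manifest carrying the new field might look roughly like this; only the idea of embedding the quota in the tenant spec comes from the talk, and the field names are guesses for illustration:

```yaml
# Hypothetical Tenant object with the new quota field.
apiVersion: tenant.kubezoo.io/v1alpha1
kind: Tenant
metadata:
  name: "111111"        # the demo uses a short all-ones tenant name
spec:
  clusterResourceQuota: # illustrative field name
    hard:
      cpu: "2"
      memory: 2Gi
```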
B: So much for the theory about this new feature and the status update. Now we're going to have a quick demo of this new feature.
C: This is Fei. I got a question. Can you go back to your previous slide?

B: Yeah, for the hard limits, each namespace just inherits whatever the limit is in the cluster resource quota. I didn't prepare a CRD spec slide, but I think the demo will show that.
C: What is the limit of one namespace? And if we have N namespaces, is the total quota going to be N times the hard limit? Is that it?
B: Yeah, so basically, let's take an example. Say the cluster resource quota has two CPUs and two gigs of memory. That means each of the namespaces will have exactly the same limit: two cores and two gigs of memory. Okay.
B: Yeah, but the total quota will not be the sum of that; the cluster resource quota admission and controller will prevent it from going beyond the hard limit. I think we just use the resource quota to collect the usage. Okay.
C: Sorry, I just want to make it very clear. If people want to use the cluster resource quota, say they want to put a hard limit on a tenant, how can they do that? If they want to put a hard limit on a tenant, do they have to duplicate the limit in every namespace? Can you do that, basically?
B: A hard limit for the tenant? Yeah, I think so, but it cannot be granular per namespace. Basically we just see all the namespaces under the same tenant, and they will all have the same limit. Yes.
B: Yeah, so we have two levels of checks, right? The cluster resource quota will check and prevent all the requests from going beyond X cores and Y gigs of memory, which is kind of like a shared pool among all the namespaces under this tenant. But we don't prevent a tenant from creating all the resources within one namespace; that's also okay, as long as it stays within the tenant's overall limit.
B: The demo will illustrate that. I believe Ryan has a question.
A: I was just going to reiterate my understanding, because I think it's similar to how we handle it for HNC. Essentially, they put a resource quota on each namespace, and then anytime that changes, they add it to the global. So you have that global status, and there's an admission controller on that. So there's a delay, but I think that's my understanding of how it works, right?
B: Oh yeah, I think so. Do any other folks have any questions?
C: But that means, within the tenant, namespaces can contend for the quota, right?
B: Yeah, I got your point. So if the tenant is trying to create resources in two different namespaces, yeah.
B: Yeah, I think, theoretically, it's possible in some extreme cases.
B: I prepared a recording here, just to make sure everything goes smoothly.
B: Let me just start the recording. After the thing is set up, we can see that we have two tabs here: one is for the backend API server, and the other one is for KubeZoo.
B: In the next step, we're going to create a tenant. After we apply that, let me show what is included in the tenant: we just name it with six ones, and give it two CPUs and two gigs of memory. Then let's check the status of the cluster resource quota.
B: We can see that two CPUs and two gigs of memory have been specified in the spec, and in the status, both of the usages are zero.
B: Cool. Next we're going to take a look at whether there's anything in the kubeconfig file. Let's just take a look at the resource quota from the tenant's perspective: in the default namespace, the resource quota is the same as what we specified in the cluster resource quota.
B: Now, here's a pod with one CPU and one gig of memory, which is under what the cluster resource quota specified in its spec. We can see that the pod has been created successfully and is running now, and then we're going to check the quota usage.
B: We can also take a look at the resource quota from the perspective of the backend API server. As you can see, and this answers Fei's question, each of the namespaces just gets the same hard limit, with the same amounts as specified in the cluster resource quota spec.
B: In the next step, we are going to create another pod in a different namespace.
B: We are going to create a namespace called test and create a pod under that namespace.
B: We can see that this request has two CPUs and two gigs. If we just look at the resource quota under the test namespace, it should be admissible, but it will be rejected by the cluster resource quota, because if the pod were created, the total usage would go beyond the hard limit of the cluster resource quota. That's why it will be denied.
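For reference, the rejected pod from this step would look something like the sketch below: its 2 CPU / 2Gi request fits the per-namespace ResourceQuota in `test` on its own, but combined with the 1 CPU / 1Gi pod already running in `default` it pushes the tenant-wide aggregate past the 2 CPU / 2Gi hard limit, so the cluster resource quota webhook denies it (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: big-pod        # illustrative name
  namespace: test
spec:
  containers:
  - name: app
    image: nginx       # illustrative image
    resources:
      requests:
        cpu: "2"
        memory: 2Gi
      limits:
        cpu: "2"
        memory: 2Gi
```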
B: Same view from the backend API server's perspective.
B: Next we're going to show you how a customized resource quota, that is, a ResourceQuota created by the tenant, collaborates with the cluster resource quota that's specified in the tenant spec.
Yeah
we
prepared
yamo
for
create
resource
quota,
so
it
has
a
hard
limit
for
one
CPU
and
one
gig
of
memory,
and
then
we,
the
tenant,
will
apply
this
resource
quota
so
which
will
create
the
resource
code
under
the
default
namespace.
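The tenant-applied quota from this step is an ordinary built-in ResourceQuota, roughly:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: custom-quota   # illustrative name
  namespace: default
spec:
  hard:
    cpu: "1"
    memory: 1Gi
```

The default namespace now holds both this stricter quota and the managed one derived from the tenant spec; at admission time the stricter of the two limits wins.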
B: We can see that, under the default namespace, from the tenant's perspective, we now have two resource quotas there.
B: Now we're going to create a pod request which is beyond the customized resource quota but under the cluster resource quota, so you can see what will happen. Here, if you look at the error message, the error is not raised by the cluster resource quota. That means the cluster resource quota admitted this request, but the built-in resource quota admission component rejected it.
B: Of course it did: the request goes beyond the customized resource quota that we just created.
B: I think that concludes my demo, and I hope it properly answered the question Fei had.
C: Yeah, okay, I kind of understand now.
B: Yeah, so we are going to take a look at the extreme case.
C: Going back to the demo: each tenant will at least have a few namespaces by default; it could be kube-system, whatever.
A: In terms of the roadmap, are there any other major features or things planned?

B: We're actually working on things like multi-tenant DNS and access control for multiple tenants; we found some access leaking issues. On the security capability side, we will support service accounts and RBAC, role-based access control, for tenants, to help us improve the access control.
A: I think those are two of the major features we are working on. Okay.
A: So we hope a lot more community users will use KubeZoo and can send us more input and feedback regarding our next steps. It depends on you guys, Jim and Ryan: you have to evangelize this, that this is a cool thing, to the community, so we can have a bigger audience.
C: But yeah, I think the cluster resource quota can be independent of KubeZoo, right? If you look at the design, as you can see, you can use it on its own.
A: One of the use cases: we implemented the exact same thing, I'm assuming mostly, in HNC. So that's already there, but yeah, it seems like it's a pretty similar design.
A: Adrian, and I forget who else was on the team at that point, but they rolled it into Anthos, so it's the Google HRQ, and then they backported it to HNC not that long ago.
A: It's pretty similar. I was actually looking back through the code to try to figure out exactly what we were doing. Essentially, since we're hierarchical, we know all the child tenants, so we manage it and do the calculation up front, so we can make the decision in the admission webhook.
C: I think the funny part, the key funny part, is they don't do the usage bookkeeping in any internal in-memory structure or internal cache; they just leverage the built-in resource quota machinery to do the usage calculations, which, coming from the API server, should be accurate and synchronized from the API server's perspective. The trick is you have to give each individual namespace the cluster-scoped limit by default, because one namespace can use it all.
C: You should allow that. But if people want finer-grained control, they can override it per namespace; the namespace is the finest granularity, right? I understand.
C: Yeah, I think that does make sense from an implementation perspective, and it simplifies the design. I mean, I could be wrong.
A: Cool, this is good. Thank you for the update; we're very much looking forward to some of the other features you mentioned. We'll post the recording on the channel and we can publish it there.
A: Okay, yes, I'll be there. I'm there Sunday through the following week, I think through Saturday, so pretty much all week.
A: We haven't made the decision yet; people are tired and sick, so we'll probably skip it this time. But good luck with your session, Jimmy and folks. Thank you. All right, thanks.