From YouTube: Kubernetes Working Group for Multi-tenancy 20210111
Description
Demo of KubeZoo's multi-tenancy capabilities by the tool's creators!
A
All right, hi everyone. This is Jim, and as part of the Multi-tenancy Working Group, today, January 11th, we're having our first session of the new year. Welcome, everyone. I think we had a couple of items on the agenda: Charles is going to present the KubeZoo multi-tenancy solution, which is the main item, and then, based on time, we will discuss a continuation of something we talked about back in November, which is creating a GitHub issue and next steps for Kubernetes documentation updates from multi-tenancy. Anything else?
B
I think we run what we believe to be the largest Kubernetes cluster in the University of North Carolina school system, and it's multi-tenant. So this meeting and this group are very interesting to me.
D
Why don't I introduce myself then, especially for the new people. I'm Adrian Ludwin, nice to meet you, as in-person as we're going to get these days. I'm joining from Toronto, Canada, which is where I live, and I'm the original author and maintainer of HNC. I apologize that I haven't been super on top of responding lately, but I'm trying to catch up again, because there's a bunch of people who want to contribute, especially when it comes to propagating annotations and labels. This has been a popular topic, and hopefully we can make it happen, possibly before 1.0 at this point. We'll see about that, no promises yet, but let's see if we can make it happen. Why don't I nominate Fei to introduce himself next.
E
Sure, sure. This is Fei. I've been working in the Multi-tenancy Working Group for a few years. I'm tackling the project called Virtual Cluster, which is another way of handling the resource isolation requirements for multiple tenants, using dedicated control planes.
A
I already introduced myself briefly, but I am Jim Bugwadia, working on benchmarks within the Multi-tenancy Working Group. I also do some work with the Policy Working Group and on the Kyverno project.
A
All right, we're just quickly doing a round of introductions for some of the new folks who are joining. I think we have two topics on the agenda today: Charles will be presenting on KubeZoo, and then, if we have time, we can quickly discuss the next steps on creating a GitHub issue for the Kubernetes docs update from multi-tenancy.
G
Yes, with that, Charles, all yours. Okay, great, so let's get started. Before that, do you know how to share the screen? I'll share the slides to the Google Meet; I didn't use it before.
G
Okay, can you see the slides? Yep? Okay, great. Hello everyone, thanks for joining. I'm Charles, a software engineer at ByteDance (TikTok). Today I will introduce our project KubeZoo, a lightweight gateway server that supports multi-tenancy on Kubernetes.
G
Just give me one second. Okay, so here is the outline of today's talk. First, I will briefly introduce the background and explain why we want to have another multi-tenancy solution. Then I will describe the basic idea of KubeZoo. After that we can discuss some of the implementation details, like how KubeZoo provides an illusion to each tenant that they are the only user of the cluster, how we categorize different Kubernetes resources and convert them respectively, and how we address some of the common challenges in a multi-tenancy scenario.
G
What about now, can you see the screen? Yes? Okay, good. So why did we come up with this project? As we all know, native Kubernetes only provides very limited multi-tenancy support using namespaces, and the default mechanism cannot meet our requirements. It is very common that different tenants or end users want to use cluster-scoped resources like ClusterRole, Namespace, PersistentVolume, etc.
G
But granting tenants the privilege to use cluster-scoped resources may result in tenant information leakage and other security problems; for example, malicious tenants may delete or update resources not belonging to them. Therefore, we need a solution or tool to help us manage Kubernetes clusters that are shared by multiple tenants. But why don't we choose some existing multi-tenancy solution?
G
Let me give you a more detailed explanation of the small tenant here. In our case, a typical tenant usually creates only a couple, or at most dozens, of pods, and uses them for just a couple of hours, and sometimes there may exist thousands of this kind of tenant on the cluster at the same time. Since most of them only create a small amount of resources, creating an independent control plane for each of them can be inefficient.
G
Finally, cluster-level or control-plane-level isolation may be overkill. The operational and maintenance burden of managing thousands of control planes or independent clusters can be heavy, and we all want to make sure that the SRE team is happy. Therefore, we came up with KubeZoo, a lightweight gateway server that supports multi-tenancy on Kubernetes.
G
The basic idea of KubeZoo is very straightforward. KubeZoo acts as a gateway server sitting in front of the kube-apiserver: it captures API requests sent from the tenant client, injects tenant-specific information, forwards the request to the back-end API server, and later processes the corresponding response before returning it back to the tenant client. Different from a normal gateway server, KubeZoo itself is an API server, which is built on top of the kube gateway runtime builder.
G
We came up with that project because we found in everyday tasks that, when managing Kubernetes clusters, we usually want to customize the authentication or authorization policy, add extra admission handlers, shard traffic, and route requests to different back-end clusters. Anyway, I will not talk too much about the kube gateway runtime builder here.
G
So, as we mentioned before, we want a multi-tenancy solution that can provide good isolation between tenants: one that provides an illusion to each tenant that he or she exclusively occupies the cluster and can safely use cluster-scoped resources without bringing any potential security issues. We also want the solution to be resource-efficient, lightweight, and fast. Finally, we want to ensure that it will not introduce heavy operational or maintenance burdens. So how does KubeZoo meet all these requirements for a tenant?
G
In addition, after a new tenant is created, we don't need to bootstrap any extra components or pods. Instead, we only need to create the tenant object and sync several system resources, like the system namespaces and cluster roles, to the backend cluster, which makes the tenant provisioning process super fast and light. Finally, as a unified gateway server, KubeZoo is shared by all tenants; therefore, it only requires a little actual computing resource and maintenance effort.
G
KubeZoo is not meant to replace any existing multi-tenancy solution, but is more of a supplement to existing multi-tenancy toolkits. As we can see in this picture, we can divide multi-tenancy solutions into four layers. From the bottom up, we have Cluster as a Service, which creates an independent cluster for each individual tenant.
G
This solution has the strongest isolation, but relatively lower resource efficiency. One level up, we have Control Plane as a Service, which provides very good isolation as well, since each tenant exclusively owns a control plane, but at the same time we have to pay the cost of maintaining and running control plane components. Then we can have a new layer called Kubernetes API Server as a Service; that is what KubeZoo aims to provide. It provides view-level isolation that creates an illusion for each tenant that they exclusively occupy the cluster.
G
Finally, on top of them, we have the default Namespace as a Service. As we can see, there exists a trade-off between the degree of isolation and the agility of a given solution: the stronger the isolation it provides, the less resource-efficient it is and the heavier the maintenance burden will be. But with all these potential options, we can choose the solution that best fits each use case. Here is a diagram that compares the above approaches along five different dimensions.
G
For the same reason, the operational cost is low as well, since for each new tenant we only need to create a tenant object and a couple of system resources; the bootstrap time for each tenant is negligible. As for API compatibility, I will talk about it later. Currently KubeZoo supports most of the Kubernetes API, except some features like webhooks, which we plan to support in the near future.
G
First, let me introduce how we manage tenants. As mentioned in the previous slides, KubeZoo itself is like an API server, and it has an associated metadata store. We register a new resource type called Tenant in the KubeZoo server and embed the tenant controller logic inside the KubeZoo server. After a new tenant object is created, the tenant controller will first create some system resources on the backend cluster, like the system namespaces, tenant cluster role, and cluster role binding, that limit the tenant's permissions on the backend cluster.
G
After a tenant is created, we can start processing requests from the tenant. We implement a new REST storage in the gateway server just for converting tenant objects. The request forwarding process is shown in the figure: after receiving a request, the KubeZoo server checks whether the request is related to a tenant object. If yes, then it is highly likely that the request was sent by the cluster administrator, who wants to conduct an operation on a tenant object.
G
Custom resource definitions (CRDs) are a little bit different from other cluster-scoped resources. Custom resources and CRDs are organized by group, version, and kind, and their names are in the format of the plural name followed by the group name. To better organize the CRDs by tenant, we add the tenant prefix to the group field. For example, suppose we have two tenants, tenant one and tenant two, and both of them want to create a CRD named App with group name tiktok.com.
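As a sketch of that conversion (the exact prefixing scheme below is an assumption; the talk only says a tenant prefix is added to the group field), a tenant's CRD might be rewritten like this on the shared backend cluster:

```yaml
# What tenant one submits through KubeZoo (hypothetical example).
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: apps.tiktok.com
spec:
  group: tiktok.com
  names:
    kind: App
    plural: apps
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
---
# What KubeZoo might store on the backend: the group gains a tenant
# prefix, so tenant two's apps.tiktok.com cannot collide with it.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: apps.tenant1.tiktok.com
spec:
  group: tenant1.tiktok.com
  names:
    kind: App
    plural: apps
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```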
G
Even though we try to standardize the conversion process by dividing the resources into different categories, there are still a couple of objects that need to be handled specially. This kind of object usually includes a cross reference, where a namespace-scoped object needs to refer to a cluster-scoped resource. Here is an example: a RoleBinding, which is a namespace-scoped resource, may refer to a cluster-scoped resource, namely a ClusterRole.
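To make the cross reference concrete (this manifest is illustrative, not taken from the talk; the user and role names are made up), the `roleRef` below points from a namespace-scoped RoleBinding to a cluster-scoped ClusterRole, so a gateway like KubeZoo would have to convert the referenced name as well as the binding itself:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default        # a namespace-scoped object...
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:                    # ...referring to a cluster-scoped object
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```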
G
On the left-hand side, we list all resources that include a cross reference and need to be handled specially. Also, all tenants share the same underlying infrastructure, so there are some objects that do not need to be converted and should be shared between tenants, for example the resource-scheduling-related objects like PriorityClass. It makes no sense to let a tenant create a different priority class that has higher priority than other tenants' classes. On the right-hand side we list the three objects that need no conversion.
G
Okay, that's all for the conversion rules. Next, I will talk about how KubeZoo addresses some common challenges in a multi-tenancy environment and how KubeZoo supports some classic Kubernetes patterns.
G
First, how do we ensure fairness between tenants, since all tenants share the gateway server? It is possible that some greedy tenant may send a large volume of API requests in a very short period of time, which can overwhelm the KubeZoo server and slow down the request processing of other tenants. To prevent this from happening, we leverage the API server Priority and Fairness (APF) mechanism, which was introduced as alpha in 1.18.
G
We create an independent FlowSchema and PriorityLevelConfiguration pair for each tenant, which ensures that different tenants won't affect each other. In addition, by setting the assured concurrency shares of each priority level, we allow the cluster administrator to assign different weights to different tenants; for a tenant with a higher weight, more API requests will be processed.
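A minimal sketch of such a per-tenant pair (the names and the user subject are hypothetical; the talk does not show the actual manifests):

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: PriorityLevelConfiguration
metadata:
  name: tenant1-priority
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 20   # the tenant's "weight"
    limitResponse:
      type: Queue
      queuing:
        queues: 8
        queueLengthLimit: 50
        handSize: 4
---
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: tenant1-flows
spec:
  priorityLevelConfiguration:
    name: tenant1-priority
  matchingPrecedence: 1000
  distinguisherMethod:
    type: ByUser
  rules:
    - subjects:
        - kind: User
          user:
            name: tenant1        # requests from this tenant...
      resourceRules:
        - verbs: ["*"]           # ...for any verb and resource...
          apiGroups: ["*"]
          resources: ["*"]
          namespaces: ["*"]      # ...in any namespace
```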
G
KubeZoo also supports the controller and operator pattern. Currently, tenants who want to deploy a controller only need to mount a kubeconfig file that points to the KubeZoo server into the controller pod. In the future, we may set the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables in the tenant pod directly, to make it easier to deploy controllers.
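One way to picture the current approach (the secret name, mount path, and image here are assumptions; the talk only says the kubeconfig is mounted into the controller pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-controller
spec:
  containers:
    - name: controller
      image: example.com/my-controller:latest   # hypothetical image
      env:
        - name: KUBECONFIG                      # point client tooling at the
          value: /etc/kubezoo/kubeconfig        # mounted tenant kubeconfig
      volumeMounts:
        - name: tenant-kubeconfig
          mountPath: /etc/kubezoo
          readOnly: true
  volumes:
    - name: tenant-kubeconfig
      secret:
        secretName: tenant1-kubezoo-kubeconfig  # hypothetical secret holding a
                                                # kubeconfig that targets the
                                                # KubeZoo endpoint, not the
                                                # backend apiserver
```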
G
Currently, since we use a large flat network internally for all tenants, we do not need to worry about network isolation. But if we want to isolate tenant networks from each other using VPCs, then we may need to ask the underlying infrastructure to support it. For example, we can combine KubeZoo with some serverless Kubernetes solution, like Virtual Kubelet, and the underlying infrastructure will help us do the heavy lifting and support network isolation.
G
To support DNS, we can set up CoreDNS as a regular controller in the tenant's VPC, with the DNS policy set to ClusterFirst, and have it watch the tenant's EndpointSlices through the KubeZoo server. Then the tenant's pods and services should be able to talk to each other within the tenant's VPC.
G
Okay, I will try to share the screen.
G
So it looks like I cannot share the terminal.
D
Camera on?
G
Yeah, that's okay. Let me do the demo first and then turn on the camera.
G
Okay, so can you see the terminal clearly? Yes? Okay. So let's do some demos. To help you better understand the demo, I will first introduce the demo environment. All of the demo will be running on a local minikube cluster, and I will start a KubeZoo server and its associated etcd as independent processes on my local machine. I will use this tmux window to start all the necessary components. In the next tmux window we have two contexts: the first one is the minikube context, which we use to talk to the backend cluster, and the second is the kubezoo-admin context, which I will use to talk to the KubeZoo server.
G
So here is the simple tenant YAML I use to create a tenant. This one is tenant two, with ID 62; I just randomly picked the ID and the name. This one is just for the demo, so there is no actual spec; in a production environment we may add more spec fields, like a quota and a weight for each tenant. Let's check.
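A sketch of what such a tenant manifest might look like (the apiVersion, kind placement, and field names are assumptions; only the ID of 62 and the idea of future quota/weight fields come from the talk):

```yaml
# Hypothetical Tenant object registered in the KubeZoo server.
apiVersion: tenant.kubezoo.io/v1alpha1   # assumed group/version
kind: Tenant
metadata:
  name: tenant-2
spec:
  id: 62            # used as the prefix for this tenant's backend resources
  # Fields like these might be added in a production environment:
  # quota:
  #   pods: "100"
  # weight: 20      # could feed the tenant's APF assured concurrency shares
```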
G
Okay, so tenant two is created. Let's check the backend cluster and see what we have. As you can see, on the backend cluster, after the tenant is created, we synchronize the system namespaces for the tenant. In our case, we synchronize four namespaces — default, kube-node-lease, kube-public, and kube-system — with the tenant ID as the prefix.
G
So we have no resources for tenant two yet. Then let's try to create some pods or StatefulSets; here I create a StatefulSet.
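The talk doesn't show the manifest, but a minimal StatefulSet that would produce the two pods seen in the demo could look like:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo
  namespace: default     # the tenant's view; KubeZoo maps this to a
                         # tenant-prefixed namespace on the backend
spec:
  serviceName: demo
  replicas: 2            # yields the two pods listed in the demo
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx   # placeholder image
```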
G
So now, as you can see, two pods are created for tenant two, and if we view these resources as tenant two, we can see that all of them appear in the default namespace. But what if tenant one wants to check the resources on the cluster?
G
For tenant one there is nothing created, so tenant one sees nothing. But what do we have if we look as the cluster administrator?
G
And the same here: let's create the same pods for tenant one.
G
Okay, so this time let's check the resources. Same thing here: for tenant one, we have the corresponding resources created in the tenant's own namespace.
G
And if we try to view the resources as tenant two, we can only see the resources belonging to tenant two. Okay, so next I will try to deploy a basic controller. This basic controller is just a pod lister, and I will deploy it as tenant one. The pod lister will just list all the pods inside the default namespace, so if everything works as expected, the lister will keep listing three pods: itself and the two pods I just created for tenant one.
G
In our case, no, because the user cannot access it: the user's kubeconfig file can only be used to access the KubeZoo server. Only if you handed the kubeconfig file for the back-end cluster to the users or tenants would they be able to access the back-end cluster.
G
Excuse me, I'm sorry, I was trying to open up the slides, so I didn't quite get your question. Yeah.
G
Oh yeah, that's a good question. For now we just do it manually. For example, if I'm tenant one and I want to deploy some controllers, I will require the tenant to mount this kubeconfig file into the tenant pods directly, things like that. So I know this one is just a POC for the controller for now, but we plan to add more support: setting the environment variables and mounting the service account token into the pods. Yeah, cool, yeah.
A
You covered CRDs, as well as how you handled DNS and networking and some things, but what about resource quotas? I think you mentioned quotas. Yes.
F
Yeah, one thing I've learned is, if you don't use Chrome, Google Meet is a rough experience.
F
I use Firefox, and so every time I try to use it on Firefox it freezes up, and then I'm like, crap, switch to Chrome. So yeah.
D
Well, I'm waiting to re-invite Charles once he joins, but so far I'm not thanking him. Jim, maybe do you want to switch gears and we can talk about your thing about documentation, at least until Charles rejoins us.
A
All right, yeah, let me just quickly put the link in chat; I think I added it in the doc. Just in terms of context for this: what we're talking about is that in our last meeting we had discussed some of the next steps or activities within this working group, and I think the consensus was that, just like we saw in this demo, there are several good solutions for multi-tenancy.
G
Okay, yeah, sorry, I'm not sure at which moment I was cut off.
G
Okay, so the basic idea is that we implement a new CRD called ClusterResourceQuota, which can help us manage resource quota across multiple namespaces. The hard limits of this ClusterResourceQuota will be set to the tenant's resource quota, and the used resources will be the sum of the resource usage across all the namespaces. We have a controller, along with a webhook, which helps us admit the incoming pods and also helps us calculate the resource usage across namespaces.
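A sketch of such a ClusterResourceQuota object (the group, version, and field names are assumptions, modeled loosely on the built-in ResourceQuota; the talk only describes hard limits set from the tenant's quota and usage summed across the tenant's namespaces):

```yaml
# Hypothetical CRD instance.
apiVersion: quota.kubezoo.io/v1alpha1   # assumed group/version
kind: ClusterResourceQuota
metadata:
  name: tenant1-quota
spec:
  # Hard limits taken from the tenant's resource quota.
  hard:
    pods: "50"
    requests.cpu: "20"
    requests.memory: 64Gi
status:
  # Maintained by the controller: the sum of usage across all of
  # the tenant's namespaces.
  used:
    pods: "7"
    requests.cpu: "3"
    requests.memory: 9Gi
```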
A
Okay, no, that makes a lot of sense, but a couple more questions. One is, with managed Kubernetes clusters, does this pose any problems, or can you use this in front of, say, GKE, EKS, etc.?
G
Yes, so currently the only problem is the certificate. When we want to interact with the back-end cluster, we need to have the certificate; here, we need to have the service account public key, because we need to create some controllers inside KubeZoo, and that is not normally accessible if we are using a public cloud provider's managed Kubernetes service. That is something we want to solve in the future, but for now, other than that, there is no other limitation.
A
Got it. Yeah, one last question: you mentioned something about webhooks not being supported, but you could run, say, Kyverno or Gatekeeper in the main cluster — you would be able to do that, right, and apply policies and things like that?
G
Yeah, so right now, it's because we have not figured out a solution for tenants to install a webhook as the tenant itself: if we have multiple tenants, then how can we make sure the tenant's webhook will only apply to the resources of that specific tenant? Yes, okay.
G
So I think the webhook has to be set up on the back-end API server, and the webhook traffic will go back through KubeZoo again, something like that, and KubeZoo will convert the tenant's objects. Yeah, great.
D
I just had one question, which is: what's the ongoing maintenance burden of KubeZoo? Like, let's say a new API is added to Kubernetes, or old ones are deprecated. Is that all just handled automatically, unless there's something like a cross reference?
G
Yes. Yeah, it took us some time to list all these cross-reference objects and the cluster-scoped objects, things like that. I think that's the only thing that makes the solution maybe not that generic: if Kubernetes adds a new object, we need to take some time to investigate whether this object cross-references other objects. Yeah.
D
I think I have one last question; we're almost out of time. I was just wondering if you could tell us maybe a little bit more about the tenants that are actually using this. Are these teams that are running services in prod? How often do these services need to talk to each other? You have an integration with DNS, so it sounds like they can talk to each other, but I was just wondering if you could give us any more insight into the kinds of teams that are using the system.
G
Oh, so to clarify, your question is that you want to know some of the use cases of this KubeZoo server, right?
G
Yeah, so tenants can be different. Sometimes we have small tenants; they just want to test, to try out Kubernetes, so they just...
G
...get a small cluster here and then try out several pods. And sometimes there are maybe some online workloads; they want to start a cluster in just a minute and run some long-running pods. And sometimes there are maybe some offline workloads; they want to just submit some machine learning workload, and when they start to submit the workload they want to have a new cluster ready. So that's another use case. I would say there are many use cases we are trying. Yeah.
D
And what about the case where a service or a workload from one tenant needs to access a service from another tenant? When does that come up — for, like, these one-off things as well, like for batch or offline?
G
So if one tenant's service wants to talk to another tenant's service: in our case, internally, we just use a large flat network, so they can talk to each other freely. There is no limitation on network connections. Yeah, okay, cool.
D
Okay, we're almost at time. So, Charles, thanks for that, that was super interesting! Oh, is this open source, or are you thinking of open-sourcing it?
G
Open-sourcing it — that's the thing we wanted to talk about, yeah. We are currently quite ready for open source, but we are not sure how to make it more native to the community, because I know that we are no longer doing incubation for multi-tenancy projects, right? So maybe in the future, next month or the month after, we will open source this project under the ByteDance or TikTok organization. Yeah, that's the plan.
D
And that discussion can actually feed in well to Jim's discussion, because as we update our documentation, we can decide how we are going to point to community tools: things like Kyverno — not actually, well, actually, no, Kyverno is in the CNCF, right? It is CNCF, yeah. But there are other things, like Loft, for example, that are not, and we don't necessarily want to exclude those and pretend they don't exist, right?
A
Yes, so real quick on that: what we wanted to do was at least propose a section in the Kubernetes documentation on multi-tenancy. The idea would be to try to come up with some text which serves as the initial guide, explains what different types of models exist, and then provides a way, like Adrian is saying, where we can list projects: both CNCF projects and in-tree projects managed by SIGs, as well as external projects.
A
You know, there is kind of a CNCF policy on how you can list other projects and mark them, so it would be more of a self-service mechanism to update the docs in that manner.
A
But what I was thinking as a next step here, if everyone agrees, is we'll just create a GitHub issue in the multi-tenancy repo, and then we can circulate that and get some feedback from various SIGs and key folks, see what they think, and then propose the doc updates and the sections accordingly.
D
Thanks, everybody. Sorry, I'm afraid I have to go, and when I'm running the meeting, that means the meeting is about to end. But thanks, everybody, for joining, and see you in two weeks, hopefully. See you later, thanks, bye.