From YouTube: Kubernetes WG Multitenancy 20180606
Description
Agenda and Notes: https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY/edit
A: Cool, take it away. All right, I'm Tim Allclair, and I'm working on adding sandboxes to Kubernetes as a native concept. When we say sandboxes, we're talking about kind of Kata and gVisor right now, but keeping the definition open to other runtimes that might come up in the future. We're primarily driving these discussions in SIG Node, but I just wanted to give an update here since I see this as an important piece of multi-tenancy, and yeah, I think I've talked about it before.
A: I won't go into the concepts too much, but right now we're talking about what we want the Kubernetes API to look like, and the goal is to get something, probably at alpha stage, into Kubernetes 1.12. So there are kind of two competing proposals out right now. The first one is a super simple API.
A: It's just a sandbox boolean on the pod, and this basically just gets passed through to the runtime, the CRI runtime, which then makes a decision based on the state of that boolean about which OCI runtime to call underneath. So that would either be a native one in the case of sandbox false, or one of the sandbox runtimes in the case of sandbox true, and it would be up to the cluster administrator to set up the nodes with the desired sandbox runtime. And then there are also some interactions with other fields in the API. So, for instance, we don't want to allow you to set sandbox true and also say, but mount, you know, the root host path, and add host networking and whatnot, because then you're not really sandboxed. The goal is to make this both really simple and also hard to screw up, and that's kind of where those requirements come in. So this is the really simple option.
A: A lot of the decisions on how to configure it are just pushed to the cluster administrator, or the, you know, cluster provisioner in the case of a managed cloud environment. So that's the first approach.
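For illustration, here is a minimal sketch of what a pod might look like under that first proposal. The `sandbox` field is hypothetical, taken from the proposal being discussed, and is not part of any shipped Kubernetes API; the image and names are placeholders.

```yaml
# Hypothetical sketch of the "sandbox boolean" proposal (not a real Kubernetes field).
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  sandbox: true            # proposed: passed through to the CRI runtime, which
                           # then picks a sandboxed OCI runtime (e.g. Kata, gVisor)
  containers:
  - name: app
    image: example.com/app:latest
  # Per the discussion, validation would reject isolation-breaking fields such as
  # hostNetwork: true or hostPath volumes whenever sandbox is true.
```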
A: The second approach is kind of the more expressive, more abstract one. Instead of just a boolean, we would introduce a new API resource called RuntimeClass, which sort of defines what a runtime in Kubernetes would look like, and that has kind of three different classes of fields that are captured in the spec of the RuntimeClass.
A: One is fields that change the runtime behavior of the pod, and so this would be like opaque parameters that get passed through to the CRI runtime — basically annotations, but things that can be used to configure the underlying runtime. Pod overhead is another one that we've talked about adding in there: since sandboxes tend to have a higher overhead, it's more important that we factor that into scheduling decisions, and so that could be included in the RuntimeClass as well.
A: Yeah, and then the kind of extension of that is, we could have sort of predefined runtime classes. So, for instance, we might say that sandbox is a runtime class that has a, you know, specific definition, a set of requirements, and then make those part of the API.
A: So you can expect that if you are deploying to a cluster and it has a runtime class named sandbox, you have some expectations about what you get from that, and we could make that part of conformance. This is sort of thinking farther into the future, but it's how we can get RuntimeClass to sort of have the same simplicity that the sandbox boolean has. So anyhow, those are the two kind of competing proposals that are out right now.
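For reference, the RuntimeClass resource that later shipped in Kubernetes roughly matches this second proposal; the class name, handler, and overhead values below are examples only, not values from the meeting.

```yaml
# RuntimeClass (node.k8s.io) — example values only.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed          # the name pods reference
handler: runsc             # CRI handler, e.g. gVisor's runsc
overhead:
  podFixed:                # pod overhead factored into scheduling, as discussed
    cpu: 250m
    memory: 120Mi
---
# A pod opts in by naming the class.
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: sandboxed
  containers:
  - name: app
    image: example.com/app:latest
```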
B: Yeah, so my doc — as a preface, I would like to say that I'm, like, not married to this; I just had this weird idea, and obviously it's related to the way containers are set up on Linux and all the other isolation mechanisms on other kernels. So I kind of took it really far. So yeah, I guess, does anyone have any specific questions? I mean, I could do a brief overview, but yeah, basically I just kind of used what I knew about isolation mechanisms on other kernels and kind of redid that abstraction on Kubernetes, which is kind of insane. But I do feel like David brought up some really good points as to how the scheduler works, and things that I had left out with regard to, like, resource control per node. So that's an interesting complexity that I didn't really think into.
B: Yeah, I mean, personally I think most people thought about it in terms of just using RBAC and the mechanisms that we have today, but I'm not, like, fully confident in just doing that, just because if there's one bug, then you have to upgrade. But I'm not sure — has anyone here, like, deployed anything like that?
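The RBAC approach mentioned here typically amounts to a namespace-scoped Role and RoleBinding per tenant; a minimal sketch follows, where the namespace and group names are invented for illustration.

```yaml
# Confine a tenant group to a single namespace with standard RBAC.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-a-edit
  namespace: tenant-a
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-edit
  namespace: tenant-a
subjects:
- kind: Group
  name: tenant-a-users               # example identity group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-a-edit
  apiGroup: rbac.authorization.k8s.io
```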
B: Yeah, so it is like a nested Kubernetes, which is a little gross, and there are — I saw some GitHub repos that actually do, like, nested Kubernetes, which I forget the names of right now. Yes.
E: We are just not — and this is something that I just read today in your doc — about the tenants that spin up pods in namespaces. This is something we haven't tried, but everything else we did, and we actually have proofs of concept of that, so eventually we want to go there. But that's why I'm here for the first time: to actually engage with this working group and share our experience, and maybe, hopefully, learn a lot from you guys as well.
B: Yeah, that's cool. The question in the chat is also similar — it seems quite close to running that tenant Kubernetes in a provider Kubernetes namespace, which, yeah, it is; it's totally similar to nested, and I looked at a lot of the nested code on GitHub, but I hadn't vetted it. So that's why I just didn't say, like, use this thing. So yeah, it is very similar, though; I mean, you just basically have to give each tenant their own API server URL.
E: Yeah, the only difference on our end, to be very straightforward, is that the ring-0 Kubernetes is not exactly Kubernetes. That's just an implementation detail on our end, but other than that, the design — what I really, the conclusion I wanted to share with you is that we agreed with this design.
B: Yeah, yeah, yeah. And also, in the chat, from the analogy of the nested kernel paper — I was thinking about it as, like, actually integrated into Kubernetes, not like a thing on top, but that would take a lot of work, to be honest. And especially the kind of abstract clone call, where Kubernetes would be creating the namespaces — that would take a lot of work just in terms of, like, the etcd and all those services that you have to spin up.
F: For sure, a real use case for this, or at least for parts of it — yeah, going forward we want to think about the resource isolation, but for right now the main concern we have is making operators, and specifically CRDs, available to more than one namespace, but not the entire cluster. So that would require at least multiple API servers.
F: So, sort of — when using Kubernetes features like CRDs or aggregated API servers, when you install either the CRD or the aggregated API server, you get an API that is registered globally for the cluster. So in a multi-tenant scenario, you might want tenants that have multiple namespaces but don't all see the same sets of APIs from the Kubernetes API server.
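The cluster-global registration described here is visible in the manifest itself: a CustomResourceDefinition is a cluster-scoped object, so installing it exposes the new API to every namespace even when the custom resources it defines are namespaced. The resource below is only an illustrative example.

```yaml
# The CRD object is cluster-scoped; applying it registers the new API cluster-wide.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced        # the Widget objects live in namespaces...
  names:                   # ...but this registration is global to the cluster
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```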
G: Somehow I never quite understood that subtlety; that's interesting. All right, thanks — that makes sense. And then I guess your point with the aggregated API server is that it's also a globally registered thing, in a similar way — is that what you're saying?
F: One way to do it could be with new scopes. I don't think there's, like, an easy transition path for the API server, the kube API server, to deal with that, right? The path of least resistance seems to be to do something that lets you have multiple API servers, rather than making the API server aware of all these things. But that's very cursory.
E: I'm just trying to understand something here, from what I get from Jessie's doc. In particular, the proposal is to — let's say — just clearly segregate tenants to separate Kubernetes clusters. You have one, which is a ring-0, that is used to schedule these two other clusters, but they are separate, so each one of them has its own API server, its own control plane, and some Kubernetes nodes.
B: Yeah, that's interesting. I didn't think about the whole multi-cluster working group; I've no idea what they're doing.
A: So I'm trying to think about this in terms of the threats that we're defending against, and I agree that it seems like doing the, sort of, multiple control planes protects us until we get to running pods, right? And then you start interacting with the kubelets, and the kubelet is sort of the crossover point between those control planes. We could solve this by running multiple kubelets...
E: From our perspective, from the stuff that we've been doing at Mesosphere, we are working towards that solution. I'm not saying it is the correct solution; we're still trying to make it work properly — the kubelet — and it's not implemented to run as a container that does not have access to a lot of things from the OS. So, but yes, that's the terrain we're trying to, you know, to learn about.
B: Yeah, like, one of the main things that I was thinking about when I did this was, as you can see by the first two iterations, there's really not much that you gain in, like, ease of operability for the clusters — you don't gain much unless you're in the third scenario, where you are actually sharing resources.
G: Yeah, and also I think I made a comment in the doc about — I guess this is related to what you're saying — like, from the cluster admin perspective, it's almost like they're managing, you know, n different clusters, one per tenant, since there are, you know, n different API servers. So I think one of the challenges is, like, how do you do the management without kind of building federation on top of the inner clusters, these nested clusters, right? Like, it's, yeah...
G: Because, in particular, all of the information about what's going on inside each nested cluster is hidden inside the API server for each nested cluster. And so, if you want to somehow expose that, you have to pull it out of each API server individually. And so then that starts looking a lot like federation, or, you know, whatever people are doing today to manage multiple clusters.
B: Totally, yeah, and that's actually where — like, if this was implemented and there were bugs where you could escape your space — that would probably be where they lie: wherever we decide to expose data. Because you're going to have to either combine, like, the etcd instances, if you want this to be manageable and operable from, like, the root zero — like the ring 0.
G: Yeah, and I think — I mean, this is sort of a common theme with some of the comments — once you start doing things for efficiency or manageability, like having a single kubelet per node like Tim was saying, or a single etcd across the whole cluster shared by each of the inner clusters, then you start to lose some of the isolation. So it's kind of a trade-off.
B: Yeah, those were things that I was trying to think about, but it was more along the lines of, like, get it configured correctly the first time versus giving everyone the options. But yeah, I think there are problems just in terms of actually auditing containers — like, there are still patches being made to the kernel to add auditing so that you can know which container is doing what, and then you can track it down. So things like that — we don't even have that today with containers, so those are the hard problems.
G: Instead of a canonical API server — but are you saying, then, that you need a single API server? Because I was assuming that you would still have the separate API servers, so the kubelet would pull from multiple API servers. I mean, it does introduce various problems with, like, scheduling and stuff like that, but I was just trying to understand, yeah.
B: I'm not sure, actually, honestly, the best way to do it, because I'm not entirely sure how that works today. But the easiest scenario would probably be the one I would go with.
G: Pretty sure it's just one-to-one. I think that — like, I'm sure you could hypothetically make it talk to multiple API servers, but if you had it one-to-one, then what would be the difference between that system and what we have today, where you have a single kubelet per node and each kubelet is talking to the same shared API server? It seems like you lose the isolation — I thought a lot of the isolation came from having one API server and etcd per tenant. Maybe I misunderstood, but, like, does that make sense? Yeah.
E: I'd like to add a couple of things about enabling the kubelet to speak to separate Kubernetes control planes. The kubelet can point to a load balancer that has a bunch of API servers behind it, and it can find the API server at various addresses — as long as it's part of the same control plane, it still works. But there is another channel of communication, from the API server to the kubelet, which means that now we'd enable the kubelet to talk to two separate Kubernetes control planes.
E: Then the users from these separate clusters can access the kubelet directly — so doing something like kubectl exec, kubectl logs, kubectl port-forward. That's a channel that the API server proxies to the kubelet, because the kubelet itself has a small but important API server there — not a Kubernetes one.
B: So in this scenario you would have to trust the kubelet, and that would be something we would have to vet — like, all those functions with exec and logs. I looked at a lot of those when we did Virtual Kubelet, and they aren't really all that complex. So I feel like that was a surface area that we could actually dive into and make sure was safe and, like, fuzz all of it.
A: I think we're eventually going to need to sort of overhaul the kubelet's APIs. We've talked for a long time about reusing the API machinery within the kubelet to support those APIs. But, as you say, the kubelet doesn't really care about namespaces right now, and, you know, if we ever want to have multi-tenant logging and monitoring, for instance...
I: So it kind of seems to me that, if the isolation doesn't go all the way down to the bottom, then having the multiple APIs kind of connects the opportunities — and, leaving aside things like that, wouldn't you be putting all of the complexity in one place? But this kind of goes back to one of the original SaaS versions of multi-tenancy, where you've actually just got another API in front of it.
L: I think it would be useful, maybe, to step back a little bit and say: what is the threat model? What do we really need to have shared here? What are common resources? What must be isolated? We're talking very much about implementations in the context of the current implementation, versus where we want to be relative to the threat.
B: Well, I think it's a trade-off with them both. So the threat model that I was thinking about was, like, logical bugs in the API server, because that's what people have access to — since the only thing exposed to them is the API server. Like, they don't see the kubelet API; the API server talks to the kubelet, so we should trust our own components there, but the actual user talks to the API server.
B: So there's that, and then the other entire aspect is remote code execution in containers themselves, which is, you know, something that we have already thought about a lot. So those are the two things that are actually exposed to tenants. So when I thought about it that way, the kubelet and everything really underneath was more a concern of: do we trust our components to talk to our other components?
L: I think that's what I was trying to imply with the threat modeling: that we kind of have a loose component diagram, or flow, of the cluster, where tenant or namespace credentials against the cluster are implicitly untrusted at first until they've gone through some validation, but then, if they're validated, they can interact with the core.
G: Well, the kubelet does have — like Tim mentioned, like various other people mentioned — its own APIs. Well, not a true API, but the kubelet has its own HTTP server that's serving these, you know, the exec and attach and these different endpoints, so those also have to be hardened against attack. Maybe they don't have to be multi-tenant.
B: Totally, I was just more thinking about the fact that it's not the tenant directly connecting to it; it's just another one of our components, so we kind of control that communication.
G: But in terms of, like, the threat model and the benefits and stuff — I mean, I think the benefits of it are that it solves this problem of how you can have multiple tenants with the same namespace name, which is something we've never really figured out how to do in Kubernetes. In a normal Kubernetes cluster you would need, like, hierarchical namespaces or, like someone was mentioning before, a new tenant concept, where the tenant then could have its own set of namespaces.
G: But I was assuming, in Jessie's design — like, you could potentially have a separate DNS, or maybe it was mentioned in the design, but, like, a separate DNS for tenants — and so that kind of solves that problem of, like, service and pod names leaking across tenants. So I think this does solve a certain class of problems, like the information leakage and the namespace name collisions and stuff. I think the question is just, like, is the cost worth it, to, you know...
B: Yeah, I completely agree, and I did put the DNS thing in — except I shoved it off into a separate Google Doc — but yeah, we need to namespace kube-dns for sure.
E: Well, it's just an add-on, right? So whatever add-on is deployed, it's deployed per cluster, or isolated. So the way I see it — and the way I read Jessie's proposal — is we have a separation of clusters, while the other option is not a separation but a new layer of segregation, of separation, in the API itself. That's why I mentioned, you know, maybe the possibility of having a tenant object. That would scope — sorry — that would then, as David was mentioning, be like a top-level object that could then have multiple namespaces.
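A rough sketch of what such a tenant object might look like if modeled as a custom resource; this is purely hypothetical — no such API exists in core Kubernetes — and the group and field names are invented for illustration.

```yaml
# Hypothetical Tenant object: a top-level resource that owns several namespaces.
apiVersion: multitenancy.example.com/v1alpha1
kind: Tenant
metadata:
  name: team-a
spec:
  namespaces:              # namespaces owned by this tenant
  - team-a-dev
  - team-a-prod
  admins:                  # subjects allowed to administer those namespaces
  - kind: Group
    name: team-a-admins
```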
E: That would also address usability from a user perspective: using kubectl, just the API, with an indication that this user belongs to tenant A, B, or C — or to which tenants, I don't know. But a lot of these things — and David mentioned cost — even if this concept is added, every component that I can recall, the controller manager, you know, just enforcing quotas, services, endpoints, whatever — the cost could be in the complexity of the code; you would have to write a lot of code to understand this tenant.
E: At least that's how I see it. While, on the other hand, what Jessie is proposing — again, the way I read it — is multiple clusters, so the cost is on the operations side of things. Kubernetes is not easy to operate; okay, neither is contributing to it — not because, you know, the community isn't great, the community is great — it's just that there's a lot of code to cover if we add this kind of object, yeah.
F: I'm not sure that — I haven't been thinking about the idea of a first-class tenant and multiple API servers, or clusters, however you want to think about it, as being mutually exclusive. I've been thinking that there would be a top-level concept of a tenant, and some API server that I can talk to to find out about the tenants and what namespaces they own, but then there would also be, for each of those tenants, a separate API server with separate authentication credentials I can use to talk to that — and the design of how that happens...
B: So that's when I was getting into the really weird hypothetical stuff, where we, like, re-implemented things. Sorry, for some reason I thought about this as syscalls first, so it's a weird place in my mind. But yeah, no, I think it's entirely possible, and you have, like, the operator at the ring-zero level who hypothetically can see everything, but of course that would take a lot of work to make secure.
G: So I mean, that I think is the analogy people were bringing up earlier, like with multi-cluster and federation. It's a very similar issue where, you know, you want to do operations across many registered clusters, and so you have that same issue: how do you get a list of all the clusters? How do you store the credentials to access all those clusters? So, like, I do think there's an analogy there with the multi-cluster stuff.
B: Yeah, actually I had multiple versions of this. The first one was like, here's how it could possibly be done, and I showed it to Gabe, and he was like, that's just multiple clusters next to each other. So then I...
C: We don't have to have the namespace-to-tenant relationship be, like, one-to-many or whatever; I don't know what it should really be, but it might make sense that there are some shared namespaces or other things. We should think about these things — maybe the workloads — like, we have a way that they're grouped, and we need ways to route the controls in a way that's reliable and actually forms a real boundary.
G: Yeah, we could try to understand exactly how people are using, like, some tenant concept. I know that people for sure have made up their own, re-implementing it in different ways — either implicitly through the RBAC rules, or, I guess, people in this working group have talked about how they've created an actual concept, I guess as a custom resource or something. And so, maybe if we understood better what people want that tenant concept to do, then we could figure out the right way to implement it.
B: We'll take as the action item for, like, the next meeting to kind of all think about, or come back with, what people are doing with tenants, I guess.
G: Yeah, I guess that's another way of doing the isolation, where you'd have multiple VMs per physical machine and each one would have its own kubelet, instead of having multiple kubelets running directly on the same physical machine. That seems a lot simpler from a resource management perspective, because then the kubelets don't have to know about each other. It's like a way of statically partitioning the node resources across the kubelets by putting them each in a separate VM. I mean, it has the disadvantage of...