From YouTube: Kubernetes WG Multitenancy 20181010
Description
Notes and Agenda: https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY/edit
B: All of the work that this working group has been doing, looking at the various documents and watching some of the recent conversations, which were really in-depth and good, we just kind of wanted to meet with the working group and see where that was at. Now, I saw your doc
B: that went over the individual permissions that we would look at, and that seemed really tight. Then I saw the other doc that had the various groupings of different types of multi-tenancy. The one that we're really interested in, and that I suspect a lot of people are really interested in, is the one organization that is semi-trusted.
B: So you aren't expecting anonymous users to come in and having to really lock everything down from that perspective, but you still want good isolation between the different organizations that are sharing that multi-tenant cluster. So I just wanted to kick off that conversation and see where the group was, and where different people were coming from with regard to that. Does that make sense?
C: This is Jared; I work here with Tasha. We've been thinking about this, and we were watching some of the previous conversations on this call where we were debating different forms of tenancy, like what is a tenant. It seemed like it would be useful, first of all, to capture and formalize that notion of what we mean by that interdepartmental tenancy: single organization, some degree of trust,
C: and the relationship of those teams, and sort of document that use case. Then, second of all, document that there exist all these policy objects that we can configure in Kubernetes to achieve that level of isolation, and there's that guide that sort of lists them all out, but perhaps go so far as to define a profile, a standard set of policy objects based on the capabilities in Kubernetes today, that would then implement that model.
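As a minimal sketch of what such a standard set of per-namespace policy objects might contain (the `team-a` namespace name and all quota values here are illustrative assumptions, not anything the group has agreed on):

```yaml
# Hypothetical baseline profile for a semi-trusted tenant namespace "team-a".
# A ResourceQuota caps the tenant's aggregate resource consumption.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    pods: "50"
---
# A default-deny ingress NetworkPolicy isolates the namespace's pods;
# tenants then opt back in to specific traffic with additional policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```
A profile in this sense would be the documented, opinionated combination of such objects applied to every tenant namespace.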
A: There was the work that David had shown, along with someone else from Google I believe, many months ago, about security policies that you can apply to a cluster. It would be great to integrate into that, so that you could have a one-off kind of install. They were using, I believe, CRDs to do it, so it's not in Kubernetes core; it's a separate project. Okay.
A: So I think a part of our mandate or charter was eventually doing the whole e2e thing, to verify that your cluster is at least getting the security boundaries that you expect, which is going to be different for each kind of model. But obviously, if you turn off, say, running privileged containers or something, you're breaking the test suite that shows it's a verified Kubernetes cluster, and I know that we get rid of things like that internally.
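For concreteness, the kind of security knob being discussed here, disallowing privileged containers, could be expressed at the time with a PodSecurityPolicy along these lines (a sketch; the policy name and the volume list are illustrative, not a recommendation from the group):

```yaml
# A restrictive PodSecurityPolicy that forbids privileged containers.
# Enabling a policy like this is exactly the kind of hardening that can
# cause baseline conformance tests to fail.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```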
E: Just a quick comment about the testing for multi-tenancy: there's been a discussion going on in the conformance working group about creating a separate profile for multi-tenancy. It's at a very initial stage; there were a few discussions around it, but that's something that has been kicked around as well. Essentially, a set of profiles that would go on top of the baseline conformance.
E: I think they're going to work together when that happens. Right now there's no profile as yet; there's only one certification, at the baseline level, and the goal is to have these profiles, a multi-tenancy profile or other different profiles, and then the service provider can do certification for those profiles, basically.
F: Hi, just to add on to what Deepak was saying: the main problem right now is we don't have enough tests in the base test suite. Once we have tests in the base test suite, then we can do profiles, so there's a bootstrapping problem. If this group can suggest additional tests that we can develop and add to the base test suite, then we can talk about how to make them into profiles. Does that help, Jared?
C: That's just one example; I think there are others. As we've been thinking about this, the thing we're not sure how to handle is: if you're building a product that is intended to be a multi-tenant Kubernetes environment, but then you have to turn off all of the tenant isolation and security controls in order to certify the platform, that seems kind of a problem, yeah.
F: I can point you to issues that I created about a year ago about this, but the next step here is: we need to list exactly which tests fail under which condition and which security knob. We need to document this, and then that is something I can take to SIG Architecture and the conformance working group as additional information to justify that we need a profile for multi-tenancy, I think.
G: This is something you could also drive out of the specification that we talked about. So if you come up with a list of assertions, we say: okay, this is how we define soft multi-tenancy, these are the policies, this is the runtime isolation, and here's what we shouldn't do. If there was something we could refer to, the hardening assertions, you could then turn those assertions into a test suite or additional tests, yeah.
C: So I guess I'm hearing that folks feel it would be a welcome effort to try to write down that use case, to elaborate on the specific multi-tenancy scenario we're talking about, and then try to get to a more formal specification of a policy object configuration that would support that model.
F: I would recommend starting from both ends. Document what is breaking when you turn things on; that's the short-term goal, bang on people's doors to say, okay, this is not working for us, that kind of thing. And on the other side, which is the longer-term thing, like everybody here is suggesting, do this specification and come at it from the other side. So, one other question that I had was around:
F: Have you thought about how to tie namespaces to specific nodes and do multi-tenancy that way? That is something we've tried in a couple of places in OpenStack-related efforts, where if you define a namespace, then you say that certain nodes are only running payloads tied to that namespace. So that's that.
C: Yeah, so I won't say that it's a solved problem, but I'm sure you guys have thought about some of these solutions as well. There's the model David called out in a talk earlier this year, the use of taints and tolerations as a mechanism for doing that: you could put a taint on certain nodes that were tied to a namespace ID and add a toleration on the pods of the namespace to pin them there.
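The taints-and-tolerations mechanism described here might be sketched as follows (the `tenant=team-a` taint key/value and node label are assumptions for illustration, not anything prescribed by the working group):

```yaml
# First, taint and label the nodes dedicated to the tenant, e.g.:
#   kubectl taint nodes node-1 tenant=team-a:NoSchedule
#   kubectl label nodes node-1 tenant=team-a
# Pods in the tenant's namespace then carry a matching toleration (so they
# may land on the tainted nodes) and a nodeSelector (so they land only there).
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: team-a
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: tenant
    operator: Equal
    value: team-a
    effect: NoSchedule
  nodeSelector:
    tenant: team-a
```
Note that the taint only keeps other tenants' pods off the dedicated nodes; the nodeSelector is what keeps this tenant's pods on them.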
C: We've also been internally chatting about the idea of using some kind of policy configuration on the namespace that would do things like pin a node affinity value to a pod, as part of the pod lifecycle, to pin it to a host. Ben, I don't know if you want to chime in here on other approaches that we've discussed.
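One existing mechanism along these lines, if I'm reading the discussion right, is the PodNodeSelector admission plugin, where an annotation on the namespace forces a node selector onto every pod created in it (the `tenant=team-a` label value is a hypothetical example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  annotations:
    # With the PodNodeSelector admission plugin enabled on the API server,
    # every pod created in this namespace gets this node selector merged
    # into its spec, pinning the namespace's workloads to matching nodes.
    scheduler.alpha.kubernetes.io/node-selector: tenant=team-a
```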
G: Yeah, sorry, I should have introduced myself. I'm Ben; I also work at VMware, and I'm looking at this in a very deep technical sense. I think from a business value standpoint there are two areas of interest to us. One is namespace isolation from a runtime perspective, and achieving that through having individual nodes per namespace, in an exclusive sense.
G: We're also interested in looking at network isolation for namespaces, which is something I know you guys have talked about as well. And from a runtime standpoint, we're also interested in hardware-level isolation; that might be obvious, us being VMware, but that's something we're interested in. We're very interested in looking at how we could define policies associated with a particular namespace. For example, you could say: okay, I want a namespace,
G: it has this much resource, I don't care about strong hard isolation for tenants in this namespace, but I absolutely want strong runtime isolation in the namespace, therefore I want my own nodes for the namespace. Or: I have a namespace, and everything I want to run in this namespace I want to run as a VM.
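The "everything in this namespace runs as a VM" idea maps naturally onto the RuntimeClass feature, which was alpha at the time of this meeting. A sketch, assuming a Kata-style VM runtime handler named `kata` is configured on the nodes (the handler and object names here are assumptions):

```yaml
# Alpha-era RuntimeClass naming a VM-based runtime handler; "kata" must
# match a handler configured in the node's CRI runtime.
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: vm-isolated
spec:
  runtimeHandler: kata
---
# A pod opting into the VM-backed runtime, so it runs with its own kernel.
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: team-a
spec:
  runtimeClassName: vm-isolated
  containers:
  - name: app
    image: nginx
```
A namespace-level policy as discussed above would then amount to injecting `runtimeClassName` into every pod created in the namespace.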
F: Right, so there are some things that you can already do, and that is something we should definitely document: these are the possibilities right now to achieve soft multi-tenancy. And then go on to do the rest of the things that this working group has been talking about, I think, yeah.
C: To elaborate, I think that part of the conversation envisioned that there are several levels of potential isolation of a namespace. One level is: namespaces share workers, and so my pods might be intermingled with other tenants' pods on a single Linux host. You can also envision a mode where a namespace has a dedicated set of workers, and the only pods running on those workers are from that namespace.
C: As we looked at that, there's the runtime config and the pod security policy, where we sort of addressed this on a per-pod basis. We've been thinking about how you make this selectable at the namespace level. In other words, I kind of want to mark a namespace, so it's a policy of the namespace covering all of the pods in it, rather than trying to say, hey, within a single namespace
C: I might have some things that are isolated on dedicated workers and other things on shared workers. That becomes really complicated for the end user to reason about. Whereas a policy on the namespace that says this is the level of isolation this namespace should get, where we can automate the policy application to the individual pods, seems like a model that balances isolation with usability.
C: I feel like some of this might be getting a little out ahead of our skis. I feel like we got consensus on the idea of documenting the use case, documenting the policy configuration, and documenting some of the end-to-end tests that would be impacted by that policy configuration. Is that a reasonable next milestone? Yeah.
H: Yeah, I guess I'm not completely familiar with the whole Kubernetes thing, but I'm just trying to think of alternative solutions that achieve the same goal. On the same machine, if I ran multiple instances of the runtime, each connected to a different Kubernetes cluster, then I essentially have multi-tenancy on my machine. But it's not in the same fashion as it coming down from a single Kubernetes instance.
G: As you point out, as a single node agent, the kubelet is very tightly coupled to the pods and containers running on the node. If you were to have two different runtimes available to a kubelet, you'd have to deal with the resource management challenges that came out of that. You'd also get bogged down in overheads and all these other things; it becomes complicated. I think they decided against that, is my understanding.
G: Absolutely. I mean, if you consider a VM to be a unit of isolation, a unit of runtime isolation and a single fault domain, then it makes a ton of sense to use VMs to draw those lines. This is why we're talking about, for a namespace, you could say: okay, the namespace has exclusive access to certain VMs. Or at the pod level, that unit of containment, you say it has exclusive access to a VM and has its own kernel.
E: A question for the VMware folks, actually. There was a lot of discussion, I think a couple of months ago, about namespace not being the right level of abstraction for multi-tenancy. There was a discussion around tenant, maybe a new tenant kind of concept that may include multiple namespaces. Did you guys think along those lines, or are you pretty much looking at namespace being the solution for enabling multi-tenancy?
C: I think we've kind of been debating that. What we've seen as a model is that in certain cases we don't need an explicit tenant construct; that really becomes more a function of the identity management system. For example, there's a model, a sort of more loosely coupled model, where a user has a role binding on one or more namespaces, and there's a system that manages their kubeconfig based on knowledge of those role bindings. The organization concept can then live almost entirely in the authentication endpoint, which, basically in its response to your login command, tells you which namespaces you have access to.
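The loosely coupled model described here, a user with role bindings on one or more namespaces, is expressible with plain RBAC. A minimal sketch (the user name `alice` and namespace `team-a` are made up for illustration):

```yaml
# Grant the user "alice" the built-in "edit" ClusterRole, but scoped to a
# single namespace via a RoleBinding. Repeating this per namespace gives
# the peer-to-peer user-to-namespace relationship with no tenant object.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-edit
  namespace: team-a
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```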
C: So I don't think that model requires an explicit notion of a tenant; it's really just a peer-to-peer relationship between users and namespaces. There are certain IAM scenarios where explicitly having a tenant object seems useful, but in the case of an enterprise having multiple teams share a cluster, I think so far we haven't necessarily come across a strong need for that. Yeah, good.
E: I don't remember; it's been a while, and I need to go through the document as well. But I think the consensus was that using namespace as an abstraction for multi-tenancy was kind of force-fitting, because that was not the purpose of namespace, it seems. That's why there was a big discussion around the hierarchy, the tenant hierarchy: tenant being at the top and then multiple namespaces underneath it. So I need to kind of go back as well; it's been a while.
E: Yeah, because it can get pretty complex as well. I used to work at Siebel and Oracle and all that, and we used to build that as part of our ERPs and CRMs. You can potentially have a tenant hierarchy as well: essentially you assign your quota at the higher level, and then that can, I guess, get divided up across all the sub-organizations and all that. So it can get pretty complex as well.
C: That full hierarchy, where I give a business unit a quota and then each of the projects has quotas that take from that, we see a lot of people moving away from that model towards a more flat model, where the sort of discussion that goes on is along the lines of, hey:
E: But I think there was a consensus to some extent that we need to have at least two levels of hierarchy, tenant being at the top and then multiple namespaces underneath. It's been a while, but I think that's what people kind of agreed on. I don't know if anything happened after that.
C: Well, if we think about it, the approach of a lot of Kubernetes today seems to be to provide building blocks that allow you to build up, through policy, the version that you want for your business. So I can see it would be great if there was a model where, if you wanted that level of hierarchy, you had the option of turning on that capability, but if you didn't want it, you didn't have to use it.
E: That's what needs to happen, I think. And to Tim's point, I think this is where this group can really add value: essentially, be able to define various flavors of multi-tenancy, and then accordingly you can have different profiles, so hard multi-tenancy, soft multi-tenancy, and then correspondingly those kinds of related profiles, basically. That's what has to be done by this group, because this group understands what multi-tenancy is, and that information can be fed into the conformance working group.
G: I think one of the interesting things, though, is when you get into the hard multi-tenancy discussion. I read Jessie's document about hard multi-tenancy, suggesting you have individual API servers and individual DNS servers per namespace. Once you get into that realm, it becomes a much, much bigger thing than just profiles, and it also becomes very difficult to reason about testing, if you have any kind of end-to-end test.
G: The question of whether the API server you're talking to is an individual one for your namespace or a global one is something that's going to be very, very hard to test. So I think starting at the softer end of the scale is something that's going to be a lot easier to reason about.
C: I love the idea of sort of getting to a single place, the MVP multi-tenancy model. Without going too far ahead of where Kubernetes is now, I feel like there's a version of multi-tenancy that can be supported with the policy objects that exist in the current implementation, and we can define that as a good stepping stone. From there we can talk about additional profiles that enhance that capability with new, stronger forms of isolation and support different models of multi-tenancy, right?
F: I would also encourage us to think about primitives, like taints and tolerations or something similar, which can be used to then build the multi-tenancy stuff, even if the tenant concept itself lives outside of Kubernetes. If we have those primitives, then we can apply them in different ways. So I would want us to think about those kinds of things as well: smaller things that can be easily added to Kubernetes without too much fuss and which everybody can use.
E: One quick thing from Bobbi, actually: we have multi-tenancy enabled as part of our Kubernetes cloud offering, and one of the pieces of work we're doing is documenting how we hardened all of that. I think that kind of fits into the security profile, so we would actually like to share that with the community as well.
C: That would be an awesome discussion, I think: to have three different teams that independently arrived at a multi-tenant solution sort of swap notes. Perhaps the standard profile comes out of some unification of those three approaches, or perhaps there are three different profiles, depending on how it goes, yeah.
D: Hey, I can tell you something cool that we have going on. Our CI for OpenShift runs in an OpenShift cluster, so we have various repositories, and every PR job gets run in a pod in a separate namespace in an OpenShift cluster. As a PR author you have access to that, it's a project namespace, but nobody else does. It's pretty cool.
C: I think the reality is that there are many definitions of a tenant, and there are probably multiple answers to that question. As I think about the task we discussed, of trying to document this model of enterprise multi-tenancy, which we admit is just one model and not all of them:
C: I feel like we should start documenting those things more concretely, and I suspect we end up with two or three or four different things there, or maybe we can converge them into a single definition. But otherwise it's either a philosophical debate of, you know, what is existence, or else we're trying to merge three very different entities that we all happen to place the same name on, and that's not what anybody wants.
C: That's kind of what I'm thinking: that we start bottom-up. There's a thing, we needn't even call it a tenant, but there's a form of namespace isolation that's useful for multiple engineering teams sharing a cluster who want more locked-down access to their workloads versus other people's workloads. I don't know if you saw my document, but our use cases are, I think, a bit more convoluted than other people's, in that namespaces can actually be shared: you can have multiple tenants granting other people access to stuff in a namespace, particularly in a dev and test environment.
C: You know, another approach, one I think would be useful, would be to take a persona and user experience based approach. You start off with a definition of personas, like, hey, I've got a cluster administrator and then I've got a bunch of engineering teams, and you can document those personas and the workflows and user experience each of those personas has in how they use the cluster.
C: So, for example, as a developer accessing a shared Kubernetes cluster, I have a kind of weird level of access. If I go read the Kubernetes documentation and go through the tutorials on how to use Kubernetes, a lot of those things might not work, right? In the sort of namespace-level tenancy, I'm probably not using the default namespace; I'm always using a specific namespace, and I can potentially create namespaces, but I can't necessarily see inside namespaces that other teams have created.
C: I'm also probably limited to namespace-scoped objects, so maybe I can see cluster-scoped objects, or maybe I can't see them at all. So there's a useful task in documenting those personas and user experiences, and there's probably a couple of different scenarios; you could envision the, hey, I've got a shared cluster, a cluster admin, and a set of engineering teams.
B: So we can kick off documenting the use case that we've been looking at, and then I think two weeks is probably still a good cadence, if we want to get back together and have the different people who volunteered go over how they've been locking down their clusters and what their use cases are. Then we can look at the proposal for the soft multi-tenancy within an enterprise, see how everyone's feeling about that, and take it from there.
G: I mean, there's been so much good work done on how you might do all these different things, and I think that if we were to have a kind of opinionated suggestion covering each of these different areas, resource isolation, network isolation, control plane isolation, all these different things, if we had an opinionated document that says here's what soft multi-tenancy would look like, and here is how you might choose to configure it.