From YouTube: Kubernetes WG Multitenancy 20180718
Description
Agenda and Notes https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY/edit
A: Okay, so hi. I'm kind of new here. I've been interested in multi-tenancy in general for a long time, but my interest has receded in general multi-tenancy; I'm interested in a more particular and restricted problem. I'm now using Kubernetes' API machinery to build the control plane for something else, a traditional infrastructure cloud, particularly focusing on the SDN part. And this is a thing that has lots of tenants, and I'm concerned about fairness in the control plane itself.
B: My thoughts, as I shared them before at the last meeting last month, are that having a tenant scope would cover all of these use cases. However, it would introduce a lot of complexity: all the API machinery stuff, all the controllers, etc., would have to understand the tenant scope, and the scope would have things like quotas and so on. And yeah, I don't know how to move on from that.
A: One of the things I like about Kubernetes as it is today is that it's actually pretty open and modular. For several of the concepts that are relevant to multi-tenancy, Kubernetes is not very opinionated, and I actually kind of like that; it makes it at least conceivable that it might be useful for a variety of use cases. So I would prefer to keep that degree of openness to it, to the degree we can, unless it's terribly painful. But you're right, that throws the question back.
A: If we want fairness, it has to be fairness among what? We have to have a clear idea on that. And there's actually another concern that I've seen discussed in the materials around here, which I think is related but not coming from exactly the same source, which is not so much about customers but about maybe buggy controllers. We don't want a buggy controller running wild and, you know, shutting everything else out. And in some sense, you know, I've long heard the idea that, well, security there is actually a really great defense.
A: But, like you say, it raises the question of what exactly we define those things to be. For me, the most natural thing might be namespace. I think that will be a pretty close approximation; we're still, I think, somewhat but not totally pinned down on how user-facing concepts map onto Kubernetes namespaces. But it's going to be about the right coarseness for fairness amongst customers.
B: In what concerns consuming API resources: let's say you have two tenants on the same Kubernetes cluster, but one has a different quota than the other when it comes to consuming the API. So I think that all these use cases should be captured. And what do you mean by "consume the API", just creating things? Just creating an object, for instance; let's say creating Kubernetes pods, but you have tenant one and tenant two. This was something that we had on the mailing list.
B: It's not my requirement, actually; sorry. Just as I was reading it on our mailing list, I thought it makes sense for all of the control plane. So not just how many pods can be created, or how much can be allocated to the workloads of this namespace, but things like a user coming in and being a bad controller or something like that, where we have different levels of how much the control plane gives to a certain control-plane user. I think they are separate, but I still think it falls under the multi-tenancy category. But if it's adding confusion to the discussion, I'm sorry, I'll just be quiet about that one, and if you guys want, just check it out on the mailing list of the working group.
A: I've been trying to be clear about the difference between the resources allocated by the control plane versus the control plane itself. So if we want fairness in allocation of pods, that's a little different problem than fairness in QPS to the control plane; I mean, that's a distinction, at least it seems clear enough to me. (No, no, that is correct.) Okay. Now, as I also mentioned, I'm actually not concerned with pods; I'm building an SDN. My concern is with the API.
C: The event rate limit admission controller, which, well, actually that's probably not exactly user-facing either. But there's a QPS rate limit for the scheduler to send binding requests to the API server, and like I said, there also is a separate one that's more configurable, for limiting the rate of posting events to the API server. That one can be configured per namespace and, I think, along a couple of other dimensions; I could look up the documentation. The other one I'm talking about, the scheduler QPS rate limit, is just global across all users. But I mean, there definitely are some examples of rate limiting today, and I think they are mostly oriented towards preventing bugs, or, to put it another way, preventing a buggy controller or application from overwhelming the control plane. But I mean, that's one step, right?
A: I think there's a critical difference here between controller and application, right? The controllers are owned by the designers of the control plane, and I think it's reasonable for the designer and operator of a control plane to say: okay, here's the maximum rate at which it is reasonable for this controller to be issuing requests.
C: So, for example, the scheduler is kind of a standard component, although users can replace it, and so I would put that in a somewhat separate, maybe I used the wrong terms, but I would put that in a somewhat separate category from something like the event rate limiter admission controller, which limits the rate of events, you know, the API object called Event, being generated. That usually comes from applications; it comes from either applications or controllers, which could be written by the user or be system controllers.
C: Exactly, or the same category, I would say. The second category in this case is a combination of the first category and what you said, because controllers are written sometimes by the people who wrote the control plane, like the standard controllers, like Deployment and ReplicaSet and so on, and sometimes written by users, especially if you're writing a controller for a CRD that you created. So I think that second category is both. Now, I don't know that the requirements are different for these two categories; maybe the rate limiter for the scheduler should actually be per namespace and/or per user, like the event rate limit admission controller's. It's not today. But I'm not saying that there should be a distinction between these two in terms of what support the system provides for protection and isolation; I'm just saying it seems like there implicitly is some distinction today.
E: Yeah, I think, with the growing number of operators, the operator pattern, people are turning them outwards, whether it's people writing them themselves or people wanting to consume them. So, like, you know, Confluent has got a Kafka operator, and there are, you know, the etcd ones and so on. I suspect that's going to be a really common pattern, and people will want to use them as often as they want to deploy their Java apps or whatever.
A: So I think identifying the proper boundaries is important here, and where the code sits in source repositories is not what's important; what's important is more the operational boundaries. In a multi-tenancy situation we usually have a provider that's operating a platform, and then there are customers that are not trusted. Now, the provider may well run in his platform stuff that's in the kube tree and in other trees, and it's the provider's problem to figure out how to integrate these things and to decide what rate limits are appropriate internally to his platform. If he chooses to have a Kafka operator in his platform, that's just fine, and if this platform in fact is not even Kubernetes at all, but is the SDN that the people I'm working with are building, that's fine too. It's all about the platform versus the customers, who are not trusted. Typically, in a provider situation, the provider doesn't allow customers to add the sort of componentry we've been talking about recently.
A: Typically, the customers are not allowed to add stuff. Well, I mean, yes and no; it depends on the sort of service being provided, right? I mean, typically today, in fact, you rarely see a multi-tenant Kubernetes, because we haven't really solved the general problem, so pretty much all the services out there offer single-tenant clusters. But I'm trying to do something different: as I said, building an SDN using the Kube API machinery, and in my SDN I don't allow customers to provide or install additional controllers.
A: There are, sorry, there are limits both in the server and in the client. In the client there are two styles of self-restraint: one in terms of QPS and burst, and another, newer one in terms of a general rate limiter interface. And then on the server there's a different kind of limit, and it isn't aimed at individuals: it's an aggregate limit on max requests in flight and a limit on max mutating requests in flight, but that's not about limiting individuals. So I'm not aware of anything on the server side that is limiting individuals.
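(For reference, a minimal Go sketch of those two client-side styles of self-restraint, using client-go's rest.Config QPS/Burst fields and its token-bucket rate limiter; the kubeconfig path and the numbers are just placeholders, and the server-side flags named in the comment are the aggregate caps mentioned above.)

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Style 1: fixed QPS and burst on the rest.Config used to build clients.
	// (The kubeconfig path is just an example.)
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/scheduler.kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 20   // steady-state requests per second this client allows itself
	cfg.Burst = 30 // temporary excursions above QPS before the client throttles itself

	// Style 2: the general rate-limiter interface; the stock implementation is a
	// token bucket, which is what client-go uses under the covers for QPS/Burst.
	limiter := flowcontrol.NewTokenBucketRateLimiter(20, 30)
	limiter.Accept() // blocks until a token is available

	// The server-side knobs mentioned above are aggregate, not per-user:
	// kube-apiserver --max-requests-inflight and --max-mutating-requests-inflight.
	fmt.Println("client QPS:", cfg.QPS)
}
```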
C: Good point; I was not sure about that. I know that there is one, because I've been following some of the discussion about improving scheduler performance, and I know there's been discussion about whether to raise the scheduler QPS rate limit, but I never was clear on whether it was on the client or the server. So that's interesting that it's actually on the client.
C: So, going back to the other point: the one pattern we have for server-side operation rate limiting is this event rate limiter, and I think it's pretty relevant for this discussion. It's in alpha right now; it's in the Kubernetes documentation. And going back to the original question at the beginning of the meeting, it uses namespace and also user, which I assume it gets out of the authentication information, to break down the limits.
C: As for the tenant definition: at the beginning of the meeting it sounded like there was a proposition that we were blocked on solving Mike's problem until we have a formal definition of tenant, but that isn't so clear to me, since people do seem to be using namespace as the tenant boundary, or you can also use user, like the event rate limit does. Between those two, is that not sufficient for defining a tenant, at least right now? This is kind of a bigger question that I don't know.
C: Implemented? Well, I don't know the details of how it's implemented, but I can say the configuration is basically that you can specify the type of event that you care about (you can have more than one), and you do that using the source and object fields that are published with every event. So you can say what kind of events you're interested in, and then you can set a rate limit per namespace or per user or globally. I don't think you can combine namespace and user; I think you have to choose which type of breakdown you're using, either per namespace or per user or global, and then you can set a QPS rate and a burst rate. I don't know exactly what the burst means; I can guess, but I don't know the technical definition of it. But it's to allow temporary bursts in operations without violating the rate limit.
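(As a rough illustration of the QPS-plus-burst semantics being described, not the actual EventRateLimit plugin code: a generic token bucket in Go where qps is the refill rate, burst is the bucket capacity, and the bucket key could be a namespace or a user.)

```go
package main

import (
	"fmt"
	"time"
)

// tokenBucket is a toy limiter: tokens refill at qps per second up to burst;
// each admitted request spends one token.
type tokenBucket struct {
	qps, burst float64
	tokens     float64
	last       time.Time
}

func newTokenBucket(qps, burst float64) *tokenBucket {
	return &tokenBucket{qps: qps, burst: burst, tokens: burst, last: time.Now()}
}

// allow reports whether a request may proceed right now.
func (b *tokenBucket) allow() bool {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.qps
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	// One bucket per key; the key could be a namespace or a user, mirroring the
	// per-namespace / per-user breakdown described above.
	buckets := map[string]*tokenBucket{}
	key := "namespace/team-a" // illustrative key
	if buckets[key] == nil {
		buckets[key] = newTokenBucket(5 /*qps*/, 20 /*burst*/)
	}
	fmt.Println("admit?", buckets[key].allow())
}
```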
A: Yeah, sounds like a token bucket kind of thing. So I have a little bit of a difficulty here with solving the problem with rate limiting. I presume the idea here is what might be called oversubscription, right? We're not going to take the total capacity of the system, divide it by the number of whatever kind of things we're talking about, and give each one an equal share, and that's all they get, and that uses up the capacity, right? We want to allow some individuals to use more than one Nth of the capacity. But now we have a problem, right? What if the users collectively are asking for more than the capacity, which by hypothesis we're going to allow? Now we're still back to the fairness problem: how do we prevent the possibility that, because they're collectively asking for more than the capacity, some of them are going to get no service while others get all the available service?
C: I think there is some mechanism for doing that for, like, DNS or something, like the DNS servers, I can't remember. But as far as I know, there isn't anything in standard Kubernetes that I'm aware of that does, say, autoscaling of the API servers, which would actually be useful. I mean, I think you're making a great point. The controllers, I don't know that they would work if you scaled them up. If you had multiple schedulers, I don't know what would happen, or if you had multiple controller managers, because of, you know, the obvious race conditions; they might work, but I wouldn't be so confident. On the other hand, the API server is designed to be, I mean, they're all, let's just say, stateless, and to allow replication, and I know people have run replicated API servers against a single etcd and that works fine, but I don't know of anyone...
B: Sorry, sorry, I just want to add two things to what David said, just to clarify. First, any controller, including the ones you mentioned, the controller manager or the scheduler, does leader election against the Kubernetes API. Only one is actually the leader at a time, so you may have ten, but only one will act on state in the API server. There is one reconciliation loop, I believe it's, well, I'm mostly sure it's part of the API server, where the endpoints are actually linked to all the pods.
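(For context, a minimal sketch of that leader election pattern using client-go's leaderelection package; names, namespace, identity, and timings are placeholders, and the Lease-based lock shown here is the newer style, at the time of this meeting the lock was typically an Endpoints or ConfigMap object.)

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The lock object lives in the API server; every replica of the controller
	// competes for it, and only the holder runs its reconciliation loops.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-controller", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "replica-1"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start controller loops */ },
			OnStoppedLeading: func() { /* stop; another replica takes over */ },
		},
	})
}
```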
C: That's a good point. And I think that also raises the question of, you know, in setting rate limits: if you set rate limits on the API server, do you have to, well, if the bottleneck is etcd, then you have to set the rate limits somewhere that actually protects the resource that's getting overloaded.
C: In other words, you might need to understand how an API server operation turns into etcd operations in the backend in order to appropriately set an API-server-level rate limit, if you're trying to protect etcd. Of course, you could set rate limits on etcd operations, but it's hard to know how to make the API server respond properly to an error coming from etcd saying "I'm overloaded". So it's probably easier, if you're using rate limiting to reject operations, to reject them at the API server level. But then you have this problem of, you know, getting farther from the resource you're trying to protect, if it's actually etcd that's the bottleneck; it may not be, it depends on the configuration of your cluster. But I think that's another consideration: where to put the rate limit.
C: Then you need one of these, like, work-conserving scheduling algorithms, or whatever they're called, that can share the slack resources across multiple users fairly, and I'm sure there are implementations of that that are publicly available. But I would imagine, I mean, what is your take on that? It seems like those are more complicated, in terms of the implementation, than something that's just a rate limit, yeah.
A: So I think that, you know, since rate limiting is not going to solve the problem, we do have to look elsewhere. And yeah, what you said is, or the first thing that popped into my mind is, maybe we put some kind of fair-queuing mechanism in the admission pipeline in the API servers. And yeah, it's more complicated than the existing stuff, but since the existing stuff isn't going to solve the problem, you kind of have to go there.
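(A toy sketch of that idea: one FIFO queue per tenant key, here a namespace, served round-robin so a flood from one tenant cannot starve the others. This is only an illustration of fair queuing; nothing like it existed in the API server at the time, and the types and keys here are invented.)

```go
package main

import "fmt"

// request stands in for an API request attributed to a tenant key (e.g. a namespace).
type request struct {
	tenant string
	verb   string
}

// fairQueue keeps one FIFO per tenant and dispatches round-robin across tenants,
// so a tenant with a huge backlog cannot crowd out tenants with a few requests.
type fairQueue struct {
	order  []string             // round-robin order of tenants seen so far
	queues map[string][]request // per-tenant FIFOs
	next   int
}

func newFairQueue() *fairQueue { return &fairQueue{queues: map[string][]request{}} }

func (q *fairQueue) enqueue(r request) {
	if _, ok := q.queues[r.tenant]; !ok {
		q.order = append(q.order, r.tenant)
	}
	q.queues[r.tenant] = append(q.queues[r.tenant], r)
}

// dispatch pops one request from the next non-empty tenant queue, if any.
func (q *fairQueue) dispatch() (request, bool) {
	for i := 0; i < len(q.order); i++ {
		t := q.order[(q.next+i)%len(q.order)]
		if len(q.queues[t]) > 0 {
			r := q.queues[t][0]
			q.queues[t] = q.queues[t][1:]
			q.next = (q.next + i + 1) % len(q.order)
			return r, true
		}
	}
	return request{}, false
}

func main() {
	q := newFairQueue()
	// Tenant "team-a" floods; "team-b" sends one request.
	for i := 0; i < 5; i++ {
		q.enqueue(request{tenant: "ns/team-a", verb: "create"})
	}
	q.enqueue(request{tenant: "ns/team-b", verb: "get"})
	for r, ok := q.dispatch(); ok; r, ok = q.dispatch() {
		fmt.Println("serving", r.tenant, r.verb) // team-b is served after team-a's first request
	}
}
```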
A: Right, although as I understand it you still have to have a definition of what you're being fair amongst. Absolutely. Now, the virtue of the previous discussion about QPS as a matter of self-restraint imposed by controllers reminds me that we don't have to have one solution for all these problems. Even though you could abstractly call it all the same problem, we can actually divide and conquer here. So for controllers that are owned by the organization that runs the platform, they can do the engineering and decide what a reasonable amount of self-restraint is, and that leaves the fairness problem to really be just amongst customers. So, with the practical observation that namespaces are often a good boundary for isolating customers, even if a customer may be a few namespaces, I think a namespace-oriented fairness would be a, you know, ninety-percent solution for fairness amongst customers.
C: Yeah, what do folks think about that? I mean, the discussion has mostly been just, like, three or four people talking; maybe we should try to see what other folks think. It sounds like two things have come up: one is this tenant definition, and whether namespace is sufficient right now as the boundary and the accounting unit; and then the other is kind of this fair-queuing idea versus simple rate limits versus whatever other ideas people have. The two things have something in common.
J: While it's being figured out, I was just going to say that I like keeping it simple and not adding something other than namespace until we find that there's something wrong with namespace, or that we can't make it usable for this, mostly just because the more complicated things get, the harder it is to onboard new people.
I: Queuing, as was mentioned, is a pretty old problem; there are a lot of different solutions for how to control flow through a network, and I don't know that we should have to figure out the perfect solution for that. But if we could make sure that all of the necessary components are there for people to experiment with, that would be good. And then the other thing, with namespace as the boundary: the problem I've seen where namespace breaks down is when you want to have some sort of delegation of controls, where, you know, you're not cluster admin or just a normal user but something in between; that becomes hard. So you can use namespaces to, say, set a quota, but it's hard to have someone who can set quotas within those namespaces but doesn't have whole-cluster admin.
C: Maybe it's just because we don't have people doing multi-tenancy yet, I don't know. I mean, all the policies that, well, that's not true, what I was about to say; it's not true. Some of the multi-tenancy policies we have today are based just on namespace; some are based on label selectors, like, for example, network policy selects the allowed...
I: What has come up a lot, though, is when you want to be able to create namespaces more dynamically; that can often disguise, or be, the problem that people run into. So if you don't know all of your namespaces ahead of time, it's hard to delegate the permissions or assign the quotas to them without a bunch of custom...
C: Do other folks have comments on either of these issues, the definition of a tenant or this rate limiting stuff? Or, I shouldn't call it that; I don't know, maybe "rate limiting" is constraining the solution space too much. Let's just say this general problem of fairness of the control plane across tenants.
D: So what I heard now is: if you want controlled objects to be completely isolated from other tenants, then that's a namespace, you need one operator per tenant, which is this namespace, or you need to lift access control to something that's global, watching all the namespaces. And if you're actually multi-tenant, having a single one watching all namespaces is not necessarily what you want. So yeah, we do, okay.
E: Yeah, I think that kind of speaks to what a tenant is, though. Look, I put up a document the other day; I don't think many people have read it, but in our use case, one-to-one between tenants and namespaces is not at all what our use case is. For us, namespaces are much more at the application-isolation level. So we can have, sorry, logical namespaces, which consist of multiple namespaces, which contain components from multiple tenants.
E: Maybe our use case is a bit strange, but to outline it: I posted a link to a document on the mailing list a couple of days ago, but it doesn't go into a great deal of detail; I wrote it in a hurry. So we have use cases where, this is within an enterprise, it's multi-tenancy across different business units and different, well, you know, a couple hundred scrum teams and things.
E: First, a lot of it is code-driven, so, like, a CI/CD pipeline might spin up a little mini environment which has, you know, just the seven or eight components, and particular versions of them, just to do penetration testing or, you know, integration testing or something like that; then it'll pull it down. And within that there may be, if you've got multiple teams building stuff, maybe a couple of components from one team and a couple of components from another team, or several other teams, depending on the relationship between those teams.
E: Yes, anyway, they're not quite orthogonal, but they're definitely not one-to-one, or even kind of in a hierarchy, in a tree structure, which is something we talked about before. I don't know whether I'm just overthinking it, or whether we're trying to do something that namespaces weren't actually designed for, but I was just agreeing: I think our thinking is that they map much more to application isolation rather than to, like, an organizational entity or human or customer sort of thing.
D: The fact that so much is hung off of the namespace for cleaning up resources makes it feel like a thing that I should be creating and destroying and using to structure anything that's an ephemeral application, because I can create a namespace, create a bunch of things in it, set up finalizers and whatever I want, and then I know, I believe, that the namespace GC will take care of the rest. Whereas a tenant is much more long-lived; I don't think a tenant should be...
C: So that makes a lot of sense to me, but I'm still wondering, so I guess the one question I have is, well, there are many questions, but one is: when do you want a policy to span namespaces? Let's say that you do map namespaces to applications instead of users, like you guys are saying; then do you want to be able to set quotas or RBAC or network policy, or, you know, all of these things? Is there a desire to treat multiple namespaces as a single unit, i.e. the tenant maps to multiple namespaces, or do you still want separate policies for each namespace? Is it just a little harder because you have to copy/paste the policy once per namespace, or is it actually a fundamental limitation? I think that's what I'm wondering about: would you want to set a quota across multiple namespaces, because you're treating a namespace as an application, or...
A: Or more than an application. I think the argument was: a namespace would be only an application, and a tenant would have multiple applications. So this is reminding me of a structure I see in many places, right? Like in Cloud Foundry, they had organization and space, and the idea is maybe a Kubernetes namespace is like a Cloud Foundry space, of which an organization may have a few, if you like. Or in Google Cloud, I think, one customer can have multiple projects, but typically not a lot, right? So I'm thinking that if there is a small number, and more importantly a potentially controlled number, of namespaces per tenant, then I think we can still say namespace is a reasonable proxy for tenant, and if we have fairness amongst namespaces, as long as a tenant doesn't have a huge number of namespaces, we're getting fairness amongst tenants as well.
C: Whether that would solve this problem: it sounds like, from what Craig was saying, the answer's no; it sounds like he has this kind of complicated matrix of how applications and users, or tenants, map to namespaces, and it might not be sufficient. But maybe for most use cases, just adding that second level of hierarchy, if you will, on top of namespaces, so you can say a single tenant owns multiple namespaces, but not intersecting sets of namespaces, does that cover most of the remaining use cases? That would be one question. I'm wondering if we're conflating two parts of the problem there. I don't think it's necessarily the grouping of namespaces that we need so much as, like, user identities or actor identities, because that's where you're trying to apply fairness, and how those map to namespaces is possibly a very complex problem, just as mapping namespaces to nodes is a difficult problem; so it's kind of similar.
I: Yes, but that would be why users would be assigned to, I guess, probably the organizations that they work for, and they would have to be in one of those organizations, and not everyone has the ability to put users into every organization or create new ones. So the normal kind of access control there wouldn't prevent too much, I think.
E: I was going to say, even in a really complicated, kind of nested-namespace-y thing, there's still a concept of somebody owning that collection of stuff. So even if, as I said, I set up a little test environment that's got some of my stuff in it and some of your stuff in it, I set it up, so I still own it. So at that level, like what David was saying before, all of that still maps to our situation as well; which is kind of what Eric is saying.
A: Let me see if I understood correctly: are you suggesting that we actually generalize the admission pipeline in the API servers so that, in addition to identifying user and group, it also identifies tenant? And thus whatever plugins we have that are doing authentication have the freedom to define tenants in whatever way the author of that plugin chooses.
C: Yeah, I don't know; that would require, that needs some thought. It's an interesting idea, and I mean it is definitely possible, at least, to put in admission controllers that take authentication information into account; for example, the event rate limit admission controller that I mentioned earlier lets you set rate limits per user, and it gets that user out of the authentication information. So you...
C: If there were, hypothetically, a tenant in the authentication information, then you could plumb that through, at least to the admission controllers, and make decisions about it; so that's theoretically possible, at least, if that was useful. We'd just need to, I guess, yeah, there's a lot of stuff to figure out, if that was something that people wanted, though.
B: So maybe, just as a mapping, the same way we do RoleBindings, sorry: a tenant would be just a binding between multiple namespaces and multiple users, and then an admission controller could try to infer who the tenant is based on the authentication header: hey, here's the user, here's the namespace of the resource they are trying to do something with; infer the tenant from there and then do the admission. So let me...
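(A rough Go sketch of that idea. The Tenant type, its fields, and the inferTenant helper are all invented here for illustration; nothing like this exists in Kubernetes, it just shows the binding-plus-lookup shape being proposed.)

```go
package main

import "fmt"

// Tenant is a hypothetical binding, analogous in spirit to a RoleBinding:
// it ties a set of namespaces and a set of users to one tenant name.
type Tenant struct {
	Name       string
	Namespaces []string
	Users      []string
}

// inferTenant is the lookup an admission controller could do: given the
// authenticated user and the namespace of the request, find the tenant.
func inferTenant(tenants []Tenant, user, namespace string) (string, bool) {
	for _, t := range tenants {
		userOK, nsOK := false, false
		for _, u := range t.Users {
			if u == user {
				userOK = true
			}
		}
		for _, ns := range t.Namespaces {
			if ns == namespace {
				nsOK = true
			}
		}
		if userOK && nsOK {
			return t.Name, true
		}
	}
	return "", false
}

func main() {
	tenants := []Tenant{
		{Name: "team-a", Namespaces: []string{"a-dev", "a-prod"}, Users: []string{"alice"}},
	}
	// e.g. inside admission: user from the request's authentication info,
	// namespace from the object being created.
	name, ok := inferTenant(tenants, "alice", "a-prod")
	fmt.Println(name, ok) // prints "team-a true"
}
```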
E: One wrinkle there, probably, is that the actual strings that identify users are actually managed outside of the cluster, whereas the service accounts are created within the cluster; I don't know if that makes a difference. So if you're going to map them, because if you're authenticating using LDAP or, I don't know, OpenID Connect or something, you kind of know what the users are in advance, but you can have tooling that creates service accounts dynamically all over the show.
E: From a cluster-management point of view, like, if you're going to put a dashboard in front of it or something, you can kind of make, or you can have, a resource that perhaps lists the users ahead of time; and if a new user accesses it that you haven't actually listed before, that's a normal sort of onboarding thing. However, you could have tooling that creates service accounts on the fly; mapping those back into the tenant concept, you'd have to keep track of which user created the service account, or something.