From YouTube: Kubernetes WG Multitenancy 20180228
Notes and Agenda: https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY/edit
C: Go ahead. First of all, I'm very sorry about the mess with the link and generally the short notice. The idea is to separate the functionality that our GitLab integration does today into a part that we think would be of general use in handling a tenant abstraction within the cluster, and it provides at least a concept of how to integrate that with external resources or external components that would set those labels, names, and identifiers that need to be controlled in a conforming manner.
C: I'm not actually quite sure when we decided that, but early on we just established that a single tenant may have multiple namespaces. I think one of the most common use cases for that is one tenant creating a multi-level app, a three-layer app or whatever, that exposes only resources from one namespace via the ingress and has network isolation, network security with something like Canal or similar, between the individual namespaces. So that tenant by itself would require more than one namespace.
C: We also have a use case to simply map external entities, basically projects in GitLab, GitHub, or whatever you use, to individual namespaces, because that was the best fit we could see, and we stuck with that. As far as I understood the discussions in the last couple of weeks, the basic pattern is one tenant having possibly more than one namespace, and if that doesn't fit your use case...
C: Well, we can get back to that later. Okay, the idea is basically that a tenant is granted certain resources, a certain resource quota, resource budget, whatever you want to call it. At the moment we chose the simplest one that we could think of to implement, which is a simple number capping the concurrent amount of resources across all namespaces. Basically you have, let's say, a budget of five cores.
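A minimal sketch of what such a tenant object might look like, written as plain Go structs; the type and field names here are illustrative assumptions, not the actual custom resource shape from the proposal:

```go
package main

import "fmt"

// TenantSpec is a hypothetical shape for a tenant custom resource:
// a flat budget of CPU cores shared by every namespace that belongs
// to the tenant. Field names are illustrative, not the proposal's.
type TenantSpec struct {
	Namespaces []string // namespaces owned by this tenant
	CoreBudget int      // concurrent cores allowed across all of them
}

type Tenant struct {
	Name string
	Spec TenantSpec
}

func main() {
	t := Tenant{
		Name: "research-group-a",
		Spec: TenantSpec{
			Namespaces: []string{"group-a-frontend", "group-a-backend"},
			CoreBudget: 5, // the "budget of five cores" from the discussion
		},
	}
	fmt.Printf("%+v\n", t)
}
```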
B: The controller then translates that into whatever mechanisms we have in Kubernetes to actually make it work in the cluster, like role bindings and RBAC, and also all these possibilities to restrict and quota resources. Part of the idea is that we need this hand-off, because the GitLab integration project we currently have is working, but it's doing a lot of things, and it's also very opinionated about the way it does them. So we thought we would like to separate these two things.
B: And I might add that the other thing we ultimately need in order to make this work is a tenant resource quota admission controller, which is basically an extension of the already present quota admission controller, in the sense that it counts the quota across multiple namespaces, maps that to a tenant, and then applies the restriction. That's the other thing that we need to do in the end.
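A rough sketch of the cross-namespace check such an admission controller would perform; the built-in ResourceQuota admission plugin works per namespace, so the aggregation over all of a tenant's namespaces below is the extension being described. The usage map and helper are stand-ins, not real API-server code:

```go
package main

import (
	"errors"
	"fmt"
)

// usedCores maps namespace -> cores currently consumed there. In a
// real admission controller this would come from quota usage tracked
// by the API server; here it is a stand-in.
var usedCores = map[string]int{
	"group-a-frontend": 2,
	"group-a-backend":  2,
}

// admitPod decides whether a pod requesting reqCores may be created in
// namespace ns, given a tenant budget spanning tenantNamespaces.
func admitPod(ns string, reqCores int, tenantNamespaces []string, budget int) error {
	total := 0
	for _, n := range tenantNamespaces {
		total += usedCores[n] // sum usage across ALL tenant namespaces
	}
	if total+reqCores > budget {
		return errors.New("tenant core budget exceeded")
	}
	usedCores[ns] += reqCores
	return nil
}

func main() {
	nss := []string{"group-a-frontend", "group-a-backend"}
	fmt.Println(admitPod("group-a-backend", 1, nss, 5))  // <nil>: 5 of 5 cores
	fmt.Println(admitPod("group-a-frontend", 1, nss, 5)) // error: would be 6 of 5
}
```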
B: Yeah, we do that at the moment by letting them create projects and things in GitLab. They can't create namespaces via kubectl or on the cluster itself, but they can simply create a project, a group, or whatever in GitLab, and that automatically creates the namespace for them.
E: There's a lot more that you need for multi-tenancy, and this was talked about a lot in the last session, where we went through, I think, Atlassian's multi-tenant setup: you need things like pod security policies, network policies, resource quota, RBAC. You need to set all of those up for an individual namespace before you go and let anyone access it at all. So, rather than an admission controller, why wouldn't you just create those on behalf of the user and then give them access post hoc?
C: This is a misunderstanding: the ability they have to create it goes via a separate mechanism, and all the prerequisites would then be part of the list of things that the tenant controller would have to do, pod security and so on. It's not fleshed out in the document, but yes, I have to go back through the recording from two weeks ago and add that to the laundry list of things that the controller should do.
E: The idea, then, is that in the case where you set it all up for them, you have to pick quotas on a per-namespace basis and say: this user controls namespace A and namespace B, but I'm going to give them individual quotas in those, whereas you would like a mechanism to say that namespace A and namespace B both contribute to one singular quota that has a single limit. Exactly, yeah.
B: So at the moment, just to clarify, we have the GitLab integrator, and that does all the things with the pod security policies and so on. It creates the namespace and then sets up things like pod security policies, and it only binds people to roles that are predefined, and you can provide these role names to the service via environment variables.
B: So we, as admins, have control over what they can actually do in the namespaces, and we lock them in rather tightly, so they can do their work in the namespaces and they can self-service-create namespaces, but they can't break out, do weird things, or exceed certain limits, etcetera. And now we want to translate this into multiple projects, to make them more feasible to reuse, and re-engineer it a bit towards a cleaner architecture and solution.
C: For us, it's more of a resource management problem, because we have organizational units that we want to give the ability to basically organize themselves internally, but they have to be constrained to a certain resource budget. It basically boils down to this: one organizational unit is able to procure a certain amount of money for computational resources, and they put those resources to use as provided through us, but how they organize themselves, with projects and groups and subgroups, that's up to them, and we don't even want to interfere with that.
C: Yes, sorry, yes. We have a list of things that we've learned we need to limit for individual users or individual tenants, and that's not just cores and memory, but also request rates. The number of objects, for instance, is something that we have to monitor closely, because we have a fairly low node count, so 130 pods per node is something that is threatening for us at the moment. We can live with that, but it's becoming a problem.
B: Network policies, yeah. What we also need is API call accounting. We really have trouble with the control plane, in that people can just hammer it, and we don't have a way at the moment to really restrict how many API calls per minute or so people can make. That would be something that would ease our lives in the near future. So the whole multi-tenancy-in-the-control-plane idea is something that's also pretty important for us.
B: No, we can't, because that would require people to decide how to move their resources between namespaces, and we would have to enable them to actually do that, so that's just not feasible. We also have this situation where people work together and use their resources together, and then the question is: if you start something in another namespace, does this get deducted from your quota or from the quota of somebody else who also belongs to that namespace?
B: Do the quotas add up somehow? So we thought the easiest and most straightforward thing to design was to say: okay, we have one tenant, that tenant has resources, and it is linked to a namespace, and whatever happens in that namespace is taken from the quota for the resources of that tenant. So that's why, yeah.
I: How does that tie into what you're talking about here, with this sort of tenancy quota?
B: We are a university, and we have different research projects, and they get grant money and buy resources from that, which is then essentially nodes for computing. Those can be anything from casual compute nodes to data science nodes to GPU nodes or some kind of sensor network nodes for IoT things, or whatever you can come up with. That's the main reason we want to restrict, or the easiest reason why we want to restrict.
B: So that if people don't use the resources at the moment, they can be used by other people, which is what we refer to as the sharing and leasing of resources that we want to enable. That's sort of due to the way that we operate: we have these distinct research projects where people might not even know each other, but they basically all want the same thing in the end. And that's why, I think.
D: There's a semi-related, maybe relevant project; it's not exactly the same model as you're talking about: kube-arbitrator. It expresses quotas as minimum guarantees instead of maximums, and that's a way that the unused resources in the cluster can be fairly shared among the different users of the cluster. So it's kind of one way of getting at this idea of borrowing. You're not directly borrowing, but it's sort of like borrowing.
D: So you might want to look into that, just as a comparison point. I realize that this is kind of just one piece of the possible-future-work section that you listed here and isn't integral to the rest of this proposal, but it's something you might want to look into just as a reference, because it's related: it's that idea of borrowing from one user for another user.
C: I'm not sure if it makes us special, but we usually have the problem that users are not able to manage their own clusters. So all this node leasing and sharing is just a business exercise, if you like; technically, we have to do the actual setting up of nodes and the maintaining of the cluster. But I'll have a further look into it.
D: It's not based on the idea of people owning a node pool and then lending those out; it's more about the quota, and having the system give people a kind of minimum guarantee and then being able to share all the unused resources as a pool. It's kind of related to the model that Mesos uses, but anyway, yeah, it's worth checking out.
G: I'm a little unclear about the scenario of updating the namespaces in a tenant. According to my understanding, if you want to add a namespace, you should update the tenant custom resource. So that means a user in the namespace can basically update the tenant's custom resource, and he has full control of all the namespaces by defining this resource. That means he may be able to delete a certain namespace he doesn't belong to.
C: No, and we don't see the need for that, because, for one, we would envision that the tenant objects are maintained by an external source of truth, and in addition to that, we would set up RBAC rules so that basically only the cluster admin would be able to manipulate the tenant objects anyway. But yes, you're right: if we get those rules wrong and any user gets access to the tenant objects, they can do whatever they like. It boils down to never giving away access to the tenant objects.
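A minimal sketch of such an RBAC rule, assuming the tenant objects live in a hypothetical tenants.example.com API group; the point is that a ClusterRole like this would be bound only to cluster admins, never to regular users:

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ClusterRole granting full access to the hypothetical Tenant CRD.
	// Binding this only to the cluster admin keeps tenant objects out
	// of regular users' reach.
	role := rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "tenant-admin"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{"tenants.example.com"}, // assumed group
			Resources: []string{"tenants"},
			Verbs:     []string{"get", "list", "watch", "create", "update", "delete"},
		}},
	}
	fmt.Printf("%+v\n", role)
}
```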
C: Yes, but it's even more restricted. That's why we present this document, to learn of other use cases. I can't envision a use case where anybody but this maintenance system or the master admin gets access to the tenant objects anyway. So it's more of a technical necessity to have this access available at all than something a regular user would ever get.
I: I guess what I meant was, it feels like it's important to understand that when you talk about the tenant making changes, you're talking about the tenant making changes via, in your case, changing GitLab, and then the controller makes the changes, rather than the tenant making API changes directly. Correct?
I: Yes. I asked the question earlier about bringing your own pools of compute, because we have a similar issue in our internal multi-tenant cluster, where we have different CI/CD and different batch-job use cases, and for those we do give them a separate pool of nodes. So we have a slightly similar use case in some ways, but we do that in a very coarse-grained way at the moment, with taints and tolerations.
I: So we taint all the nodes with, you know, customer equals this-customer, and then that customer can tolerate those taints with their pods. That only their workloads will run on their nodes is not a hundred percent guaranteed, and it is one hundred percent based on trust rather than on hard enforcement.
I: So, in our case, we've only got a couple of customers, so the communication part is easy. We say: hey, to run your build on your nodes, you need to have this toleration using this key. And with regard to actually tainting the nodes, it's just part of the node config: when you start up the kubelet, it starts up with this taint, that's it. As I said, it's very coarse-grained and simple at the moment, because it was the quickest, easiest way we could solve that.
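A minimal sketch of the matching rule being described, using local stand-ins for the Kubernetes taint and toleration types; the customer key and the kubelet flag in the comment illustrate the pattern, not the speaker's exact config:

```go
package main

import "fmt"

// Local stand-ins for the Kubernetes taint/toleration concept.
type Taint struct{ Key, Value, Effect string }
type Toleration struct{ Key, Value, Effect string }

// tolerates reports whether a pod carrying tol may land on a node
// carrying taint: key, value, and effect all have to match here
// (the real matcher also supports an Exists operator).
func tolerates(tol Toleration, taint Taint) bool {
	return tol.Key == taint.Key &&
		tol.Value == taint.Value &&
		tol.Effect == taint.Effect
}

func main() {
	// A node started with e.g. --register-with-taints=customer=acme:NoSchedule
	nodeTaint := Taint{Key: "customer", Value: "acme", Effect: "NoSchedule"}

	ourPod := Toleration{Key: "customer", Value: "acme", Effect: "NoSchedule"}
	otherPod := Toleration{Key: "customer", Value: "other", Effect: "NoSchedule"}

	fmt.Println(tolerates(ourPod, nodeTaint))   // true: may schedule there
	fmt.Println(tolerates(otherPod, nodeTaint)) // false: repelled by the taint
}
```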
B: Yeah, we are just wondering if anybody else out there is in need of such a fine-grained and automated way of setting things up in a cluster, because we found a lot of things that work similarly but didn't quite suit our needs, and I'm not quite sure if we're very special in that, especially as a university, or if there is anybody else.
B: It's an action that is done at the moment in an external system that is controlled by the company, or by the organization that you're in, and there the user has some interface that he or she can understand, and it's just easy to use, like creating a project in GitLab and all that. That is one thing, but...
B: In that case, it's actually the integrator that takes care of the names. What we do is take the path name out of GitLab, because you also have groups and subgroups in that special case. So you get a name, and you then translate that to a valid Kubernetes namespace name, and the integrator also checks for an already present namespace with the same name; if there is one, it counts up a suffix and makes sure you don't get clashes or collisions.
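A rough sketch of that translation step, assuming the usual DNS-1123 rules for namespace names; the sanitization details and the counting suffix are illustrative, not the integrator's exact algorithm:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var invalid = regexp.MustCompile(`[^a-z0-9-]+`)

// toNamespace turns a GitLab path like "group/subgroup/project" into a
// valid namespace name, counting up a suffix when the name is taken.
func toNamespace(gitlabPath string, taken map[string]bool) string {
	name := strings.ToLower(gitlabPath)
	name = invalid.ReplaceAllString(name, "-") // '/', '.', etc. become '-'
	name = strings.Trim(name, "-")
	if len(name) > 63 { // namespace names are capped at 63 characters
		name = name[:63]
	}
	candidate := name
	for i := 2; taken[candidate]; i++ {
		candidate = fmt.Sprintf("%s-%d", name, i) // count up on a clash
	}
	taken[candidate] = true
	return candidate
}

func main() {
	taken := map[string]bool{}
	fmt.Println(toNamespace("Research/Group-A", taken)) // research-group-a
	fmt.Println(toNamespace("research/group.a", taken)) // research-group-a-2
}
```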
B: So, for instance, a master in a project in GitLab gets mapped to a role that allows that master to actually start pods and delete pods and, of course, read the logs, etcetera. They are not allowed to run DaemonSets; that's a special thing that we only allow for certain people. And all the other roles, like developer, guest, etcetera, are only allowed to read logs and to see what's there, and just see what's running in the namespaces that are referenced by the projects they are part of. That's what we do right now, but you would be free to say: okay, in my case a developer might also create pods, or throw them away, whatever. For that, we also give out the possibility, within the bounds of the rules and the role settings that we provide in the cluster, for users to decide, based on GitLab, who may actually alter the things in the namespace.
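A sketch of that GitLab-role-to-Kubernetes-role mapping; the role names and verb sets are hypothetical stand-ins for the predefined roles the integrator reads from environment variables:

```go
package main

import "fmt"

// roleVerbs maps a GitLab role to the pod verbs a predefined Kubernetes
// Role would grant in the project's namespace. Hypothetical values.
var roleVerbs = map[string][]string{
	"master":    {"create", "delete", "get", "list"}, // run pods, read logs
	"developer": {"get", "list"},                     // read-only
	"guest":     {"get", "list"},
}

func main() {
	for role, verbs := range roleVerbs {
		fmt.Printf("GitLab %-9s -> pod verbs %v\n", role, verbs)
	}
}
```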
C: And all this setting up and applying of GitLab roles to Kubernetes roles is part of the opinionated flavor of the integrator right now, and it is one of the things we want to break up and reduce with this controller proposal. And one more point on namespaces: there's the mapping from an external name to a valid and unique namespace name, which is mentioned in the document, where the tenant object can have arbitrary...
B: The whole thing is open source; I mean, it's all on GitHub, the stuff that we already built, which is this custom GitLab integration thing, and in its current state I'm not quite satisfied with it anymore, because it's doing too many things. We also have a Graylog integration, because we were looking into multi-tenant solutions for logging, which were quite sparse as far as open source solutions go. I wanted to link it.
C: Which is basically the intent of this document: to get comments and feedback, to map it to your own use cases, or to see if you can come up with a use case that could fit with minimal extensions or something like that. And the end result definitely would be an open source project, yes.
J: Hi, I'm Ray; I joined late, but I was reading the document, and I noticed there wasn't really a motivations section. Maybe that gets to the use cases, but I would love to see that in terms of the use cases. The other thing I'd love to understand is: at what scale do you see this working? Do you see this working with, like, 50 namespaces? If clusters have a thousand namespaces, at what level is this most effective?
J: How many people are actually authorized to make changes when it comes to these kinds of policy administrations? I find it useful to think about not just the sheer number of namespaces; sometimes we create them programmatically. It really comes down to the number of people who are involved in the management of this, right?
B: I think in the past half year we had around 200 people that were actually doing things in the cluster, organized in different projects; some were in multiple projects, which adds up to multiple namespaces that they control or do things in. And we have the holiday season at the moment at the university, and we have gained some new research people, so yeah, I'd say all in all we had about 200 people over the past five to six months.
J: I think this gets to the motivation. When I hear 200, I see one level of hierarchy being okay, right? When you get to 2,000, I think it's probably not going to be okay. I would love for us to think about that in this thing: if we do a solution like this, would it make sense to do n levels rather than just one? Because, you know, there are a lot of different shops out there.
J: With, say, 200 teams, it gets to be a lot of work to go and individually modify the quota of each, and so you might want to delegate that to a quota admin who sits at the top of, let's say, 300 developers and say: you go deal with them all. And that's a popular thing; we do it inside of Google all the time. We couldn't scale to our scale otherwise, and even when we talked to people who have 3,000 developers, they have the same problems.
J: You can put it inside Kubernetes, right, and then developers get a nicer experience, because they actually get to see all the levels. They don't have to worry about: oh, I got a quota of 9, and it turns out some other system knows why I got 9, but they can't see that inside of Kube.
C: Actually, I'm not quite sure. We found a number of small things that add up if you're managing a comparatively high number of users on a cluster with fewer than 50 nodes, and I don't think we would scale our model up to, let's say, 2,000 concurrently active power users, simply because of managing things like the size of etcd. And I think going fully multi-level with this tenant thing and the quota computation would add a lot of computational overhead.
J: I think, if I heard your argument right, you're saying that basically it's pretty compute-intensive to be able to do arbitrary levels of hierarchy, and it's much simpler to do just two, and Kubernetes isn't generally scaled at a level where you would have this high parallelism anyway. Is that my understanding?
J: So it would be cool to see us do it, and the other thing I would say is that the number of levels of hierarchy is probably not going to go much above four. Just in the experience I've seen: I know at Google we don't go above 4, and we're giant; there are like 40,000 people who interact with our systems. So you're not talking about a ton of levels; it's really 2 versus n, and n is really less than 7 or 8.
F: Did you guys get a chance to look at Quinton's hierarchical labels doc at all? Yes? Okay. If that happens, how do you guys see this fitting in?
C: I'm not sure. I see the hierarchical labels as a tool to do hierarchical things in general, and it would allow, for instance, granting parts of the cluster to certain groups or subgroups of users simply by controlling the subtrees of the labels via RBAC. But the mechanical interaction between a label hierarchy and quota, computational quota management, I'm not sure about yet.
J: To add a little bit about that: on that document, I asked Quinton some questions, for example how you deal with multiple inheritance. So, things like when you assign quota: if you assign it in a label-type scheme, how do you prevent someone from, for example, assigning one label to have 30 CPUs and another one to have 10 CPUs, and then how do you resolve that? Is it 20 CPUs? Is it 30? Is it 10?
J: Is it 40? And I think that proposal really has to think a lot about explainability, so that people understand where they're at. What I like about this proposal here is that, because it's hierarchical, it's very easy for people to tell how they got their stuff. So I think customers, when that happens, are going to want to be able to answer those questions in order to feel comfortable using something like Quinton's proposal.
D: Yeah, so I think there are two ways of interpreting that. One is Quinton's proposal in a literal sense, and the other is that Kubernetes is going to have some kind of hierarchy, whether it's expressed the way Quinton described with the hierarchical labels, or a more explicit hierarchy, or however it's done. If we're going to have hierarchy for more than two levels, should a system like this be designed with that in mind from the get-go? Now, nothing has been decided, I mean, so it's a little speculative to base one speculative system on another speculative decision; it's speculative turtles all the way down. But I think it is a relevant question if you broaden it out: if Kubernetes is going to have hierarchy with multiple levels in general, should the proposal that we're talking about today take that into account?
K: There's just one other thing that was kind of touched on in one of the documents, which is the difference between a tree and a DAG representation, I mean as structures, but I think you guys really need the graph one.
C: Our use case definitely needs that, because people can basically be members of any of the tenants, and, just as a side note, every person is a tenant to himself or herself. But the single-level abstraction that we chose avoids any problems with that, because users are not separate objects. It's just that the tenants have the resources, and some entities are allowed to consume them, but they're not charged to the user; they're charged in the namespace, then collected and aggregated to the tenant. Sorry.
B: Yeah, I think there's a lot to think about, and maybe also to point out more clearly in the doc, like the motivation, the quotas, maybe the levels, and to point out exactly what the use cases are that we want to cover; I think that's pretty important. So, as the time is almost up, or is up, I personally would like to thank you very much for the insights and the ideas and the hints and the feedback, and I think we'll work on this more towards next time, and we'll see what we get. Thanks.