From YouTube: Kubernetes Working Group Multitenancy 20190618
Description
Agenda/notes:
- Co-chair announcements: Sanjeev Rampal is new co-chair!
- Yushiro Furukawa is going to present his coredump feature. How do you make sure the correct users can access their logs in a multi-tenant cluster?
- Kural to discuss Tenant resource quota feature request/proposal in Tenant controller PoC https://wiki.onap.org/display/DW/ONAP+Cloud+Native+Multi+tenancy+proposal#ONAPCloudNativeMultitenancyproposal-ResourcequotaproposalforthetenantCRD
A: Cool, so welcome, everybody, to today's Multi-Tenancy Working Group meeting. Thanks, everyone, for sending in our agenda today. First off, congratulations to Sanjeev Rampal, who is our new co-chair of the multi-tenancy group along with me, Tasha Drew. Thank you, Sanjeev, for stepping up, and I definitely look forward to working with you. Thanks.
C: So hi, nice to meet you, I'm Yushiro. Can you hear me, is it okay? Okay, so thank you. So today, my colleagues and I will present some explanation about our coredump feature in a multi-tenant Kubernetes environment. So, could you explain, could you continue to explain, our coredump feature? Yes.
E: First, the motivation. In a multi-tenant cluster, coredump files are written by the kernel to the node's filesystem, and that storage generally belongs to the cluster operator rather than to any tenant's own workload. So, to access the coredump files created by their container, a tenant user has to ask the cluster administrator to fetch them, which is inconvenient. So, to facilitate the investigation of crashed applications, I'd like to tell you that we are developing this feature, which automates coredump collection.
D: Hi, everyone. This page shows the architecture of our design. Basically, there are three parts: the coredump-file generate part, the coredump-file download part, and the backend store part. The coredump-file generate part would launch an admin pod onto those nodes with the coredump label. The admin pod would do the following things. Firstly, it will copy an executable file, an executable file called the handler, to the host's filesystem.
Then it would modify the core_pattern file on the host to make sure the kernel would invoke the handler we just copied when a crash happens. The download part registers with the aggregation API on the master and cannot access the backing store directly; all the coredump files must be downloaded through our aggregation API.
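The node-side setup described above — copying a handler binary to the host and rewriting the kernel's core pattern so that crashes are piped to it — might be sketched roughly as below. This is a minimal illustration, not the presented implementation; the handler path and the exact %-specifiers are assumptions:

```python
# Sketch of what the coredump-generate admin pod might do on each node.
# HANDLER_PATH and the %-specifier list are illustrative assumptions.
HANDLER_PATH = "/opt/coredump/handler"


def build_core_pattern(handler_path):
    """Build a kernel core_pattern that pipes crashes to a handler.

    The leading '|' tells the kernel to exec the program and stream the
    core file to its stdin; %e/%p/%u/%t pass the executable name, PID,
    UID, and crash timestamp so the handler can attribute the dump.
    """
    return "|{} %e %p %u %t".format(handler_path)


def install_core_pattern(handler_path,
                         pattern_file="/proc/sys/kernel/core_pattern"):
    # Requires a privileged container with the host's /proc writable.
    with open(pattern_file, "w") as f:
        f.write(build_core_pattern(handler_path))
```

With a pipe-style core_pattern, the kernel execs the handler on every crash and streams the core file to its stdin, so the handler can tag the dump with pod metadata before shipping it to the backend store.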
B: Good, good, this all sounds great. Thank you, thank you for, you know, bringing this to the working group. I have some basic questions on sort of just the feature positioning. Could it have been implemented, instead of pushing the core file to remote storage and then providing access credentials, by allowing limited access directly from the host storage? Like a privileged container that has access to the host local storage mount, but uses the credentials of that user or that namespace, you know, to allow retrieval of that coredump from the local storage directly through the kube API, without needing to push it to a remote storage. I mean, I'm just trying to understand whether this could have been done without needing to push it to a remote storage.
C: Okay, so, after finishing this meeting I will up — we will upload these slides to, yeah, Google Drive or somewhere, and we can refer to this slide there. So yes, and could you paste the URL of your GitHub repository, and then —
B: So I think it needs a little bit more review by the working group. I mean, the question is: is the remote storage the right option, or should it be just one of, one of multiple options, you know? So maybe we can all provide comments once you put the document in Google Docs and provide the link, I think. Okay, sorry, have you already started prototyping the CRD for retrieving the coredump files and so on?
Yeah, okay, so.
G: So, for any given pod — for any given core dump for a pod — is there RBAC around your API call to control who can access that? So you don't want everybody who has access to that namespace, typically, to have access to all of the core dumps. Yes, so where is the RBAC defined for the core dumps for a particular pod?
I: Let me do a little bit of an introduction of our team. So, myself, I'm Kural Ramakrishnan, and I'm working as a senior software engineer at Intel. I have my teammate also here in the meeting, and we are working on the ONAP and Akraino projects. So ONAP is basically like a service orchestration layer; it's one layer above Kubernetes, and we have feature requests in ONAP for multi-tenancy.
We are looking at Kubernetes, at whether Kubernetes, as a resource orchestrator, has the multi-tenancy feature, and we have been following this working group for a couple of weeks — for like four weeks I have been following this working group — and this working group is really very active, and it also did some kind of multi-tenancy PoC work. So we thought we should consume that work for our project in ONAP, as well as in Akraino in the Linux Foundation. So we went through all the documents and we also ran the initial PoC work which was done by Sanjeev and his team. So this wiki page basically explains a little bit about the ONAP requirements on multi-tenancy and also shows all the multi-tenancy work in Kubernetes. So the goal of this scope, or the proposal within ONAP, is to work with the Kubernetes SIGs or working groups, and we are not going to have any different solution other than from this working group.
Also, following this working group, there are other solutions for multi-tenancy where you have a separate control plane for each tenant in your cluster. We find that's quite expensive and also difficult to maintain from the point of view of our project in ONAP and in Akraino in the Linux Foundation, so we want to consume — we thought this project, the multi-tenant controller project, is very simple, and we think this is where we will be able to provide some kind of feature or PoC to this project. And besides the requirements we had, there are other use cases — so, sorry, let me resize this one, sorry.
So our use case basically comes from multiple users. We have telecom and comms segments, and the end user is basically like a tenant, and we want to track the usage of the resources of each tenant; that is one of the use cases in the ONAP and Akraino stack. And we also want to know what level of resources is consumed by an application. Even if an application is running in multiple tenants, we should get the resource allocation and how many resources are used, so that when we place a workload from ONAP or something, it knows in which Kubernetes cluster it has to place that workload. And it also has to understand how much resource quota is available for each tenant in each cluster.
So those are our use cases, and we have other use cases which are not much related to multi-tenancy but are related to scheduling. And — we document this — we also want to create centralized credentials for all the tenants and also for the applications which will be running in ONAP, and then those will be passed down to Kubernetes. Having said that, I won't go in depth about the multi-tenancy proposal, because you guys are the experts on that. So all the materials which we put here are referred from the reference documentation which we provided at the bottom, so we are not the sole author of this one; we stated that the document and all the material is credited to the authors who have written the documents in the reference section. So this section just explains how to run it.
We are just doing some kind of initial PoC series on that, and this is what we want to discuss today. We want to have a feature request, or a feature, which provides a CRD for resource quota, and this resource quota is basically like bookkeeping: tracking and discovery of the tenants, actually. So it's just a wrapper for the ResourceQuota which is already in Kubernetes. What we are doing is, like, accumulating all the resource quotas of each of the namespaces and putting the total on top of the tenant, and we basically do it using the working group's tenant controller, which is what we aim to reuse again. And this is just a structure of how the resource quota should look: in the resource quota spec, we are just reusing the Kubernetes ResourceList and resource scopes.
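The accumulation described here — summing each namespace's ResourceQuota and surfacing the total at the tenant level — can be sketched as follows. The data shapes are simplified assumptions (plain numbers instead of Kubernetes quantities), not the PoC's actual types:

```python
from collections import defaultdict


def aggregate_tenant_quota(namespace_quotas):
    """Sum per-namespace hard limits into a tenant-level total.

    namespace_quotas maps namespace name -> {resource name -> quantity},
    mirroring a ResourceQuota's .spec.hard (quantities simplified
    to plain numbers for illustration).
    """
    total = defaultdict(float)
    for hard in namespace_quotas.values():
        for resource, qty in hard.items():
            total[resource] += qty
    return dict(total)


# Example: two namespaces owned by one tenant.
quotas = {
    "tenant-a-ns1": {"cpu": 4, "memory_gi": 8, "pods": 20},
    "tenant-a-ns2": {"cpu": 2, "memory_gi": 4, "pods": 10},
}
```

The tenant controller would surface the aggregated map (here, 6 CPU, 12 Gi, 30 pods) on the tenant object as bookkeeping, without enforcing anything itself.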
So this is, as I said before, right, just a wrapper for the resource quota. And this is where we want to explain what the tenant resources are. So from the ONAP or from the Akraino side, we have these resources. So suppose you have a tenant, or a department, or something like that: we will say this much CPU, or memory, or pods, or resources they can use.
So we are basically using a device plugin called the dummy device plugin, which is open-sourced by Red Hat's CTO office, and this is very handy, so we are able to use it just for experimenting purposes. Basically, this device plugin just provides a dummy resource, so instead of this one you can introduce some other device plugins, like FPGA, QAT or GPUs, anything here. And so this is where you say, like, how many resources you are allocating for this tenant: the CPUs, memories and pods.
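A tenant-level allocation of the kind described — CPU, memory, pods, plus a device-plugin resource — might look like the sketch below. The apiVersion/kind and the extended resource name are illustrative assumptions, not the actual CRD from the PoC:

```python
# Hypothetical tenant resource quota manifest, sketched as a Python dict.
# The apiVersion/kind and the dummy extended-resource name are assumptions.
tenant_quota = {
    "apiVersion": "tenants.k8s.io/v1alpha1",
    "kind": "TenantResourceQuota",
    "metadata": {"name": "tenant-a-quota", "namespace": "tenant-a"},
    "spec": {
        "hard": {
            "cpu": "16",
            "memory": "32Gi",
            "pods": "50",
            # Extended resource advertised by a (dummy) device plugin;
            # this could equally be an FPGA, QAT, or GPU resource.
            "example.com/dummy": "4",
        }
    },
}


def hard_limits(quota):
    """Return the spec.hard map of a quota manifest."""
    return quota["spec"]["hard"]
```

Swapping the dummy resource for a real device-plugin resource only changes the key name; the wrapper's bookkeeping stays the same.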
We are introducing, like, a wrapper, and that wrapper will reuse the same tenant namespaces which are already created by the tenant controller PoC, and in it we specify each tenant's CPU, memory, pods and device-plugin resources, actually. So what we are trying to do is, like, create one layer above that, where you say, "give this tenant some amount of resources." Within the tenant, you can divide it for each team, or for each client, and we are creating pools like silver and gold — this is per team — and the next one is per application. As I said in one of our requirements, we want to track how many resources this particular application is using. So our use case is basically in the telecom and comms segments, where we truly want to track the resources of the VNF applications. So we just put in, like, a firewall.
You can have multiple levels, like v1, v2 and v3, and, having said that, I think the diagrams will explain more. So you have a tenant, and you have a team like gold — the pool of customers who are the gold customers — and a team of other customers who are, like, the silver customers. So the gold customers will be able to use a lot of resources, and the silver customers get to use fewer resources. Apart from these two, within the tenant you can even draw any application: we want to track this application with its different versions, right? So it's not that we are just concentrating on the client or the user; we also want to track the application — how much this application can consume. So we want to do that. So in the example we are just putting it in the same namespace, but the application can run in a different namespace, and it can be shared by different tenants.
I: And the tenant resource quota — within the resource quota, it refers to this: this is the quota of the individual clients, or the tenants within that. And we are proposing a kind of admission controller, or controller, which will review whether these overall things sum up or not. If they do not sum up, it will return an error, saying that whatever you put here does not sum up to the resource quota you define below this one. Does that answer your question? Yes.
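The check just described — a controller (or admission controller) verifying that the per-namespace quotas sum up within the tenant's quota and returning an error otherwise — can be sketched as:

```python
def validate_tenant_quota(tenant_hard, namespace_hards):
    """Reject configurations that oversubscribe the tenant quota.

    tenant_hard: {resource -> quantity} for the tenant as a whole.
    namespace_hards: list of {resource -> quantity}, one per namespace.
    Returns (ok, errors); quantities are simplified to plain numbers.
    """
    errors = []
    for resource, limit in tenant_hard.items():
        requested = sum(ns.get(resource, 0) for ns in namespace_hards)
        if requested > limit:
            errors.append(
                "%s: namespaces request %s but tenant allows %s"
                % (resource, requested, limit))
    return (not errors, errors)
```

In webhook form, a failed check would translate into a denied admission response with these messages; the sketch only shows the summing logic.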
I: Exactly, yeah. So it's basically bookkeeping, as I said, right? So it's basically doing the bookkeeping; it's not doing any kind of strong isolation. Though, one more thing: it will also see whether this particular resource is available or not. So when the user or the client tries to run a pod or a workload, it also sees whether this tenant can run this workload or not, because that's what the resource quota basically does in Kubernetes, right?
B: Go ahead. Yeah, just to — I think there's a little bit more clarification needed here on the resource manager model. Okay, so there can be a couple of ways. My initial expectation for this resource quota had been that there is a quota for the tenant as a whole, and it doesn't matter what the individual namespaces within that tenant consume.
Rather, basically, it's a quota that acts at the tenant level, and the namespaces don't need to have individual quotas, because, you know, they can share the quota. But in that case, that's a slightly different model than one in which you have the tenant resource quota and you have the namespace resource quotas, and the controller is simply checking that the sum of the namespaces is not greater than the tenant quota, with no enforcement of its own. In that second model, the actual enforcement is still happening at the namespace level. Yeah.
I: At this point, so far, we are thinking of the bottom model, actually. We are very much inclined towards the second model, because that's where we can show this particular namespace has a resource quota; the first model, which you are talking about, without the namespace resource quota, is kind of second priority for us. Okay.
It's a configuration checker. Why are we doing that? Because we want to showcase that, you know, overall it's summing up the values of each namespace — of each client or customer using a particular resource in each cloud. And the first proposal you mentioned, without any namespace resource quota — I think that also has value for our use case right now, actually, but we want this first feature to be sort of the first feature request for us, actually. But what we can do is, we will be able to provide both of the features.
H: Oh sorry, I was going to say that I think that makes — I mean, yeah. So I got your understanding, but what makes more sense is for us to be able to define it across namespaces, and then, and then maybe the implementation is that you actually go and create resource quotas per namespace, which divides the whole aggregate across these namespaces equally and then creates a resource quota per namespace. So that could be an implementation detail and one way, yeah.
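That implementation idea — define the quota once for the tenant, then materialize per-namespace ResourceQuotas by dividing the aggregate equally — could look like this sketch (plain numbers stand in for Kubernetes quantities):

```python
def split_quota_equally(tenant_hard, namespaces):
    """Divide a tenant-wide quota evenly into per-namespace quotas.

    One possible materialization strategy; a real controller would need
    Kubernetes resource.Quantity arithmetic rather than plain division,
    and would rewrite the per-namespace quotas as namespaces come and go.
    """
    n = len(namespaces)
    share = {res: qty / n for res, qty in tenant_hard.items()}
    return {ns: dict(share) for ns in namespaces}
```

As the discussion below notes, an equal split is only one policy; the division could just as well be weighted or fully dynamic.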
I: Actually, so the application could run in the same namespace, or it could run in multiple namespaces with different priorities or resource quotas; we are targeting that as well. But we will take all three use cases, and I will include the use case you guys are suggesting here, and I will rewrite this one. So the one question I want to ask is this: do you want this proposal to be here, or can we move it to the Google Docs and add it into, no — into your proposal, actually?
B: Yeah, sure, yeah. We can definitely add it to one of the existing proposals. I just feel like we need to define it — we'll define it — a little bit more precisely for now, okay? So let's keep it as it is right now; let's have a little bit more review over the next, you know, couple of weeks, okay? And really, this is at least my individual view, and I'm sure others have their own thoughts as well.
About that — and, you know, there are implications in terms of the runtime: does that actually map into the underlying cgroups of the runtime or not, right? So there's a little bit more analysis we should do, and we'll certainly merge it in as appropriate. This is obviously very much affiliated to the tenant controller CRD, so let's evaluate exactly the model here, and then we can, then we can merge it in as needed. Yes, actually.
G: There's a, there's a fourth level of grouping here. More to the point: is it just exactly the same — you take the whole tenant, and if they've got five namespaces, do you just split it, you know — if you've got a tenant quota, do you just put 20% of it in each namespace? There's a whole other level of grouping here, because in, in our situation we have a tenant —
B: No, I totally agree that we definitely should not assume that, you know, the quota is divided equally amongst all namespaces; I think that's overly simplistic. What we really want is statistical sharing of that resource across all namespaces of the tenant. The model here, at least in my mind, is that the tenant is the one getting charged, okay? So the charge-back is based on the resources that the tenant as a whole uses. It doesn't matter whether a namespace uses 20% of it or 30% of it; it's totally fluid and dynamic.
Billing is one of the implications of it. The reason you have resource quota is because you bill for it, right? So, you know, tenant A has, you know, a quota of X CPUs and Y memory — and maybe these can fit into some profiles sooner or later — and so he's going to be billed on it, and so —
The point is that, if you want to, you can additionally have namespace quotas, but that's more because you want to have some fairness within your tenant across, you know, applications using namespace one versus applications using namespace two. So I give it a quota so that I ensure some fairness: I, I, I want to get at least — I want to get twenty percent of the CPU, right, or something like that.
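The fairness idea here — per-namespace quotas used to guarantee each namespace a minimum share of the tenant's resources, e.g. at least twenty percent of the CPU — might be modeled as below; the fractional-floor representation is an assumption for illustration:

```python
def min_share_quotas(tenant_hard, floors):
    """Derive per-namespace minimum guarantees from fractional floors.

    floors maps namespace -> fraction of the tenant total it must be
    able to claim; the fractions must not oversubscribe the tenant.
    Quantities are simplified to plain numbers.
    """
    if sum(floors.values()) > 1:
        raise ValueError("floors oversubscribe the tenant quota")
    return {
        ns: {res: qty * frac for res, qty in tenant_hard.items()}
        for ns, frac in floors.items()
    }
```

Any capacity not covered by the floors would remain statistically shared across the tenant's namespaces, matching the charge-back model discussed above.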
I: I do, right. I will create some scenarios for the namespaces without the resource quota, and I will put in, like, multiple scenarios and use cases. Like, I think we discussed about four models here; I will put all these four models — we only had two models, so I will put all those four models — and you guys can comment on this page, actually. It's the same thing here for the Linux Foundation side, isn't it?
B: That was a PoC, right; what we did so far was a PoC, actually. And we'll have that as an agenda item in the next call as well, where we will review the state of the tenant controller and sort of what needs to be done to make a more official version of it, because right now it's just a PoC, right? And there are some open questions there as well, like the namespace naming and things like that, and then this resource quota thing.