From YouTube: 20190702 Kubernetes Multi-tenancy working group meeting
Description
- Tenancy CRD v2 planning and update from Sanjeev Rampal
- Update from Yushiro Furukawa and team on their coredump feature
https://drive.google.com/file/d/1vYmcxYDPG7HxMVautB-GVxIswEVbC1dc/view
A: Okay, so I wanted to do a couple of things. One is to speak a little bit about the tenant controller CRD and activities related to that, but also to give a review of the different threads of activity that are happening in the working group. So actually, let's start with that first.
A: So here are some of the task threads that are relatively more active than others, and this will help us review what's going on. First, we are now kicking off an effort to have a v2 version of the tenant controller CRD and related model B features. If some of you have been watching our previous work, we've characterized different architectural models for multi-tenancy: models A, B, C, and D. Model B is essentially everything around this tenant controller CRD and its associated CRDs, and that's what we did the PoC for earlier. So we had a first pass at it, and the next main task there is to have an updated, v2 version of that and related features. That's one bucket of tasks. The second bucket of tasks is to complete our security reference profiles, initially as documentation, and possibly after that some audit-checking utilities as well.
A: We have confirmed that they do intend to bring it to open source, and we should look forward to hearing more from them in the coming weeks. As it moves to open source, we will then be able to create some planning activity within the working group around that proposal. So on that one, right now we want to wait and see, and have those folks bring it to the working group.
A: Then there are various small threads of activity that we want to continue supporting, and we have to see whether they are individual activities by themselves or whether they tie into our core architectures around models B and C. For example, the core dump feature, which we will hear more about today: is that something that ties into our overall architecture, is it a task by itself, or does it apply to all the models?
A: So we want to encourage that. Similarly, we want to encourage community discussion on many of the early new ideas, as well as work that is related to multi-tenancy but is happening in other SIGs and working groups. There's been some speculative discussion about hierarchical namespaces, and the folks from Red Hat have done some work around operator groups, which I believe they are pursuing in other SIGs and working groups, but those are related to what we're doing here as well.
A: At the moment we don't have actual coding and design tasks active in this working group on that, but certainly we can have discussion about it and see how we can plug into those working groups, or just provide assistance. And of course we want to continue to receive new use cases and requirements; we do have some documentation around use cases which we want to continue building on. OK, so these are various miscellaneous continuing community threads that we want to keep encouraging in addition to our main threads.
A: And finally, we want to continue developing and further formalizing our working group process and our project tracking model. We have a project board and a bunch of documents; some of them are out of date, some of them not quite. We need to clean up our repos, so we need to do some work around cleaning up our working group model. So that's a very high-level summary of things that are happening in the working group. Obviously there's a next level of detail within each of these categories.
A: I may have missed some. Any questions or comments on these?

So what does the cluster security reference profile documentation mean?

A: One of the things we said earlier was that we would also provide a set of recommendations on what kind of cluster-level configurations we would recommend for somebody wanting to run multi-tenancy on top of, whether it is model B or C. We started some initial documentation on those, which is in the repo, but we want to build on it further. Whether we make those reference profiles actually auditable and have some audit checking is still an open area for the group to decide, but that's what it is, and we have some initial documentation which we want to build on.

So the reference links are in the GitHub, right?

A: There's a mix of documents on the GitHub; we probably need to organize it a little better. But yes, the reference profile documentation is also part of the links. Oh yeah.
D: What is the process for that, specifically? I'm thinking about areas where I haven't heard any discussion, and maybe it's because people aren't interested or nobody's proposed it, but some of the multi-tenancy around networking needs, networking separation, or classes of service and so forth. I would also be interested in understanding whether you have had any discussion on how the topology manager or CPU manager would be impacted by multi-tenancy. So in general, the question would be: how do I introduce new use cases?
A: So far, what we've done is that any idea or proposal starts off as a small write-up. It could be a one-page write-up, it could be two to three slides, and you bring it to one of our weekly working group meetings to discuss it. Sometimes it requires minimal discussion and can be handled purely in Slack or email. And Tasha, I think we probably need to group all the use cases in one place.
C: So I will link to that doc again; it's also in the links file, but it's just the project plan. The other thing that we really need to keep an eye on in this working group is that we have a lot of presentations and proposals, but we don't have a lot of movement forward after that. So what we're trying to keep an eye on is actually making progress on the building blocks that we have today. Right now, that is the security profiles for both a single-tenant cluster and a multi-tenant cluster, and the tenancy CRD. So what I would really encourage everybody to do is look at what we're working on right now and see how we can make progress and make that better; and as we have new ideas, we should really be curating a backlog of things to pick up as we finish and make more progress on what we already have in flight.
A: As we try to integrate things like the resource quota management and have an updated design of that, we will need support for admission webhooks, which is handled better if it's done using kubebuilder. So what we're doing is that a small set of people who are more active and have the cycles to actually do some coding will be spending more time on the actual coding changes here.
A: So if you're interested, feel free to contact Tasha and me, but right now Fei from Alibaba and Carol and a few others who have volunteered to spend some time on the code are going to start looking at this. But of course, if you feel like you have cycles and are interested in contributing to the code, feel free to reach out.
A: We want to have the v2 of this handled initially by a small group to get the critical mass going, and then obviously everyone in the working group will be able to review and provide additional comments. But of course, if you do have cycles to contribute to the controller development for the v2 version, feel free to reach out to Tasha and me, and then we will connect you with the appropriate folks.
A: So that's the first thing here: we have to refactor and rewrite that. Also, the architecture we did for the different CRDs was just a first version. We had put certain things on the back burner, for example having a tenant template CRD in addition to a namespace template. I'm actually not going over all the details of the tenant controller here, because I'm assuming many of you are familiar with it.
A: But if you do want me to go back into the tenant controller v1 on today's call, then after the presentation from Fujitsu we can go back to the v1 tenant controller, if you want to discuss some more details. On today's call I was just planning to give a summary of the tasks that we are kicking off for v2.
A: In the model that we had earlier, we only had namespace-scoped resources in tenants, because tenants were basically built out of namespaces and namespace templates, which could not contain cluster-scoped resources. So we want to go back to whether it's possible to add cluster-scoped resources into model B, or whether that stays out of model B and is only available in model C. That's an area where we have to do some design work.
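As a rough illustration of the model B shape being discussed, a tenant built out of namespace templates might look something like the following. This is a hedged sketch only; the API group and field names (`spec.namespaces`, `namespaceTemplate`) are illustrative assumptions, not the working group's actual v2 schema.

```yaml
# Hypothetical model B Tenant object: the tenant is built out of
# namespaces, so only namespace-scoped resources appear in templates.
apiVersion: tenancy.x-k8s.io/v1alpha1   # illustrative group/version
kind: Tenant
metadata:
  name: tenant-a                        # the Tenant itself is cluster-scoped
spec:
  namespaces:
    - name: dev                         # realized as e.g. tenant-a-dev
    - name: prod
  namespaceTemplate:
    # Namespace-scoped resources stamped into each tenant namespace.
    # Cluster-scoped resources (e.g. CRDs) cannot appear here, which
    # is exactly the limitation discussed above.
    resources:
      - apiVersion: v1
        kind: ResourceQuota
        metadata:
          name: default-quota
        spec:
          hard:
            pods: "20"
```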
A: The thought was that it appears that in certain enterprises, people feel tenants ought to be able to provision certain cluster-scoped resources. For example, you may want each tenant to define their own CRDs, and CRDs today are cluster-scoped resources. So if you do not allow tenants to define cluster-scoped resources, that means you do not allow tenants to define their own CRDs, which means CRDs have to be pre-created out-of-band: a tenant has to make sure that the cluster has the CRDs they need, and there has to be a kind of offline process between the tenant and the cluster to make sure the cluster has the CRDs needed by the tenant. It would be a more self-service model if tenants could dynamically create their own CRDs.
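For reference, the constraint being described is visible in the CustomResourceDefinition object itself: the definition is always cluster-scoped, and its `spec.scope` field only controls whether the custom resource instances it defines are namespaced. A minimal sketch (the group and names are illustrative):

```yaml
# A CustomResourceDefinition is itself always a cluster-scoped object:
# any tenant that creates one changes the whole cluster's API surface.
apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 was current around 1.15
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  # "Namespaced" here only scopes the Widget *instances*; the CRD
  # object above is still shared by every tenant in the cluster.
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
```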
A: Yes, so this is an open area, and we should think about different options here. Why not? But again, if we are going to make a proposal to the other SIGs, it will be received well only if we add some meat to it. If we just ask, "can somebody think of a namespace-scoped CRD?", the typical response is: "sounds good, can you develop a prototype, then we can talk about it." That's usually what happens with the other SIGs.
A
If
we
propose
something
to
the
other
six,
we
have
to
develop
it
quite
a
bit
before
we
take
it
to
the
other
six.
Otherwise,
you
know
it
will
just
kind
of
fall
by
the
wayside,
but
yeah
your
ideas
are
valid.
It
is,
it
is
what,
while
thinking
of
a
name,
space
scope.c
Rd
option,
but
if
you
meet
around
it
that'll
be
good.
The.
A: And I think some of the proposals from Red Hat about operator groups kind of fit into that. I know we have some attendance from Red Hat, but I would welcome more input from Red Hat on how they see that, and whether they could contribute to that or any other areas here. Anybody from Red Hat on the call?
B
Yeah
Veronica.
We
also
think
that,
having
like
a
namespaced
version
of
a
CID
would
be
really
cool,
I
think,
as
you
alluded
to,
though
there's
so
much
work
involved
across.
So
many
of
these
things
to
make
that
happen,
but
I'm
not
sure
that
this
is
the
place
for
that.
I
think
that
this
is
that's
more
like
an
epi
Machinery
responsibility
and
it's
unfortunately,
not
super
straightforward,
because
now
the
set
of
API
is
that
you
see
is
different
based
on
the
namespace
that
you're
requesting.
A: Even if you don't actually have namespace-scoped CRDs, we should speculate about ways we can limit the scope of current cluster-scoped CRDs. For example, and this is just a thought off the top of my head without a full detailed analysis: let's say we have a CRD creation request, almost like a certificate signing request, where a tenant could propose a CRD template and some level of controller dynamically accepts it into the cluster.
B: That's correct. If you publish an operator that defines the CRDs it uses, there is an optional step of having that written out as essentially a proposed object, to be approved by someone with elevated permission to install it. I think the bigger issue is that all of the features being built around CRDs right now in Kubernetes are very much scoped to the cluster. For example, in 1.15 we have conversion webhooks, but you can only register one endpoint for all namespaces.
B
Unlike
admission,
look
looks
so
if
you
are
building
something,
that's
using
series
and
you
want
to
be
tenant
scopes.
Well
now
all
of
your
tenants
have
to
share
both
the
CID
version
and
the
conversion
workbooks.
So
there's
no
way
to
have,
for
example,
two
tenants
on
different.
You
know
versions
of
some
spirity
controller
on
because
they
need
to
be
synced,
because
the
requests
all
go
to
the
same
place.
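The single-endpoint constraint being described shows up in the CRD's conversion stanza: the webhook client config lives on the cluster-scoped CRD object, so every namespace (and therefore every tenant) converts through the same service. A hedged sketch, with the service name and namespace as illustrative assumptions:

```yaml
# Conversion webhook configuration (Kubernetes 1.15-era apiextensions).
# clientConfig is a single endpoint on the cluster-scoped CRD: there is
# no per-namespace or per-tenant variant of this registration.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  conversion:
    strategy: Webhook
    webhookClientConfig:            # one endpoint shared by all tenants
      service:
        name: widget-conversion     # illustrative service name
        namespace: platform-system  # illustrative namespace
        path: /convert
  versions:
    - name: v1alpha1
      served: true
      storage: false
    - name: v1
      served: true
      storage: true
```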
A: So the bottom line here is that this is an open area in the model B work. At the moment we do not have any cluster-scoped resources, and maybe that's adequate for certain deployment cases, but we should continue to think about cluster-scoped resources for model B. This is where model C comes into play: with model C you can actually have cluster-scoped resources per tenant, and that's where the primary value of model C comes in. But as I said earlier, model C is still not yet open source, so we look forward to having Alibaba bring more details on that. We may conclude that model B will only have namespace-scoped resources, and that if you really want cluster-scoped resources per tenant, then you really have to have model C, which we need to develop further. So anyway, we're not here to solve the problem; we're just saying that these are the tasks that need attention. And then there's the resource quota management.
A
So
we
had
crawl
present
sort
of
some
initial
thoughts
there,
but
we
gave
him
some
feedback
about
wanting
to
treat
resource
kotas
at
the
tenant
level
and
kind
of
decouple
them
from
the
namespace
level.
Quotas
and
also
you
know,
take
some
ideas
from
how
OpenShift
and
Rancher
do
their
quota
management
for
products
and
projects
and
cluster
resource
coders.
So
we
need
to
update
that
and
then
integrate
that
quota
management
into
the
controller
for
Model
D.
Is
there
a
write-up
for
how
OpenShift
or
Rancher
does
quota
management
well
of
their
own
respective?
A
You
go
to
the
open,
shipped
internship
web
pages
in
the
big
screen,
what
they
do.
Okay,
yeah
like
we
are
deriving
things
from
there.
So
I
was
thinking.
Maybe
somebody
did
some
perch
and
said
these
are
the
things
we
should
include
from
there.
If
you
look
at
this
object,
called
cluster
resource
coder
in
openshift
that
has
no
details
in
jail
in
Rancher.
There
is
a
project
resource,
Kota,
okay,.
A: It's a resource quota across a set of namespaces. So those are the concepts that are trying to do similar things, and we want to bring that into this work here. That would then possibly make use of an admission webhook, which is why, going back to the first point, we may want to rewrite the current PoC using kubebuilder and integrate the resource quota management webhook. So that's all a related set of tasks.
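For context on the OpenShift concept mentioned above: a ClusterResourceQuota applies one quota across all namespaces matched by a selector, which is close to the tenant-level quota being discussed. A sketch based on the OpenShift API (the tenant label key is an illustrative assumption):

```yaml
# OpenShift ClusterResourceQuota: one quota enforced across every
# namespace whose labels match the selector, rather than per-namespace.
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: tenant-a-quota
spec:
  selector:
    labels:
      matchLabels:
        tenant: tenant-a        # illustrative tenant label
  quota:
    hard:
      pods: "50"
      requests.cpu: "10"
      requests.memory: 20Gi
```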
A: Tenants wouldn't need to coordinate with each other; they could pick their own namespace names, and we would concatenate the tenant name in front of the namespace name. But there were some points that needed a little bit more discussion. So these are all design topics that we'll be going over in the next few weeks.
A
The
other
one
was,
you
know
fully
having
a
complete
POC,
with
a
complete
set
of
our
back
and
part
security
reference
policies
to
implement
this
controller,
so
that
we
make
sure
we
really,
you
know,
have
a
fully
working
solution
and
some
of
that
may
tie
into
the
baseline
cluster
security
profile
work
as
well.
But
here
we
are
really
talking
about
the
security
profile,
the
the
features
that
go
into
the
namespace
templates
right.
A
So
we
want
to
have
a
fully
defined
set
of
policies
to
go
with
this
controller
and
then
the
last
thing
is
we
want
to
think
about
other
other
aspects,
including
add-on
services
right
so
integrating
this
tendency
model
with
monitoring
and
logging,
specifically
using
Prometheus
so
having
a
design
and
a
prototype
implementation
of
multi-tenant
Prometheus
using
the
same
CRD
as
as
model
B,
v2
and
sort
of
having
that
be
the
complete
solution.
So.
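One plausible starting point for the Prometheus investigation mentioned above is simply scoping a per-tenant Prometheus to that tenant's namespaces via RBAC, so it can only discover and scrape targets the tenant owns. This is a hedged sketch of that idea, not a working group design; all names are illustrative:

```yaml
# Per-tenant Prometheus service account limited to the tenant's
# namespace: discovery/scraping RBAC is a Role, not a ClusterRole,
# so targets in other tenants' namespaces are simply not visible.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-tenant-a          # illustrative name
  namespace: tenant-a-dev            # one of tenant A's namespaces
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-tenant-a
  namespace: tenant-a-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-tenant-a
subjects:
  - kind: ServiceAccount
    name: prometheus                 # tenant A's Prometheus SA
    namespace: tenant-a-dev
```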
D: Does that include the logging, like Fluentd, or only metrics?

A: Yes, yes, logging as well. So again, there is a whole set of additional services here; I'm not even describing the whole thing yet. This is more of a subsequent phase once we close on the exact spec. It can be partially in parallel, but we definitely want to close on the v2 controller spec.
A
You
know-
and
this
is
again
in
open
area
for
somebody
to
come
in
and
and
and
raise
their
hand,
and
if
they
want
to
pick
up
a
of
these
areas
to
say:
okay,
a
let's
look
at
what
can
be
done
with
current
existing
prometheus
that
can
the
are
back
model
of
current
Prometheus
fit
into
this
tenancy
model.
Will
some
new
functionality
be
needed?
A
If
not,
we
at
least
produce
a
reference
POC,
and
if
we
find
some
gaps
are
there,
then
we
will
have
to
go
back
and
talk
to
the
Prometheus
guys
or
whatever,
but
right
now
the
initial
task
would
be
just
develop
a
reference
implementation
and
a
POC
in
which
we
have
Prometheus
as
well
as
efk,
working
in
the
same
multi
descent,
tenant
setup
as
the
tenant
controller
v2.
Well,.
A: We haven't listed every task, because there are these baseline tasks up front which need to happen even before those. So we will continue to clean up the project board and the task backlog; it has been listed, but we probably need to do more in terms of a formal backlog. Are you interested in picking up these tasks right away, in parallel?
G
Yeah
we
have,
we
have
some
of
these
issues.
We
do
multi-tenancy
for
over
a
year
now-
and
I
just
was
curious
to
see
if
there's
any
issue
that
I
can
follow
or
that
we,
maybe
even
that
kind
of
contribute
to
depends
a
bit
on
what
exactly
it
is
so
I
was
looking
into
the
kubernetes
6
multi-tenancy
issues
list
and
I
couldn't
find
those
things
here.
So
I'm
I
was
wondering
where
you
tracked
them.
Yeah.
A
So
we've
got,
you
know
it's
kind
of
a
mix
right
now
we
haven't
been
formally
using
the
issues
list
and
neither
have
people
being
submitting
issues
so
that
information
is
spread
between
some
of
the
docs
page
under
the
links,
as
well
as
the
project
board
that
are
mentioned.
But
maybe
we
you
know,
as
I
said
earlier,
will
have
a
little
bit
cleanup
of
our
issue
management
process
and
whether
we
encourage
everybody
to
submit
issues
or
whether
they
encourage
submit
entries
to
the
project
board.
A: If you want a technical understanding of these different models, take a look at the working group deep dive; that's separate from the issue tracking and the backlog tracking in the project board. But that is where to look for things like a full backlog of items, either immediate items or slightly longer-term items.
C: Yeah, so what I would say for new members is that all the docs are in the GitHub location, github.com/kubernetes-sigs/multi-tenancy. If you click into the docs, the links page has the links to all the documents that Sanjeev is referring to. We also have a YouTube channel that has all of the multi-tenancy videos that you can watch, including the KubeCon presentations, for people who don't know what we're talking about when we refer to the different models of multi-tenancy.
C
There
are
very
good
reviews
there
and
everything's
on
YouTube
and
posted.
So
what
I
would
say
is
like
just
take
a
look
at
the
deep
dive
and
or
if
you're,
really
brand
new,
take
a
look
at
the
intro
from
ku
con
EU,
which
was
just
a
few
weeks
ago,
where
Sanjeev
and
I
went
over.
What
the
working
group
is
been
working
on.
Take
a
look
at
the
project
plan
and
it's
really
very
simple,
like
what
is
in
the
project
plan
is
in
the
backlog
on
the
github
Docs.
A
Okay,
one
small
question
related
to
the
monitoring
and
logging
part
when
we
are
saying
phase
one
of
tenancy
for
add-on
services.
Monitoring
logging
are
we
thinking
about
how
in
a
multi-tenancy
world
prometheus
is
going
to
emit
metrics,
which
are
only
visible
to
one
tenant
metrics
of
one
tenant
being
visible?
Only
for
that
only
to
that
tenant
users,
that's
right,
okay
and
similarly,
logging,
I
guess.
H: We launched the proposal about two weeks ago. The current version supports the Docker runtime and NFS as the storage backend, and in future versions we will support more.
H: Okay, can you see my screen? Yeah.
A
You
know,
is
there,
do
you
want
to
first
sort
of?
Have
this
be
a
you
know,
core
dump
solution,
independent
of
multi-tenancy
and
then
as
an
additional
phase,
make
it
tenant
aware,
depending
on
the
tenant
CCR
DS,
that
we
come
up
with?
How
do
you
expect
to
tighten
with
multi-tenancy
because
it
is
useful
on
its
with
or
without
multi-tenancy.
A: Not just that, but if you want a tenant-aware core dump function, you should be able to have things like: okay, here's your common remote core dump location, but cores from tenant A are not accessible to tenant B, and cores from a sub-tenant of A are not accessible either. Is that just a question of RBAC done appropriately, or something more?
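Assuming core dumps land as objects (or files referenced by objects) in per-tenant namespaces, the isolation being asked about could indeed start as plain RBAC. A hedged sketch follows; the `coredumps` resource and its API group are hypothetical placeholders, since the actual API of the Fujitsu feature was not shown here:

```yaml
# Hypothetical per-tenant access control for core dumps: tenant B's
# developers can read core dumps only inside tenant B's namespace.
# "coredumps" / "example.fujitsu.io" are placeholder names.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: coredump-reader
  namespace: tenant-b-ns              # illustrative tenant namespace
rules:
  - apiGroups: ["example.fujitsu.io"] # placeholder API group
    resources: ["coredumps"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: coredump-reader
  namespace: tenant-b-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: coredump-reader
subjects:
  - kind: Group
    name: tenant-b-devs               # illustrative tenant group
    apiGroup: rbac.authorization.k8s.io
```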
A: No, it could be that RBAC is sufficient. I just would like to see it clearly spelled out: essentially, an architecture diagram which shows n tenants sharing a cluster, each of them having their own core dumps, with either a shared remote core dump server or dedicated servers; and whether it is just a question of setting up RBAC correctly, or something more than that. Right, yeah.
C: So I think a really good demo, and to echo Sanjeev's point, just having a sort of topology diagram explaining how you would use this in a multi-tenant cluster, would be awesome. And since this is a tool that we think would be useful for people even if they're not running in a multi-tenant situation, it's probably a tool that we should suggest gets talked about in maybe SIG Auth as well. Which group do you think would be best, Sanjeev?