From YouTube: Kubernetes Working Group Multitenancy 20190129
Description
Discussion of the proposed project plan here: https://docs.google.com/presentation/d/1dsAsVm8kCA9Dx9_gMEYeJL7pduAbnfnxT9lhbyCvHDg/edit#slide=id.p9
A
So this is the multi-tenancy working group meeting for Tuesday, January 29th. Last week's meeting we went over a working group proposal that included a project plan, definitions of soft and hard multi-tenancy, and proposed work around multiple phases of defining and implementing soft multi-tenancy and then also hard multi-tenancy.
A
And you can also find this by going to the agenda for this group, which is right here; the link is right there, and it's also been sent out on the mailing list. So we went over all of this content at the last meeting, and the next step we're looking at is moving this information formally into a working group architecture/framework document accessible via our wiki page, and then documenting Model S1 completely in that document.
A
Okay, so specifically, what we're looking at for next steps and timelines is the working group coming together and agreeing that this is the direction we want to start pushing in, and then we're going to start working on publishing the soft multi-tenancy S1 model and functional definition.
A
So, as we work towards doing that, we have a bunch of documentation in here that has some good kicking-off points and goes over what the multi-tenancy model for S1 is. This was all gone over in some depth at the last meeting, but I'd just like to open it up to this group. As far as looking at this path, do we have any thoughts about changing the immediate direction? Have people taken the time to go over this? Are there any initial reactions to this?
C
I'll comment on that; if somebody wants to add anything, that's fine too. So the thought at this point: firstly, this is a strawman, right. We want to provide a reference, but we don't want to be overly prescriptive, so we are welcoming what everybody feels in terms of how much to prescribe and how much not to prescribe. This is a candidate reference model where we use the default namespace.
C
Now, as I have mentioned in the notes below, there is a possibility that sooner or later we would need multiple shared-services namespaces, because maybe you want to shard the shared services. For example, shared service one is only for tenants A and B, and shared service two of the same type is only for tenants C and E, and perhaps you do not want to put them in the same namespace for additional isolation purposes. So the thought was that we would create a reference model using this to start with.
C
Then there's the question of how much of this we want to capture in the Kubernetes conformance testing and automation. Keep in mind, this will be, I mean, it would be a kind of optional component of Kubernetes conformance, so I'm using "conformance" in a slightly loose sense here. Once we've gone through that, the thought is to keep it agile and keep iterating over time, so this is going to be a bit of a dynamic process.
D
It seems to me, from the slide before, that I think we've agreed there's not a one-to-one mapping between tenants and namespaces, and tenants can have multiple namespaces, so we already have to maintain a mapping between tenants and namespaces somewhere. It's just one more extra line in that mapping, even if each tenant sits in a single default namespace in the beginning.
C
I should have probably tagged them better. These are four worker nodes, and here the three colored bars are three namespaces and also three tenants, A, B and C, and the dotted square below is one of the system namespaces like kube-system, which potentially could run across all these nodes, or the default namespace.
D
If you're not going to put something in like a broker API or a forced mutating admission controller, which gets pretty complicated, whereas if you could actually add the nodes into the RBAC model, it would just naturally fall out of the object definition and not rely on anything happening on the workload side.
G
I added a comment, but one thing that I had an opinion on was, for maybe slide 9, just that if you're getting started, if you use Docker you're going to have a harder time using alternative runtime engines. If you use something like CRI-O or containerd, it's very simple for people to just plug in a lot of other OCI-compliant runtimes.
G
I wouldn't start with that. If I was getting started, I would use a more flexible CRI implementation, something like containerd or CRI-O, and the reason I say that is because if you want to use something like Kata, that's going to be step one. Modifying a cluster that already has CRI-O or containerd is very easy: you can just run the DaemonSet and then you can go on and be using alternative runtimes on each node.
C
Since the common one is Docker, the regular Docker runtime, the emphasis with Model S1 is to go with the sort of common topologies that are used today, and then maybe have a Model S2 which covers newer runtimes and other newer features. That's the thought so far. If you think there are pros and cons to this, we could discuss it.
G
I wouldn't go in here and say y'all should use the Kata runtime right now; that doesn't make any sense, especially for S1. But I'm saying if you start with S1, for the gap between S1 and S2, if you use a runtime, excuse me, a CRI implementation, that has some flexibility, it'll be a lot easier for people to move from S1 to S2 or to live somewhere in between.
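A minimal illustrative sketch (not from the meeting) of how a workload could opt into an alternative sandboxed runtime once a flexible CRI implementation is in place, using the official Python kubernetes client. The RuntimeClass name "kata", the namespace, and the image are placeholders, and RuntimeClass itself requires a newer cluster than the 1.13 discussed here.

```python
# Illustrative sketch only: with containerd or CRI-O on the nodes and a
# sandboxed runtime installed (for example via a DaemonSet), a workload can
# opt into it per pod. Names below are assumptions, not agreed conventions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "sandboxed-app", "namespace": "tenant-a-dev"},
    "spec": {
        # Selects the alternative runtime registered on the node; omit this
        # field and the pod runs under the default runtime unchanged.
        "runtimeClassName": "kata",
        "containers": [{"name": "app", "image": "nginx:1.25"}],
    },
}
core.create_namespaced_pod(namespace="tenant-a-dev", body=pod)
```

The point being made above is that, with a flexible CRI implementation already in place, moving from S1 toward S2 becomes a per-pod opt-in rather than a cluster rebuild.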
B
To add to that, isn't containerd going to be the default starting in 1.13 or 1.14? It's proposed already, or committed. Well, I know containerd is going to be the default soon. So maybe that is something we just wait for: what we target for S1 is this version of k8s or above, and then sure.
C
And again, we would welcome your expertise on all of this, because these are very broad topics. All of us have a certain perspective, but really we need to collectively pool our inputs to make this work, because there is such a broad collection of topics here that we are not necessarily up to date on the exact status of, let's say, containerd.
C
One question I had for everyone attending here, and anyone else you think might have input: the thought was that we would create a reference model, then look at adding some kind of conformance testing in Kubernetes upstream, testing it maybe as a special test suite, like bullet item three says, a third-party test suite, and then go from there.
C
Does that sound like about the right approach to everyone? Does anyone have a drastically different sort of suggestion?
E
I was supposed to do this and I forgot: get some guidance on conformance, and on how and whether this can be integrated into conformance. I had a discussion about this, I think I mentioned it a couple of months ago, with Brian Grant many months ago, maybe six months ago, and there were some reasons why we felt that these kinds of security levels, or isolation levels or whatever you want to call them, weren't a good match for conformance, so we kind of dropped that idea. I'm pretty sure I told Tasha, and I never did it, that I would get Brian to write something up on what the guidelines for Kubernetes conformance are and how it works, and then we could go from there to figure out how to integrate this, because there were some unusual rules there that I hadn't expected about conformance tests that made it difficult to work this idea into conformance. I no longer remember most of the details; the only one I remember was that you can't use RBAC in conformance tests, from what I recall. So I need to get that written down, get him or someone else from the conformance working group to write down the guidelines, so that then we can make a decision about how and whether we could do that. That'll be my comments on that piece. And yeah, on the controller to enforce this: I think that aligns pretty well with the security profiles stuff he has talked about in the past, for configuring a cluster with the right command-line flags and policy objects for a particular policy. So that might be a good candidate to consider as a starting point if we decide we want some kind of controllers to enforce this, but obviously getting the different policies, or levels or whatever, written down first is the first step. Yeah.
C
Perhaps that would be sort of like Model S2, where anything that we need that doesn't exist in Kube 1.13 is basically Model S2. Again, these names are slightly arbitrary, let's just say, but the point was that Model S1 is everything that you can do with what's already there, up to and including Kube 1.13, and anything that's not there is a sort of subsequent model.
C
New CRDs, at least at that time, were part of Model S2. But if we make rapid progress, and it has been pointed out that the upstream community is becoming more and more in favor of having more and more CRDs, perhaps they will now be much more open to welcoming new CRDs for anything that we think is necessary.
D
So my main comment would be about the one-to-one tenant-to-namespace mapping. Given that this S1 is targeting on-prem, I think a really common use case on premises is that the tenants will have dev, test, staging and production environments. Some people have separate clusters, but a lot of people will have a single cluster, or perhaps one for non-production and one for production. And I think we will, if we're going to do the taints-and-tolerations thing to map the tenants to nodes, already need some kind of CRD to store all that mapping. It doesn't seem like such a big stretch to have that mapping allow multiple namespaces per tenant, as opposed to just fixing it to one. I think, if we build all the tooling around the one-to-one assumption, it would be an odd thing to add later on.
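As a rough sketch of the mapping being discussed, assuming a hypothetical Tenant CRD (the group, version, and field names below are invented for illustration and would need to be defined by the working group), a tenant object could list its namespaces, and a controller reconciling it could also apply the node taints mentioned above:

```python
# Illustrative sketch only: storing a tenant-to-namespaces mapping in a
# hypothetical Tenant custom resource. The CRD is assumed to already exist.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

tenant = {
    "apiVersion": "wg-multitenancy.example.com/v1alpha1",  # hypothetical group/version
    "kind": "Tenant",
    "metadata": {"name": "tenant-a"},
    "spec": {
        # N namespaces per tenant rather than exactly one.
        "namespaces": ["tenant-a-dev", "tenant-a-test", "tenant-a-prod"],
    },
}
custom.create_cluster_custom_object(
    group="wg-multitenancy.example.com",
    version="v1alpha1",
    plural="tenants",
    body=tenant,
)

# A controller reconciling the object above could also pin the tenant to nodes
# with taints, which workloads then tolerate; the node name is a placeholder.
core = client.CoreV1Api()
core.patch_node(
    "worker-1",
    {"spec": {"taints": [
        {"key": "tenant", "value": "tenant-a", "effect": "NoSchedule"}
    ]}},
)
```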
C
We are not, so Model S1 is not really modeling a service-provider kind of tenancy. It is still an enterprise, soft multi-tenancy kind of model, so it would be pretty natural. This is very analogous to OpenShift projects: if you are familiar with OpenShift, in OpenShift one project equals one namespace, and you would often do a dev namespace and a testing namespace, and those would be two different OpenShift projects.
D
I understand the motivation. What that's doing, though, is pushing work out, because the commonality between, say, your particular dev, testing and whatever environment namespaces, or tenants in this case, is probably that it's the same people or groups, the same things in the RBAC, and you're pushing that back out to the actual tenants to have to manage all of that.
B
From an implementation perspective, and not to get too technical, I assumed, from a tenant perspective anyway, that we would follow some sort of labeling: labeling a namespace to say this is a tenant, and then in the future we could add in another label saying this is the tenant ID.
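A minimal sketch of that labeling idea, assuming placeholder label keys and names (no convention has been agreed here):

```python
# Illustrative sketch only: marking a namespace as belonging to a tenant with
# labels. Label keys and values below are assumptions, not an agreed scheme.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

ns = {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "name": "tenant-a-dev",
        "labels": {
            "tenant": "tenant-a",   # "this namespace belongs to a tenant"
            "tenant-id": "a1b2c3",  # possible future addition mentioned above
        },
    },
}
core.create_namespace(body=ns)
```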
C
Right, yeah, that's the thought: start with the one-to-one mapping and keep the N-to-one in mind. And again, one-to-one fits naturally with RBAC as it exists today; it is also what products like OpenShift and Rancher do, so that's sort of the reason for that. The main thing is, again going back to how much we want to be prescriptive about and how much we want to be non-prescriptive about, because one piece of feedback I anticipate coming is:
G
Okay, a quick question just so I understand what you're describing: you kind of have a mixed cluster, where it's both mixed tenants as well as mixed workload trust, like the pre-production and post-production. You're saying there isn't a one-to-one mapping because a particular developer could be wanting to write to two different, essentially have two sandboxes they deploy to, effectively? Yes.
C
Yeah, the distinction is just that we want a starting point that mirrors what people are already doing with solutions like OpenShift, so that's what we're targeting. And that does not preclude things like a dev tenant and a production tenant versus pre-production, because these are all really not tenants; these are more like projects as opposed to tenants, so your team could have five different projects on the same cluster.
I
Hi, I have a question to understand the S1 model a little bit better. For S1 we're assuming that we don't have any malicious actors, but do we offer any kind of protection against accidental errors? For example, for ingress: if you're saying that that's a shared service, well, one bad ingress object could break ingress for everyone. So if you're going to say S1 is the low-hanging fruit in soft multi-tenancy, then we should offer some protections against that.
I
So we should offer either multiple ingress controllers or some mechanism where the various tenants can be assured that some other tenant cannot break their ingress routes. Or the scheduler is shared, and one tenant could accidentally cause the scheduler to evict the pods of other tenants. So some kind of resource management would be absolutely necessary if we're going to say this is the S1 multi-tenancy model. What kind of protections are we offering so that this actually works if people adopt it?
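One common way to contain the ingress blast radius described above is to run per-tenant ingress controllers and bind each tenant's Ingress objects to their own class. A hedged sketch, assuming placeholder names and a cluster new enough for networking.k8s.io/v1 (older clusters, like the 1.13 discussed here, used the kubernetes.io/ingress.class annotation instead):

```python
# Illustrative sketch only: an Ingress bound to a per-tenant class, so a bad
# object in one tenant's namespace is only seen by that tenant's controller.
# Class name, host, namespace, and service are assumptions.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "web", "namespace": "tenant-a-dev"},
    "spec": {
        "ingressClassName": "tenant-a",  # served only by tenant A's controller
        "rules": [{
            "host": "app.tenant-a.example.com",
            "http": {"paths": [{
                "path": "/",
                "pathType": "Prefix",
                "backend": {"service": {"name": "web", "port": {"number": 80}}},
            }]},
        }],
    },
}
net.create_namespaced_ingress(namespace="tenant-a-dev", body=ingress)
```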
C
So there will be a level of isolation, but it's the level of isolation that, again, projects like OpenShift give you; you're not getting the kind of isolation that you get between two tenants on AWS. We're not talking about that level of isolation; we're not talking about Coke-versus-Pepsi isolation. So if you look at the slides here, Tasha, if you can go to maybe slide 7, or actually keep going down a little bit, maybe 9.
I
Well, the whole question is the definition of S1. Wouldn't S1, which is our base entry for soft multi-tenancy, have some optional components that allow the people who adopt it to have some protection around these shared resources, like the scheduler and ingress controller? They're optional, right, but how can the base model not protect you?
C
The thought is, yes, you will have the protection. Again, the thought so far is that Model S1 is almost like a best-practices document. Now, depending on how much we want to put into an operator that actually enforces this, that depends on our execution over the next few months, but at least the initial thought is that, at the very least, this would be a best-practices document: if you want to conform with what upstream is calling soft multi-tenancy version one, this is what you need to do.
C
Okay, and we are not necessarily providing an operator, but we are prescribing how you should be setting up your security policies, your RBAC, your API server and all that. Whether we actually put in a controller to enforce that depends on whether we get that done in the same time frame, and other things, yeah.
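A sketch of the kind of RBAC setup such a best-practices document could prescribe: bind a tenant's group to the built-in, namespace-scoped admin ClusterRole in that tenant's namespace only. Group and namespace names are placeholders:

```python
# Illustrative sketch only: tenant A's group gets admin rights inside its own
# namespace and nothing cluster-wide. Names below are assumptions.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "tenant-a-admins", "namespace": "tenant-a-dev"},
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "admin",  # namespace-scoped admin, no cluster-wide rights
    },
    "subjects": [{
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "Group",
        "name": "tenant-a",
    }],
}
rbac.create_namespaced_role_binding(namespace="tenant-a-dev", body=role_binding)
```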
E
That's hard, because it's different for everyone. You can do something to address the ingress thing, but for quota, how much quota would you give each namespace? I think that would be hard to incorporate. You can say everybody should use quota, but I'm not sure you can set the quotas in some kind of universal way that would work for everyone.
I
How would they respond to that? Tenants complain, and if there is no response, then is this S1 a Minimum Viable Product, right? Will anybody use it if one tenant can cause other tenants' workloads to get evicted?
E
So are you concerned about preemption? For quota, you can set that per priority level, and you can avoid the preemption that you're talking about; I don't know if that's what you're referring to. But I don't know how we can set quotas in something that applies to everyone.
E
We can tell people to use quotas and to set per-priority quota, which is a feature that's already in Kubernetes, to prevent the wrong people from preempting the wrong people's pods. But I don't know how you can formalize that in something that's automated, because everybody's quotas are going to differ.
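A sketch of the per-priority quota feature referred to above: a ResourceQuota scoped to a PriorityClass can simply forbid a tenant namespace from creating pods at a privileged priority, so that tenant cannot preempt others. The priority class and namespace names are placeholders, which is exactly the point being made: the concrete values differ per deployment.

```python
# Illustrative sketch only: zero quota for a privileged priority class in a
# tenant namespace, so workloads there cannot preempt other tenants' pods.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "no-high-priority", "namespace": "tenant-a-dev"},
    "spec": {
        "hard": {"pods": "0"},  # no pods allowed at the scoped priority
        "scopeSelector": {
            "matchExpressions": [{
                "scopeName": "PriorityClass",
                "operator": "In",
                "values": ["high-priority"],  # placeholder class name
            }],
        },
    },
}
core.create_namespaced_resource_quota(namespace="tenant-a-dev", body=quota)
```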
F
S1 is viable if it doesn't have that, if that isn't a requirement for the given use case. I agree that that's probably a pretty likely use case, or a valuable use case to be solved, but it's possible that in certain soft multi-tenancy situations the distributor of the software might not have this problem, because they control all of the software that's being run.
C
The goal is, firstly, literally however much we can do. The goal is to firstly have a reasonable level of solutions available for soft multi-tenancy by the time of KubeCon Europe in May, end of May, right. And really this model we're talking about is whatever you can do with Kube 1.13, which is the latest Kube that is out; and then, depending on how we end up adding new objects after Kube 1.13,
C
that will be a next phase of soft multi-tenancy, and that may end up being post-KubeCon and so on: new controllers and so on. So I know I'm being a little bit open-ended here, because this is a very broad topic, and in order to keep it going we want to get some simple, quick wins, right.
C
So a simple, quick win is a documented best practice, along with some test automation, saying: if you follow this best practice, you will get reasonable soft multi-tenancy today, and we are in the process of developing controllers for it and adding them to upstream, which will be available in maybe Kube 1.14, 1.15, 1.16, depending on how we end up making progress.
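A sketch of what one such automated check might look like, run with a tenant's own credentials: use a SelfSubjectAccessReview to verify the tenant cannot read another tenant's namespace. Namespace names are placeholders, and a real suite would cover far more than this one assertion.

```python
# Illustrative sketch only: one check in a hypothetical soft multi-tenancy
# test suite. Run with tenant A's kubeconfig; names are assumptions.
from kubernetes import client, config

config.load_kube_config()  # assumed to hold tenant A's credentials
auth = client.AuthorizationV1Api()

review = {
    "spec": {
        "resourceAttributes": {
            "namespace": "tenant-b-dev",  # some other tenant's namespace
            "verb": "list",
            "resource": "pods",
        },
    },
}
result = auth.create_self_subject_access_review(body=review)
assert result.status.allowed is False, "tenant A should not see tenant B's pods"
```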
B
Yeah, if I could just add, I guess, I think something that we should append to this best practice, to kind of calm some of these questions, is some common use cases, or just some examples of what this can and can't do, just to make it clear that, yeah, there are some holes in this, but if this is what you're looking to do, you can accomplish it with this model.
I
Okay, so I see sort of a conflict in here. I think it's perfectly fine to target a date for a particular release, but if you're going to name something and call it Model S1, it needs to be complete and a Minimum Viable Product. To say that Model S1 has a target release date, and that's why we have to trim down its requirements: I don't think it should be tied to a named release. True, true, right.
C
Okay, that was in a manner of speaking, but really what Model S1 is trying to do is more or less match proprietary solutions like OpenShift projects; that's really what it is, with or without a controller, and the controller part depends on whether we get it done in time. So Model S1 is trying to mirror in upstream what vendor solutions like OpenShift are doing. That's really the emphasis of Model S1. That timeline was just sort of our own little checklist of when we want to get this thing done.
E
I'm not an expert on OpenShift, but I think I'm a little confused here. I mean, OpenShift, I understand the projects stuff, but in general OpenShift has various features for multi-tenancy and isolation. Most of them are just the same as the upstream features, and then they have some small number of unique features, as I understand it. But all of these things are, like...
C
So, as a result, the target for Model S1 is an enterprise operator that wants to provide service to N number of teams sharing a cluster, which is exactly the use case of OpenShift as well. If you have a specific metric that is of interest to you, please define it. Your metric could be "I want ingress to be isolated in such a manner." So if you can give us a specific set of metrics that you would like to cross-check against each model, we would be happy to do that.
C
Otherwise it's an open-ended question, because if you ask the same question of OpenShift, they'll give you the same answer: well, it's soft multi-tenancy, it depends on what features you turn on, you might get some isolation. So this is exactly along the same lines as what David was saying, but if you have a specific set of metrics that are of importance to you, please list those out, yeah.
I
The two that I mentioned are kind of important to me. If you're going to use ingress, then we have to be able to protect the different tenants from accidental misuse; and also the scheduler is a shared resource, and whatever we can give the operator to manage QoS, so that one tenant cannot cause something like a noisy-neighbor problem in the cluster.
A
I think those are listed under tenant namespace setup on this slide, with curated tenant resource quota and then curated Kubernetes ingress and service model. So it seems like we at least have the right set of features listed here to answer what you're looking for, and then as we progress and start building out the recommendations, you should definitely take a look at those and make sure that they're going in the direction that you would need them to, and then offer feedback there.
C
It needs to be more specific than saying "ingress isolation"; it needs to be a very precise definition of isolation, otherwise there's some amount of ambiguity that comes in. So yeah, as Tasha said, these are all metrics that are on the table, but if you have a precise definition of isolation, then feel free to provide that so that we can cross-check against whatever we do. Thanks.
D
I just want to throw in again: some of these things, if there's a one-to-one between tenants and namespaces, you almost get out of the box, and it only becomes tricky when, for example, the tenant stretches across five or six namespaces, things like that; that's when you need to do something, yeah.
C
Standard Kubernetes. Model S1 is all about just providing a reference, best practice and automation around existing Kubernetes constructs. We're not defining anything other than possibly some controllers; we're just providing a reference model so that when two people define a multi-tenant cluster, they can look at this document and they can both get similar behavior, yeah.
A
I think, as we look at what S1 is and we see the gaps, it'll be a really good opportunity for us to write down all of those gaps and then use those to inform how we improve for both S2 and H1, because everything that's coming up right now is actually super valuable just for saying what we can achieve today, and if it's not on the list, but it's something we all agree is important...
D
I just wanted to raise something that I think is probably worth more discussion: the bullet about being minimally dependent on network, storage and cloud provider. I think that one needs to be unpacked a bit, because if we start mandating some of those things, there are enough differences between them. We're in that situation right now, because we have bare metal and VMware, and trying to come up with a storage model that is kind of transparent to our consumers across different things actually turns out to be non-trivial.
D
We've been engaging with one vendor because we have a lot of Windows stuff, so they're actually building a Windows version which is sort of the same. And then when you get down to storage and stuff, it's different enough across cloud providers. It just seems like that's a thorny problem to be wading into.
C
The thought when I put that bullet in was: at any point where we have to make a recommendation that could vary from one network plug-in to another, we want to use a lowest common denominator, or a relatively common network plug-in. So, for example, Calico would be a relatively common network plug-in, and vSphere would be a relatively common on-prem provider. So the thought was that if any of this becomes dependent on the kind of provider, then maybe Calico and vSphere would be the ones to pick, because they are equally common.
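A sketch of the kind of plug-in-agnostic recommendation this could translate to: a NetworkPolicy that only allows traffic from pods in the same tenant namespace, relying solely on the upstream API so that any conforming plug-in (Calico among them) enforces it. The namespace name is a placeholder:

```python
# Illustrative sketch only: per-tenant-namespace default that denies ingress
# from other namespaces while allowing same-namespace traffic.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-same-namespace-only", "namespace": "tenant-a-dev"},
    "spec": {
        "podSelector": {},            # applies to every pod in the namespace
        "policyTypes": ["Ingress"],
        "ingress": [{"from": [{"podSelector": {}}]}],  # same-namespace pods only
    },
}
net.create_namespaced_network_policy(namespace="tenant-a-dev", body=policy)
```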
D
Okay, yeah, that's starting to seem awfully prescriptive, because it's not part of k8s; it's additional, external, third-party-provided things.