From YouTube: Kubernetes WG Multitenancy 20181024
Description
Notes and Agenda: https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY/edit#
E: Hey, I'm Tasha. I think I've met most of you before. Previously, at the last meeting, we had been talking about this idea of soft multi-tenancy, and looking at how different enterprises have achieved that within Kubernetes using existing constructs — basically the ability to share a cluster between multiple different groups — and so I was hoping...
E: I included how we're currently locking down a multi-tenant cluster, just as a point of reference, and I assume other people would have ways that they are locking down multi-tenant clusters as well, that we could share and kind of collaborate on. Does that make sense? Everybody okay? Cool.
E: Okay, so I put some reference links up at the top. I'm sure everybody has looked at these before, but these are some docs that I've gone through to inform my thinking around some of these topics. The introduction goes over the fact that there's more than one type of multi-tenancy — there are a couple of different definitions of the tenants. For this proposal, we're focusing on a single use case, which I think it sounds like...
E: ...a lot of people have already run into and implemented, which is this idea that: I am running a single Kubernetes cluster; I am an enterprise, so I have not opened this up to the world; I don't have to worry about complete randoms on the internet attempting to attack this cluster from the inside; but I am sharing it between different business units or different groups. And how am I achieving what we're calling soft multi-tenancy for this use case?
E: So what we were thinking would be good is — if other people are also running into this use case, and it seems like there are some parallels that we can use to start having a stronger definition of what soft multi-tenancy in Kubernetes is and looks like — what would be cool is if we could agree on a set of rules.
E: Okay, so we thought some good ways to get started would be to share multi-tenant constructs from different groups and see if there are some common patterns in how we're setting stuff up — there might be some really good ideas, or we might all be doing it in exactly the same way already — and then also share current conformance test failures or skips, both to inform new test harness changes and also to show some of those common patterns that are already occurring.
D: It seems to me that one of the interesting challenges with having conformance is: if you create the ability to just say, all right, I'm going to create a profile, I'm going to give it a name, and off the back of this profile I'm going to basically say we should fail these conformance tests because of this profile — I can see that potentially leading to kind of a slippery slope argument, where anyone who's failing conformance tests can just define a profile and say...
D: ...oh, this is our own unique thing that fails these particular tests — and the whole idea of conformance kind of goes out the window. So I think in trying to explore what is special about soft multi-tenancy, and in making the case that it's not that, right — that it's something that is very common to a lot of people, and maybe should be well defined and should have a well-defined set of tests — I think that's partly the argument that wins it, anyway, right?
E: Okay, so we had a bunch of suggested processes for how we could arrive at a draft proposal: define our assumptions; define explicitly the parts of the control plane which are shared in this model, and the risks involved, building off of Jess's hard multi-tenancy definition; define the high-level areas that require an opinionated approach; define the delegation model between admin and consumer; define what's meant by extensibility in this model; and understand and quantify the impact on testing and compliance.
E: I think this is probably better if everyone just wants to read it and comment — kind of collaborate on that piece. It's definitely not every possible idea that everyone could have, so it's pretty open for comment and edit. And then down here we just shared how we're currently achieving multi-tenancy, in this example. So we go over, you know, what are we doing with storage classes?
E: So yeah, it's a lot to look at. And then we have the current conformance tests that we're failing, and a couple of people have sent me a link to one that Azure was having problems with, I think back in June or something. I think that's a really good example of a test that you can't just skip, but that has a problem, you know, that should probably be addressed too. So yeah, this is, in a nutshell, what we're looking at and what we'd like to talk about.
F: Hi, this is Sanjeev at Cisco. So yes, my team and I are also in the early stages of looking at multi-tenancy, and this looks like a good set of thoughts that you've drafted, so that's great, and I'm happy to contribute my bit. One quick comment was that this working group seemed to have had a few draft documents in the past, some of which have been translated into KEPs.
F: Would it be useful just to catch everybody up, to understand whether this would be the new master document for the group, or whether this is a supplement or an addition to the ongoing KEPs, which are doing things like namespace templating and security profiles? It would be good to also lay out where this document stands within the set of documents that this multi-tenancy group wants to put out. Oh yeah.
B: I was going to bring up exactly those two projects. We have some work that we've talked about in the past on security profiles that's related to this, and there was, at one point, an effort to try to formalize those in conformance. The effort was kind of dropped for reasons I don't entirely remember, although I was involved in that discussion. I know one of the reasons was that SIG Architecture told us that RBAC is not considered a core part of Kubernetes, and therefore you couldn't write —
B: You couldn't include it in conformance, or something like that. I'd have to go back and look at my notes on the discussion, but I think that we could try to resume that effort, to have some kind of conformance profiles related to multi-tenancy. And then the third part is that the namespace KEP that you referred to is, I guess, one mechanism for enforcing certain kinds of policies on a per-namespace basis — so it's kind of at the mechanism layer of the stack. But I think all of those are relevant, and I don't know what the right answer to your question is. I guess I'd be interested in input from other people about whether this document should incorporate those, or whether this document is intended to become a KEP, or whether we should incorporate the ideas from this document into some of the KEPs that have already been written. I guess there's a variety of options.
D: ...that says RBAC is not part of core Kubernetes and therefore maybe can't be part of conformance. But I think the reality is that, if we do this, then it touches more than just RBAC, right? It touches privileged containers. It touches the defect — the bug that you mentioned, which is the exec privileges thing or whatever it is — there's also that. I mean, it may touch...
D: ...DaemonSets, right? If you want to be able to limit the number of nodes a DaemonSet is deployed to for a particular tenant, for example. It may touch more areas than RBAC, and I think maybe identifying what those areas are and calling them out might be a helpful starting point. Yes.
B: Sorry, I just want to clarify — I didn't mean to suggest that that effort was hopeless. I was trying to remember the reasons that we got discouraged and kind of suspended it. And by "we" I think it was just a couple of folks at Google — I don't think it had gotten too far in the community — but we had brought it up to SIG Arch just as a general concept and got that feedback about RBAC. But I totally —
F: On this point about whether it should be part of conformance tests or not: we all seem to be anticipating that this is going to be a bit of a gray area in terms of whether this is just a recommended design practice or whether this is a mandatory set of capabilities, and it seems like, at least initially, it would be a set of deployment recommendations more than a mandatory component of Kubernetes. So maybe we can defer.
F: Maybe it needs to be an optional part of conformance until such time as the SIG Arch people have a view on what parts of multi-tenancy should actually be standardized versus what is just a design recommendation — because a lot of these things are just design recommendations, you know: do this RBAC rule, do that, set up the network policy this way. And one could debate whether that's actually a standard or just a recommendation. Yeah.
B: I haven't been following the conformance work, unfortunately, but I know they do have this notion of conformance profiles, and I think the idea that we had — that we kind of stopped working on — was that you could have one or more multi-tenant profiles. So it wouldn't be that your distribution would fail conformance if you didn't follow these rules.
B: But if you wanted to say that you conform to the multi-tenant profile, it would be like an extra thing that you could claim to conform to, in addition to the standard conformance tests. So I think that's a reasonable model, and I think that's still something they're planning to do that we could think about and try to get details on.
H: I'm the new guy here. I've been on the list for a couple of months, but I've been struggling to get my bearings or, honestly, to make this meeting, so I've mostly been watching a lot of the videos. My name is Jay Beale. Historically, I was one of the Center for Internet Security's original tech leads and wrote their Linux security benchmark, and I wrote Bastille Linux, which, a long time ago, was kind of the main security hardening tool for Linux — it was an open source project. Nowadays...
H: My main interest in this space is that at InGuardians, where I work, I keep getting brought in on Kubernetes penetration tests, and so we've been hacking Kubernetes clusters a bunch and constantly saying, guys, look how you've set this up. You know, either someone's created products for soft multi-tenancy, or lately they've created them for hard multi-tenancy, and I'm just finding myself saying to these product vendors, like, don't do this.
H: The Kubernetes front hasn't figured out what they're doing for multi-tenancy just yet, you know, and yet the market has been moving — which, anyway — I'm sorry if I spoke too long right there.
H: I think there have been lots of great talks at KubeCon and so on that basically said the defaults are where most people end up. If your default is too weak, that's what most of the clusters that we'll ever see in real life look like. And so I just wanted to voice a strong opinion on that — yeah, I'm good. I'm the new guy, with only so much experience here; I'm just trying to help.
I: I think that makes a lot of sense, and I think, related to that, there's kind of a user experience question that's implied by these levels of configuration, right? So we find that when you configure a cluster for multi-tenancy, there are a couple of distinct roles that emerge: someone that's the administrator of the cluster, people that are administrators of namespaces, and people that are users of namespaces. And as you go through the Kubernetes docs...
I: ...they don't take into account those different personas. And so, looking at it from that perspective, we want to make sure that when you have configured it with these policy objects, the user experience for each of those three personas is whole — that there aren't gaps where, oh, it's completely unusable because you don't have this permission, but if I give you this permission, it's going to make the whole cluster insecure.
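The three personas described here map fairly directly onto stock Kubernetes RBAC bindings. A minimal sketch — the group names and the tenant namespace `team-a` are hypothetical; the `cluster-admin`, `admin`, and `edit` ClusterRoles are the built-in ones:

```yaml
# Cluster administrator: full cluster-wide rights.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-administrators
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: Group
    name: platform-team          # hypothetical admin group
    apiGroup: rbac.authorization.k8s.io
---
# Namespace administrator: the built-in "admin" role, scoped to one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-admins
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - kind: Group
    name: team-a-leads           # hypothetical group
    apiGroup: rbac.authorization.k8s.io
---
# Namespace user: the built-in "edit" role for day-to-day deployment work.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-users
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - kind: Group
    name: team-a-developers      # hypothetical group
    apiGroup: rbac.authorization.k8s.io
```

Granting the aggregated `admin`/`edit` ClusterRoles through namespaced RoleBindings, rather than custom Roles, is one common way to keep the namespace-admin and namespace-user personas distinct without the gaps described above.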
C: ...multi-tenant cluster — I think there were like 20,000 users at some point on one of the clusters. So it's just a combination of, I would say, common-sense recommendations when using Kubernetes or containers in general — like, we don't allow you to run as root inside of a container — and other things like SELinux and capabilities. Those are taken care of with security context constraints in OpenShift; in Kubernetes that would be pod security policy.
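The run-as-root restriction mentioned here can be expressed in plain Kubernetes (as of the 1.11/1.12 era discussed in this meeting) as a PodSecurityPolicy; a minimal sketch, with a hypothetical policy name and volume list:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant-restricted        # hypothetical name
spec:
  privileged: false              # no privileged containers
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot       # refuse pods whose containers run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                       # restrict to non-host volume types
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```

A PodSecurityPolicy only takes effect once a tenant's service accounts are granted `use` on it via RBAC, which is also how different policies can be handed to different tenants.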
C: So all of this boils down to a combination of RBAC and policies. And then we have special networking: for OpenShift SDN, we have the OVS multi-tenant plugin that isolates pods from different projects. And the other thing we have is the Project object, which is a wrapper over Kubernetes namespaces, and that allows OpenShift to set, like, a limit per user — this user can only create one project and can only see that one project, things like that.
I: My observation as well is that, in terms of levels of sophistication, there exist today sufficient policy objects to configure Kubernetes for some of these forms of soft multi-tenancy. And so, without trying to extend Kubernetes at all, we could document how that sort of opinionated use of these policy objects can be done today. With that as a baseline, a subsequent effort might be to look at what extensions to policy, or new capabilities, we might want to add to that.
C: A good question. Just in preparing — knowing I was coming to this meeting — I looked at the OpenShift docs, and there's actually quite a lot of good information there. If you go to docs.okd.io, for instance, for the security context constraints: if you just look under Additional Concepts and Authorization, you'll find a bunch of stuff there. But yeah, docs.okd.io. And I was thinking that I would write this stuff up in a blog, because I read —
G: I can talk a little bit about it if you want? Thank you — yeah, thanks, Erica. The project concept became absolutely necessary as a way to really have multi-level access controls on top of the namespace division. So a lot of the policies apply at the namespace level, but you want people within the namespace to be able to do whatever they want — except configure the policies that govern that namespace.
G: So, in order to achieve that, we have the kind of project wrapper, which, through a project request, enables users to self-service namespaces without them being able to configure the strong policies that we put over the namespace themselves. And having some sort of wrapper like that, I think, is necessary for multi-tenant operation. Whether it should be a one-to-one relationship with namespaces — whether that was the best decision — is a little bit more up in the air. So —
C: That's exactly what the project request template does. So in OpenShift Online, any new project that a regular user creates is configured with the project request template, and it limits how much CPU and memory — I think it's limited to, like, either one gig or two gigs — and a whole bunch of constraints are put right in that project request template, yeah.
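In plain Kubernetes terms, the per-project constraints described here amount to a ResourceQuota plus a LimitRange stamped into each new namespace. A minimal sketch, assuming a hypothetical tenant namespace `team-a` and illustrative numbers (the "one or two gigs" mentioned above):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"        # total CPU requests across the namespace
    requests.memory: 2Gi     # e.g. the two-gig cap mentioned above
    limits.memory: 2Gi
    pods: "20"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-limits
  namespace: team-a
spec:
  limits:
    - type: Container
      default:               # limit applied to containers with none set
        memory: 256Mi
      defaultRequest:        # request applied to containers with none set
        memory: 128Mi
```

A LimitRange matters in combination with the quota: once a namespace has a memory quota, pods that don't declare requests/limits are rejected unless defaults are injected.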
I: I've got one question: what's the status of those existing KEPs, like the one for the namespace templates? Should we assume those things go forward, and skip that first stage and do this with the expectation that those KEPs get pulled into Kubernetes, or is it worth excluding them from the initial step? I just don't know about the maturity of those, or their acceptance level.
B: I can't remember the status. One of my colleagues here at Google was the one who wrote that KEP, with a lot of consultation from Clayton Coleman from Red Hat, and I pinged him to see if he could come to the meeting — he's in another meeting, so he can't attend — and I don't remember the status of it, but we should take a look before deciding.
F: I think we'll still need that first stage of what you can do today, because I think people need solutions today, and these KEPs do imply some new code, some new CRDs and so on, which wouldn't be immediately available. So I think there's value in having the document of what you can do with existing Kubernetes, 1.11 or 1.12.
H: Sorry, new guy again. I want to echo that, or +1 that, because — just to echo what I said earlier — there are lots of people already doing this, who are already trying to do multi-tenancy, and if you could give them a "this is the right way to do it for now," that would protect people for the next year. Mm-hmm.
D: Good, yeah. It seems like the project construct is as much about scale as anything. Like, Sally, you mentioned 20,000 users — I can imagine in that scenario it's very useful to be able to have another level of subdivision and hierarchy. In a smaller case, you know, maybe you're talking more like a hundred users or less.
I: Without getting too far into the weeds here — but, you know, you don't want a namespace administrator to be able to write their own quotas. Okay.
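One way to express "a namespace administrator cannot write their own quotas" is a custom Role that grants broad workload access but only read access to the quota objects; a hedged sketch, with hypothetical role and namespace names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: namespace-admin        # hypothetical name
  namespace: team-a            # hypothetical tenant namespace
rules:
  # Full control over the common workload objects...
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "services", "configmaps", "secrets",
                "deployments", "statefulsets", "jobs", "cronjobs"]
    verbs: ["*"]
  # ...but only read access to the quota and limit objects that
  # govern the namespace, so tenants cannot raise their own limits.
  - apiGroups: [""]
    resources: ["resourcequotas", "limitranges"]
    verbs: ["get", "list", "watch"]
```

The cluster administrator then creates the ResourceQuota and LimitRange objects themselves; since RBAC is purely additive, anything not granted here stays off-limits to the namespace admin.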
C: And then there are certain times when your tenant needs access to a different namespace — like, for me, if anyone wants to access that, you have to use "oc adm" — and then you have the cluster admin come in to, like, join the pod network. I'm not sure if that's an old-school thing or anything, actually. Okay.
F: Getting a little bit into detail here, but I'll just mention one relevant point to this, which is that if you treat a namespace as meant for an application, very often you will find the need for selective transparency between two namespaces — meaning — and that's how, I believe, the OpenShift SDN also has the ability to selectively allow two namespaces to talk to each other. Yes.
F: ...if two applications need to talk to each other, they should deploy both applications in the same namespace, and they have the ability to then add other network policies within the namespace if they want some kind of intra-namespace isolation between the two applications. But a namespace is meant for more than just one application: a namespace is meant for a team, and any number of applications that team has.
I: I feel like that's kind of an administrative decision that a customer can make. I mean, I'm not sure that it's important to say whether a namespace is an application or a team; what's important to say is, these are the kinds of policy controls we can put on a namespace. And so, in one scenario, one team might decide to put multiple microservices inside of a namespace. There's certainly a level of convenience there, because you're not going to have to deal with, like, cross-namespace network policies or things like that.
I: And maybe that works for your team, but I think that one of the scenarios of the enterprise soft multi-tenancy, where I have multiple teams sharing a cluster, is that there will always emerge the scenario where I have a service in namespace A and a service in namespace B, and I need to enable them to talk to each other. And I think the difference from core Kubernetes is that, while core Kubernetes assumes there's like a flat network namespace and everything's always open...
I: ...what we observe in the namespace tenancy model is that you essentially have a default-deny network policy on every namespace, and you need to whitelist another namespace for it to be able to talk to you. And so whether the things inside that namespace are a team, or an application, or multiple applications kind of becomes an administrative decision — but the constraints of the system, the way that the network policies are structured, govern sort of what you're able to control.
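The default-deny-plus-whitelist pattern described here can be written with stock NetworkPolicy objects; a minimal sketch, assuming a hypothetical tenant namespace `team-a` and a peer namespace carrying the label `team: b`:

```yaml
# Deny all ingress to every pod in team-a by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}            # empty selector = every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules listed, so all ingress is denied
---
# Then whitelist a single peer namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-team-b
  namespace: team-a
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              team: b        # hypothetical label on the peer namespace
```

NetworkPolicies are additive, so the allow policy punches a hole in the default deny; note this only takes effect on clusters whose network plugin actually enforces NetworkPolicy.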
F: It was just a suggestion that maybe, as a starting draft, we think of a namespace as potentially holding multiple applications, and if you really want the isolation, well, you deploy to different namespaces — but we don't want to get into that selective leakage between namespaces just yet.
I: I guess my thought on that is, I feel like Kubernetes today has network policies, which accomplish exactly that scenario of controlling east-west connectivity between namespaces, and so, first of all, I feel like that's definitely in the domain of what's possible and useful in Kubernetes today. So, regardless of what we say a namespace is, handling that scenario of enabling (or not) cross-namespace network connectivity is a thing that we should document and capture.
I: I guess I'm inclined to say, look, I feel like we should hold back from saying what a namespace is. A namespace, at some level, is just a collection of Kubernetes objects, and whether you choose to use that for a team, or a single microservice, or a bunch of related microservices is kind of up to the end user. I'm not sure that it's important for us to define that here. Sorry.
F: But to take a very quick, partial analogy with OpenStack tenancy: a tenant in OpenStack terms implicitly means tenants are isolated, and you can always deploy multiple applications in the same tenant — and the various degrees of selective leakage can be something we pick up once we have an initial working draft of the baseline model. I think that, selectively, you know, there is a potential for some amount of grayness in that.
F: But I think we're saying the same thing here. Essentially, the conclusive statements are, number one: this is not explicitly meant for just an application or just a team — in general, it could be targeted at both of those use cases. And let's draft out a scenario where we can get fully isolated namespaces working and documented, and then we can talk about selective leakage.
D: And I think part of that is laying out the risks, right? Saying, okay, this is how it's configured; these are the choices we've made. So, like, in this bullet list that we're sharing here, for example, the question of where you draw isolation boundaries is really important, right? If you say to people, pods from one user can end up on the same nodes as pods from another user, and therefore they're in the same fault domain — you should understand the implications of that, right?
I: Building on that — maybe one of the things we want to add to the output of this is a use case or scenario. So, for example, if we were documenting, hey, here's how you configure policy objects for multi-tenancy, maybe — either as part of the docs, or maybe even as a blog post somewhere — there's something that says, hey, here's Coca-Cola; they've got app teams A, B, and C maintaining microservices one through ten, and we sort of walk through...
I: ...how that maps onto the constructs that we're documenting in this approach. Yes, absolutely — and it also offers an opportunity to highlight some of the different design choices that you as a customer could make. So you could talk about the trade-off of putting multiple services into a namespace or separating them across namespaces.
F: And also, we should show an example of a full workflow integration with this — for example, CI/CD workflows integrating into a multi-tenant cluster. I think there could be some things there which we haven't yet fully thought through, or have only fleetingly thought through, some of those.
H: That's awesome. I think the other question I just want to bring up — and it's only because I haven't spent any time with service meshes — is that as I talk to people, they talk about using service meshes more for network isolation than network policies. If things are moving in that direction, towards service meshes becoming more and more popular, does that in any way affect this?
I: I mean, I would say — like we said with the KEPs — let's start off by just focusing on core Kubernetes and what's in the docs and the whole system right now, and then, over time, as namespace templates come in and as Istio becomes more widely prevalent, we can start to incorporate those into the model.
B: Yeah, that sounds great. I'll definitely add the KEPs to it. Also, I might link to — I don't want to be too self-serving — I gave a talk at KubeCon in Austin earlier this year about multi-tenancy that talked about how the different multi-tenancy features fit together and how they can be used. It might be interesting if people haven't seen it.
E: Yeah, and I wasn't going to open up edit access to everybody — maybe just comment access. So if you want edit access, just send me your Google ID; I'm just "Tasha dr. Emad" at gmail.com.