From YouTube: Kubernetes Working Group Multitenancy 20190226
Description
Discussion of draft project plan for the multitenancy working group. Document is open for comments here: https://docs.google.com/document/d/1U8RQQmTUjxgMZY05HG2f7b3KsB94BhK4Ko6aWbLNXcc/edit?usp=sharing
A: Hey everybody, and welcome to the multi-tenancy working group. Today we are going to go over a draft proposal of how we want to move forward to achieve the goals of the working group, which are to get a formal definition of multi-tenancy in Kubernetes. It's a doc that I am sharing right now; I will drop a link in chat and I will send it out on the mailing list as well. It should be open for anyone to hop in and comment on.
A: Cool, so we're going to get started. Feel free to ask questions as we go. Basically, what we're looking at is having a high-level roadmap, and that roadmap is carved into three phases. The first epic would be to define a secure cluster using existing features. The first steps for that would be to define a secure single-tenant cluster, so that we have that profile available, and then define a secure soft multi-tenant cluster.
A: The second epic is developing new features and tools to improve support for multi-tenancy. So once we have that cluster definition, we would be looking at building what we're calling a cluster auditing suite, and those words were picked with care. This would be a suite that you could run against your cluster to see if it's been configured in a secure multi-tenant way, according to the secure profile that we've developed.
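The auditing idea described here lends itself to a simple check-and-report loop. Below is a minimal sketch, assuming a hypothetical set of check names and a cluster-settings dump; the checks in the real suite would come from the secure profile the group develops.

```python
# Hypothetical sketch of a "cluster auditing suite" check: given a dump of
# cluster settings, report which multi-tenancy recommendations are unmet.
# The check names and expected values below are illustrative only.

RECOMMENDED = {
    "rbac_enabled": True,            # RBAC authorization mode is on
    "anonymous_auth": False,         # anonymous API access is disabled
    "pod_security_policy": True,     # pod-level admission policy is enforced
    "network_policy_support": True,  # CNI plugin enforces NetworkPolicy
}

def audit(cluster_settings):
    """Return a list of (setting, expected, actual) for every failed check."""
    failures = []
    for setting, expected in RECOMMENDED.items():
        actual = cluster_settings.get(setting)
        if actual != expected:
            failures.append((setting, expected, actual))
    return failures

# Example: a cluster that still allows anonymous auth fails exactly one check.
report = audit({
    "rbac_enabled": True,
    "anonymous_auth": True,
    "pod_security_policy": True,
    "network_policy_support": True,
})
```

A real suite would pull the actual values from the API server and kubelet configuration rather than from a dict.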
A: There would also be some work on a multi-tenancy CRD, and then also work on multi-tenant monitoring: being able to tag various namespaces for the tenants, so that different tenants' data doesn't show up in each other's monitoring solutions in the future. We would continue to gather improvements and feature requests, socialize those with the SIGs, and then feed what we learn into our plan for both soft and hard multi-tenancy.
A: Finally, our goal is to support hard multi-tenancy. Where we are now, we will just be gathering requirements and feature requests, starting to triage those, and deciding exactly how we want to go about attacking them. Then we'll put together a roadmap for that in the future that we can share with SIG API Machinery and SIG Auth, continue to gather feedback, and then put together a concrete project plan. So this is the high level of this doc.
A: So what is our motivation? What are we trying to do here? What we're looking to do is define best practices and a reference architecture that allow a consumer of our recommendations to create a multi-tenant cluster to provide as a service to their users.

A Kubernetes cluster sometimes needs to be shared among multiple users and/or applications in an enterprise. This may look like multiple teams deploying applications to the same Kubernetes cluster. For a service provider, this may be multiple different end customers with deployments managed by the same Kubernetes cluster.
A: There are benefits to sharing, including manageability, fewer clusters overall, and improved resource utilization, and these benefits may become more significant as the industry moves to deploying containers on bare metal, since many teams don't want to pay for a control plane per deployed application. And then we just wanted to call out that this working group absolutely sees benefits in Cluster API and cluster federation; we don't see multi-tenancy as the one solution to rule them all.
A: Here you will see that we have defined a tenant, and we have some comments on this already. What we're doing is defining what we mean by tenant in terms of a functional specification of the business value provided by tenancy. What we want to do with this is avoid ambiguity and make it easier to reason about whether a proposed solution meets these functional requirements.
A: So, a tenant is a client of a bounded and isolated amount of compute, storage, networking, and control plane resources within a Kubernetes cluster. These compute boundaries can be defined by the following resource limits: CPU, memory, network bandwidth, IO bandwidth, and API request rate. There could be optional resource reservations, which ensure minimum resource availability for deploying pods with guaranteed quality of service. And, you know, I think we're going to work on that bullet point.
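The compute boundaries listed here map partly onto Kubernetes ResourceQuota. Below is a minimal sketch that builds such a manifest as a plain dict; the namespace and limit values are made up, and note that network bandwidth, IO bandwidth, and API request rate are not stock ResourceQuota fields and would need other tooling.

```python
# Build a per-tenant ResourceQuota manifest (as a plain dict, ready to be
# serialized to YAML/JSON). All concrete values here are illustrative.

def tenant_quota(namespace, cpu, memory, pods):
    return {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "tenant-quota", "namespace": namespace},
        "spec": {
            "hard": {
                "requests.cpu": cpu,        # total CPU the tenant may request
                "requests.memory": memory,  # total memory the tenant may request
                "pods": str(pods),          # cap on the tenant's pod count
            }
        },
    }

quota = tenant_quota("tenant-a", "16", "64Gi", 100)
```

The optional resource reservations mentioned above would correspond to setting pod requests equal to limits, which yields the Guaranteed QoS class.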
A
An
exclusive
control
plane
domain
to
allow
for
grouping
of
resources
owned
by
the
client
and
to
limit
the
control
plane,
read/write
access
to
only
resources
within
the
domain
authentication
to
ensure
that
access
to
the
control
plane
domain
is
secured
and
limited
visibility
or
access
to
api's
and/or
cluster
resources
outside
the
control
plane
domain.
This
definition
is
not
attempting
to
reason
about
implementation.
A: These are just a starting point. Use cases that we're thinking about right now include: enterprises sharing a Kubernetes cluster in their public or private cloud between multiple teams of employees; enterprises sharing a Kubernetes cluster in their public or private cloud between stages of a workload's deployment lifecycle; a service provider offering software as a service to end users, running on a Kubernetes cluster; and a cloud provider using Kubernetes to manage Kubernetes as a service for external customers, which is what we call the Coke and Pepsi model. Tasha?
B: Can I ask a question? Would it make sense to have a table reflecting that definition? I think this definition is cool: resource limits, a control plane domain, visibility, and security. We could apply that as a table to these use cases, where we talk about a use case and say, resource isolation: yes, and how much, or something like that, so that at a glance we can figure out which use case cares about which aspect of the definition of a tenant. Yeah.
A: So, the personas that we've been talking about have been cluster administrator, tenant administrator, workload developer, and operator. Again, we're super open to adding more personas here and adding more description around the personas that we're thinking about. We also have some user stories, and these are again kind of a beginning. So, you know, let's gather more user stories from the community and throw them in here, so that we're really thinking about the scope of the people that we're serving here.
A
But
here
we
have
as
a
workload
develop
I
need
to
know
if
my
kubernetes
cluster
is
configured
to
provide
secure
isolation
between
untrusted
users
and
applications,
so
that
I
can
assure
the
business
that
my
API
resources
can't
be
read,
read
or
written
by
other
tenants.
There
are
no
information
leaks
through
shared
components.
My
resource
objects
don't
collide
with
other
tenants.
Malicious
tenants
can't
intercept
my
network
traffic
or
access
my
store
data,
malicious
tenants
can't
infect
my
application
by
a
compromising
the
hosts
and
other
applications
well
denial
of
service
or
impact.
A
My
access
to
resources
as
a
cluster
admin
I
need
to
be
able
to
set
up
my
cluster
to
provide
services
to
my
teams
that
are
working
independently
and
as
a
cluster
administrator,
I
need
to
allow
different
versions
of
the
same
resource
on
a
multi
tenant,
kubernetes
cluster,
so
that
tenants
can
manage
their
dependency.
Is
we
have
a
brief
definition
of
soft
and
hard
multi
tendency
for
soft
multi
tendency,
which
is
what
we're
focusing
on
in
our
initial
sprint?
A: We define soft multi-tenancy as the model of multi-tenancy in which users of a shared cluster are trusted to have an incentive to behave as good actors on the system. Tenants should not have access to anything from other tenants, but it would be acceptable to share the Kubernetes API, for example. And then hard multi-tenancy is the model of multi-tenancy in which there is zero trust between users of a shared cluster to behave as good actors on the system.
A: Tenants should not have access to anything from other tenants, except for the Kubernetes API, which must be protected but should still be accessible, to allow tenants' users to run their own controllers and operators and take actions on objects within their namespaces. I think there may be some back and forth about the access to the Kubernetes API that we should explore, but these are the high-level definitions that we're going with right now, and we're very open to feedback and comments.
A: FAQs: I only have one FAQ right now, but I'm sure we can add more. It is: what are the vulnerabilities that still remain if I follow these recommendations? For this working group, it is out of scope to try to identify vulnerabilities in Kubernetes, and this is a set of best practices put together from the experience of the broader community. We will be giving these best practices, as a reference architecture, to the security researchers who are working on the Kubernetes bug bounty, and we will use what they find to improve the project's design principles.
A: This is a detailed plan that goes into more depth about all of the high-level action items that I described earlier. I will not read through this to everybody today, but I do encourage everyone to read it and see if there are things here that you want to start engaging with, and we can get started. Basically, at a high level, step one is that we just need a baseline security profile today for a single-tenant cluster, using the current release of Kubernetes and assuming the entire cluster is used by a single team.
A
There's
no
multi-tenancy
and
document
this
baseline
security
profile
to
run
a
secure
closure
for
number
two.
We
then
will
do
the
same
so
a
baseline
security
profile
for
a
soft
multi
tenant
cluster
and
then
will
provide
these
two
templates
or
these
two
profiles
to
the
security
researchers,
for
the
bug,
bounty,
get
feedback
from
both
them
and
from
other
SIG's
and
from
the
community,
and
continue
to
improve
these
as
we
do
that
we
anticipate
that
we'll
learn
a
lot
that
we
can
feed
directly
into
our
own
roadmap
on
the
roadmap
of
various
SIG's.
A
Once
we
have
gotten
to
this
step,
then
we're
ready
to
add
new
abstractions
features
and
tools
to
configure
it
enforce
multi-tenancy
on
a
kubernetes
cluster
again,
this
is
still
focusing
on
the
soft
multi-tenancy.
So
we
want
to
create
a
cluster
auditing
suite.
We
want
to
create
a
multi-tenancy
custom
resource
definition
set,
and
then
we
want
to
look
for
common
patterns
in
kubernetes,
for
monitoring,
multi-tenancy
and
more
deeply
understands
how
we
should
be
providing
data
to
those
monitoring
solutions
in
a
multi-tenant
and
secure
manner.
A
Throughout
all
of
this,
we
want
to
continue
to
learn
and
gather
feature
requests
that
we
can
then
feed
into
our
roadmap
for
hard
multi-tenancy,
which
we
anticipate
will
require
working
with
sig
API
machinery
signals
and
having
some
core
kubernetes
development
work
involved,
and
then
finally,
we
do
want
to
kind
of
call
out
this
reading
list,
which
is
has
a
ton
of
awesome
information
that
has
been
developed
over
the
past
year
of
this
working
group.
So
if
you
haven't
read
this
these,
this
is
all
really
valuable
information.
A: A lot of it is informing the earlier parts of the document, specifically the security profiles; proposals number 11 and number 12 are going to be very useful resources when we're putting together the secure profiles. And we also already have something concrete: Mike Arpaia has written a Kubernetes operator for managing multi-tenant workloads that we could leverage when we get to that part of the project. So yeah, this is, overall, the project plan that we're looking at doing.
A
We
want
to
opened
up
so
the
community
into
this
working
group
to
see
where
people
want
to
kind
of
dive
in
and
participate
and
help
just
kind
of
get
started.
So
definitely
you
know
let
the
group
know
if
you
want
to
start
working
on
parts
of
this
and
we
can
start
making
more
concrete
progress.
I'd
like
to
open
it
up
to
questions
and
I
know
some
people
had
some
user
stories
and
customer
use
cases
that
they'd
been
sort
of
that
they
wanted
to
bring
up
as
well.
Today,.
C: Oh, this is Sanjeev. I have several comments, some related to the doc and some general. One of them: since we've got all these user personas and so on, just a clarification to the team. We have a persona right now called a tenant administrator, but really the end goal is for tenancy to be self-service, right? So there does not actually, eventually, need to be a tenant administrator.
C: I was able to talk to a couple of folks that are actually running soft multi-tenancy in production. So if there's interest, I can briefly share a few notes from that, and maybe I'll be able to capture some of that in the documents as well, and we can have them document something too. Is that something that would be of interest for me to briefly describe?
C
Okay,
so
two
tenants
to
two
customers
that
happen
to
be
running.
There's
a
lot
of
people
that
are
running.
You
know
DIY
multi-tenancy
solutions
right
now,
because
obviously
many
of
the
pieces
are
already
in
place.
You
know
cue.
Bananas
has
namespaces,
you
have
our
back.
You
have
got
security
policies,
Network
policies,
all
of
that,
so
I
I
spoke
to
one
customer.
In
fact,
there
was
the
IT
department
of
Atlassian
and
they
have
done
a
DIY
multi-tenancy
solution
and
they've
done
a
pretty
good
job.
C: They have a namespace creator CRD, so we can take a look at something like that, as well as their tenant management CRD. The other team that I spoke to was actually using one of the standard vendor products, namely OpenShift, which, as you know, has an extensive set of multi-tenancy solutions. There again, they extensively use multi-tenancy, so they were also in favor of the upstream community developing reference models like what our working group here is doing.
C
Their
typical
workflow
is
that
they
have
a
tenant
provisioning
portal
right,
a
project
importance
of
running
on
top
of
the
kubernetes
cluster,
so
users.
So,
even
though
products
like
openshift
have
explicit
objects
for
tenants
right,
there's
an
explicit
object
in
OpenShift
called
a
project
right
and
you
can
create
those
objects.
You
can
say
OSI
create
new
project
right
and
that
actually
creates
an
open
shift.
Object
called
a
project
which
happens
to
be
an
aggregation
of
a
namespace
and
a
certain
set
of
policies
and
so
on.
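The "project as an aggregation" idea can be sketched as a function that fans one tenant request out into a namespace plus default policy objects. The names and the exact policy set below are hypothetical, and OpenShift's real project resource differs; this only illustrates the aggregation pattern.

```python
# One tenant request expands into a namespace plus default policy objects,
# all expressed as plain manifest dicts. Names and policies are illustrative.

def project_manifests(tenant, admin_user):
    ns = f"proj-{tenant}"
    return [
        # The tenant's namespace, labeled for later policy selection
        {"apiVersion": "v1", "kind": "Namespace",
         "metadata": {"name": ns, "labels": {"tenant": tenant}}},
        # RoleBinding granting the requesting user admin on only this namespace
        {"apiVersion": "rbac.authorization.k8s.io/v1", "kind": "RoleBinding",
         "metadata": {"name": "tenant-admin", "namespace": ns},
         "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                     "kind": "ClusterRole", "name": "admin"},
         "subjects": [{"kind": "User", "name": admin_user}]},
        # Default-deny NetworkPolicy so cross-tenant traffic is opt-in
        {"apiVersion": "networking.k8s.io/v1", "kind": "NetworkPolicy",
         "metadata": {"name": "default-deny", "namespace": ns},
         "spec": {"podSelector": {}, "policyTypes": ["Ingress"]}},
    ]

manifests = project_manifests("payments", "alice")
```

A controller backing a tenant CRD would apply this same expansion server-side instead of in a portal.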
C
So
they
essentially
have
that
kind
of
CRD
already
in
open
shield.
But
this
customer
does
not
use
that
they,
they
have
a
separate
provisioning
portal
and
you
go
in
there
and
you
say:
please
create
me
a
project
with
you
know
what
certain
characteristics
and
then
that
portal
comes
back
down
to
the
open,
shaped
cluster
and
then
ends
up
creating
a
project.
So
no
point
in
this
is
that
most
people
doing
some
kind
of
kubernetes,
Multi,
multi,
tenancy
or
more
or
less
thinking
of
a
solution
similar
to
what
we
have.
C
What
we've
been
trying
to
consolidate
here
but
I
think
the
concept
of
self-service
project
creation
is
very
important
and
as
we
develop,
that
the
precise
definition
of
the
CRD
for
tenant
management
will
need
to
kind
of
work
through
the
details
on
that
see,
there
was
anything
else
might
be
worth
sharing
here
and
and
and
I
can
have
them
come
in
to
one
of
our
cause.
If,
if
we
need
them
to
you
know,
sort
of
make
a
read
out
to
the
team,
I
will
have
to
check
with
them.
If
they
they
could
do
that.
C
There's
some
details
there
on
okay
yeah,
so
both
of
them
use
a
combination
of
pod
security
policies,
as
well
as
dynamic
admission
where
books
so,
for
example,
dynamic
admission
control
is,
is
used
for
things
like
tormenting
namespace
clashes
or
ensuring
that
different
tenants
do
not
create
English
objects
for
URLs
that
they
shouldn't
be
creating,
and
things
like
that.
So
I
think
we
should
expect
that
a
mix
of
static
and
dynamic
admission
control
will
be
needed.
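The Ingress collision check mentioned here is easy to picture as the core logic of a validating webhook. The sketch below assumes an invented tenant-to-domain mapping; a real webhook would wrap this logic in an AdmissionReview HTTP handler.

```python
# Core validation logic a webhook might run to stop one tenant claiming
# another tenant's Ingress hostname. The mapping below is invented.

TENANT_DOMAINS = {
    "tenant-a": "a.example.com",
    "tenant-b": "b.example.com",
}

def validate_ingress_host(namespace_tenant, requested_host):
    """Allow the Ingress only if the host falls under the tenant's domain."""
    allowed = TENANT_DOMAINS.get(namespace_tenant)
    if allowed is None:
        return False, f"unknown tenant {namespace_tenant!r}"
    if requested_host == allowed or requested_host.endswith("." + allowed):
        return True, "ok"
    return False, f"host {requested_host!r} not under {allowed!r}"

ok, _ = validate_ingress_host("tenant-a", "shop.a.example.com")
denied, reason = validate_ingress_host("tenant-a", "b.example.com")
```

The same shape of check applies to the namespace-clash case: look up who owns the requested name and reject the request if it is someone else.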
C
This
also
was
another
question
which
I
had
which
for
the
team,
which
is
that,
if
we
are
initially
defining
a
reference
model
using
pod
security
policies,
that
may
be
okay
for
on-premise
clusters,
but
for
hosted
clusters.
We
cannot
really
influence,
hosted
providers
from
setting
enabling
pod
security
policies,
for
example,
because
that's
a
cluster
creation
time
setting
that
they
will
need
to
do
so.
If
anybody
from
gke
or
anybody
is
on
the
call
I
know
Yoshi
any
issue
we
are
there.
C
They
may
want
to
comment
whether
if
we
want
to
have
the
solution
work
even
for
hosted
given
it
is
then
really
the
only
option
for
admission
control.
There
would
be
dynamic
admission
control
because
we
really
wouldn't
be
able
to
embed
any
part
any
recommended
pod
security
policies
into
their
clusters.
C
D
C
D
B: And I think, from a user perspective... let me just think about it from a GKE perspective, to keep it simple. From a user perspective, if the properties of the multi-tenancy configuration we're trying to define here can be implemented somewhat differently on-prem versus on GKE, I think it's still sort of okay; for example, pod security policy can be done in beta.
C
One
is
you
know
so
there's
this
X
number
of
tools
that
we
could
do
it
we
could
be
developing.
One
is
an
audit
auditing
tool
for
a
cluster
to
check
whether
it's
ready
for
multi-tenancy
two
is:
is
that
CRD
and
controller
logic
for
actually
namespace
management,
and
then
three
we've
talked
about
some
conformance
testing
and
so
on.
So
we
need
to
work
through
that,
a
little
bit
more
and
maybe
in
one
of
the
upcoming
calls.
We
can
share
that
in
more
detail,
yeah.
A: I think we should gather that as an example of hard multi-tenancy, and think of hard and soft multi-tenancy as a range of capabilities. In some of them, I think people will want to lock down access to the API master entirely, and in some, I think they...
E: My second question was really around user namespace mapping. So, what I do is I run a humongous CI/CD system, and we obviously don't want people running images as root, but, like it or not, just about every image out there has the user set to root by default. So I wonder what people's thoughts are on user namespaces. There's actually some existing work going on in SIG Node, but it's one of these things where they keep on saying...
C: ...that user namespaces are not yet ready for primetime. We have been talking to some of the folks there. So yes, as that feature becomes production ready, it would be part of our reference architecture, but at the moment user namespaces are not quite ready for secure isolation. This brings me to another point which I missed mentioning about the customer deployments: in the one case where they were using OpenShift, they basically had only two pod security profiles.
C
Either
you
were
a
privileged
tenant,
in
which
case
you
pretty
much
had
cluster
admin
like
privileges
or
you
were
a
very
restricted
tenant
in
which
you
essentially
had
you
know
no,
no
root
privileges,
no
no
privileges
at
all,
and
it
would
be
a
very
very
so
our
thought
initially
would
be
that
we
would
essentially
just
recommend
multi-tenancy.
For
that
case,
we
wouldn't
have
a
grey
area
where
they
would
be
tenants
which
we
to
have
limited
levels
of
privileges,
and
so
on
that
kind
of
gets
into
a
little
bit
of
a
gray
area.
E
Yeah,
it's
it's
a
critical
problem
for
us
like
because
we
we
don't
have
a
choice.
People
as
I
say:
there's
a
gazillion
images
out
there
that
are
set
to
run
is
route,
and
so
we
end
up
having
to
put
an
additional
layer
around
people's
images
in
order
to
let
them
use
them.
And
it's
it's
a
pain
in
the
butt.
So
you
know,
direct
support
and
tzedakah
rails
would
be,
is
basically
an
essential
piece
for
us
for
multi-tenancy.
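The "additional layer" being described can amount to forcing a pod-level security context. Below is a minimal sketch, with an arbitrary non-root UID; the kubelet is what actually enforces `runAsNonRoot` at container start.

```python
# Patch a pod spec so images that default to root are still forced to run
# as an unprivileged user. The UID and pod spec shape are illustrative.

def enforce_non_root(pod_spec, uid=10001):
    patched = dict(pod_spec)
    patched["securityContext"] = {
        "runAsNonRoot": True,  # reject containers that would run as UID 0
        "runAsUser": uid,      # override the image's default user
    }
    return patched

pod = enforce_non_root({"containers": [{"name": "app", "image": "example/app"}]})
```

In practice this kind of defaulting is exactly what a mutating admission webhook or a pod security policy would apply cluster-wide, rather than wrapping each image by hand.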
A: Okay. Again, I encourage people to sign up for working on the secure profiles. We'd love to open this up to a team that can just start iterating on it, and then we can present on progress at our follow-up meetings. Also, if anyone has any other topics they'd like to talk about at our next regular meeting, hit the mailing list and I'll add you to the agenda. Oh, sorry.
B: Sorry for a very, very late question, and it's fundamental, but what was the target timeline that we had in mind in terms of getting the first draft? I saw the timeline; it says 12 today for your presentation. What's the sort of targeted time frame for getting the draft done, I think?