From YouTube: Kubernetes WG Multitenancy 20180214
Description
Notes and Agenda: https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY/edit
F
Probably should say hi before, so you know: I'm Nick from Atlassian. I mainly came because I'm already running multi-tenant clusters in production, so I just wanted to keep track of what's going on. I'm running both a hard multi-tenancy cluster and a soft multi-tenancy cluster. So if you want to know anything about what we're doing for real, I'm happy to share details.
C
Yeah, I mean, if we don't have any other agenda items, maybe you want to take a couple minutes and tell us a little — like last week, people talked a little bit about what they were using the existing multi-tenancy features for, how they were using them, and what they thought was missing or how things could be improved.
F
Sure, sure. So we're running two kinds of clusters at the moment. One is an internal sort of PaaS cluster that is intended to be used for lots of different compute workloads inside of Atlassian. Right now most of that is builds, so a fair chunk of the container builds that Atlassian is doing internally are now running on that cluster — yeah, it scales up and down great. For the multi-tenancy part, we've chosen — we're using pod security policies.
F
Those are the two main things. Because we're running in AWS, we've had to do a lot of network stuff to seal off the metadata service and things like that. The other cluster is the hard multi-tenancy one. I don't know if you've seen Bitbucket — Atlassian's GitHub-y thing — it provides a CI solution called Pipelines, and all of the builds that run there run inside of Kubernetes clusters that we operate. So that one is hard multi-tenancy in as many senses of the word as possible.
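[Editor's note: sealing off the AWS metadata service from tenant pods, as described above, is commonly done with an egress rule like the following. This is an illustrative sketch, not Atlassian's actual configuration; the namespace name is hypothetical.]

```yaml
# Deny pods in a tenant namespace access to the AWS instance metadata
# endpoint while still allowing other egress, via an ipBlock "except".
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-metadata-service
  namespace: tenant-a          # hypothetical tenant namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # AWS metadata service
```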
F
We have much harder network policy, mainly. We've actually found that, to date, we've just turned on all of the available security stuff that we could. In the pod security policy we're making sure we turn on the seccomp profiles, turning off privileged pods — all of the default stuff that you would think of.
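[Editor's note: a restricted PodSecurityPolicy of the kind described — seccomp on, privileged pods off — might look like this. A minimal sketch; the name and exact field choices are illustrative, not Atlassian's config.]

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    # force the default seccomp profile onto every pod
    seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: runtime/default
spec:
  privileged: false              # no privileged pods
  allowPrivilegeEscalation: false
  hostNetwork: false
  hostPID: false
  hostIPC: false
  requiredDropCapabilities: ["ALL"]
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - downwardAPI
  - projected
  - persistentVolumeClaim
```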
E
I was gonna ask — and maybe you don't have to answer it now — but it'd be useful to understand what the biggest challenges were. It sounds like you got pretty far, but I guess there were some things that were more difficult than others. It'd be interesting to dive into those at some point, yeah.
F
I think, looking back over it, the thing that was the hardest was the network isolation stuff, because the ways in which the different layers of network isolation play together — or don't — aren't clear. The guy on the team who's actually done most of the work on that has written a really handy test harness for himself that I'm poking him to tidy up so that, hopefully, we can open source it. It's just a small thing that runs, that lets...
F
...you run a set of, like, network policy objects and say: this traffic should be allowed, this traffic should be blocked — between two namespaces, between two pods, between whatever you can think of. It's really nice, and so it's really useful for this sort of thing, where in one cluster we need it to look one way and in another cluster we need it to look another way.
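[Editor's note: the kind of per-cluster policy such a harness would assert on can be sketched as a default-deny plus a same-namespace allow. Illustrative only; names are hypothetical.]

```yaml
# Default-deny ingress for a tenant namespace...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: tenant-a
spec:
  podSelector: {}          # all pods in the namespace
  policyTypes:
  - Ingress
---
# ...then allow traffic only from pods in the same namespace;
# cross-namespace traffic stays blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: tenant-a
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}      # any pod in tenant-a
```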
G
I was going to say, Nick — I'm also in Australia — do you have, like, a DSL or something? How do you configure that, I mean? Is it all driven from code somehow?
F
No, right now we just have a set of policy objects in YAML, and the Calico stuff. The Calico stuff — I can't remember how we configure that, sorry — but it's all done as part of normal deployments. We have been doing deployments on top of Kubernetes for a long time, sort of pre-Helm, and we originally wanted to use Helm for all of the actual Kubernetes-level config.
F
But the issues with Tiller needing cluster-admin-level access, and having to be accessible from everything, and stuff like that, have meant that we haven't done that. So we have this horrifying Ansible thing that just templates out a whole bunch of YAML and then applies it to the cluster. So, not ideal, but we've spent enough time on it now that it's hard to make the case for spending the technical effort to move to anything else.
F
For the hard multi-tenancy one, they don't have API access to the clusters. Part of what we did was, obviously, stop the containers that are running from being able to get API access to the clusters. So the containers that run in the Pipelines clusters can't talk to the API server, or to the kubelet, or to any of the Kubernetes infrastructure.
F
As far as they're concerned, there's nothing they can do: they can get out to the Internet, and that's it. The other cluster's customers do have API access, because it's, you know, soft multi-tenancy, where we just use RBAC to control what people can do. The way that we grant — we set...
F
...the RBAC role for users — and for bots as users — we have a little tool called kubetoken that issues a six-hour cert to a user based on membership in an LDAP group, and then the cert has a CN of the RBAC role. And that works for humans; for bots and robots and stuff, we just show people how to set up service accounts with their own tokens for that sort of thing.
E
F
Like that? Um — no, because all of the tenants get access to their namespaces as part of the role, part of the RBAC, that restricts them to just their namespace. So the RBAC roles we give out give pretty much cluster-admin-level access, but to just their namespace. So we have, like, a cluster-admin cluster role that we bind into just the namespace with a role binding.
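[Editor's note: the pattern described — one broad ClusterRole, scoped to a single tenant namespace with a RoleBinding — can be sketched as follows. The role, group, and namespace names are hypothetical.]

```yaml
# One broad, reusable ClusterRole...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-admin
rules:
- apiGroups: ["", "apps", "batch", "extensions"]
  resources: ["*"]
  verbs: ["*"]
---
# ...bound with a RoleBinding (not a ClusterRoleBinding), so the
# grant applies only inside the tenant's namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-admins
  namespace: tenant-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: namespace-admin
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: team-a-tenant-a-admin   # e.g. derived from an LDAP group name
```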
E
F
There's another question in the chat — yes. Oh, no. You know, in the cluster role: we literally have a cluster-admin one that provides most permissions available in the cluster, but we bind it into a namespace with a role binding. We don't let people interact with the RBAC, even then — so we do disable that.
F
Yeah, we're pretty hardline about that, actually. I mean, really, the hard part at the moment is just understanding all the constructs and being able to wire them together — and making sure that you haven't missed any along the way. I think RBAC, once you get it, is really straightforward, and all that stuff.
F
And the pod security policies are kind of a set-and-forget thing, we found. We've done the thing where you associate a restricted pod security policy with all service accounts by default, and so the restricted pod security policy stops people from getting most of the capabilities — privileged pods, privileged networking, privileged containers, all the sort of stuff that we want to make sure nobody can do unless they really need to gets restricted by the pod security policy.
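[Editor's note: associating a restricted PSP with all service accounts by default, as described, is typically done by granting `use` on the policy to the built-in group covering every service account. A sketch; the PSP and role names are illustrative.]

```yaml
# A ClusterRole that permits "use" of the restricted PSP...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]
  verbs: ["use"]
---
# ...bound to the group containing every service account in the cluster,
# making the restricted PSP the default for all workloads.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-restricted-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts
```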
F
The LDAP group that you connect to has, like, the name of the role and the name of the namespace that you're using in its name — it's just a magic name in the LDAP group that we pass out. It's a bit hacky, but it has actually worked quite well so far. So the name of the LDAP group that you pick has, you know — one part will be, like, the team name, and then the namespace name, and then the access level.
D
C
F
Right now, the way that we address that is that creating namespaces is a manual process. So, you know, somebody eyeballs the name of the namespace to make sure it doesn't collide. People basically just lodge a ticket with us and say: hey, I want a namespace, I'd like it to be named this. Then we go and create the config, so that the namespace creation is consistent, and we say: hey, to do that...
F
...you need to make an LDAP group that has this name — the name of the LDAP group includes the name of the namespace — and then, once people get that group created by our internal systems, we can wire it all together. It'll just work: the LDAP group, with kubetoken, will — when you connect to the cluster — drop you into that namespace.
F
Yeah, sorry, another question from the chat about the LDAP integration. The way the LDAP integration works is that kubetoken will actually do the LDAP lookup and then provide you with a certificate that has the name of the RBAC role you should assume, and then we just drop that certificate into your kubeconfig directory, and it'll do that. Yes — so yeah, we did release it open source; thanks to the author for writing that while I was...
H
F
Like this? Yeah, yeah. So obviously a lot of the stuff that we, as the platform team, actually use needs to run privileged — we run Calico, obviously, and that needs privileged pods, and there's a few other things that need privilege. We have another, privileged PSP, and to do that we can use a cluster role to bind the privileged PSP to any service account we want. So in a particular namespace we'll say: do you really, really need privileged pods?
F
Are you sure you really need privileged pods? Do you need privileged pods? You know that's a terrible idea; you'd better have a really good use case. But if all of those gates get passed, then we just create another service account, bind the privileged PSP to that service account, and then say: okay, could you make sure that you use this service account for all of your pods, so that your pods end up actually using it?
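[Editor's note: the exception step described — granting the privileged PSP to one vetted service account only — might look like this. Illustrative names; assumes a pre-existing ClusterRole granting `use` on the privileged PSP.]

```yaml
# Bind the privileged-PSP "use" role to exactly one service account
# in the tenant's namespace; everything else stays restricted.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-privileged-builder
  namespace: tenant-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged          # grants "use" on the privileged PSP
subjects:
- kind: ServiceAccount
  name: privileged-builder      # the one vetted service account
  namespace: tenant-a
```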
F
So, for connection — yes, for secrets, like database connection strings and things like that. Yeah, so again, the main use case we see for this is, eventually, the plan is to use the Service Broker — the Open Service Broker APIs — to call out to a service that will generate your RDS instance.
F
The OSB will come back with the creds, which the service broker service will then set up as secrets inside your namespace for you, that you can then consume. The tooling is intended to wire all of this up for people, so that, you know, when I'm a developer who wants to run a small web service, I just say: hey, I want an RDS instance — and it handles all of the details.
C
Just a quick question going back to the LDAP stuff: does kubetoken also create the namespace and the policies in the cluster? Or does it just provision and configure the kubeconfig file, and then there's some other mechanism to sync your namespace and policies? Yes.
F
For the namespaces and policies — well, the policies are all defaults, and yes, we are the ones who configure the namespaces for people at the moment, because we don't anticipate... So again, this is in the soft multi-tenancy cluster. In the hard multi-tenancy cluster, the Pipelines team, who actually wrote the product...
F
...it's not us inside that cluster, so the only users are just the agent, and their team checking that the agent is running. So yeah, basically, because we don't have a lot of automated requests, or, like, lots of named namespaces with LDAP access at the moment, we literally make changes to our Ansible, make sure that the namespaces get pushed everywhere, and then they get pushed everywhere as we deploy that out to the various clusters.
F
In terms of admin users, I think there's eight of us in the team right now. In terms of customers, we've got, like, five or six, but the customers then have their own customers, so the number of actual users is a bit hard to say. I can't remember the last time I looked, but I think it was about twenty or thirty thousand builds a day on the internal cluster.
C
F
Yeah, we just do it. I mean, that is, again, just a very simple templating exercise. In most cases the only config that we need to do, really, is create the namespace and bind the correct role into the namespace, and that's it — because the unprivileged pod security policy is bound to newly created service accounts by default, so as long as we wire up the RBAC correctly, then it's...
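[Editor's note: the whole per-tenant onboarding unit described — namespace plus role binding — could be templated out as a single manifest like this. A sketch; all names are hypothetical, and the `namespace-admin` ClusterRole is assumed to already exist.]

```yaml
# Create the tenant namespace...
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-b
---
# ...and bind the pre-existing broad ClusterRole into it, which is
# all the per-tenant config the templating needs to emit.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-b-admins
  namespace: tenant-b
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: namespace-admin
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: team-b-tenant-b-admin
```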
C
G
I think this might be relevant: you mentioned that Helm wasn't gonna work because of Tiller and namespaces and stuff. Could you talk a bit more about that? Yeah.
B
Sure.
F
So, it's been a little while since I checked, but I think the last update I saw on the tickets was this: Tiller needs to have a set of permissions to talk to the cluster. It doesn't support impersonation, because that's still sort of not quite finished, I guess, is the word. So, because Tiller can't sort of pass through the authorization of the requests that it's getting, it needs to have cluster-admin access.
F
So, for your Helm things to run, whoever is talking to Tiller has cluster-admin access, and — when I looked last time — there was no way for Tiller to say: hey, take the RBAC, like my set of credentials, and then test that against the cluster. There were tickets open with Helm and Tiller about that, but the last I saw, they were like: this is super hard; you should use a Tiller per namespace. I think that was what I remember being decided, yeah.
C
And I think there were some good comments in there about some additions that could be made to the doc, like making the use cases a little clearer and maybe also describing the alternatives. Because, the way I saw it, it was proposing an alternative to some kind of namespace hierarchy — doing something more flexible through labels — but there wasn't a lot of discussion of, like, an alternative such as hierarchical namespaces, or something like that.
D
...wasn't there, so I'm not sure how relevant it was versus the doc on secure container isolation, but I hear that there was a lot of interesting discussion in that direction, maybe in the SIG meeting this week. So it's on my to-do list, at least, to go back and watch the recording of that, yeah.
A
But my hope is that, you know, we can come up with one or multiple proposals for an actual solution that meets all of the requirements, and then run through all the considerations and requirements laid out in there and evaluate it in a more structured way than the tossing-out of arguments that we've been doing so far. And so then I'll follow up with an actual solution proposal in a couple of weeks.