From YouTube: 20190924 Kubernetes Multi-Tenancy Working Group
Description
Adrian Ludwin: “Hierarchical Namespace Controller” demo
Shikha (IBM): IBM Multitenancy proposal & alignment with WG
Sanjeev Rampal: Summary of Multitenancy WG architecture tracks
A
B
B
We also have a panel discussion where some of you are also participating, and we hope to engage with a wider audience there, try to get more contributors, try to get more requirements, and see what the industry is thinking. In addition, at the contributors summit, the day before the main conference, Tasha has set up some sessions.
B
So if any of you are going to be in San Diego, you're most welcome, and encouraged, to join the contributors summit, where you will be able to talk to others working in this area or interested in this area. Plus, I think as we get closer to the conference, we can plan additional informal get-togethers during the conference so that we can meet each other socially as well as exchange technical notes.
B
B
Then there are the working group sessions. What are those? The working group track basically means every working group gets to send working group reps, and at the moment Tasha and I are handling that. You're all welcome, not just welcome, it would be great if you could all join in, and we will essentially be providing a summary of the working group's activities.
B
C
Thanks. Okay, so I'm going to give my somewhat delayed demonstration of the hierarchical namespace controller. For those of you who are joining for the first time, the HNC, the hierarchical namespace controller, is a project that I've been working on as a complement to some of the earlier work that's been going on around the tenancy controller, the tenancy CRD, or the tenant operator; I don't think we've settled on a name yet. For others who are new, here is what this is.
C
This is really for multi-team tenancy, or soft multi-tenancy; I think it's the S1 scenario, if my memory is correct, where you basically have a bunch of people, typically from the same company, who are sharing a cluster. So think of a lot of different development teams working to share a large cluster, either for development purposes or to actually run their production services. This is not for making Kubernetes itself multi-tenant.
C
For example, this is not something that Google would use to expose a Kubernetes cluster to different customers, and this is not something that you would necessarily use as part of your SaaS solution. If you are running, let's say, a WordPress deployment for each of your customers, you could use it if you wanted to, but that's not really what it's designed for. So it's really the multi-team scenario.
C
C
So this is all now part of the repository, and I'm just going to give an example of how one might use the hierarchical namespace controller to set up a hierarchy of namespaces, and how you can use this to organize access to a cluster across your teams. Let's say that you've got an organization called acme-org. The first thing we're going to do is create a namespace that represents that entire organization, and within that you might have a team.
C
Let's just call it team-a, and that team might run a service, which we're just going to call service-1. Now you might say: why do we need to have a namespace for the service itself? Why not just have all of the services owned by the team live within that team's namespace? The answer is isolation: because namespaces are really the only boundary where you can control access, it is often a good idea to keep services isolated in a single namespace each, at least under some circumstances, though not always necessary.
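As a rough sketch of that setup, the namespaces from the demo could be created with ordinary kubectl; nothing here is HNC-specific yet, and the names just follow the example above:

    # Create one namespace for the whole organization, one for the team,
    # and one for the team's service. At this point they are three
    # completely independent, ordinary namespaces.
    kubectl create namespace acme-org
    kubectl create namespace team-a
    kubectl create namespace service-1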
C
Now, let's say that you have some org-wide SREs, site reliability engineers, or sorry, some team-wide SREs, and so you're going to create some RBAC roles and role bindings. You've got the role called team-a-sre, which can update deployments, and then you'll create a role binding. In this case I'm just going to grant it to the service account in team-a, because I'm not running on a real cluster right now, I'm running on kind (Kubernetes in Docker), and so there are no real users on it other than the service accounts.
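A minimal sketch of the role and role binding described here, assuming the team-a-sre name from the demo and binding to team-a's default service account as a stand-in for real users:

    # Role in team-a that can update deployments, plus a binding to the
    # namespace's default service account (kind clusters have no real users).
    kubectl apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: team-a-sre
      namespace: team-a
    rules:
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["update"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: team-a-sres
      namespace: team-a
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: team-a
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: team-a-sre
    EOF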
C
So you can have those kinds of SREs for the team, and you can also have them for the organization. Here you've got a role called org-sre, and once again the service account in acme-org has that privilege. Now, of course, nothing that I've done here will affect service-1. There is no relationship at all between the different namespaces, even though organizationally I've said that there is a relationship.
C
So this is where we start using the HNC. One of the things I'm going to do is call this new command, kubectl hnc set-parent. I'm going to say that the parent of team-a is acme-org, and what that's going to do is copy down the resources from acme-org to team-a. Then I'm going to do the same thing for team-a and service-1, and, ah, I typed it in wrong.
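The commands looked roughly like the following; the plugin subcommand syntax is reconstructed from the demo and may not match later releases of the HNC kubectl plugin:

    # Make acme-org the parent of team-a; HNC then copies the propagated
    # objects (roles, role bindings, secrets) from acme-org into team-a.
    kubectl hnc set-parent team-a acme-org
    # Same again one level down: service-1 becomes a child of team-a.
    kubectl hnc set-parent service-1 team-a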
C
So here I tried to set the parent of acme-org back to team-a, which would have created a cycle, so that was rejected. If we correct that, now I'm going to say the parent of service-1 is team-a, and that works much better. We can quickly see the tree structure that we've set up here, which is that acme-org is the parent of team-a, and team-a is the parent of service-1. Now, before, when we looked inside the service-1 namespace, there were no role bindings or anything.
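The tree view mentioned above can be checked with something along these lines; again, the subcommand name is illustrative rather than guaranteed:

    # Display the hierarchy rooted at acme-org. The expected shape is
    # shown in the comment below.
    kubectl hnc tree acme-org
    #   acme-org
    #   └── team-a
    #       └── service-1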
C
C
Now we can see that they are there, and that they're exactly what we expect, with the update permission on deployments, and you can also see that they've been labeled to show where these policies came from. So we can see that org-sre came from the namespace acme-org, whereas team-a-sre came from the namespace team-a, as you would expect.
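One way to inspect this on the cluster; --show-labels is standard kubectl, while the exact HNC label key mentioned in the comment is only indicative:

    # List the roles and role bindings that were propagated into service-1,
    # with their labels. HNC marks propagated objects with the namespace
    # they were inherited from (e.g. something like
    # hnc.x-k8s.io/inherited-from: acme-org).
    kubectl get roles --namespace service-1 --show-labels
    kubectl get rolebindings --namespace service-1 --show-labels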
C
So the controller will keep all of these things in sync with the current hierarchy. Let's say that you have another namespace for another team, called team-b, and I've just added a namespace for them, so you can see that team-b is now also a child of acme-org. We're going to create some roles and role bindings there as well, and just to make it easier to show, I'm going to call these wizards instead of SREs. Now I can change the parent of service-1 from team-a to team-b.
C
It reconciles the different policy objects with the current state of the hierarchy. And this doesn't just work for RBAC roles. For example, let's say that team-b has some credentials that they share among all of their services. So, whoops, sorry, just so you can see it: I've just created a secret called my-creds with a very simple password, and this is inside team-b. But, as you might imagine, if we look inside service-1, which is a child, we will see that my-creds has shown up there.
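A sketch of that step, with a placeholder password standing in for the "very simple password" from the demo:

    # Create a secret in team-b ...
    kubectl create secret generic my-creds \
      --namespace team-b --from-literal=password=not-a-real-password
    # ... and, once service-1 is a child of team-b, a propagated copy
    # should show up in the child namespace:
    kubectl get secrets --namespace service-1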
C
But of course, if we move that service back to team-a and have another look at the secrets, you'll see that my-creds has been deleted, because it no longer exists there, and the tree has been restored to what we started with. Now, of course, not all customers will want to share secrets in that way; some people will not want them to automatically propagate down, and so that is all going to be configurable at some point, and you'll be able to add your own types.
C
C
Not all of this has been checked in yet; the webhook that you saw and the kubectl plugin haven't actually gone into the main repo yet, but other than that, this is pretty much what's checked into GitHub. Everything is currently either being reviewed, or has a review requested, or is on my fork just waiting for earlier changes to go through. So that's it, and I'm happy to answer any questions, if people have any.
B
C
Right now it's hard-coded: right now it's roles, role bindings, and secrets, and that's it. As I said, it is all going to be configurable. The code is already ready to be generic, so there are basically just three lines in the setup code where we add those three kinds, and at some point we'll take out the hard coding and replace it with something that's configurable.
C
I suspect that, at a bare minimum, we will leave roles and role bindings hard-coded so that you cannot disable them, because one of the most important things that we expect this system to do is manage access to the objects themselves, and the admission webhooks are going to actually enforce some checks. So it may not make sense to have the HNC without RBAC, but once again, I could be wrong; we'll see what user demand is.
C
An excellent question. I actually have two people who are joining us on this call for the first time; Leticia and a colleague are both in Waterloo, and they've started working with me on this project. She has a pull request that is currently undergoing review, which will initially mark any objects that have been improperly modified, that is, propagated and then changed, and then we're also going to update the webhook to prevent it. So the answer is: by default...
C
No, by default you cannot modify a propagated object, because then an administrator of the child namespace could deny access to the administrator of a parent namespace, which is not something that's allowed, at least not generally. We will almost certainly need to add exceptions at some point, but the rule will basically be that only the person who has set up a policy can make exceptions to it. So if you are modifying a propagated object, you must also have permission to modify the original object it was propagated from, and that is how we're going to manage exceptions.
A
By the way, I think for the other resources, at least in my mind, for example, CRDs today are cluster-scoped, whereas network policies are actually namespace-scoped. From a use-case perspective I would like those, as well, to be structured in a consistent way, because today there's not a good explanation of why exactly things are defined that way. So it's a little bit aspirational, but just from the use cases, for sure, yeah.
B
A
True, so that's completely sort of aspirational. For example, if people wanted to have their own namespace-scoped custom resources, then yes. I won't say there's a clear demand, but I think there is a kind of expectation there as well, yeah.
C
C
Obviously the storage required for that will increase based on the depth of your tree. I'm thinking that any depth greater than even five is going to be rare; ten is probably where we might just limit it at some point, because you're not going to want to have a hierarchy a thousand namespaces deep. The system won't be designed for that.
D
It would be fair to ask, since I'm joining for the first time, and some of my team members are here as well: how does this tie in with the overall multi-tenancy goal? Is this architecture kind of aligning with multi-tenancy, to have, you know, multiple tenants on a single cluster and all the underlying artifacts aligned with this, or working towards that, so that we have a layer on top of Kubernetes that makes sure that the tenancy is there?
C
I think it depends. It is a building block which you could use either in a multi-tenant scenario or in a non-multi-tenant scenario, if you just want to create whatever structures you like for your own management purposes. As for what we are working towards: Fei, who is on the call, actually wrote a document on how we would integrate it with some of the other projects that are going on. So, have you looked at the document for the tenant CRD or the tenant operator?
D
C
C
There's no admin per se of the HNC; there are simply users who have certain permissions. What you can say is that the HNC is kind of at the Kubernetes primitive level, whereas the tenant operator is at, I suppose you could call it, the application level. And so obviously we are looking at building, or you could say re-platforming, the tenant operator to be on top of the HNC so that it makes use of the same things.
C
D
C
E
C
D
C
D
C
Yeah, and likewise for the HNC, I have heard reliable reports that exactly one other person other than me has ever installed it and gone through this demo, probably two now that a colleague is working on this as well. So this is very early code for the HNC, and so it has not been used in production and should not be. That was, yes, it's too deep. That's right!
E
C
In any case, he was able to make it work, and as I said, that kubectl plugin which I showed today is hot off the press; I haven't even reviewed it yet, so we're a long way away from making this work. However, there are three of us at Google who are now working on it, myself and two others, and so we are looking at productionization: things such as metrics, webhooks for guardrails, integrations with network policy, and stuff like that.
E
Quickly, one question then: maybe you mentioned it a minute ago and I missed it, around deletion of namespaces that have hierarchies? (Yes, I didn't mention it, go ahead.) You should mention it. So what are the rules around that? When I have a topology of a bunch of namespaces and I try to delete the parent, what happens? Does it just bark, or is it going to start deleting the tree?
C
The funny thing is that I've heard people say that they would like different things, and so one kind of weaselly answer I could give is that we might make it configurable. One answer is that if you want cascading deletion, we'll set up owner references from the children to the parents, so if you delete the parent, the whole subtree gets wiped out. Another is that we set up webhooks to stop you from deleting parents.
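Purely as an illustration of the cascading-deletion option being discussed (not how HNC behaves today): an owner reference from a child namespace to its parent would let the garbage collector remove the subtree when the parent is deleted.

    # Hypothetical: point team-a's ownerReference at acme-org, so that
    # deleting acme-org would garbage-collect team-a as well.
    PARENT_UID=$(kubectl get namespace acme-org -o jsonpath='{.metadata.uid}')
    kubectl patch namespace team-a --type=merge -p "{
      \"metadata\": {\"ownerReferences\": [{
        \"apiVersion\": \"v1\",
        \"kind\": \"Namespace\",
        \"name\": \"acme-org\",
        \"uid\": \"${PARENT_UID}\"
      }]}}"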
C
C
C
Right now, I think, today, all of the objects from that parent will be deleted. That is probably going to change so that they'll just stick around until you either actively unset the parent, and then the objects will be deleted, or you recreate the parent. One of the reasons it is important to allow a namespace to exist pointing to a parent that does not exist is if you want to kubectl apply a directory and just create a whole bunch of namespaces and the hierarchical links all at once.
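For example, a directory of manifests could declare both the namespaces and the hierarchy links between them and be applied in one shot; the group, version, and kind of the hierarchy object sketched in the comment are indicative only and may not match the released HNC API:

    # Apply every namespace and hierarchy manifest in the directory at once,
    # regardless of the order in which parents and children appear.
    kubectl apply -f hierarchy/
    # where hierarchy/team-a.yaml might look roughly like:
    #   apiVersion: hnc.x-k8s.io/v1alpha1
    #   kind: HierarchyConfiguration
    #   metadata:
    #     name: hierarchy
    #     namespace: team-a
    #   spec:
    #     parent: acme-org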
E
C
A
By the way, do we track these open questions or feature requests in the repository, under HNC? I think it's really important to track them and make sure that we make a conscious decision on the behavior, because it's pretty powerful, but at the same time I can imagine people getting a little bit confused by the behavior. So making sure that's reflected is probably a good thing to do in GitHub issues.
C
C
This is all covered right now in the design document, so please do go have a look at it if you like. I'm happy to answer questions, but if you want to learn more, or if you want to comment, it's all there, and some of these things will firm up as people hopefully start to use it and start having opinions about how it should work.
B
B
A
B
And actually, that reminds me: before Shikha starts her section, let me provide the update which I was planning to provide at the end, because that might lead into some of the questions that new attendees like Shikha, or anybody else that hasn't been following the group, might have. So let me take just two minutes and share an update, and hopefully that will help people who haven't been familiar with the multi-tenancy working group.
D
B
Okay. Most recently we've talked to the policy working group, and just this morning we had a presentation to the financial users working group, and both of these working groups are very actively interested in multi-tenancy. So I'm just going to share this, because this is what we tell others about what the multi-tenancy working group is doing, and it's useful for new members in this group as well, to kind of connect. A quick summary: these are the threads that are currently happening in the multi-tenancy working group. Okay, let me adjust the size here.
B
B
Can you see it? Okay. So this is sort of a summary of all that's happening in the working group. The first thing we've been doing is defining an overall multi-tenancy architecture framework, so we categorize different approaches. We defined, for convenience, certain models, A, B, C, and D, and the current focus is largely on the compute, networking, and security aspects of multi-tenancy, but eventually we would want to have a reference model for storage, monitoring, and all the other aspects as well.
B
We have been developing these models B and C in terms of the tenancy controller. Specifically, what we're calling model B is also sometimes called the namespace grouping model, which is essentially a tenant controller, or tenant operator, managing a set of namespaces and resources belonging to a tenant. We had a v1 of that, which we did as a POC and demoed at KubeCon, and now under development is v2, which is Kubebuilder-based and is being led by Fei.
B
Okay. Now, although I have listed hierarchical namespaces here, as we said earlier, this is actually independent of these multi-tenancy models. We could use hierarchical namespaces as part of these models, but it is also a standalone feature by itself, so I'm just putting it here with the note that it has value by itself as well, but it can also plug into these multi-tenancy models. Then model C is what has also been known as the virtual clusters model.
B
That came largely from the folks at Alibaba Cloud. There is also k3v from the Rancher folks, which we haven't yet discussed within this working group, and which is doing something similar, and again Fei's team is working on developing that further within the incubator folder of the multi-tenancy working group. Fei has also documented unifying models B and C.
B
For example, we just had the discussion about leveraging the tenancy CRD from model B along with the virtual clusters from model C. Then, other things that are happening: we continue to review and collect user requirements, and we have been reviewing all these directions with other working groups, as I said, the policy working group, other SIGs, and so on. So this is kind of a one-slide summary of the different tracks and how they relate to each other within this working group.
B
So we'll have another refresher at some point, but basically this is one way we had categorized it when we gave an update at KubeCon Barcelona, and I'll give you just the two-minute summary for now. Model A is multiple clusters managed by some kind of cluster management service; basically, this is multiple independent tenant clusters, operated a bit like VMs are. Model B is a single Kubernetes cluster where some number of resources, including namespaces, are grouped under one tenant; the tenant is a new CRD along with associated CRDs.
B
This is where we talk about the tenant operator. So that's what we call the namespace grouping model, also known as model B. Model C is right here, which is the virtual clusters proposal, which is essentially Kubernetes on Kubernetes: per-tenant Kubernetes clusters running on a super Kubernetes cluster.
B
So then we talk about these models. This is how we created four categories, so that whenever people have different notions of multi-tenancy, we can talk in the context of a particular model. Our working group has been largely devoted to models B and C in recent months, so we are largely focusing on models B and C, as well as building blocks like HNC, cluster profiles, security profiles, and so on.
D
B
Right, we were not planning to spend too much time on this; we'll definitely have an update on that in one of the future meetings. Okay, this was kind of the quick summary so that everybody gets caught up on how we are summarizing this working group's current tasks, because this is what we tell other working groups when we talk about what the multi-tenancy working group is doing.
E
B
E
C
This is just my personal opinion. I think that if we were to demonstrate B and C, then as all of those things get users, you will demonstrate a demand for D. I think that's how it works: the more people we have using models B and C, the more they'll start to chafe against the restrictions that they have and say, hey, why can't I have multiple versions of the same CRD in different namespaces? Why do I need to have a separate API server running for different customers?
C
B
E
Right, but for some of the use cases that we're interested in, and I work for a telco, such as container network functions, the extra layers of virtualization are going to make what we're trying to do with the network too complex, so we wouldn't be interested in that. So for us, we have to make B work, even if it means things like the CRD being cluster-level, with an admin who says tell me what you want, I'll create it for you, and then you can manage it.
C
The advantage of D over something like C: I think D is kind of taking the best parts of B and C. So with B, you are allowed to open up things from one tenant to another if you wish, including network connections, RBAC, anything; with C, that's impossible; and then with D, I think you could probably add that back in. So you kind of want the ability to have the guarantees of C with the flexibility of B, and that's what D gives you. Fei and Sanjeev, would you agree with that?
F
I just want to add one more point. At this moment, everything we are talking about, I would say, is not exactly the multi-tenancy people think they are talking about, because we are forgetting one big component, which is that you need support for strong network isolation.
B
So with that, I know we were not planning to spend too much time on this; that was just a brief update so that new folks can get caught up on what the different threads of activity in this working group are, and how we are messaging that outside of this working group. If we need to have more refreshers on this, we will have that as a future item. We can switch now. Shikha, if you want to share your team's thoughts or requests, we have about 15 minutes left and you're up.
D
D
C
D
So, coming fresh to this forum, what we did was to bring in our use cases. That's how we decided to move forward, and then to figure out how we align with how things are proceeding in the working group, and where we can actually come together or contribute. The stuff that we will talk through will be
D
maybe old discussions for you all, because you might have gone through these discussions previously, but it might just be good to sync up on what our thoughts are, not so much where we are, and how we are looking at multi-tenancy. Just as background, we deal with customers that are of two kinds.
D
One is small telco customers who have to set up multi-tenancy, and they would rather use a single cluster to set up the multi-tenancy and save on resources. The other is customers who want to set up multi-tenancy at the scale of a cloud provider, where they have multiple clients and each client is a big enough client to be owning, you know, a few thousand clusters. Those are the two extreme use cases that we struggle with.
D
I was not aware of this workgroup until recently; as soon as we found out, we decided to bring our use cases, and I'll be happy to put them wherever you suggest. We have certain use cases that we wanted to go through, covering what we are trying to do, and align as we move forward. Some of this will be like "oh yeah, that's what we're working on", but let me just go through it, so: what we want to do.
D
What we want to get to is multiple tenants sharing a single instance of a cluster, which I think was your model B. We also have a use case around multi-cluster management, if you have heard of that, with a multi-cluster management hub running on a single cluster and managing multiple clusters assigned to different tenants, which in my mind was your model C. So it's a combination of B and C that we are dealing with.
C
D
C
D
It's like eating your own dog food in that case, so we need to be really, really secure in this scenario. And the second one is that the system is secure such that no account can access another account's configuration, data, logs, you name it. This is the one where I think somebody just talked about what if multi-tenancy were part of Kubernetes itself, because there's so much more stuff on top of it to get the tenancy worked out. So that's the second piece of it. We have some use cases.
D
D
Now, there is a slight difference in that we use an identity and access manager that lets you create these teams, create the users, and assign resources to those users, and those resources could be Kubernetes resources or non-namespaced, non-Kubernetes resources as well. And we have some admission controllers to make sure that users have the right permissions before they create resources within their teams, that they have access to create those resources.
D
Then the next one is account rules and enforcement of controls. Once we have these tenants, or different accounts, as we were talking about on this call, we want to have some account rules in place. Now, these rules can be more like policies; somebody mentioned the policy working group as well, and these are more like policies, and then we want to have a layer to enforce these rules.
D
So a simple example that we are tackling right now, and trying to get into one of our products, is being able to enforce the number of clusters a tenant can create, or the number of namespaces that a tenant can create. So, along those lines, those are the use cases that we have, and I'm sure, Sanjeev and team, these are similar to what you all have been talking about, looking at the update.
D
Yeah, that's why we think that the work we are trying to tackle is very much aligned. I'm just trying to figure out where the alignment comes from, like where you have gotten to with the multi-tenancy operator or tenant operator. I can connect with you later, Sanjeev, about the maturity of that one, or how and when it will be available, or what's left in it, so we can contribute as well. And then HNC makes a lot of sense hooking in under the tenant operator itself.
D
There are a couple of things that we have also established, like "account" being the same as the "tenant" we talked about, and then of course we would want an account to be able to bring their own LDAP or have their own OIDC provider. An account admin for us is different from a cluster administrator: the cluster administrator has access to the full cluster, and an account admin will have access to only certain, for lack of a better word, certain namespaces, and once he has...
D
C
D
So what an account is: an account is a work area inside the cluster that the account's users can work in. You create an account, and there is an account admin that is pretty much like a cluster administrator within that account, and that account actually has a collection of namespaces in it.
F
D
Those can be assigned to that account and only to that account, so this can be considered as client A; you administer multiple clients in the multi-tenancy setup, client A, client B. Client A's admin is the account admin, and client A's users, all the teams, and whatever namespaces you isolate and assign as resources to the account become the resources that the account can see. Okay.
C
D
D
Your hierarchical namespaces were very interesting, because within an account we would want some namespaces to be assigned to some users, and others to other users as well. The service provider is more like IBM setting up the cloud provider, or some other client setting up the cloud provider, and being able to access all the accounts; but we don't want even IBM, or the cloud provider, to be able to look at all the details of the account either.
D
So there has to be some masking there as well, and that leads to this picture. There are more details in here if anybody wants to look at it, but this looks very similar to, I think, your model B or C, I forget now. The idea is that there is a Kubernetes cluster where the accounts are onboarded, so consider these as different accounts, and this is the cloud provider.
D
D
C
That was the difference between models C and D. Basically, model C gets around that by deploying one API server per account, in your case, and then they share nodes between them; at least in Fei's version, the Alibaba version, they share nodes but they don't share an API server, so an exploit in one can't affect the others. So I think what you're showing here is multi...
B
We can do that. We're going to be running out of time pretty soon here, so here's what we could do: if you could copy your link here, I'll also put this in the meeting minutes so that we can all look at your requirements. I agree that a lot of this model maps to one or more of the models we shared earlier.
D
Yeah, yeah, that makes sense. Okay, I'll figure out how to put this out in a GitHub repo and share it with you all; right now it's inside an IBM GitHub repo, so I'll put it out in a GitHub repo and share it. If you can share your deck, that would be great, for me to remember which model is which.
B
D
B
Thanks, Shikha. I think this is exciting, because I think a lot of both vendors as well as end users are realizing that having dedicated clusters is not efficient, and a lot more people are now talking about shared clusters with some level of tenant isolation. So I think we're going to see more and more of this, and I think this working group can play a very important role in facilitating whatever the needs are.
B
At the same time, we want to avoid getting spread too thin, because there are a lot of tracks here that we could be pursuing, so let's all collectively try to keep focused. Thanks to everyone, and especially thanks to people like Adrian and Fei, who are really driving kind of core parts of these proposals. We look forward to having more engagement with you all on Slack as well as in future meetings, and please review the documents from Adrian and Fei; we'll have the links for those as well.