From YouTube: 20200825 Kubernetes Working Group for Multi-Tenancy
Description
Presentation on the Multitenancy Operator by Capsule
A
What's that? I just hit record. Okay, thanks. Yeah, do you want to do the intros? Yeah, go for it? Oh, okay! Welcome, everybody, to the multi-tenancy working group meeting for this late summer day, if we're in the northern hemisphere.
A
So I asked Adriana to come and join us today, because he is the author, or one of the authors, of Capsule, which is a multi-tenancy operator. I've noticed that we're starting to see a couple of these pop up: there's Capsule, and there's kiosk, which I don't know very well; I only found out about that one about a month ago.
A
There is, of course, the tenant operator, aka the Tenant CRD, which is a project within this group. Those are both, let's say, user-level constructs, and then you have HNC, which I think sits one level below that on the stack. I think this is a good time to start understanding these projects.
A
What's the same between these systems, what's different, can they work together, should they work together? There's no need to consolidate things if they're truly solving different use cases, but yeah, I wanted to hear about what was going on with Capsule, and maybe we could have a discussion afterwards about how these pieces should fit together, even though we haven't heard from the kiosk team yet. So, sorry, Adriana, is that the right way to pronounce your name?
A
My name in English and French are very different. And you're joining us from?
C
Yeah, okay. First of all, thank you very much for inviting me to this group; it's my first time. Let me introduce Capsule and myself. When we developed Capsule we were not aware of this working group; we really discovered it later, after we released the first prototype.
C
So, what's the story of Capsule? Capsule was designed because of requirements coming from our customer. We are a small consultancy firm based in Italy, and we provide consultancy and support on Kubernetes and cloud-native technologies. We have a customer that is trying to build a public container-as-a-service platform using Kubernetes, and we soon discovered that straight Kubernetes was not enough to support multi-tenancy, and so that's why we developed Capsule.
C
Capsule is still in its early days; we have not yet released 0.0.1, we are still on 0.0.0. But we are receiving very good feedback from customers, of course, and also from the community. Then we discovered this working group, the multi-tenancy working group, and HNC.
C
Yeah, the hierarchical namespace controller. We are really interested in your work, and we are still discussing whether we can in some way adopt part or all of your code base.
C
We are still trying to understand the common parts and the common approach.
C
I don't know if this is a feasible thing, but we hope to, how to say, adopt your project as a foundation for our Capsule. Please consider that Capsule, the open source project, is just the building block for an enterprise platform that we are designing, and so a possible approach is to adopt your code base into this platform.
D
C
Really, we have a lot of work to do. Anyway, this meeting, for me, is just the first step in this direction, and so thank you very much for inviting me.
A
Yeah, thanks for joining us. I think the best place to go now is maybe for you to tell us a little bit about what Capsule does: what features does it add to core Kubernetes?
C
Yeah, so basically the idea behind Capsule is the KISS approach: we adopted a minimalistic approach to designing the operator. One of the requirements was to avoid giving custom resources to the end user.
C
We defined only one CRD, the Tenant CRD, but this resource is not for the end users; it is only for the cluster administrator.
C
So the whole experience for the end users is just straight Kubernetes: the end user doesn't need to use custom resources and doesn't need to use kubectl plugins.
C
He just needs to operate in a regular way. All the work is done by the operator, and the operator takes care of grouping namespaces into a tenant.
C
The job of the operator is just grouping namespaces into a tenant. There is also something in common with your project, which is the inheritance.
C
So, for example, the namespace inherits the resource quota, the limit ranges, the storage classes, and so on.
A
Let me just check that we're still on the line. Are you there? Yeah. Do you see my screen? Yeah, we see your screen, Fei. Wow, are you online? Yeah. Okay, great. Is anybody else who works on the tenant operator online as well, or can you represent any discussion about the tenant operator?
C
Yeah, okay, these slides are a bit old; we don't have updated slides, but just to give you the rough idea of the project: basically, we want to group namespaces.
C
So that's the approach: namespace aggregation. We want to aggregate namespaces into a tenant abstraction. We present it this way: namespaces are a flat structure. Each namespace is an isolated environment with constraints, and these constraints cannot be grouped, cannot be shared, between different namespaces.
A
Question: how did you implement that for the resource quotas? How do you implement the sharing?
C
The resource quota, you mean? Yeah. We avoided... you know, probably you know that in OpenShift, for example, you have the project resource quota, a CRD, a custom resource that is used to share resources between different namespaces, different projects; in OpenShift they are called projects.
C
So at this point the standard ResourceQuota admission controller blocks the creation of new pods or other new objects.
C
Let me show you this. Before going into that detail, I wanted to show you the main scenarios that we can cover with Capsule; these scenarios come from the requirements of our customer.
C
We designed Capsule having these requirements in mind. For example, we can enforce the quota at tenant level, and we can enforce the node selector and the affinity for the tenant.
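To make this concrete, here is a minimal sketch of a Tenant carrying the tenant-level quota, limit range, node selector, and allowed classes that every namespace in the tenant would inherit. The field names are approximate, based on the early Capsule v1alpha1 API being discussed, so treat this as illustrative rather than authoritative:

```yaml
apiVersion: capsule.clastix.io/v1alpha1   # assumed API group/version
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice          # the tenant owner who may create namespaces
    kind: User
  namespaceQuota: 3      # how many namespaces the owner may create
  nodeSelector:
    kubernetes.io/os: linux
  resourceQuotas:        # replicated into every namespace of the tenant
    - hard:
        limits.cpu: "8"
        limits.memory: 16Gi
        pods: "20"
  limitRanges:           # replicated into every namespace of the tenant
    - limits:
        - type: Container
          default:
            cpu: 500m
            memory: 512Mi
  ingressClasses:
    allowed:
      - oil-ingress      # only this ingress class may be used by the tenant
  storageClasses:
    allowed:
      - ceph-rbd         # only this storage class may be used by the tenant
```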
C
Yeah, it depends. If you have a single ingress controller, for example, you can shard, you can partition, depending on the ingress controller. But in our case the requirement was that we have multiple ingress controllers, one per tenant, so each tenant has its own ingress controller, and we use Capsule to assign the ingress class to the tenant.
C
Only the ingress controller that you assigned to the tenant is able to expose the services, the Ingresses, of that tenant.
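As an illustration of what that looks like from the tenant user's side (the names here are hypothetical), an Ingress is only admitted if it uses the class the cluster admin assigned to the tenant:

```yaml
apiVersion: networking.k8s.io/v1beta1    # the Ingress API current at the time of this talk
kind: Ingress
metadata:
  name: web
  namespace: oil-production              # a namespace belonging to the "oil" tenant
  annotations:
    kubernetes.io/ingress.class: oil-ingress   # must be one of the tenant's allowed classes
spec:
  rules:
    - host: web.oil.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80
```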
A
C
No, no, it depends. Inside the namespace you can work in a regular way. Let me show you; if you check the GitHub repository, you can see these use cases in more detail.
C
Yeah, you see here, this is the tenant object, so you can assign the ingress classes to the tenant, the storage classes, and so on.
C
Okay, so another use case: the storage class. You can assign a storage class to the tenant.
F
How about the PV, the PersistentVolume object? It's non-namespaced scope, yeah?
C
Yeah, exactly. All the persistent volume claims created in the namespaces that belong to the tenant will receive the annotation telling them the storage class to use. In addition to the controller, there is a set of admission webhooks, validating webhooks, that enforce the usage of the specific storage class. Let me show you: for example, you see, you assign this one to the oil tenant, okay?
G
C
Ingress selector, storage class... for example, you see, you assign these storage classes to the tenant, and then all the PVCs created in these namespaces will receive this storage class. The end user is not allowed to change it, because this is a requirement that comes from the cluster admin: the cluster admin sets the allowed storage classes in the tenant, so the regular user receives this storage class automatically and cannot change it.
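Sketching the same idea for storage, with hypothetical names again: a PVC created by a tenant user ends up carrying the tenant's class, and the validating webhook would reject any class outside the allowed list.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: oil-production      # a namespace belonging to the "oil" tenant
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd     # injected/validated against the tenant's allowed classes
  resources:
    requests:
      storage: 10Gi
```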
F
C
Yeah, the PV, the persistent volume, yeah. We assume that dynamic storage provisioning is in place; we don't consider manual PV assignment or manual PV creation.
C
So
if
you
have
different
storage
classes,
so
you
can-
and
you
force
the
end
user
to
use
yes
only
as
designed
storage
classes
automatically,
you
have
the
isolation.
For
example,
you
can
assign
a
storage
class
to
a
tenant,
and
this
is,
for
example,
a
sf.
You
are
using
a
sef
storage
class,
and
so
this
storage
class,
for
example,
can
point
to
a
storage
pool
in
yourself
environment,
and
so
you
have
a
storage
pool
assigned
to
the
to
the
single
tenant.
F
I see, I get how it's created, but how do you guarantee a tenant can only list or get his own PVs, not other tenants' PVs?
F
Maybe the question didn't come across. Let's say you have several PVs belonging to different storage classes, created by the provisioner, right? They are all at the cluster level. How do we guarantee one tenant can only access his own PVs instead of other people's?
C
Usually the end user is not allowed to access the PV; he only sees the PVC.
F
Yes, that's the usual access pattern, but at the API level, what's the mechanism that prevents one tenant from accessing other tenants' PVs?
A
C
Our end user is not able to get PVs, to get storage classes, to get nodes. Let me show you: when Alice logs in to the platform, Alice can check: can I get namespaces? No. Can I get nodes? No. Can I get persistent volumes? No. And so on.
F
C
Exactly. From Alice's point of view, her experience is like that of a regular Kubernetes user. This was intentional; this was one of the key design criteria for Capsule.
C
Yeah, that's it, and so you can see Capsule as a combination of a controller for the CRD plus a set of mutating and validating admission controllers.
C
You see, we have a set of dynamic admission controllers in addition to the basic ones, pod node selector, limit range, resource quota, both mutating and validating.
A
Well, why was it important to you to only have one CRD, especially if regular users can't see them? Just to make it easier to manage?
A
C
No, no, this was intentional; we designed it having this requirement in mind. Well, you mentioned kiosk, for example: in kiosk you have a lot of CRDs, and so the end user has to learn new CRDs, new objects.
D
C
We wanted to avoid this; it's by design. The experience of the end user is that of a regular user. The end user is not aware of Capsule, is not aware of multi-tenancy, is not aware of the CRD, and so on.
D
C
When the end user logs in... I mean, the end user is the so-called tenant owner, the person who can create namespaces inside the tenant. This user is not aware of the tenant CRD.
C
Yes, they're able to create namespaces, because when you create a tenant, let me show you, we also create these two objects: a cluster role, the namespace provisioner...
C
Yeah, a cluster role, okay, and the cluster role binding, okay, for the namespace provisioner. This cluster role is assigned to the group of the user. The only requirement (although we are changing this behavior) to have a working setup with Capsule is to have the user belonging to this group, okay?
C
So
that's
the
only
requirement,
so
we
assign
to
this
group
this
cluster
role,
this
cluster
rule.
So
all
users
all
end
users
are
able
to
create
namespaces,
but
but
the
user
can
delete
namespaces
because
we
assigned
the
permission
to
delete
namespaces,
but
not
at
the
cluster
level,
but
only
at
namespace
level.
So
when
the
user,
when
alice,
creates
a
new
namespace,
she
gets
this
role
binding
namespace
later.
But
this
this
is
assigned
only
to
the
namespace
to
only
to
the
namespace
that
she
created.
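A rough sketch of the RBAC pattern being described (the object names here are assumptions, not necessarily what Capsule generates): a cluster-wide right to create namespaces bound to the tenant owners' group, plus a per-namespace binding so Alice can delete only the namespaces she created.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-provisioner        # assumed name
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-provisioner
subjects:
  - kind: Group
    name: capsule-users              # the group tenant owners must belong to (assumed)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-provisioner
  apiGroup: rbac.authorization.k8s.io
---
# Created by the operator inside each namespace Alice provisions,
# so the delete permission is scoped to that namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-deleter
  namespace: oil-production
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-deleter            # a ClusterRole granting delete on namespaces (assumed)
  apiGroup: rbac.authorization.k8s.io
```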
C
No, I don't have it here, but trust me, she is only able to delete her own namespaces.
C
This was another requirement: use only the standard Kubernetes tools, nothing to invent, no other stuff, because this requirement came from the customer, came from the client.
D
C
As I told you before, Capsule will be the building block for a wider platform, and this platform will implement an API, an OpenShift-style API, that will give you the option to list only your own namespaces. But this will be just a customization; regular Kubernetes does not allow you this.
C
So that's it. Anyway, the idea is to try to find something in common with your project and to try to use part or all of your project to build something more complex. I don't know; we are just studying your code base to understand, because we are in the early days.
C
We want to, we'd like to...
B
C
This is just our opinionated way to implement multi-tenancy. I know that other people may not agree with this approach, but, as I told you before, this came from the requirements of our customer, and so we had to implement it this way.
A
Yeah, this is all really interesting, and thanks for joining to tell us about it. Let's see, I'm not sure what the right thing to do next is. How does this compare to the tenant operator?
G
Yeah, so we have implemented a similar version with the Tenant CRD. The idea is that you use the regular API server, but we use a CRD to encapsulate the namespace, to work around the problem you listed about how tenants can view only their own namespaces. We use a kind of CRD called tenant namespace, which is a kind of representative proxy for the actual namespace.
G
The tenant owner can manage the tenant namespace object, so we set the right access rules for the tenant namespace object. This has some learning curve: people should understand that one tenant namespace corresponds to one real namespace. That is the way that we do it, and I checked, and Capsule actually does more than that; we currently only do that.
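For comparison, a rough sketch of that proxy object from the working group's tenant operator (API group and field names approximate, so treat it as illustrative): the tenant owner creates a TenantNamespace in the tenant's administration namespace, and the controller creates the real namespace on their behalf.

```yaml
apiVersion: tenancy.x-k8s.io/v1alpha1   # assumed API group/version
kind: TenantNamespace
metadata:
  name: team-a-dev
  namespace: tenant-team-a              # the tenant's admin namespace (assumed)
spec:
  name: team-a-dev                      # the real namespace to create (assumed field)
```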
G
So it solves namespace management: create, delete, list, and that's all about it, so each tenant can only look at its own. This is the Tenant CRD that we have upstream, in the working group, right now. But we do have some other ways of grouping namespaces; you probably already know we have HNC, hierarchical namespaces, as another way of grouping namespaces. We do.
G
We have VirtualCluster, which is a pretty different way, another very different way, of grouping namespaces. That's all I can say, but I think you guys definitely do more than what we currently do here; we only looked at the namespace listing part so far.
A
How many people are using the tenant operator right now, other than virtual clusters? Do you know?
G
A
Yeah, so, right, I'm interacting now with some GCP customers, and I'm getting some interest; somebody posted a blog post today from Sainsbury's, the British grocery store. I basically hear about it because if people want to use it, they end up getting in touch with me.
A
So I'd say that there are maybe two or three large customers who are looking at it or using it seriously. We started accepting pull requests now from Mercari in Japan, and internally at Google we're talking to some other customers that I can't talk about, obviously, but yeah.
A
So I do see some people starting to use it. Just to be clear, with HNC we've been working on it for close to a year now, but we only recently really tried to push it out and ask people to use it, because it wasn't ready for a long time; we didn't want people to be using it until we reached a certain level of maturity.
A
I think we are now at that level of maturity, and so that's why we talked about it at KubeCon.
A
We've released it at Google as a supported product, or sorry, it's beta right now, so you don't get full support yet, and we wrote the blog on kubernetes.io, so I am seeing an uptick in interest now, which is what we were aiming for, now that we're approaching this kind of stability level.
C
Yeah, sorry. We see a lot of interest in multi-tenancy these days, and we also discovered that there is another project, another open source project, called the K8Spin operator, that is trying to implement multi-tenancy.
C
In the same way; these guys are from Spain, and they are also trying to implement multi-tenancy.
A
Yeah, Angel or Angela, I'm not sure; he got in touch with me. He's been trying out HNC as well.
C
Yeah, yeah, there's a lot of interest in the community about multi-tenancy. I followed your talk, Adrian, at KubeCon, and...
D
C
It was very interesting, and I liked the slide where you showed the cluster sprawl. Probably, yeah, this one; I like it. Exactly this, you see: we see a lot of solutions for multi-cluster management from every vendor. Any vendor wants to sell you a cluster management solution with thousands of clusters, but we found this a bit of a pain.
C
It's a pain when you have to deal with hundreds of clusters, so we are trying to use a different approach, instead of creating multiple clusters, a cluster for each department, a cluster for each user.
C
Put things back to the simplicity of a single cluster with multiple tenants: that was our scope, our goal.
A
D
One quick question before you go there: how are you dealing with the CRD problem? Are your tenants not having CRD collisions or things like that across multiple tenants? Let me try to ask it in a simple way: you have multiple tenants that happen to be using similar components they want to deploy; they're bringing them, but maybe they're different versions or different expectations, and they don't have the rights to deliver the CRDs to the cluster. So now you have...
D
The end user, right: you have tenant A and tenant B, and they each, for whatever reason, are using the same operator, but maybe they're using different versions. So they each want to bring a CRD to their namespaces, which you cannot create.
C
D
A
Yeah, I once had a chat about it with Daniel Smith, who's on the API machinery SIG, and yeah, there is no good way to do that right now. There are a bunch of hacks that you could build, but they all break down in various entertaining ways.
A
I've looked at a couple of ways of possibly doing it, but the one thing that I could never figure out is how the discovery docs would work, if anybody knows what discovery docs are. I actually wanted to go ask a couple of people what would happen if we just didn't do discovery, what would break. But yeah, HNC took the approach that we're just adding features to namespaces.
A
We are not trying to create virtual clusters. Virtual clusters will solve this problem, because with virtual clusters each tenant gets their own API server, and with your own API server you obviously get your own CRDs. So right now, I would say, if you want a multi-tenant cluster but you also want per-tenant CRDs, virtual clusters are the way to go; you don't really have any other options. So yeah, Fei can help you out there.
A
I'd love to see namespaced CRDs become a thing, to hack them in without major modifications to core Kubernetes. As I said, I'd have to figure out the discovery problem with that. If they were to become popular, then we could probably get them into core Kubernetes.
D
Oh yeah, I was familiar with that; I was more interested in whether Capsule had some creative way of dealing with it. It sounds like you're telling me there are no creative ways at the moment, right? And the problem with something like virtual clusters is, if you have networking, say an example that is a little more sophisticated than a typical flat-network approach, it kind of breaks down.
A
D
A
And then I do want to go back and talk a little bit more about Capsule, but can I ask: what is the use case where you have basically different tenants wanting to install possibly different versions of the same operator, which might have incompatible CRDs?
D
I can tell you what we have, right: our tenants are vendors that come with their own products, and they all might bring their own telemetry, or in some cases they're trying to run Istio, which, we know and understand those problems, there's no real multi-tenancy in this example, but things like that, right? Or they might even bring their own cert-managers built into the thing; all kinds of basic stuff like that, really. And so, sure, you can try to create a coalition where you get all these vendors to align on the same versions, but that's complex in this space that we're dealing with, which is the telecom world. So what we landed on now is: I looked at VC, and based on our requirements at the moment...
D
I built a solution that builds bare metal clusters. I took Fei's tenant operator and I expanded it to do something along the lines of what Capsule is doing, but we always run into this CRD problem. So at the moment we rolled back, and we're dealing with basically building clusters on VMs, on behalf of the vendors, to avoid this problem.
A
Yeah, I think that was it. I'm sure Fei would want to know more about why VC, virtual clusters, isn't working for you; maybe there's some reduced version of virtual clusters that could work better.
D
H
There's also, just to call out, a handful of other things. I know we're experimenting with VirtualCluster and it seems pretty great for what we're trying to do, but there's also a handful of other projects to check out in this space. Like, I know Darren Shepherd from Rancher has an old project called k3v, which is meant to be like k3s virtual clusters, but that changes the way that everything is structured, where it ends up mutating all requests and putting them into a single namespace.
H
k3v? I'll take a look. Yeah, check it out; I think his GitHub handle is ibuildthecloud, and it's just ibuildthecloud/k3v; that should be the project.
H
It's not really maintained at this point, but I know, when I've talked with him in the past, there's always the opportunity to respark projects.
G
Yeah, there is another project, I think called Loft, which has evolved a version of the k3v idea, but that is a PaaS platform; it provides more services, you know, specifically to handle the virtual cluster.
G
I would say that is a commercial version of the virtual cluster, but again, their implementation is still kind of different from what exactly we do in the details. The high-level picture is the same thing: each tenant has its own dedicated API server and they share one super master for the pod provisioning. But we differ in the details; I don't want to... I'd want to think more.
C
So, as I told you before, we want to approach your work, and I think that the first approach could be this; we want to try.
E
No, absolutely, and this is why we sort of initiated this track, because I think what we foresaw, even before some of these newer projects came about, was this growing interest in different ways of doing multi-tenancy, but there was no quick way of determining whether a namespace was configured for multi-tenancy, right? So that's what we're trying to accomplish here, and I believe maybe we interacted on one of the pull requests you created.
E
If I recall. So we have the kubectl-mtb plugin now, which you can run on a namespace; it will go through a set of benchmarks and give you a report. We demonstrated that, I think, a few weeks ago in one of our meetings. The idea is we need to go back and define now what the right, you know, profile level one and two benchmarks would be for different levels of multi-tenancy. One of the things that was important, though, and I think you've solved this a bit...
E
You know, with your project, is allowing self-service and sticking to just Kubernetes concepts versus introducing new abstractions, right? So those are some of the things we're trying to figure out. And then also, I think, for this working group, what's interesting is, you know, do we sort of accept that, okay, there are going to be several ways of configuring multi-tenancy and we want to encourage different ways of doing this, but maybe there are a few things we want to standardize on, like with the benchmarks and perhaps with some annotations and things we can define.
E
For example, one problem we're running into with the benchmarks is: how do we know, for example, if you create, you know, an ingress controller for a particular tenant, or let's say you create even a quota or something within the namespace...
E
Maybe quota is not a good example, because you can't edit that with the namespace admin role, but let's say you create, you know, a network policy, right? How do you prevent the tenant from deleting it? I think what you've done is you've written a custom mutating or validating webhook for that, but if there were a general way to know that this resource needs to be protected, those are the sorts of things we might need to standardize on, so that this particular resource is not something...
E
There's actually, and this is some work that we're doing in the policy working group, we're creating a common report for different policy engines, so the benchmark tool will also produce that common report, which can then be consumed programmatically or just as a CR, however you wish. Okay, so yeah, the idea would be: you can periodically scan your cluster; if a new namespace is created, you can audit it for multi-tenancy and at least show compliance, right? So, okay.
C
E
All right, so not so much from the benchmark point of view, but if you're looking for configuration audits, Polaris is one, there's Kyverno, there's OPA Gatekeeper.
E
So all of those policy engines do configuration validation; for example, a lot of what you're doing with your custom webhooks, Kyverno can also implement as validating and mutating checks, right? So there are two different ways of doing that as well. Okay, okay, great. So, but yeah, on the benchmarks...
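For example, a policy engine could express the same storage-class check discussed earlier without a hand-written webhook. A minimal sketch with Kyverno, where the policy name and the per-tenant namespace naming convention are hypothetical:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-tenant-storage-class
spec:
  validationFailureAction: enforce      # reject rather than just audit
  rules:
    - name: allowed-storage-class-only
      match:
        resources:
          kinds:
            - PersistentVolumeClaim
          namespaces:
            - "oil-*"                   # the tenant's namespaces (hypothetical convention)
      validate:
        message: "Only the storage class assigned to this tenant may be used."
        pattern:
          spec:
            storageClassName: "ceph-rbd"
```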
E
Definitely, let's collaborate, and maybe we can discuss more on the Slack channel. You know, it would be interesting to run it, so if you have a cluster set up, or, you know, we can try this out on one of our test clusters and also run the benchmarks. We're going through the same process with HNC as well, and we'll do the same with virtual clusters, to audit for, you know, compliance. Okay.
C
And what about the tenant operator? Is that something that is...
E
A
Yeah, that's the one that's most similar to what you've built, whereas HNC is quite low level, and you could probably build much, though not all, of Capsule on it, because for namespace self-service we do have a special object. It is the only one that users have to deal with, but it is one, so yeah. Other than that...
A
I don't know that they're composable, exactly, because, correct me if I'm wrong, but in the tenant operator you've got sort of the tenant namespace, and then you put a bunch of stuff in there, and then you have the sort of lower-level namespaces as well.
A
You don't have that; you don't have any one object that represents the... sorry, you have exactly one object that represents the tenant, but there's no one namespace that represents the tenant. It is an aggregation of namespaces, whereas that's not true in the tenant operator. I'm not sure which we need, or whether we need both. It would be nice to sort of sit down; I'd love to do it in person, but that's not possible. It would be nice to have a look sometime and see.
E
Okay, well, the other interesting question there is also: what sort of flexibility do cluster admins require, right? So I think, with some of these multi-tenancy or namespace-based constructs, like, for example, with ingresses, right, allowing shared ingresses versus per-namespace ones, who should make that decision? Is it a cluster admin, or is it the project that makes that decision, right?
C
Do you mean in Capsule? You mean Capsule?
E
A
C
I can speak from my experience, from my field experience: this is something that the cluster admins tend to keep for themselves. They don't leave this freedom to the end users, because, you know, the ingress is more related to the networking part, yeah.
C
Everybody wants to remove silos in IT, but silos are still there, and so sometimes the network guys don't allow end users to decide how to expose the services; they want to keep this under their control. So it's more something that is a cluster admin duty.
A
Yeah, with that, I'm afraid we're out of time, but this has been a good discussion, and I think that maybe Jim, Fei, Adriana, and I should get together, possibly on Slack, and talk about our next steps. Yeah, that sounds good.