From YouTube: 20220726 Kubernetes WG Multi-tenancy
Description
Patterns for Multi-Tenancy and MultiCluster-Management - Christian Stark, Red Hat
https://github.com/ch-stark/gitops-rbac-example/blob/main/blog/blog.md#organizational-needs
https://github.com/ch-stark/gitops-rbac-example
https://github.com/ch-stark/gatekeeper-kyverno-policyset
Kubernetes Multi-tenancy recipes - Devdatta Kulkarni (CloudARK), Sizhan Xu (UT Austin)
https://docs.google.com/document/d/1NiabaNjYgD7hqmMM-NYq3A6ODR4cP_hKaCSn0cduV7k/edit?usp=sharing
A
Here we go, this meeting is being recorded. Hi everybody, and welcome to our regularly scheduled working group for multi-tenancy. Today, Christian Stark from Red Hat is going to be going over patterns for multi-tenancy and multi-cluster management, and then Devdatta is going to be talking about Kubernetes multi-tenancy recipes. Christian, over to you.
B
Thank you, Tasha. Let me share my screen; please let me know in case you can't see it. Okay, let me introduce myself quickly: my name is Christian, I'm based in Germany, in the Munich area. I'm a technical product manager at Red Hat, responsible for Red Hat Advanced Cluster Management. Open Cluster Management is basically the upstream version of Red Hat Advanced Cluster Management, and I'm responsible especially for the governance part and for the application lifecycle management part.

B
Maybe I'll start a little bit with the motivation, why I would like to talk with you about this topic. We certainly talk a lot with customers, and we see that multi-tenancy is really a burning topic: how to achieve it. I was looking into the great new multi-tenancy documentation that you all have provided, and what I see is missing, from my point of view, is really the multi-cluster management.
B
All these concepts that I see here are basically for a single Kubernetes cluster, which is certainly fine, maybe also for a virtual cluster (I was watching the recent presentations), but my topic today is multi-cluster management and the complexity regarding multi-tenancy that you get when you do multi-cluster management. So maybe let me rephrase the goal: the goal is to discuss the different options we have, and certainly our customers are expecting that we provide as much guidance as possible and that we achieve simplification. At the end I would also really like to show you how we are thinking we can achieve this, and what technologies we are using in order to achieve it. So, very quickly: this is our upstream repository for Open Cluster Management, open-cluster-management.io. I believe you know it.
B
It's also an open-source CNCF Sandbox project, and the really important thing here is that we have a hub-and-spoke architecture, which means we have one Kubernetes cluster which is managing a fleet of different other clusters. They can certainly be from different Kubernetes providers; the clusters can be in the cloud, they can be on premise, and here I believe you can already feel that from this comes a lot of complexity and a lot of different configuration options.
B
So why are customers having more and more Kubernetes clusters? In the past we at Red Hat have often promoted big, large OpenShift clusters, because we always claimed that we have lots of multi-tenancy concepts, just to name the Namespace Configuration Operator, which some of you know, where you can very flexibly set permissions on different namespaces and achieve some kind of multi-tenancy.
B
But you see here, many customers have clusters for dev, stage and prod. There are many customers who have different vendors, different Kubernetes providers, from the cloud providers but not only the cloud providers; there are certainly customers who want to have some clusters on premise and some in the cloud, so they want to have this hybrid cloud. And, as you know, we certainly also have customers who have very different security restrictions, so some clusters need to be very restricted.
B
Some can maybe even be in the public, and, not to forget, we have many edge scenarios. Here we are speaking about use cases where customers are asking us for 50,000 smaller clusters. So this is a completely different use case, very different from maybe starting with one single big cluster. And one of the reasons why we think it's much easier now to have more clusters than it was in the past is that it's very easy to install them when we have an API.
B
So these are just some of the reasons why customers are tending to go more and more toward smaller clusters, but to have more and more different clusters.

C
Christian, can I ask a couple of quick questions on that? Sorry, what is IPI and UPI?
B
IPI stands for installer-provisioned infrastructure. This means that if you, for example, create an OpenShift cluster in Amazon, you are not just installing OpenShift, you are creating all the AWS resources which are necessary to set it up: the gateways, the DNS entries, and also the storage, so everything that is necessary. With UPI (user-provisioned infrastructure) you have, for example, an already existing account, you have all your nodes already, and you just install Kubernetes on top of it.

B
And, for example, Open Cluster Management has IPI support by default, so it does not matter which provider it is, all major providers are supported, and you just create your clusters, install all the infrastructure there, and certainly when you destroy a cluster it will also be cleaned up. I can certainly give you more details on this: we have a framework which is called Hive.
B
Maybe you have heard about it; we are using and leveraging Hive. Hive is used to create cluster deployment objects, and the clean-up is certainly also done via Hive. So, just to give you some overview of how this is done: this is a crucial part of Open Cluster Management.
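For reference, Hive's cluster deployment object is roughly what drives this provisioning. Below is a minimal sketch; all names, the domain, and the secret references are illustrative placeholders, and the exact fields depend on the Hive version and platform:

```yaml
# Hypothetical Hive ClusterDeployment sketch: provisions an OpenShift
# cluster on AWS via IPI. All names, domains and secrets are placeholders.
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: dev-cluster-1
  namespace: dev-cluster-1
spec:
  baseDomain: example.com          # DNS base domain for the new cluster
  clusterName: dev-cluster-1
  platform:
    aws:
      region: eu-central-1
      credentialsSecretRef:
        name: aws-creds            # cloud credentials used by the installer
  provisioning:
    imageSetRef:
      name: openshift-release      # ClusterImageSet naming the release image
    installConfigSecretRef:
      name: dev-cluster-1-install-config
  pullSecretRef:
    name: pull-secret
```

Deleting the ClusterDeployment is what triggers Hive to deprovision the cloud resources again, which matches the clean-up behavior described above.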
B
It is one of the five crucial parts, this cluster lifecycle management. And here I come to a list of questions. When I talk with my customers about this, I have a feeling that very few have really thought about all these different topics, and I really would like to highlight that the motivation for talking about this comes from the customers. One of my customers here in Germany, who works for the public sector, said that he spent six months thinking about all these topics, and he's very experienced. He says you need time to talk to the people in your organization.

B
You really do need time to think about the different technologies which you might use. Basically, at a high level, there is always a trade-off around the flexibility you need: you need to have control over all the different clusters, but you also need to give freedom to the people working in your organization, otherwise you, as an administrator, get completely overloaded. So this is just a short word beforehand.
B
So let's go over the questions. Certainly we have had many discussions: who is really working with the clusters? Are these teams within your organization, or are they really different customers? Say you have a fleet of clusters, and some clusters belong to customer one, some clusters belong to customer two: how can you organize this in a very efficient way? I already highlighted that customers often have dev, stage and prod clusters.
B
Is it maybe the same on the namespace level, that you are separating between dev, prod and stage? A very important aspect, and we will come to this later, is: what are the different roles that a team has? Is there, for example, one administrator of the whole fleet of clusters, or do you basically have teams which are managing their clusters very independently? I will come to this question later: who can create a cluster in Advanced Cluster Management, or in Open Cluster Management?
B
We will come to this a little bit later, and we will also discuss who can, for example, create resources, who can create namespaces on the hub cluster. Is it only the cluster fleet administrator, or can a team administrator also create namespaces on the hub cluster? Who can create the namespaces on the managed clusters? Then we sometimes get the question: who can access the managed cluster? For example, there is the pattern where you say: I disallow any kind of access on the managed cluster.
B
I just manage the whole fleet from the hub cluster. Another important question is, for example, whether there should maybe be shared namespaces. For example, you have several clusters, but you want a team to be able to have a single namespace on every cluster; or maybe all of the teams should be able to watch or write to namespaces with a certain name pattern, for example, on all clusters.
B
So these are all questions where I have the feeling that it's worth discussing a lot with the team, with the different roles in your organization, before you think about how to implement this. Another thing: you know that we are promoting GitOps as the best way for bringing applications into the clusters, into the fleet of clusters. So certainly you also need to think about who is responsible for the permissions in Git.
B
For example, if you have a SecOps role in your organization who is responsible for modeling policies, is that person then allowed to deploy these policies? Or are they only allowed to put them into a Git repository, and you have another role who is responsible for deploying these policies into the clusters? And again, can they deploy into the whole fleet, or can they just deploy into a part of it?
B
So I hope that I could show you here some of the questions that we are getting and that we are discussing with customers. Let's go quickly to the patterns, and here I really wanted to check with you whether these are really patterns from your point of view as well. The first pattern is certainly that we take what you have documented in the Kubernetes multi-tenancy section, and we say that every cluster in the fleet can be treated as a normal cluster, where you have multi-tenancy within this cluster itself.
B
So
it
would
be
a
quite
easy
to
to
configure
with
so
you
just
set
up
some
rules
on
the
hub
cluster
and
every
every
rule
applies
the
same
for
every
managed
cluster.
What
you
are
managing
in
the
first
approach,
so
the
second
pattern
is
certainly
that
you
have
basically
really
separated
teams,
and
you
say
basically
that
every
team
needs
to
be
completely
independent.
B
So
every
team
gets
maybe
a
list
of
clusters
and
they
can
do
whatever
they
like
on
both
clusters,
and
you
are
just
ensuring
that
the
teams
cannot
by
no
means
see
the
clusters
of
the
other
teams,
and
certainly,
as
I
have
mentioned,
there
might
be
also
the
mix
between
one
and
two.
So
basically,
you
might
want
to
have
whether
team
can
have
can
see
or
write
to
any
namespaces
or
on
a
set
of
clusters,
and
the
other
team
can
see
these
name
spaces
as
well.
B
Here I wanted to know what your experience is. Have you ever gotten questions from your customers regarding something like a multi-cluster role, where you have not only a cluster role, but a role spanning different clusters and different namespaces in those clusters? This is basically something we are thinking about: whether it makes sense to introduce such a new concept. So let's go quickly to the personas. It is certainly also very, very important to think about who is responsible for managing Open Cluster Management.
B
He can set up application management, everything that you like, and certainly this cluster fleet admin has more or less cluster administration rights on every cluster. In reality we have a role which is a little bit less than cluster-admin rights, but certainly there might be use cases where you need to extend it; we can come to this in more detail. Then we have, for example, a SecOps administrator.
B
I already mentioned that we might have the SecOps administrator, who does not have a lot of rights on the cluster, but he will likely want to see the policies, for example, so the governance part. He wants to see if the clusters are compliant, he wants to see if there are any violations regarding the setup that he has done, and he wants to see, for example, if there is any configuration drift.
B
So,
let's
assume
that
that
you
have
defined
that
on
every
cluster
there
should
not
be
any
cube
admin
user,
then
a
sec
ops
admin
certainly
wants
to
be
notified.
If
on
one
of
the
clusters
which
you
are
managing,
you
are
getting
any
kind
of
creation
of
this
cube
admin.
So
this
is
a
typical
use
case,
so
you
need
to
ensure
what
is
the
right
role?
Should
he
be
able
to
create
something?
Should
he
set
up
governance,
or
should
he
just
be
able
to
read
all
these
violations
on
the
cluster?
B
I told you that if you don't allow team administrators to create namespaces, then there might be an administration overhead for the cluster fleet administrator, because he needs to approve all the time; he needs to approve limit ranges all the time, for example. Here we think that we can automate a lot, and we can certainly monitor that there is no misuse of those permissions. I will talk about this later, and please interrupt me.
B
If
I
speak
too
long,
we
will
show
how
we
can,
how
we
can
do
it
from
a
technical
point
of
view,
and
certainly
we
have
also
a
developer,
and
here
again
we
would
like
to
to
see.
Can
he
deploy
in
which
namespaces
can
he
deploy?
So
so
often
there
is
a
question:
should
he
have
only
one
namespace
or
does
he
need
to
have
deployments
over
separate
namespaces
and
certainly
developers
want
to
see
about
their
application
performance?
B
I won't speak about that today. So basically, we have our concept for grouping clusters into a cluster set. You have a group of clusters; every cluster can only be part of one cluster set, and cluster sets also have cluster roles, so you can either be a cluster set administrator or a cluster set viewer. This means that an operator might be a cluster set administrator of one cluster set, where he can also create clusters (this is the important part), but on the other cluster set he does not even have view permissions.
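As a sketch, a cluster set is itself a small object on the hub. The name below is illustrative, and the API version may differ between OCM releases:

```yaml
# Hypothetical ManagedClusterSet: groups managed clusters on the hub.
# Clusters join a set via the cluster.open-cluster-management.io/clusterset
# label on their ManagedCluster resource.
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: team1-clusters
```

The admin/viewer roles mentioned above are then granted per cluster set via ordinary RBAC bindings.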
B
You
have
to
manage
cluster
set
binding,
and
this
is
a
very
important
object.
So,
even
when
you
are
starting
with
setting
up
your
environment,
you
always
need
to
manage
cluster
set
binding,
because
this
is
basically
the
connection
between
a
namespace
and
a
cluster
set.
So
if
you
don't,
if
I
manage
classes
at
binding,
you
cannot
really
expose
a
namespace
in
to
a
cluster,
and
we
talk
here
about
a
namespace
on
the
hub
cluster
in
placement.
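A sketch of such a binding, with illustrative names; note that in the OCM API the binding's name must match the cluster set it references:

```yaml
# Hypothetical ManagedClusterSetBinding: exposes the cluster set
# "team1-clusters" inside the hub namespace "team1-apps".
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: team1-clusters        # must equal spec.clusterSet
  namespace: team1-apps       # hub namespace that gains access to the set
spec:
  clusterSet: team1-clusters
```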
B
This
is
another
very
important
concept,
so
this
is
basically
looking
for
a
managed
cluster
set
binding
and
if
it
finds
it
in
a
namespace,
it
is
evaluating
the
rule
of
the
of
replacement.
So,
for
example,
you
can
say
I
want
to
place
an
application
to
all
my
clusters,
which
have
the
label
environment
def.
This
means
that
you
need
to
have
a
new.
You
need
to
find
a
cluster
which
has
this
label
and
it
needs
to
be
also
part
of
this
cluster
set,
and
what
you
can
do
here
is.
B
You
can
certainly
also
create
several
managed
cluster
set
bindings
in
one
name
space
so
that
you
are
binding.
Several
cluster
sets
or
all
clusters
for
this
namespace.
So
these
are
important
concept,
and
I
have
you
mentioned
that
we
are
older.
Placement
rule
objects,
so
we
are
currently
used
with
open
cluster
management,
but
they
are
not
multi-tenancy
ready
because
you
have
here
a
placement
rule
where
you
just
say.
B
I
have
a
selector
like
environment
dev,
so
you
have
a
label
and
you
are
checking
for
all
clusters
in
the
whole
fleet,
which
are
matching
the
label,
but
you
cannot
restrict
it
to
a
namespace
and
to
a
cluster
set.
So
this
is
where
the
difference
of
placement.
This
is
why
you
should
be
aware
of
placement
manage
cluster
set
bindings
in
cluster
sets,
and
if
we
go
ahead,
I
wanted
to
quickly
show
you
and
I'm
I'm
talking
here
very
quickly
about
applications,
as
you
know
that
we
are
integrating
with
argo
cd.
B
This
is
an
important
step
for
us,
and
you
know
with
argo.
Cd
has
now
application
sets
and
we
have
basically
combined
the
application
sets
with
replacement,
as
you
see
here.
So
basically,
an
application
set
can
be
placed
to
a
set
of
clusters
which
are
defined
by
the
placement.
What
we
have
discussed
just
before
right,
so
you
see
here
that
we
have
the
cluster
decision,
resource
generator,
which
was
just
generated
for
the
integration
of
open
cluster
management
and
application
sets.
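A sketch of that combination, assuming a Placement named dev-clusters as in the earlier example; the repository URL, namespaces and names are placeholders:

```yaml
# Hypothetical ApplicationSet using the cluster decision resource generator:
# it reads the PlacementDecisions produced by an OCM Placement and creates
# one Argo CD Application per selected cluster.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: team1-app
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource:
        configMapRef: acm-placement   # ConfigMap pointing Argo CD at PlacementDecisions
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: dev-clusters
        requeueAfterSeconds: 180
  template:
    metadata:
      name: 'team1-app-{{name}}'      # {{name}} / {{server}} come from the decision
    spec:
      project: default
      source:
        repoURL: https://github.com/example/team1-app
        path: manifests
        targetRevision: main
      destination:
        server: '{{server}}'
        namespace: team1-app
```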
B
So
if
you
want
to
read
more
so,
we
have
here
some
documentation,
but
you
can
also
reach
out
to
me.
If
you
would
like
to
to
get
more
about
this
application
set
integration
and
acm,
so
you
might
have
known,
but
where
applications
have
different
generators,
we
are
designed
for
multi-clusters
already,
but
not
on
with
level,
as
we
have
here
with
open
cluster
management.
B
So
if
we
have
introduced
the
concepts
in
the
previous
slides,
then
certainly
we
need
to
rephrase
the
questions
and
what
we
had
so,
first
of
all,
we
should
check
who
is
getting
really
our
own
cluster
set.
Should
this
be
for
every
team?
Should
this
be
for
every
customer,
or
should
a
customer
just
get
one
cluster,
no,
where
he
can
work?
So
again,
we
have
for
a
cluster
in
cluster,
set
different
cluster
roles,
which
you
can
assign,
or
not
both
on
the
admin
and
review
level
and
as
I
as
we
discussed.
B
So
how
can
we
ensure
that
the
team,
one
cluster
said
one
cannot
get
any
information
of
the
cluster
set
too
and
the
question
now
with
git
ops?
I
don't
want
to
talk
a
lot
about
this,
so
basically
the
question
is
really:
how
should
we
set
up
this
in
openshift?
Should
we
set
it
up?
Basically,
on
the
cluster
level,
or
should
we
set
it
up
on
the
team
level?
I'm
not
speaking
a
lot
about
this,
but
what
I
really
want
to
highlight
is
the
acm
policy,
so
the
open
cluster
management
policies.
B
Or
do
you
want
to
set
it
up
on
them
on
the
whole
fleet
level
and
as
I,
as
I
told
you
before,
the
replacement
rules,
which
is
an
old
concept
of
which
is
still
working
fine,
so
it's
just
not
multi-tenant
ready,
so
we
could
discuss
if
a
team
administrator
should
not
be
allowed
to
create
placement
rules
either
in
the
ui,
the
cli
and
inviscid.
B
How
can
we
ensure
that
this
is
still
not
working,
but
he
can
know
by
no
mean
select
the
wrong
cluster,
so
these
are.
These
are
some
questions
and
here
they're,
coming
basically
to
the
implementation,
how
we
are
doing
this,
so
we
are
using
argo
cd,
so
we
have
the
github
separator
at
reddit.
We
are
using
web
of
apps
pattern
and
this
means
we
have
a
basic
argo
cd
application,
which
is
setting
up
20
other
applications,
which
are
configuring
all
the
objects
which
I
have
described
and
described
in
my
previous
slides.
B
So
we
are
working
with
application
sets,
as
I
have
shown
you,
which
are
combining
replacement.
What
I
have
described
when
we
work
with
open
cluster
management
policies,
so
our
policies
are
basically
different,
different
kinds
of
policies,
but
what
we
are
doing
basically
is:
we
are
configuring,
kubernetes
resources,
so,
for
example,
what
you
can
do
is
you
can
say
I
want
that
every
cluster
or
every
cluster
of
my
cluster
set
is
configured
identically
regarding
the
roles
regarding
the
role
bindings
regarding
the
cluster
roles,
so
this
is
very
easy
to
set
up
into
monitor.
B
We
have
policy
sets
which
is
basically
a
group
of
policies
which
are
fitting
together,
which
can
be
deployed
together,
monitored
together.
We
have
a
policy
generator
which
allows
us
to
generate
policies
from
plain
kubernetes
resources,
and
here
we
also
generate
the
policies
from
gatekeeper
or
from
kubernetes.
So
this
means
you
have
a
cavernous
cluster
policy.
B
You
place
it
somewhere,
you
have
a
policy
generator
configuration
file
and
when
you
can
space,
when
you
can
create
an
acm
policy
and
distribute
this
cabana
policy
with
all
kinds
of
options,
what
you
like
onto
the
fleet
in
your
cluster
set
or
your
cluster-
and
we
are
using
coverno
for
for
with
validation.
B
What
I
have
mentioned,
and
not
only
for
this
validation,
also
for
the
generation
of
object,
so
as
I
as
I
have
mentioned,
for
example,
if
I
want
to
create
a
new
namespace
and
if
a
team
administrator
has
their
permissions
to
create
a
new
namespace,
I
want
to
ensure
that
all
the
objects
which
are
belonging,
for
example,
I
manage
cluster
set
binding,
that
I
ensure
that
this
namespace
is
bound
to
the
correct
a
cluster
set
by
default,
so
that
you
don't
need
to
care
about
this
anymore.
B
So
this
is
a
typical
use
case,
where
we
think
that
kubernetes
does
a
very
good
job
by
making
this
very
elegant
and
align
it
so
and
one
challenge.
What
I
would
like
to
highlight
here
is
that
we
need
to
if
you,
if
you
remember
a
question
at
the
beginning,
we
need
to
ensure
that
we
are
placing
the
policies
either
on
the
hub
cluster
or
on
the
manage
cluster.
B
So,
for
example,
if
we
want
to
not
allow,
but
a
team
administrator
can
create
name
spaces
on
the
hub
cluster,
then
certainly
we
need
to
have
a
policy
which
is
disallowing
this
and
it
needs
to
be
assigned
on
the
hub
cluster,
but
certainly
we
will
also
have
certain
rules.
For
example,
as
I
mentioned,
roles
setting
up
the
name
spaces
on
the
managed
clusters
in
here,
we
would
need
to
have
a
policy
which
is
deployed
which
has
a
placement
which
is
targeting
only
the
managed
classes
or
a
set
of
managed
clusters
from
from
a
cluster
set.
B
So
this
is
basically
the
implementation,
and
here
you
see
basically
where
we
are
working
on.
So
first
of
all,
you
see
here
our
open
cluster
management
policy
collection
with
all
the
examples
here.
You
see,
for
example,
the
access
control,
where
we
are
setting
up
examples,
how
to
set
up
this
role
for
a
fleet
of
clusters.
B
These
examples
still
use
the
old
placement
rule
api.
So
they
are
they.
The
policies
are,
are
ready
to
use,
but
you
need
to
you.
You
need
to
use
the
new
placement
api
for
multi
tenancy
on
the
fleet
level.
Now
so
this
is
what
I
try
to
highlight,
so
we
are
working
on
a
block.
Currently,
this
is
not
ready
yet,
but
basically
the
block
can
be
highlighted
in
f.
It
can
be
separated
in
five
contents.
B
So
we
are
talking
about
where
organization,
the
infrastructure
aspects
about
the
patterns
and
personas
about
acm
multi-tenancy
concept
and
the
applications
and
integration.
What
I
mentioned
about
the
cubano
integration,
and
certainly
we
have
a
demo
repository
where
you
can
see
and
where
we
have
also
some
validation
example
how
it
looks
like
when
you
try
to
create
a
placement
rule
in
the
ui.
How
does
how
does
revalidation
work
if
you
have,
if
you
want
to
create
a
cluster
with
a
wrong
name,
space
pattern
and
so
on?
B
So
this
looks
already
very
nice
and,
as
I
mentioned,
we
are
not
ready
yet,
but
we
are
making
progress
and
if
you
want
to
collaborate,
maybe
even
this
would
be
certainly
very
nice
and
if
you
would
like
to
have
a
discussion,
so
here
you
see
some
screenshots.
So
this
is
where
review.
If
you
set
up
application
sets
you
see
here
discovered
in
argo,
cd
applications.
B
You
see
that
this
is
currently
not
targeted
to
any
any
cluster,
and
here
you
see
the
argo
cd
view.
So
this
is
all
integrated
here.
You
see
basically
the
application
set
view
if
you
are
logging
in
as
a
team
administrator.
So
this
is
certainly
the
challenge
only.
You
should
only
see
the
applications
where
you
have
the
permissions,
which
are
part
of
your
basically
cluster
set
and
when
you
see
here
some
validation,
so
here,
for
example,
I
want
to
create
an
application
which
I'm
not
allowed.
B
So
you
get
here
revalidation
error,
and
here
we
have.
For
example,
we
want
to
sync
a
policy
which
is
using
the
placement
rule
which
I
have
discovered
described.
So
you
are
getting
here
in
go
cd
validation,
but
certainly
we
have
also
kubernetes
checks.
So,
for
example,
here
you
see
that
I
want
to
create
this
placement
rule
via
the
ui.
You
see
here
and
it's
getting
blocked
by
by
covering
and
show
you
one
more
example.
So
here
we
are
creating
an
application
set
and
we
are.
We
are
violating
here
three
rules.
B
Basically,
so
we
are
not.
We
are
not
allowing
yet
we
don't
have
the
correct,
namespace
namespace
pattern
which
we
are
allowed
to
have,
and
we
don't
use
the
correct
project.
So
in
argo
cd,
you
can
assign
a
project
where
you
can
define
also
different
arbuck
rules,
and
we
also
don't
have
the
right
placement,
as
I
have
mentioned.
B
So
these
are
even
if
you
are
setting
up
arbuck
you,
it
gives
you
a
very
nice
and
elegant
way
to
control
basically
the
creation
of
your
of
your
class
of
your
objects,
and
it
gives
you
also
a
nice
way
to
monitor
a
certain
division.
So
you
see
here,
for
example,
that
we
have
here
just
we
want
to
create
a
cluster,
and
this
administrator
must
also
use
a
different
namespace
schema
for
creating
this
class
system
and
yeah.
This
is
just
an
example:
uncertainty.
We
are
testing
more
and
more.
B
We are testing more and more how this works. We also have examples where we are adding labels. For example, you create a new namespace and you want to ensure that your tenant, so your Argo CD tenant which is managing this namespace, also has the rights to deploy into this namespace; you achieve this by automatically adding a label via a mutation policy.

B
And, as I mentioned, we have many use cases where generation makes a lot of sense: we can generate all the objects, like the Placement and the ManagedClusterSetBinding, when we are creating a new namespace, so that we know this namespace is assigned to the right fleet of clusters. And here you see, for example, that you can check which groups are performing an action, and only if it's done by a group administrator do you execute the validation. I hope this is clear for you.
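The label for Argo CD tenants can be added with a Kyverno mutate rule. This sketch assumes the managed-by convention of Argo CD namespace delegation; the team label and the Argo CD instance namespace are placeholders:

```yaml
# Hypothetical Kyverno ClusterPolicy: label new team namespaces so the
# team's Argo CD instance (running in "team1-gitops") may deploy to them.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-argocd-managed-by
spec:
  rules:
    - name: label-team-namespaces
      match:
        any:
          - resources:
              kinds:
                - Namespace
              selector:
                matchLabels:
                  team: team1
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              argocd.argoproj.io/managed-by: team1-gitops
```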
A
Cool, thank you so much for that overview. This reminded me that SIG Usability next week is going to have Brian Grant present his kpt project, if anybody's interested; I think it'll be a really cool presentation. But yeah, did anyone have any questions? I know we're a little short on time for our next presentation.
C
Yeah, really interested.

A
Cool, awesome to see the Kyverno angle for sure. Okay, Devdatta, would you like to talk about your Kubernetes multi-tenancy recipes?
C
Thank you, Tasha, and that was a great presentation, Christian. Before I jump into what I wanted to talk about, I just quickly wanted to mention a couple of points about what you talked about, Christian. I wanted to see if you had seen, and I'm sure you have, that StarlingX from the OpenStack community had something of a similar notion of a hub cluster managing multiple Kubernetes clusters.
C
I
don't
think
they
did
it
to
the
level
where
you
are
doing
with
policies
in
the
hub
cluster
and
the
edge
clusters,
but
so
something
that
came
to
my
mind.
We
have
experience
at
cloud
arc.
We
were
working
with
a
telecom
provider
who
were
using
startling
x
and
then
deploying
the
clusters
on
the
edge
to
run
the
5g
workloads,
and
there
were
certain
similarities
between
what
you
are
describing
about
the
open
cluster
management,
which
I
have
not
yet
looked
into,
but
yeah.
C
So
maybe
something
interesting
you
might
find
through
this
project.
Apart
from
that,
as
jim
said,
it
will
be
very
interesting
to
see
I
mean
I
was
looking
at
it
from
the
multi-tenancy
dog
that
we.
B
C
Worked
on
and
I
I
do
see,
there
is
a
lot
of
good
good
content
that
you
have,
which
can
improve
the
docs,
so
maybe
in
series
of
discussions
and
meetings
we
probably
want
to
create
version
2.,
so
yeah
with
that
I'll
now
talk
about
the
agenda
that
I
have
so
this
is
basically
continuing
from
where
the
multi-tenancy
dock
after
it
was
published.
C
What
we
started
thinking
at
cloud
arc
was,
you
know
this
dog
is
good,
but
it
will
be
good
if
we
have
some
very
specific
recipes
to
achieve
multi-tenancy,
and
this
is
a
single
cluster
multi-tenancy
that
we
just
talked
about
christian
right.
This
is
only
talking
about
within
a
single
cluster.
C
There
are
two
different,
broadly
used
cases
we
have
laid
out
in
that
dock,
which
is
multi-customer
and
the
multi-team,
and
there
are
a
lot
of
these
good
recommendations
that
we
have
come
up
with
as
a
team
and
so
what
we
started
thinking
I
have
a
student
who
is
working
with
me:
suzanne
zoo
he
is
a
sophomore
at
ud
austin
and
he
is
helping
in
this
work,
which
is:
can
we
come
up
with
some
very
specific,
like
let's
say,
yaml
definitions
and
steps
to
achieve
certain
multi-tenancy
policies
that
we
have
identified
in
that
docs?
C
So
this
is
just
wanted
to
share
it
with
the
community
that
this
is
a
work
in
progress.
I
have
included
this
link
on
the
in
our
in
our
google
doc
and
right
now.
What
we
have
done
is
through
that
multi-tenancy
dock,
which
we
have
on
kubernetes
dot
io.
We
have
sort
of
identified
five
different
policies
which
are
very
concrete
policies
or
concrete
directives
that,
as
a
team
we
have
identified
and
then
the
goal
is
we
will
come
up
with
very
specific
steps.
C
Kubernetes
emails
manifest
to
achieve
that,
and
the
inspiration
of
this
was
from
this
very
awesome,
repository
from
amit
b,
who
has
defined
such
recipes
for
network
policy.
So
we
thought,
can
we
do
something
similar
for
multi-tenancy
and
right
now,
it's
still
early.
We
are,
as
I
said,
work
in
progress
and
yeah.
I
just
wanted
to
get
it
in
front
of
community.
If
you
have
any
suggestions,
the
doc
is
out,
there
feel
free
to
provide
comments
and
yeah.
C
Correct, correct. I don't know if you have seen this repository; let me just open it up. The goal, we felt, is very similar. Take, for example, this one: it says "deny network traffic to an application", and there are steps, and then there are specific YAML manifests. So now, for example, let me scroll down, and this we will do, if it makes sense, with Kyverno.
C
Let's
say
this:
one
ensure
that
the
storage
class
used
within
cluster
has
a
reclaimed
policy
of
delete.
This
is
one
of
the
things
that
we
have
identified
right
as
part
of
storage
isolation.
So
what
we
were
thinking
was
we
will.
This
can
be
enforced,
using
keyword,
no
and
a
clustered
policy
object,
so
the
goal
will
be
just
to
identify
the
sample,
or
example,
cluster
policy
object
which
will
enforce
this
policy.
C
So
it's
not
like
too
much
work
or
too
many
details,
but
it
will
be
one
place
where
we
will
have
sample
recipes
or
sample
emails
to
achieve
some
of
these
multi-tenancy
policies
that
we
have
identified.
In
that
talk.
Does
that
make
sense?
C
It
does
yeah.
Thank
you
yeah.
So
this
doc
is
currently
we
have
granted
commenter
access
to
everyone
who
has
access
to
this
link
so
feel
free
to
add
to
this.
C
That's
all
I
have
one
before
I
stop.
My
share
just
wanted
to
say
thanks
to
everyone
on
the
team,
jim
especially
you
draw
that
kubernetes
upstream
work
about
our
multi-tenancy
documentation,
so
it
does
come
out
really
well
thanks.
All
right.
A
Yeah,
so
if
you
would
like
more
eyes
on
this,
a
couple
options
we
have
is
we
could
have
you.
Did
you
end
up
joining
the
mailing
list
for
the
for
the
working
group,
because
what
I
would
suggest
is
don't
give
everyone
edit
access
just
give
everyone
like
comment
access
and
then
we
could
send
it
out
to
the
mailing
list
and
say:
hey.
A
Okay, do folks have anything else they wanted to chat about today before we wind down?