From YouTube: Kubernetes Working Group for Multi-Tenancy 20200112
Description
- HNC has been approved by sig-auth to move to its own repo, and we need to make some decisions on that front (e.g. keep the hnc.x-k8s.io API group, or switch to hnc.k8s.io). [Adrian]
- Multi-tenancy with Kyverno:
-- Namespaces-as-a-service: Jim Bugwadia
-- Clusters-as-a-Service: Scott Rosenberg
A
Hey everybody, and welcome to our regularly scheduled Multi-Tenancy Working Group for Kubernetes meeting. Today Adrian's going to give us a quick update on HNC, and then Jim is going to do an update and a demo on the latest in the multi-tenancy benchmarks. So Adrian, do you want to kick off?
B
Yeah, thanks. Sorry, my camera is not working today. So Ryan, who's not on the call today, has been working hard to help HNC graduate out of the incubator directory in our GitHub repo. For those who don't know, traditionally working groups did not have their own GitHub repos, but there were so many things going on with multi-tenancy that we had our own repo and we started building a bunch of stuff in there. So virtual clusters started life in there and, I believe, is still hosted there.

HNC did too, and so did all of the multi-tenancy benchmarking projects. However, the idea was always that at some point we would graduate to a full-fledged repo that's sponsored directly by one of the SIGs, as opposed to the working group. We have now gotten our KEP — our Kubernetes Enhancement Proposal — merged to create a new hierarchical-namespaces repo underneath kubernetes-sigs on GitHub. So thanks to Ryan for driving that and getting us to that state.

So there are basically two decisions we have to make. One is: when do we do this? We'll probably do this migration over the next couple of months. This is a good time for HNC, because HNC is really stabilizing — we've added a bunch of new features, but our roadmap at this point is just stabilization. So this is a good time for us to be switching from one repo to another, because there's not a ton of activity.

The only other thing that we needed to discuss was whether or not we wanted to change our API group name. All APIs in HNC right now begin with hnc.x-k8s.io, where x-k8s is a prefix that's reserved for anything that's in kubernetes-sigs but hasn't been formally approved as a Kubernetes API.
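For readers who haven't seen HNC, this is roughly how that group shows up in practice. A minimal sketch of a HierarchyConfiguration object, assuming the v1alpha2 version mentioned later in the call and hypothetical namespace names; only the group in apiVersion would change if the project ever moved to hnc.k8s.io:

```yaml
# Sketch only: makes "org-root" (hypothetical) the parent of namespace "team-a".
apiVersion: hnc.x-k8s.io/v1alpha2   # reserved x-k8s.io prefix; would become hnc.k8s.io if the group changed
kind: HierarchyConfiguration
metadata:
  name: hierarchy        # HNC expects a single object with this name per namespace
  namespace: team-a
spec:
  parent: org-root
```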
B
Well — sorry, I shouldn't say that. If we ever did graduate to being, for example, a core part of Kubernetes, then we would have to change to the standard Kubernetes API group. So you could say that the earlier we do this, the better. My feeling right now is that I don't actually want to make this change. I don't want to break our early existing users, and I don't see a lot of benefit to it.

It will also make any future change to the API much harder, as we'll need to get a formal API review for each and every change, and I don't think we're necessarily at the state where we want to do that. But I haven't made a final decision as to what I want to do, let alone decided on behalf of the project what we're going to do. So if you have an opinion on this, please add it to the multi-tenancy working group channel on Slack and/or email me.

If you don't know how to join the Slack — for anybody who's new here — just ask, and we'll try to make the decision over the next couple of weeks. As I said, there's not a ton of urgency here, but that's the only decision I think we need to make before we move on to the next step. So that's it from me. Other than that, on HNC: we've released 0.7, which has a bunch of new features — sorry, the main new feature is exceptions, which Ginny contributed; she's on this call.

So thanks to Ginny. Other than that, as I said, we don't have a set of new features for 0.8 right now. It's just going to be increasing stabilization as we move, at some point, to 1.0, which will probably be after we change repos. Any questions for me about HNC? That is our early 2021 update.

Okay, hearing no additional questions — again, please contact me by email (my email is on the HNC repo) or through the Slack channel if you want to hear anything more. And that's it for me. Thanks, Tasha.
A
Thanks, Adrian. Yeah, now handing it over to Jim to go over the benchmarks updates.
A
C
So I think Scott, who's also on the call, was going to — Scott, I'm not sure if you had any slides or anything to present, or if you just wanted to discuss — but we were going to talk.

Yeah, so just in terms of context, and maybe before we dive into this topic: what I was going to demo was some of the use cases that we're seeing with Kyverno, which is a policy management tool and a CNCF sandbox project, and multi-tenancy.

But before we dive into that, just to quickly mention — and, Natasha, you also talked about the benchmarks — we don't have a demo ready for this session, but we will have something to demonstrate in the next session. What we're working on there is extending the benchmarks to work across namespaces. So far we're able to run the benchmark in a single namespace and check for all the required multi-tenancy configurations, and one of the things I think Fei had also mentioned is that it would be nice to do more behavioral tests. For example, if network policies are configured, can we actually create two namespaces and make sure that two different tenants can't talk across applications? Things like that are what we're looking at next. So the next step was to extend the CLI and the tool to take multiple namespaces as inputs, with multiple tenants, one per namespace, and then run these tests in an automated fashion.
C
So we'll have something to demonstrate on that when we meet again in the next couple of weeks. But today, what I wanted to talk about more is these patterns we're seeing with Kyverno. I'll quickly introduce Kyverno for those of you who might not be familiar with what it is, and I'm going to focus more on namespaces as a service. Scott, who's also in the Kyverno community, has an interesting use case he's exploring with clusters as a service, using Cluster API (CAPI) and Kyverno together.

Okay, so just starting with a quick intro to Kyverno and what it does, and how it applies here: Kyverno, in general, is a policy management framework, and it applies to more than multi-tenancy, but here of course we'll focus on these two — namespaces as a service and then clusters as a service. And by the way, I did install HNC and was playing around with it to make sure it's compatible with Kyverno, and there are some interesting things there, which I'll demonstrate, that can be done.

So just in terms of why Kyverno, and what makes it different from something like OPA, which is another policy engine in the Kubernetes community: the whole intent is to make policies and policy management native to Kubernetes, and not require different languages or add more paradigms or complexity to it. And also, in terms of policies, the focus here is not just to validate and enforce certain settings but also, as you'll see in the demo, to be able to mutate and generate different resources based on policies that are being set and extended. And then, because Kyverno is focused only on Kubernetes, it has the benefit and the ability to leverage a lot of the native patterns inside of Kubernetes — things like labels, selectors, annotations, owner references, even, for example, the relationship between pods and pod controllers. All of those types of things Kyverno can take advantage of,
C
as opposed to a general-purpose policy engine, which doesn't have that knowledge of the Kubernetes-specific domain. So the way it works — as you can probably envision with any policy engine — one of the main functions is to review admission requests, so Kyverno plugs in as an admission controller, and it is highly optimized in terms of how it processes these requests. Typically admission controllers get 30 seconds.

We've artificially set the deadline for Kyverno to be under three seconds to process any request, and that includes validate, mutate, and other types of requests. Kyverno can also operate as a background controller, so it will scan your cluster for violations and report on these, and the policy reports it generates are something that's being worked on in the Policy working group — so even that is a custom resource, and it's pretty easy to get namespace-level reporting on.

This policy is checking the pod security context — the security context at the pod level as well as the container level — and it's making sure that, for example, runAsNonRoot is set to true. So this is a validation rule, and those are useful to check for required configurations.
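For reference, a minimal sketch of what a Kyverno validation rule of this kind can look like — an illustrative reconstruction, not the exact policy shown on screen, and it only checks the pod-level security context:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: enforce   # use "audit" to only report violations
  rules:
    - name: check-run-as-non-root
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Pods must set runAsNonRoot to true."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
        # A container-level check would follow the same pattern under spec.containers.
```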
C
Okay, so for namespace as a service — what Kyverno can do, and I think also showing how it will work with HNC.

The other thing that's very typical is, of course, that you want common configurations, and this applies not just to subnamespaces, which HNC solves, but also to top-level namespaces. Very typically, within an enterprise you will have several common configuration settings that you want to enforce, and I'll show an example of something else that was shared with the Kyverno community of how users are addressing this using labeling and things like that.

It's typically what we've seen with other controllers: requiring standard labels, requiring naming conventions, generating defaults, things of that nature. And then there's another, more advanced namespace use case — one of my colleagues is going to do a presentation in the Velero community about this — which is allowing more self-service for namespaces. So once a namespace is created, if, let's say, a user wants to request a backup (in the Velero use case), or in other cases maybe users want, say, their own monitoring or logging, those types of things can be automated using Kyverno and just using labels. The idea is that to request this, the user would just set a label, and if they set monitoring to enabled, there could be automation in place through Kyverno to generate these services and configure them for that particular namespace.

So, a quick overview of where HNC and Kyverno could potentially work together, and how we're seeing this play out with some customers and users: this top-level management of namespaces, which the administrator typically does, is something that Kyverno can automate. I'll show it mostly through the command line, but there are tools like Nirmata, or other automation tools, and even integrations with things like Argo CD, which can help with that.

The idea is to be able to fully automate it: create a standard set of namespace types, allow users to come in and manage those, and then automate what happens when that namespace is created.

Now, of course, once the namespace is created, what HNC nicely solves is letting the namespace owner and namespace admin manage things from there on and create subnamespaces for their application itself. So this is one pattern, one way of using the two together to solve this problem holistically for an enterprise.
C
Okay, so before I dive into the demo I prepared, I just want to quickly share — like I mentioned, this is something Gregory, who's been active in the Kyverno community, shared; I don't think he was able to make the call today. As you can see, the use case he's trying to work out, and what they have done here, is fairly elaborate. They're creating namespaces for different business units, and for a given unit, based on certain settings, they can generate different role bindings — or even, if that unit is marked for PCI compliance, as in this example, create certain additional annotations and configurations to apply to the workloads, maybe restrict things based on network policies and node selectors, things like that. And then finally, if there are other network configurations or requirements — like in this case, if there's a DMZ label — make sure that workloads get tagged with that too.

The demo that I'm going to show is somewhat similar but simpler. It doesn't go through all of those different use cases, but what I prepared — as I was playing around with HNC and thinking about what to demo yesterday — is this. The first thing we see very commonly is auto-labeling, so just having clear ownership of who created a namespace, things like that. So I'll show an example of a policy for that; then enforcing certain conventions for, say, sizing or types — now, I chose to do this through naming, but with Kyverno it's fairly easy to do that through annotations or labels instead — and then automatically generating not just role bindings but also roles for that specific namespace.

So, a named role for that namespace, to allow the namespace owner to then go and, of course, delete or view that one particular namespace itself. For the demo I also attached some HNC RBAC configuration, to allow that namespace owner to go and configure their hierarchy and manage that, and of course to generate role bindings to those roles; and then generating things like quotas — in this case, quotas based on the sizing selected — and other default resources.

So here I have an example of a network policy. Those are some of the typical things that would need to get automated, and that's what I'll showcase — how that's done in Kyverno.
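As one concrete illustration of the kind of generate rule being described, here is a minimal sketch (not the exact policy from the demo) that creates a default-deny ingress NetworkPolicy in every newly created namespace:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-networkpolicy
spec:
  rules:
    - name: default-deny-ingress
      match:
        resources:
          kinds:
            - Namespace
      generate:
        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        name: default-deny-ingress
        namespace: "{{request.object.metadata.name}}"  # the namespace that was just created
        synchronize: true   # keep the generated resource in sync with the policy
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
```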
C
Yeah, it's nice to see all of this come together when it works — it automates quite a lot of different things. So what I'm showing is a policy definition in Kyverno where, as you can see, the YAML is very similar to Kubernetes configs. What this policy is doing is adding a label to a namespace, and it excludes the case where the namespace is created by a cluster admin or by HNC.
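A hedged sketch of that kind of labeling rule — illustrative only; the exclusion list and the label key are assumptions, not the exact demo policy:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-namespace-owner-label
spec:
  background: false   # userInfo variables are only available at admission time
  rules:
    - name: label-namespace-owner
      match:
        resources:
          kinds:
            - Namespace
      exclude:
        clusterRoles:
          - cluster-admin   # skip namespaces created by cluster admins (HNC's controller could be excluded similarly)
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              owner: "{{request.userInfo.username}}"   # record who created the namespace; usernames may need sanitizing to be valid label values
```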
C
This one is interesting: it requires that every namespace — again, created by any role excluding those two — has to end with small, medium, or large, and then based on the selected size I have policies here which set particular quotas. Now, in this case I've just inlined the data, but it's possible to reference an external resource if you have these quotas managed in some top-level namespace in your cluster. So that's —
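A rough sketch of how those two pieces can be expressed together — hypothetical size names and quota values; the actual demo policy may differ:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: namespace-size-conventions
spec:
  validationFailureAction: enforce
  rules:
    # Rule 1: namespace names must declare a size suffix.
    - name: require-size-suffix
      match:
        resources:
          kinds:
            - Namespace
      exclude:
        clusterRoles:
          - cluster-admin
      validate:
        message: "Namespace names must end in -small, -medium, or -large."
        anyPattern:
          - metadata:
              name: "*-small"
          - metadata:
              name: "*-medium"
          - metadata:
              name: "*-large"
    # Rule 2: "small" namespaces get a default quota (data inlined, as in the demo).
    - name: generate-small-quota
      match:
        resources:
          kinds:
            - Namespace
          name: "*-small"
      generate:
        kind: ResourceQuota
        apiVersion: v1
        name: default-quota
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        data:
          spec:
            hard:
              requests.cpu: "2"
              requests.memory: 4Gi
```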
C
This is another example of a policy to generate resources, and then for the network policy it's also pretty straightforward — it's just a policy to generate it, and similarly for a limit range, based on the quotas and so on. So those are the ones I have, and what I'm going to do for this demo is use a cluster role called ns-creator, which I have associated — and I actually created an HNC admin role too — and we're going to use those two roles and, through policies, associate them with the person who's allowed to access this namespace. So let me apply that; I'll switch to a cluster view, and let me make this a little bit larger.

So it has these roles and these policies currently. Now that we have that applied, the first thing I'll do is just create a namespace, which, if I do this as an admin, should work — I should be able to create any namespace. So in this case I just created "test" as an admin. But now, if I try to do this — let's say I try to create "test2" — and I'm going to use the namespace admin user I had, which was nancy, the Kyverno policy kicks in and says you can't do that, because you're missing the required suffix — it requires the small/medium/large. So let's create it properly as that user. Now there's a namespace created, and if we do a describe — of course I can do a describe as an admin, but I want to make sure I can also view that namespace and look at the details as that particular user, in this case nancy.
C
Or actually, let's look at the role bindings. We should see that we have two role bindings: one for the namespace admin and one for the HNC admin, so that allows nancy to now manage and administer the namespace, and the hierarchy there, as well. So the next thing I want to do is show — now, if I use the HNC kubectl plugin, and I say create, let's say, s1 —
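For readers following along, the commands being run here look roughly like this — a sketch assuming the tenant namespace ended up named test2-small; exact names and flags may vary by HNC version:

```sh
# Create a subnamespace "s1" under the tenant's namespace using the HNC kubectl plugin.
kubectl hns create s1 -n test2-small

# Show the resulting namespace hierarchy.
kubectl hns tree test2-small
```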
C
They can also clean up, delete, and do things once they're done with that. And at this point, of course, they can run workloads and do whatever they wish in the namespace. But the last thing I want to show here is: if I do a delete — and I'm going to delete the subnamespace anchor, which is the HNC resource — we'll just say --all and -n —
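In other words, something along these lines — again a sketch; "subns" is the short name HNC uses for its subnamespace anchor resource, and the namespace name is assumed:

```sh
# Delete all subnamespace anchors under the tenant namespace; HNC then cleans up
# the corresponding subnamespaces (subject to its cascading-deletion safeguards).
kubectl delete subns --all -n test2-small
```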
C
It quickly adds up in terms of what needs to be done, but the whole idea here is that with something like Kyverno and HNC working together, namespaces as a service — not just for the subnamespaces, but even for the top-level namespaces — can be fully automated and controlled through policies, and it just makes things very easy to manage end to end.

So that's what I wanted to demonstrate — just showing some of the patterns and things that are emerging. I would love to get feedback and thoughts on these policies, more things we could potentially do with Kyverno, and I would also love to explore more integration use cases between Kyverno and HNC and how the two can work together.

Okay, so feel free to reach out on Slack if things come up or you just want to find out more info. We're continuing, of course, to work towards several additional features in Kyverno for different types of policies and things we're applying. But the other topic I just wanted to quickly mention — and Scott Rosenberg is also on, so I'll let him share this — is a slightly different use case, for managing multi-tenancy with CAPI and Kyverno.
D
Yeah, awesome, thanks Jim — great to be here. So yeah, I was talking with Jim on the Kyverno Slack about a use case that I'm currently building with Kyverno, which is actually integrating it with Cluster API — hopefully in the future also with CAPN, once that's up and running for the new virtual cluster implementation. Basically the use case is to be able to provision clusters automatically when a namespace is created.

The specific customer this is being built for has a use case where they're developing different CRDs and different controllers, and they want to be able to manage the deployment of clusters for their developers in a simple, easy manner. And with Cluster API on its own, deploying a cluster that's ready for running workloads usually means a minimum of around 13 different custom resources that need to be created, plus more if you want any customization within the cluster. The use case that came up was to use Kyverno, with generate policies, to automate all of the Cluster API resources, and then also generate a role binding to give the user who created the namespace access. Cluster API creates the kubeconfig in a secret local to that namespace, so the user automatically gets that kubeconfig, and once they've created a namespace, within about five to ten minutes —
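To make the pattern concrete, here is a hedged sketch of just the role-binding part of what's being described — generating access for the namespace creator when the namespace appears. The same generate mechanism would be repeated for each of the Cluster API objects; the binding name and role below are hypothetical:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: grant-namespace-creator-access
spec:
  rules:
    - name: bind-creator
      match:
        resources:
          kinds:
            - Namespace
      generate:
        kind: RoleBinding
        apiVersion: rbac.authorization.k8s.io/v1
        name: namespace-creator-binding
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        data:
          subjects:
            - kind: User
              name: "{{request.userInfo.username}}"   # the user who created the namespace
              apiGroup: rbac.authorization.k8s.io
          roleRef:
            kind: ClusterRole
            name: admin    # hypothetical; scoped to this one namespace by the RoleBinding
            apiGroup: rbac.authorization.k8s.io
```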
D
— whenever the cluster is ready, they can just get that kubeconfig from the secret in the namespace and start running, and that cluster is deleted the moment the namespace is deleted as well. So it's really an easy way of just doing "kubectl create namespace" and getting a cluster. And now that cluster-autoscaler also works together with Cluster API, and the whole ecosystem is really growing in that sense, it's really easy to create a default cluster with Kyverno here and then allow it to scale based on the needs.

So it's currently being prototyped in a PoC, but this is the use case I'm currently working on with Kyverno for multi-tenancy. So yeah — if there are any questions or thoughts, anything...
C
E
Yeah, I agree — that's a very good use case. Honestly, we haven't thought about exactly how to manage the CR stuff; so far everything's manual. So yeah, no matter how the virtual cluster works, you still need a centralized thing — you know, the super-cluster kind of thing.

You still have a management issue there. So I'm happy to know Kyverno can help with that — protecting the actual CR of either CAPN or CAPI — that's a great use case, I would agree. Honestly, I hadn't thought about that; I was focusing more on our layer. But I do think this problem exists, because this is the burden of the cluster administrator: you need to manage those CRs anyway.

You need to figure out a good way to make it work in terms of namespaces. So what we do, even with CAPN — we will probably slightly change the way we manage the super-cluster namespaces, which Chris and I have briefly discussed alongside this discussion, but we haven't made a formal decision yet. The way it's currently done is mostly from an easy-implementation perspective.

So I use a namespace for each virtual cluster, but basically there's no direct correlation between where the CR lives and where the control plane lives — they're probably in different namespaces; at least for now, they're not necessarily in the same namespace. But yeah, that's about all I have as a comment so far.

I do think, as you say, this kind of automation is pretty good: you set up some rules, and when you create a namespace, you hook Kyverno in with automated actions on that. But by the way, for Kyverno — is the API currently fixed? I see it's not exactly complicated, but there's some learning curve to learn the API. So I'm just wondering: what is the philosophy behind the design of the policy API?

I would say, for things like this, usability is always my first concern. If I want to bring somebody else along, I need to convince them: okay, this is a pretty good, pretty easy way to do this — it doesn't have to be the best way, but...
C
Right, yeah — that's a good question. So we do support, of course, full compatibility on the API itself, but just looking at the policy structure: that's fairly stable. Kyverno has been around now for close to a year, and we've battle-tested this with several different variations and things like that. So this common structure — match, exclude, and then mutate, validate, generate — and how to populate things is settled.

By the way, you can get external data through ConfigMaps. It's very easy, if you have some other controllers, to have them create a ConfigMap and extend your policy that way. So Kyverno remains pretty fast and stable; but, as opposed to something like OPA where you can make external calls, we don't allow that in Kyverno for security reasons. You can, however, reference ConfigMaps, which will be cached and used in policies.
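A small sketch of what that looks like in a policy — the ConfigMap, namespace, and key names here are hypothetical. The ConfigMap is declared as context for the rule and its data is referenced through variables:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: quota-from-configmap
spec:
  rules:
    - name: generate-quota-from-configmap
      match:
        resources:
          kinds:
            - Namespace
      context:
        - name: quotas
          configMap:
            name: tenant-quotas        # hypothetical ConfigMap holding shared defaults
            namespace: platform-config # hypothetical namespace where it is managed
      generate:
        kind: ResourceQuota
        apiVersion: v1
        name: default-quota
        namespace: "{{request.object.metadata.name}}"
        data:
          spec:
            hard:
              requests.cpu: "{{quotas.data.cpu}}"       # values come from the ConfigMap
              requests.memory: "{{quotas.data.memory}}"
```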
C
But the basics of writing a policy — simple policies are simple, and as you go to more complex policies, of course, it gets a little more complicated. A very basic policy is just something like this: you write a validate rule, you write an overlay pattern — which was more or less inspired by what Kustomize does — and you write some wildcards to say what you want to match, things like that. So it's a very minimal learning curve on top of Kubernetes.
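For instance, a minimal validation policy of the kind being described might look like this — a generic example, not one from the demo — requiring an `app` label on pods via a wildcard pattern:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: audit   # report only; switch to "enforce" to block
  rules:
    - name: check-app-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "The label `app` is required."
        pattern:
          metadata:
            labels:
              app: "?*"   # wildcard: any non-empty value
```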
E
C
It's very flexible. So here, for example, just to show: PSPs, as probably everyone knows, are on the path to being marked for deprecation. So if you want to replace PSPs with Kyverno policies, we have two sets based on the pod security standards.

We have two levels, default and restricted. If I go to default, these are each individual policies, and each policy can have several rules, so it's up to you how big you want to make a single policy. Ideally you would group them in this manner, so each policy does one thing and is easy to maintain.

So this becomes an instance of a policy, and you would apply it to one or more namespaces or pods. And the nice thing is, because they're just resources, all you need to do if you want to apply these is a single one-liner with Kustomize, and it will apply them to your cluster. So it's pretty easy to get that level of security up and running.
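That one-liner is essentially a `kubectl apply -k` against the directory (or URL) holding the policy set's kustomization — the path below is a placeholder, not an actual location:

```sh
# Apply a whole set of Kyverno pod-security policies in one step via Kustomize.
kubectl apply -k ./kyverno-policies/pod-security/default   # placeholder path
```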
E
Yeah — if I think about this from a multi-tenant angle, and I don't know if the problem exists or not: are we assuming just one person creates the policies? What if, say, two people create two policies — is there a way to make sure they are not conflicting with each other, or...
C
Well, some of this is just like writing a controller: if two controllers are doing opposing things, then of course you have to manage that. But yeah — typically an administrator would configure the policies, though you can have namespace-level policies.

You can even use policies to generate policies. But yes, ultimately, just like with any scripting or coding, you have to make sure the policies are doing the right thing. We do have a command line to test things, so it's very easy to test and validate results before you apply them.
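The testing workflow being referred to is the Kyverno CLI, which can evaluate policies against resources offline; a hedged example, with hypothetical file names:

```sh
# Dry-run a policy against a resource manifest locally, without touching a cluster.
kyverno apply namespace-size-conventions.yaml --resource new-namespace.yaml
```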
E
Sure — and that actually comes back to my previous question about writing one CR versus multiple CRs. Even if I'm the only one writing policies, if I wrote one CR I can probably figure out, "oh, I realize I didn't create the complicated one" — but if I write ten CRs, I've probably already forgotten what the first CR was doing. So then the question comes up: if there is any conflict, is it possible to have some kind of detection, or...
C
There are some validation checks that Kyverno does when you install a policy — for example, if the permissions do not exist for the resource you're trying to generate, things like that, it'll check. Although it's possible for policies to do different things, we have to think more about which conflicts we want to automatically check for. But there is a validation module as policies are being installed, and you can also validate them offline.
E
C
Right — so obviously, when you do policy as code, the idea is that you test it just like you would test other code. And you can do that offline, before it's introduced into your cluster, and check for conditions like that.
B
F
...ones, and so they could create a policy that would allow them privilege escalation, essentially, because it would contradict the existing one. So I think some sort of validation would be super nice there, to prevent creating a new policy that gives a user more permissions than they should be allowed.
C
So Kyverno won't allow that — just like with Kubernetes, you can't assume more privileges than you have. With Kyverno policies, one policy can't override another: all of them apply as a complete set, and if any policy denies a condition, it will still stand as a deny.
D
You could also write a validating policy on the policy object itself, and then actually validate your policies.

But yeah, I definitely see the integration with CAPN being very interesting down the road. I think that could be a very interesting use case for what I'm doing now with CAPV, with CAPI — and I think it could be very interesting.
E
Yeah, I think overall you probably need to realize you don't have a super cluster that knows everything — unfortunately, that is the truth — and you cannot have a single all-knowing operator or Kubernetes in order to make everything work.
C
All right, so I think that's what we wanted to present today. And certainly, Scott, as your use case matures, it would be great to see a demo at some point — and certainly if there's anything else on the namespace side...
A
Nope, yeah, I think we're good. The only thing that we have coming up for the leads is that we need to decide what we want to put in as our proposal for KubeCon EU. So if any of the leads have any ideas or proposals, send them over to me and we can work on that. But yeah, that was all of our content for today.
A
F
B
Yeah, we spoke about it briefly; we didn't have any conclusion. I just asked that, if people have opinions, they should let us know in Slack. Great, thanks — and thanks in person, or whatever this is.
E
Yeah — Adrian, so for the group thing: can you do automatic conversion, or is it possible? — What do you mean by automatic? — You have controllers; if I use the old version of the CRD...
B
We could. I can't say that really excites me very much. We just went through one conversion, from v1alpha1 to v1alpha2 — it's tricky — and that was two different versions of the same CRD. This would be something even more custom where, basically, as we read in one CRD we would write out the new one and then delete the first one. Oh, let —
E
B
You would basically need to have two controllers, one that just did conversions. It would probably be easier to write a one-time tool that just read everything and wrote it back, rather than trying to put it into HNC itself. So yeah, it would probably make more sense to just have a client-side tool that did it all in one go, yeah.
E
B
E
F
Yeah — and not just that, right: switching would also mean a lot more red tape and overhead for any changes, which is good and bad, but since we're not really core Kubernetes, do we have to change it? So, no. So yeah, we have the choice: right now we're x-k8s, and we have the choice because we did a formal API review in order to become a subproject.

Now we have the choice to basically go to k8s.io — so it would be hnc.k8s.io — but what that means is that any API change or improvement or feature would basically have to follow the KEP process, with a full API review for every change.
E
F
E
Okay, it's up to you guys, but it sounds like I may prefer to just keep x. It depends, you know, on how many users you actually have, and also on how many API changes you guys are planning to make.
B
C
E
Yeah, Kubernetes without the x sounds more formal, and it's more exciting.
B
Yeah, it is true — it's pretty cool to have that k8s.io. But yeah, I'm not ruling it out yet, by any means.

Ooh, let's see — we've had a couple hundred downloads of the client-side tool, which is probably the best indication of how many people are using it. The most recent version of HNC was 0.6 — well, we just released 0.7, but that one's still ramping up — and 0.6 got to about 200 downloads, and so did 0.5. So there are probably about 200 people out there using it. It's hard to tell how many clusters are using it, because people don't get back to me.

I can just tell when somebody downloads it. The YAML files have been downloaded many more times, but that could just be reinstalling on the same cluster, so it's hard to tell.

I don't think anybody's using it for production, because we haven't really recommended that until fairly recently, and even then, the Google-supported version of this is still in beta, so we're not recommending it for production yet. Sorry.
E
F
I guess, for my part — just for full disclosure — we're using it internally for our dev systems and we hope to eventually use it for production, but we're kind of waiting for it to be ready.
B
E
B
And the other thing is, we are still an alpha API, so that might give us — Ryan, do you know if that makes anything easier?
F
And there'll be a migration — I mean, it's a guarantee that the API will change once you move to a beta; it's just how much work, and obviously we can —
B
I'm not sure that's exactly true, because when you go from alpha to beta, you keep the group the same — the version changes, but the versions are all within the same CRD. You —
F
B
Some other kind of scheme.
F
Yeah — and I've talked to a few people, and a lot of them basically said x-k8s is probably what would be better for us. And if we did decide to move to k8s.io, we could basically do the same work we did now for our transition — it's not fun, but it could be a one-time thing. Yeah, we would migrate to the new CRD, yeah, I think.
F
E
I think the migration may be okay, but it depends. For this HNC CR, does it have any very disruptive operations — like deleting things if you delete the CR?
B
E
B
E
E
B
E
But you know, the auto-reference is hard. You don't want to stop everything. Yeah, that's — yeah, technically possible.
B
Yeah, you know what the right thing to do is — sorry, I'm going to have to leave in a moment — probably to write a design doc just showing how you would do it if you wanted to transition, and then we talk specifically about whether we think that's worth it.
F
Another option too, Adrian — and I just thought of this: we have that special annotation that says this is managed externally. You could just go through and write that onto the existing tree to prevent any changes while you migrate. — Oh yeah, that would be a way to increase safety, for sure.
B
I've just got some other things — work that I need to finish up first.