Description
Policy as Code enables continuous compliance and protects against common misconfigurations. Kyverno, a CNCF sandbox project, enables Kubernetes-native policy as code in a simple and scalable manner. This livestream explores how enterprises can manage security and best-practices compliance for their Kubernetes clusters and workloads using Kyverno.
B
I think, as we all know, there's a lot of power in how Kubernetes does configuration management, but with that power also comes some complexity and the need to manage configurations correctly. There's also always this question about which parts of the configuration developers should be able to update and manage, which parts operators and cluster admins should manage and enforce, and how that balance should be coordinated to solve this in Kubernetes.
B
We believe that policy management is necessary to secure clusters and to manage clusters at scale, and we want to talk about how Kyverno, which is a CNCF sandbox project, addresses this problem of policy management. In fact, as we're chatting, I'll share my screen and pull up the Kyverno website, and we'll take a quick look at some of the basics of Kyverno itself.
B
So let me switch quickly to the website. One of the reasons we created Kyverno is to deal with this problem of managing declarative configurations: making sure the right settings are in place, and validating for best practices, security, and compliance, even things like pod security policies and how to make sure those settings are in place. We wanted to make that very simple, and make sure that all clusters can use a toolkit which is simple to install, simple to manage in Kubernetes, and designed specifically for Kubernetes. So with Kyverno, policies are written just like any other Kubernetes resource. They are a CRD, a custom resource, which can be managed much like you would a pod, a deployment, or anything else related to your Kubernetes cluster and its configuration.
A
Jim, I think that's an amazing project, and as we have discussed, pod security policies will be discontinued in the near future in new versions of Kubernetes. So one question: how does Kyverno compare to OPA and Gatekeeper?
B
OPA defines policies using a language called Rego, and using this language you can write code which manages or controls aspects of your policy configurations. Rego is optimized for JSON processing, and there's a lot of power and flexibility in that. But it's also a new language to learn; it was created outside of Kubernetes and then adapted to Kubernetes through the Gatekeeper project. So it is certainly an option for policy management.
B
A good example of this is a multi-tenant cluster, or even different teams within the same enterprise using a cluster: you want to be able to manage policies easily. So we wanted to make sure there was a solution which was very native to Kubernetes and which allowed us to write these policies in a simple manner. Like Gatekeeper, Kyverno works as an admission controller, and Kubernetes, of course, is designed to be very extensible, so Kyverno leverages the admission control and webhook capabilities.
B
Kyverno can receive every API request, but the way you manage the policy, and here I'll show a quick example, is extremely simple. This is a Kyverno policy which enforces, at admission control, that a particular label is present. That's what we wanted: you don't need to learn a programming language or have any other external tooling to be able to maintain these types of policies.
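A minimal sketch of the kind of label-checking policy described here; the policy name and the `app` label are illustrative, not necessarily the exact policy shown on screen:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: enforce   # block non-compliant requests at admission
  rules:
    - name: check-for-app-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "The label `app` is required."
        pattern:
          metadata:
            labels:
              app: "?*"   # wildcard: the label must exist with a non-empty value
```

The whole policy is declarative YAML, which is the point being made: no Rego or other programming language is involved.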
B
In
fact,
you
can
use
your
existing
ci
cd
pipelines,
you
can
use
githubs,
and
that
was
that's
what
we
mean
by
policies
as
code
right,
so
the
same
best
practices.
You
want
to
use
to
manage
your
kubernetes
resources
like
customize
or
coupe,
cuddle
and
other
you
know
tools.
You
can
now
leverage
for
policies
without
having
to
do
anything.
You
know
again,
external
or
bring
in
some
other
complexity.
A
One quick interesting question: you talked about GitOps. Can GitOps principles be applied to this kind of policy engine itself, or not?
B
It is very much in line with GitOps. In fact, one thing I can quickly show, and this is interesting: I'll just google Flux and Kyverno. Flux, as a lot of folks know, is a very popular GitOps toolkit, and in Flux 2, to deal with multi-tenancy challenges, they are in fact leveraging Kyverno and Kyverno policies to solve these problems.
B
What
happens
when
different
tenants
want
to
manage
their
own
applications?
So
the
example
of
the
kirona
policy.
Here,
let
me
see
if
I
can
see
so
you're,
it's
actually
installing
caverno
through
using
github's
principle,
and
then
it's
pulling
some
default
policies
for
caverno
right.
So
if
we
go
back
to
the
flux
to
repo,
I
can
show
like
what
policies
are
available
here.
B
So
this,
for
example,
is
a
flux
policy
which,
or
it's
a
kirona
policy
included
as
part
of
flux,
which
is
checking
a
custom
resource
called
customization
or
a
home
release,
and
it's
making
sure
that
there's
a
service
account
specified
right.
So
very
much.
You
know,
I
guess,
if
you
again
think
about
this,
how
how
policies
can
be
used
as
resources,
then
you
can
apply
all
the
same
git
ups
best
practices
that
you
would
for
a
normal
policy.
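A sketch of that kind of multi-tenancy check, matching the Flux v2 custom resource kinds named above; the exact policy shipped in the Flux repo may differ in details:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: flux-multi-tenancy
spec:
  validationFailureAction: enforce
  rules:
    - name: serviceAccountName
      match:
        resources:
          kinds:
            - Kustomization
            - HelmRelease
      validate:
        message: ".spec.serviceAccountName is required"
        pattern:
          spec:
            serviceAccountName: "?*"   # must be set to a non-empty value
```

Because Kyverno matches on any resource kind, including custom resources like these, the same validation mechanism works for Flux objects as for core Kubernetes objects.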
C
Yeah, sure, let me share my screen. Okay, so hi everyone, this is Shuting. To start off, I want to show you the mutation capability of Kyverno. Like Jim mentioned before, I will show it as part of a CI process, where I'll be applying a mutation policy to a deployment, and we will see how the mutation is patched into your resources.
C
C
So
it
does
nothing
but
to
patch
those
containers
to
the
deployment
here,
I'm
using
the
match
clause
to
match
the
kind
by
the
deployment
and
I'm
searching
for
the
particular
label.
This
bracket
here
means,
if
I
have
this
label
defined
in
the
deployment,
then
I'll
apply
the
policy,
otherwise
it'll
just
skip.
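A sketch of a sidecar-injection mutate rule of this kind; the Vault agent container, image tag, and annotation key are illustrative, modeled on the demo's description rather than copied from it:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-sidecar
spec:
  rules:
    - name: inject-vault-agent
      match:
        resources:
          kinds:
            - Deployment
      mutate:
        patchStrategicMerge:
          spec:
            template:
              metadata:
                annotations:
                  # conditional anchor: the rule only applies when this
                  # annotation already exists on the pod template
                  (vault.hashicorp.com/agent-inject): "true"
              spec:
                containers:
                  - name: vault-agent      # sidecar added by the mutation
                    image: vault:1.6.1
                    volumeMounts:
                      - name: vault-secret
                        mountPath: /secret
                volumes:
                  - name: vault-secret
                    emptyDir: {}
```

The parenthesized key is the "bracket" mentioned above: without it, the patch would be applied to every matching Deployment instead of only the opted-in ones.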
C
Right. So I've downloaded this inject-sidecar mutation policy to my local directory, and then I have an example deployment here which has this annotation defined, and I only have a single busybox container inside the deployment. From the CLI, what I expect is that the Kyverno CLI will mutate my deployment with those sidecar containers, and it also mounts a volume to the pod's containers.
C
Oh,
let
me
pull
up
the
terminal,
so
I
already
have
oh,
I
already
have
the
sale
installed,
but
just
to
give
you
some
idea,
we
have
a
document,
see
that
how
you
can
install
the
key
renault
cli
it
is,
you
can
download
the
the
binary
directly
from
the
ripple
or
you
can
install
veracru
and
we
made
the
key
renault
kukkado
plugin
so
that
you
can
use
it
along
with
the
colcado
command
right.
C
Okay,
so,
let's
check
I
have
the
policy
here
all
right.
I
have
this
sidecar
injection
policy
and
the
deployment.
What
I'm
going
to
do
now
is
to
use
crocodile
kubernetes
and
we
have
this
apply
command
which
allows
you
to
apply
this
policy
that
car
injection
policy
and
if
you
specify
dash
r,
it
takes
the
resource
manifest.
C
What
I
have
here
is
the
deployment
okay.
So
once
I
run
this
command
it'll
output,
the
expected
the
mutate
resources,
the
mutated
resources.
Here
I
have
the
deployment,
and
this
is
the
original
container
I
have,
but
you
can
see
here,
the
vault
agent
is
injected
and
the
any
container
is
inserted.
Also,
and
then
I
have
a
informal
volumes
mounted
to
the
pot
to
the
container
right.
This
is
just
the
ability
of
mutation
and
I'm
doing
it
in
the
ci,
but
also
I
have
kiberno
part
run
here.
C
C
C
You
will
see
there
is
a
unique
container
and
also
another
container
is
injected
to
the
pot.
So
this
is
the
mutation
as
well
as
how
the
cli
works
with
kubernetes
and
next,
what
will
be
interesting
is.
I
want
to
show
you
the
validate
ability
of
kivernal.
C
We
have
a
set
of
policies
that
restrict
your
pod
security
contacts
and,
as
we
know
that
the
pod
security
policy
is
going
to
deprecate
it
in
kubernetes
1.21,
I
think
and
it'll
be
removed
eventually.
So,
as
an
alternative
kirino
provides
you
a
way
to
validate
your
security
configurations.
C
Okay
from
our
website,
we
have
this
set
of
pod
security
policies
defined.
There
are
default
policies
and
also
the
restricted
ones
which
enforces
the
best
practice
of
your
pod
configurations.
C
So
today,
I'm
gonna
only
show
you
the
default
security
policies,
and
this
these
are
all
the
validated
policies.
Let's
take
an
example
for
the
validate
policy
here.
C
Okay, once they're created, let's check again with kubectl get cpol. Now you can see I have this bunch of security policies deployed to my cluster. Okay, what would be interesting next, yeah.
A
Sure,
just
a
minute,
please,
we
have
a.
We
have
another
question
here
that
is
important
before
you
continue.
What
does
the
sidecar
parts
do
exactly.
C
Oh,
the
psycho
policy.
Let
me
go
back
to
the
mutation
policy.
This
is
just
to
inject
the
vote
agent,
the
harshi
cardboard
agent.
As
you
know,
they
have
this
account
container
to
automatically
create
a
secret
and
that
can
be
later
used
in
your
running
application
for
that
secret.
So
here
the
indie
container
is
responsible
for
creating
the
secret
and
I
month
to
this
volume
to
this
pass,
and
then
I
create
the
volume
and
then
to
use
it
inside
the
vault
agent
or
in
your
running
application.
Any
of
your
running
applications.
B
Yeah,
so
this
is
an
example
of
you
know,
injecting
a
sidecar,
but
it
could
be
anything
that
you
might
want
it.
You
know
that
the
side
car
could
be
your
own
container.
Here
we
have
used.
You
know
the
walt
agent
as
an
example,
but
there
are
several
use
cases
where
we
have
seen
like
for
certificate
management,
security
for
monitoring
that
sidecar
containers
are
required.
B
So
one
interesting
thing
is
here:
you
can
quickly,
you
know
inject
all
of
these,
and
just
you
can
customize
these
policies
also,
if
you're
using
something
like
fargate
or
you
know
any
serverless
tool
where
you
cannot
run.
You
know,
standard
services
like
for
logging
monitoring,
external
right
as
external
daemon
sets.
One
interesting
pattern
is
to
use
sidecars
to
be
able
to
perform
those
services
within.
You
know
for
your
main
application
itself
and
again
that's
where
kiberno
policies
could
be
very
useful.
A
Great
great
great,
what's
one
most
normal
question
before
continuing
the
question
from
christopher
is:
can
I
apply
this
cluster
pause
only
if
namespace
after
off
object
had
a
specific
label?
I
saw
I
I
what
this
did
label,
but
I
can
explain
where
it
is.
Please.
C
Yeah,
I'm
sure,
as
I
mentioned
before,
you
can
match
by
an
annotation
with
the
anchor,
but
also
in
the
match
block.
C
I
think,
oh,
this
is
not
in
the
in
this
release,
but
with
the
recent
changes
we
added,
we
have
a
namespace
selector
when
you
that
you
can
use
to
match
a
policy
and
also
we
have
the
preconditions
and
with
the
conditions
that
you
can
look
up
the
labels
and
then
match
to
the
specific
values.
If
you
want
so
yeah,
that's
possible.
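A sketch of the namespace-selector style of matching described here, assuming the `apply-policy` label key from the question; as noted, this selector became available only in the 1.3.2 release discussed later, and the validated label is illustrative:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: match-labeled-namespaces
spec:
  validationFailureAction: audit
  rules:
    - name: check-pods-in-opted-in-namespaces
      match:
        resources:
          kinds:
            - Pod
          # only match resources whose namespace carries this label
          namespaceSelector:
            matchLabels:
              apply-policy: "true"
      validate:
        message: "Pods in opted-in namespaces must set an `app` label."
        pattern:
          metadata:
            labels:
              app: "?*"
```

With this, the policy is scoped by a label on the existing Namespace object rather than by anything in the incoming resource itself.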
A
Okay,
please
continue
to
show
the
code.
C
All
right,
okay,
let
me
switch
back
to
the
pod
security
example.
We
have
as
I've
installed
all
those
policies
yeah.
What's
what
would
be
interesting
next
is
to
see
how
the
validate
policy
works
with
key
renault
right.
I
want
to
test
what,
if
I
create
a
pad,
will
what
happened
right
so
I
came
across
this
ripple
called
bad
pods.
They
have
provide
a
bunch
of
pod,
manifest
that
you
can
use
to
valid,
validate
your
security
contacts
of
the
pod
configurations.
C
C
You
can
see
here
I
have
host
network
set
to
true
host
paid
host
ipc
set
to
true
and
also
I'm
using
the
host
pass,
that's
one
of
the
volume
and
then
I'm
running
the
container
as
a
privilege
mode
right
so
by
default.
What
ex?
What
I
expect
is
that
kirino
policy
would
block
this
pod
creation
because
it
validates
the
policy
or
the
security
context
right
then,
let's
take
the
raw
manifest.
C
You
can
see
the
pod
creation
here
is
blocked
by
kubernetes
and
these
are
the
policies
that
is
actually
blocking
the
results.
Creation
right.
Let's
take
a
closer
look
saying
that
this
pod
was
blocked
due
to
the
following
policies
and
the
first
one
is
the
field
host
network
net
network
host
ipc
hostped
must
not
be
set
to
true
right,
so
here
it
blocks
the
population
and
let's
see,
let's
use
another
example.
What
if
my
pod
has
nothing
allowed
like?
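A sketch of the host-namespaces check quoted in that block message; this mirrors the published Kyverno pod security samples, though the exact policy in the demo may differ:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-namespaces
spec:
  validationFailureAction: enforce
  rules:
    - name: host-namespaces
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: >-
          Sharing the host namespaces is disallowed. The fields
          spec.hostNetwork, spec.hostIPC, and spec.hostPID must be
          unset or set to false.
        pattern:
          spec:
            # =() anchor: validate the field only if it is present
            =(hostNetwork): "false"
            =(hostIPC): "false"
            =(hostPID): "false"
```

The `message` is what surfaces in the kubectl error output when admission is denied, which is exactly what the demo shows.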
C
C
C
In
most
cases,
you
may
not
have
a
standalone
pod
created
right.
You
may
manage
your
pod
by
the
power
controller.
For
example,
the
deployments
demonstrate
stiffer,
set
or
etc
right
you.
You
may
start
wondering
what
will
happen
when
I
create
the
power
controllers.
Typically,
this
port
security
policy
and
kubernetes
everything
will
go
silent.
C
C
We
will
reject
that
creation
immediately
and
let's
first
take
an
example
and
then
I'll
explain
what
happens
behind
the
scene.
A
Shutting
just
just
a
minute
before
christopher
explain
a
little
bit
more
about
this
ques.
His
question
is
not
talk
about
at
namespace
selector,
but
about
match
all
parts.
If
all
namespace
have
label
like
apply
policy
true,
make
sense
catch
the
question:
okay,
yeah.
A
Okay, I think so. The other question asks specifically if you can briefly describe the differences between the cluster policies you've shown and Kubernetes network policies.
B
If you have multiple pods, network policies are useful for controlling traffic between those pods within your cluster, or between namespaces, or even any ingress and egress traffic for those pods. Network policies require a CNI which can enforce that network firewalling, whether it's ingress or egress, so you would have to run a CNI like Calico or kube-router to be able to use network policies.
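For reference, this is what a minimal "default deny" NetworkPolicy of the kind a CNI like Calico enforces looks like; the namespace name is hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: my-tenant       # hypothetical tenant namespace
spec:
  podSelector: {}            # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress                 # with no allow rules, all traffic is denied
```

Note this is a native Kubernetes object enforced by the CNI at the network layer, which is the contrast being drawn with Kyverno policies below.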
B
The
c
poll
that
shooting
was
showing
is
a
kiverno
policy
which
is
a
custom
resource
in
kubernetes,
which
runs
you
know
you
have
to
install
the
kubernetes
controller
and
with
that
you
can
kind
of
use
the
runner
policies
right.
So
they're,
two
different
types
of
things:
kubernetes
has
native
policy
objects
like
pod
security
policies,
network
policies,
but
then,
in
addition,
if
you
need
more
security,
like
your
you
know,
we
were
talking
about
pod
security
enforcement.
B
If
you
want
to
have
like
for
by
default
with
pod
securities
there
to
enable
them,
you
have
to
use
our
back
and
rolls
and
that's
very
complex
because
you
know
like
shooting,
was
just
showing
typically
it's
a
pod
controller
like
a
deployment
which
creates
the
pod
and
that
role
is
not
associated
to
a
person
it's
associated
to
the
pod
controller.
So
there
are
so
many
problems
like
that,
which
is
why
network
you
know.
Pod
security
policies
are
being
deprecated
and
ki.
Verno
just
gives
a
more
flexible.
B
A
way
to
you
know,
implement
some
of
these
security
boundaries
that
you
will
want
in
your
clusters.
Otherwise
you
know
you're
kind
of
exposed,
like
the
bad
pods
website,
shows
very
well.
A
Thank
you
jim.
Thank
you
shutting.
Could
you
please
return
for
your
behind
the
scenes?
Let
let
let
us
check
this.
It's
very
interesting,
yeah.
C
Okay, it's just a simple deployment, which has exactly the same configuration as the bad pod I showed before, but this time I'm creating a deployment, not a standalone pod. Okay, back to the shell.
C
If
I
say
kukkado
apply
deployment,
you
will
see
the
deployment
creation
will
be
reject
immediately,
saying
that
you
cannot
set
those
data
right.
So
what
behind
the
thing
is
that,
as
I've
shown
you
all,
the
policies
are
right
on
the
pot,
but
in
this
case,
if
you
have
a
kirono
running
your
cluster,
when
you
apply
the
policy,
ki
verno
will
automatically
convert
those
part
rule
to
the
paw
controller
rules.
C
Here
I
can
convert
the
this
rule
to
demonstrate
to
match
the
demonstration
deployment
job
and
default,
set
saying
that
you
can't
use
host
pass
in
even
in
either
of
the
this
resources
right
and
also
we
have
another
autogen,
auto
generated
rule
for
crown
drop.
The
spec
looks
somehow
different,
but
it
still
disallows
the
host
path
right.
But
with
this
feature
you
will
see
the
results
or
the
part
controllers
creation
will
be
blocked
immediately.
So
you
will
know
that
what
would
happen
and
allows
you
to
configure
or
confine
your
configurations
folder.
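Conceptually, the auto-generation takes a rule written against Pods and rewrites it against the controllers' pod templates. A sketch of the idea, using the host-namespaces check from earlier as the example (the fragments below are rule bodies only, not complete policies, and are not the exact generated output):

```yaml
# original rule: written against Pods
match:
  resources:
    kinds:
      - Pod
validate:
  message: "Host namespaces are disallowed."
  pattern:
    spec:
      =(hostNetwork): "false"
---
# auto-generated counterpart: the same check, nested one level
# deeper under the controllers' spec.template
match:
  resources:
    kinds:
      - DaemonSet
      - Deployment
      - Job
      - StatefulSet
validate:
  message: "Host namespaces are disallowed."
  pattern:
    spec:
      template:
        spec:
          =(hostNetwork): "false"
```

Because the generated rule targets the controllers directly, the rejection happens when the Deployment is applied, instead of failing silently later when the controller tries to create pods.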
A
Okay, thank you. Leonardo?
B
Yeah
great
question,
especially,
of
course
with
you
know
the
focus
now
with
the
solarwinds
hack
on
you
know,
supply
chain
and
you
know
kind
of
managing
the
supply
chain.
Integrity
right.
So
there's
two
two
parts
to
think
about
over
here.
Right
so
in
every
image
has
a
digest,
which
is
an
immutable
hash.
That's
created
to
represent
the
digest.
B
If you go to that link and go to best practices, it shows how to validate that the registry is a registry that you trust. Now, the second thing after that is, for a particular image, you want to make sure that the image you're running, if you're using a name and a tag, that is, a repo name, the image name, and the tag, translates to a digest which actually matches the digest in your registry.
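A sketch of the first of those checks, restricting images to a trusted registry; the registry prefix is a hypothetical placeholder:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: enforce
  rules:
    - name: trusted-registries
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Images must come from the trusted registry."
        pattern:
          spec:
            containers:
              # every container image must start with this prefix
              - image: "registry.example.com/*"
```

The digest-matching check is a separate concern, and signature verification, as discussed next, needs tooling beyond this.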
B
The final piece of this, though, has to do with signing images, and for that you would need something like Notary, or Notary v2, which is being worked on in the community. What those projects do is verify not only that you have a valid hash, but that the contents are also signed by somebody you trust. That's an external check, a third check, which Kyverno would not perform; you would need something like a Notary v2 client, or a Notary client, which can check the digital signature and enforce that. But yeah, a very good topic, and that's still something we're looking at very closely; we're interested in ideas on how we can expand what Kyverno does for end-to-end image and content trust.
B
Yep,
so
yes,
okay,
the
question
is,
what
does
giver
know
still
still
doesn't
do,
and
what
are
the
plans
for
the
near
future
yeah?
So
we're
constantly
you
know
so
qiverno
is
by
the
way
the
schema
is
we
want
right,
so
we
manage
compatibility
so
all
of
the
basics
that
you
saw
with
these
policies
they're,
you
know
well
supported
and
not
something
that
we're
gonna.
B
You
know
kind
of
change
without
you
know
backward
compatibility
management
right,
but
of
course,
there's
always
more
features
more
advanced
things
we
want
to
do
like
you
know
the
the
whole
image
signing
is
another
interesting
area
to
explore.
So
can
kiverno
and
notary
v2
work
together
to
solve
a
particular
problem?
Right,
though,
that's
something
that
could
be
a
future
development,
other
examples
of
things
actually
coming
out,
so
we
are
at
132
rc1
right
now,
and
you
know
we
are
actually
gonna.
B
So
in
the
next
few
days
132
will
be
out
and
in
this
release
itself
there's
you
know
very
some
interesting
features.
Shooting
mentioned
the
namespace
selector.
So
going
back
to
the
previous
question
of
okay,
if
I
have
want
to
match
pods
on
the
namespace,
and
I
want
to
check
the
label
of
a
namespace
so
today,
kiverno
can
check
the
namespace
in
our
labels
or
annotations
in
the
in
the
object
being
changed.
All
of
that,
based
on
the
incoming
request,
so
based
on
the
api
server
request.
B
What
this
feature
that's
coming
in
132
allows
you
to
do
is
check
an
existing
namespace
for
its
labels
or
you
know,
by
selecting
that
label,
so
the
policy
will
only
in
a
match
if
you
know
that
if
there's
an
existing
namespace
with
those
labels,
it's
not
checking
the
namespace
in
the
incoming
api
request
right
so
with
you
know,
so
that's
one
of
the
use
cases
that
had
come
up
and,
like
I
think
christopher,
also
asked
before
so
that's
coming
in
132,
it's
already
available
as
an
rc,
and
we
will
be
you
know,
kind
of
releasing
the
final
after
some
more
testing
in
the
next
few
days.
B
Another
interesting
feature
in
132
and
let
me
share
my
screen
I
can
actually
show
this
is
a
a
use
case
which
often
comes
up.
Is
what
if
you
want
to
you
know,
write
a
policy
which
I'm
limiting
or
managing
things
based
on
based
on
some
existing
api
data
right
so
you're?
Sorry,
I
was
trying
to
make
my
screen
bigger.
Okay,
hopefully
that's
more
visible.
B
So this is a policy which is checking, within the cluster, for a particular API object. By the way, one interesting thing, I don't know if you noticed when Shuting was doing the demo, but I'm getting help for my policy directly in Visual Studio Code, and that's a feature of Kyverno. It's a small feature, but something that is pretty nice, because you can get help for any part of your cluster policy.
B
You
know
cluster
or
your
cluster
policy
directly
either
using
coop
cuddle,
explain
or
in
this
case,
because
I'm
running
visual
studio
code-
I
can
see
you
know
all
of
the
help
fields
and
it
does
syntax
checking
and
things
like
that
directly
in
visual
studio
code
right
so
again,
another
example
of
this
whole
policy
is
code
and
why
that's
important
to
have
this
native
approach,
you
know,
so
you
can
use
standard
tools
etc,
but
going
back
to
this
policy.
B
This
is
a
interesting
part
over
here,
right
and
I'll
I'll.
Actually,
we'll
run
this
policy
and
we'll
see
make
sure
it
works.
But
if
you
look
at
what's
happening
over
here,
it's
saying
I'm
going
to
make
an
api
call
to
the
kubernetes
api
server
and
for
that
api
server.
What
I'm
going
to
do
is
I'm
going
to
you
know,
check
namespaces
and
I'm
going
to
use
from
my
request,
the
incoming
namespace
for
the
object
and
in
that
namespace
I'm
going
to
check
how
many
services
exist.
B
So
here,
and
you
cannot
enforce
this
using
default
quotas,
because
if
you're
doing
default
quotas,
you
can
only
limit
a
service
object.
You
cannot
say
a
service
object
of
type
load,
balancer
right,
so
this
is
something
that,
with
the
combination
of
the
simple
policy
you
can
now
easily
leverage
in
your
cluster.
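A sketch of such a policy, based on the description above; the exact policy on screen isn't shown, so this assumes a limit of one LoadBalancer service per namespace, with the count fetched via an API call in the rule's context:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: limit-loadbalancers
spec:
  validationFailureAction: enforce
  rules:
    - name: limit-lb-services
      match:
        resources:
          kinds:
            - Service
      # call the API server and count existing LoadBalancer services
      # in the namespace of the incoming request
      context:
        - name: serviceCount
          apiCall:
            urlPath: "/api/v1/namespaces/{{request.namespace}}/services"
            jmesPath: "items[?spec.type == 'LoadBalancer'] | length(@)"
      preconditions:
        - key: "{{ request.object.spec.type }}"
          operator: Equals
          value: "LoadBalancer"
      validate:
        message: "Only one LoadBalancer service is allowed per namespace."
        deny:
          conditions:
            - key: "{{ serviceCount }}"
              operator: GreaterThan
              value: 0
```

The JMESPath expression filters and counts the returned items, which is exactly what the terminal walkthrough below demonstrates by hand.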
B
The
other
cool
thing
is
kind
of.
If
I
go
to
my
terminal
and
let's
make
this
bigger,
so
the
same
thing
I
want
to
test
in
my
policy,
if
I
kind
of
just
just
do
group
cuddle
get-
and
I
say:
minus
minus
raw
and
I'll-
take
that
same
path
right,
so
I
want
to
say
api,
let's
say:
v1
namespaces.
B
I
can
it's
the
exact
same.
You
know,
syntax
that
I
would
use
within
my
policy
and
now,
if
I
want
to
get
a
count,
I
can
you
know,
use
jmes
path,
so
I
have
a
command
line
tool
called
jp,
which
is
a
you
know
which
process
is
jamey
spat
and
I
can
do
something
like
if
I
say
items
dot
or
I
can
even
just
say
items
and
then
I
can
say
length
and
the
syntax.
B
So
this
is
all
documented
on
the
jmespath
website
and
this
by
the
way,
is
also
used
by
coop,
cuddle
or
aws
clies,
etc.
So,
if
I
run
this
now,
it's
telling
me
that
hey,
I
have
five
namespaces
in
my
cluster,
so
the
kiberno
policy
is
using
ex
the
same
familiar
syntax.
The
same
expressions
over
here
that
you
would
you
know
kind
of
use
within
your
within
you
know
just
outside
with
coop
cuddle
itself
right.
B
Yeah, so I already have a load balancer running: I have a pod, and I have a load balancer policy, because we were testing and playing around before, and this policy is installed. So if I do something like kubectl expose, let's say on the pod nginx, and I just say --type=LoadBalancer.
B
So
what
should
happen
is
now
kiverno
should
check
and
see
that
you
know
there's
already
one
load
balancer
service
in
this
default
namespace
and
it
will
block
it
right.
So
that's
the
an
example
of
using
this
policy
in
action
to
solve
a
problem
like
this
or
something
you
want
to
enforce
in
your
namespace.
B
So
that's
also
a
feature
coming
in.
You
know
this
feature
is
you
know
available
in
132
and
it's
something
new
other
new
features,
and
you
know
examples
of
things
we're
you
know
working
on
have
to
do
with
more
just
simplifying
you
know,
lookups
like
for
images
like
if
you
want
within
the
policy
itself.
B
Let's
say
you
want
to
use
a
like.
You
know
like
a
tag
or
things
like
that
to
be
able
to
even
use
like
regex
expressions
within
policies,
so
advanced
things
like
that
right,
so
those
are
the
type
of
you
know,
sort
of
the
constructs
or
simple
but
powerful
things
that
we
are
able
to
do
quickly
with
giver
know
and
with
kievana
policies
itself.
A
Where
jim
jim,
we
have
two
questions
here,
one
is
from
ogre.
I
don't.
I
don't
know
if
I
get
the
really
the
the
what
the
he
means,
but
will
kverner
be
comparable
with
all
three
major
clouds.
I
don't
know
exactly
what's
mean,
but.
B
Yeah,
so
maybe
perhaps
the
question
is
compatible
with
with
all
major
clouds
or
will
it
work
with
all
you
know:
control
planes?
Yes,
so
kiverno
is
all
standard
kubernetes.
You
know
in
terms
of
the
machinery
behind
the
scenes
it
installs
itself
as
a
weber
server,
and
that
is
supported
with
eks
aks
gke
oke.
B
All
of
these
type
of
systems,
where
you
know
as
long
as
it's
kubernetes
compliant
kiverno,
can
install
itself
and
you
can
act.
It
acts
as
an
admission
controller
within
the
control
planes
of
all
of
these
cloud
providers.
A
Great
you
already
answered
this
question,
but
we
have
for
eight
minutes
and
more
five
minutes
to
complete.
We
have
a
question
about
what
would
be
you
recommend.
What's
your
recommendation
with
respect
of
different
use
case
for
kubernetes.
B
Also
working,
you
know,
like
another
sort
manager
to
do
some
automated
configurations,
automated
kind
of
generation
of
defaults
using
kiberno
one
other
quick
example.
I
can
show-
and
you
know
we
might
not
have
time
to
go
through
this
full
example.
But
one
very
interesting
thing
with
any
any
multi-tenant
system
is
you
know
you
want
for
namespaces,
you
want
control
over,
you
know
usage
and
you
want
the
ability
to
securely
share
namespaces.
B
So
here
I
have
a
policy
which
is
you
know,
adding
access
controls
to
a
namespace
and
base,
so
it's
combining
validate
mutate
and
generate
policies
all
into
one
and
those
are
the
types
of
use
cases
that
are
most
powerful.
So
in
this
case
it's
checking
to
see
who's,
creating
the
namespace
based
on
that
it's
setting
certain
labels
and
it's
enforcing
that.
There's
a
naming
convention
you
know
created
as
part
of
the
namespace
and
then
for
that
namespace.
It
automatically
generates
some
defaults
itself
right.
Let
me
check
and
see
if
I
have
this
policy.
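A sketch of the generate portion of such a policy, creating a default-deny NetworkPolicy in every new namespace; the rule names are illustrative, and a real policy would typically add quotas and role bindings the same way:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: namespace-defaults
spec:
  rules:
    - name: generate-default-deny
      match:
        resources:
          kinds:
            - Namespace
      generate:
        kind: NetworkPolicy
        name: default-deny
        # create the resource inside the newly created namespace
        namespace: "{{request.object.metadata.name}}"
        synchronize: true    # re-create the resource if it is deleted or changed
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress
```

The `synchronize: true` setting is what makes the generated resource self-healing: Kyverno recreates it even when a user with RBAC permission deletes it.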
A
Between
this,
there
is
another
question
here:
quickly:
are
crd
controllers,
supported
by
out
generation
rules.
B
Yes,
so
if
the
question
is
like
are
crd,
supported
and
control
is
yes,
so
any
any
kubernetes
resource
is
supported
with
kiberno.
B
We
don't
limit
ourselves
to
the
standard
resources,
but
all
crds
can
be
managed,
and
those
are
some
of
the
examples
like
I
was
mentioning
with
cluster
api
or
cert
manager
like
search
manager,
uses
certificates
and
certificate
authorities.
So
all
of
those
custom
resources
can
be,
you
know,
managed
also
using
kiberno
yeah
so
and-
and
it's
really
coming-
you
know
like
the
interesting
use
cases
are
when
you
start
combining
some
of
these
policies.
So
if
I
look
here,
I
already
have
these
name
space,
quality
policies
and
other
things
installed.
B
What
I
also
installed
just
to
kind
of
complete
the
example
is,
I
have
a
role
created,
a
user
named
nancy
and
if
you
know
that
is
a
namespace
administrator
right,
so
they
have
a
role
binding
where
they
can,
you
know,
create
namespaces
and
then
they're
assigned
you
know.
The
role
of
an
admin
kiverno
will
automatically
generate
a
role
within
that
you
know
to
be
able
to
only
view
or
manage
the
namespaces
that
user
creates
right.
B
If I say describe namespace, I see that the namespace is created, and Kyverno automatically added a label to say who created that namespace. I have a quota already set on the namespace, and now let me also run something like get networkpolicy, oops, in this case.
B
So
let's
give
the
namespace
get
network
policy.
I
see
that
there's
a
default
network
policy
also
generated
right.
So
all
of
this
is,
you
know,
managed
through
that.
Actually
one
interesting
thing
is:
if
I
look
at
that
policy,
let's
try
to
delete
that
network
policy
and
see
what
happens
right
so
I'm
gonna
say:
delete
network
policy
default,
deny.
B
And
you
know,
although
the
user
in
this
case,
had
the
rbac
permissions
to
delete
that
resource,
given
will
always
recreate
or
regenerate
that
policy
based
on
the
settings
so
pablo
to
answer
your
question.
Those
are
the
types
of
use
cases
we're
seeing
now
like
where
users
are
combining.
You
know,
validation,
mutation
generation,
to
solve
some
very
interesting
problems
in
managing
clusters
and
managing
kubernetes
at
scale
in
a
secure
fashion.
A
Great
question:
great
sharing,
let's
we
are
really
cl
very
close
to
them,
but
I
should
ask
you:
cover
is
a
very
promising
project
and
it's
starting
as
a.
A
B
Yeah, several ways, and thank you for mentioning that, on the community side, because Kyverno is a very open, flexible, and friendly community. So definitely, and I'll post our GitHub link: there are several ways users can start, and you don't have to be a developer. If you're simply a policy user, you can submit ideas for new policies.
B
You
can,
you
know,
contribute
by
submitting
sample
policies
in
the
repo
that
we
shared,
and
you
can
also
you
know
kind
of
just
give
us.
You
know
if
you
have
an
idea
for
a
policy
something
you're
trying
to
solve.
You
know
just
reach
out
and
we
will
you
know,
help
you
create
those
sample
policies
and
we
can
add
them
to
our
repo.
B
Otherwise,
of
course
like
for
documentation
and
other
things,
we're
always
you
know
helping
improve,
and
so
that's
another
simple
way
to
contribute
and
then,
of
course,
if
you're
interested
in
coding-
and
you
know,
developing
kubernetes
controllers-
there's
several-
you
know
bugs
in
our
git
repo
that
there's
a
link
posted
here
which
are
marked
as
good
first
issue.
So
just
look
for
those
good
first
issue.
Type
of
you
know,
items
in
github
and
those
are
very
simple,
very
easy
to
get
started
with
and
also
you
know,
the
kiverno
slack
channel
is
very
active.
B
So
if
you're
interested
in
policy
management,
kubernetes
security
make
sure
you
join
our
slack
channel
as
well
and
just
reach
out
over
there
say
hello
and
you
know
just
feel
free
to
ask
questions.
All
questions
are
good
questions,
so
don't
hesitate
if
something's
not
clear
just
to
reach
out
and
ask.
A
Jail
sharing,
thank
you.
So
much
was
amazing.
I'm
very
proud
to
be
here
with
you
and
I
will
ask
everyone
that
want
to
contribute
to
open
source
join
and
like
policies,
pulse
control,
security
control.
A
Please
join
this
project
because
it's
an
open
source
in
the
our
cncf
sandbox
and
with
your
contribution,
this
brush
will
go
ahead
very
well.
Jim
again,
thank
you
so
much.
Thank
you
sharing!
That's
all
our
finished.
Our
time
was
amazing.
Thank
you.
Everyone
for
joining
us.
The
last
episode
of
this
week
in
the
cloud
native.
It
was
great
to
have
my
friend
jean
and
sharing
talk
about
kevin
orr
security
as
a
code.
We
also
really
love
the
interaction.
Thank
you
so
much
everyone
see
you
next
week.