From YouTube: Cloud Native Live: Kyverno in Production
A: Hello everyone, and welcome to Cloud Native Live, where we dive into the code behind cloud native. I am Itay Shakury, director of open source at Aqua Security; I'm also a CNCF ambassador, and I will be hosting today's show. Every week we bring a new set of presenters to showcase how to work with cloud native technologies.
A: So before we get going, just a quick reminder that this is an official live stream of the CNCF and, as such, is subject to the CNCF code of conduct. So please don't add anything to the chat or questions that could be in violation of that code of conduct — basically, just be respectful of everyone. Thank you, and with that I'll hand it over to Jim. Would you like to introduce yourself first?
A: Yeah — and Shuting, how would you like to introduce yourself?
C: Yeah, thanks Itay. Hello everyone, I'm Shuting Zhao from Nirmata. I'm currently a maintainer of the Kyverno project. Thanks for having us.
A: Our pleasure. All right — for the audience who are not yet familiar: the topic is Kyverno in production, so probably more advanced stuff, but would you like to maybe give a quick introduction to the project itself?
B: Yeah, absolutely — and let me share my screen as we get started. Just to set the context, I'll talk a little bit about what Kyverno is, where it fits, and why you might want to use it in your production Kubernetes, or why you should definitely consider it as one of the add-ons you add. So Kyverno is a policy manager. Kubernetes, of course, has built-in policy objects — we've all heard about pod security policies and what's going on with their lifecycle, and we think about things like network policies — but Kubernetes is also, of course, designed to be fully extensible.
B: So this allows third-party, external policy management tools like Kyverno, which is a CNCF sandbox project, or like OPA, which is also a CNCF project, along with Gatekeeper, which is its Kubernetes policy manager. These types of external policy managers can be plugged into Kubernetes. Kyverno was created just for Kubernetes, and we'll talk about that — I'm just going to switch over here.
B: This is the Kyverno website, by the way — kyverno.io. I'm just going to click on the documentation link and we'll browse through some of the topics here. The idea was to create a policy manager for Kubernetes and really adopt and embrace the Kubernetes concepts. Everything we know, love, and maybe even sometimes hate about Kubernetes — Kyverno conforms to those patterns and leverages them to do some pretty interesting things, to make the administrator's life simpler and, of course, to improve the user experience and the developer experience on Kubernetes. By the way, one thing we always get asked is: why "Kyverno"? What does the name mean? It means governance — it's a Greek word, like Kubernetes — so we thought it's a good fit for what the project does. That's how we named it Kyverno. All right.
B: So how does it work? I'll explain a little bit about how it works and where it fits, I'll show a very quick introduction to getting started, and then Shuting is going to help us dive into some of the more advanced production topics for running policy managers and Kyverno in production. Sounds good?
A: Here I just want to remind everyone to ask any questions that come up in the chat — wherever you're watching this, ask in the chat. If a question comes up, I'll read it, or if Jim catches it he'll answer it. There's also the cloud-native-live channel on the CNCF Slack; you can ask questions there too. Sorry for the interruption, Jim.
B: Not at all — let's keep it interactive. Feel free to interrupt any time as we're going along; we welcome any questions. So, just a quick explanation of how Kyverno works and where it fits in. I think a lot of us know already how the Kubernetes API server handles requests.
B: It also has admission controls, and Kyverno fits into that admission-controls part of the API lifecycle — before your data is either retrieved from etcd, the database, or updated and inserted back into etcd. So Kyverno installs — as you'll see once you bring it up — as a mutating as well as a validating admission controller, which allows it to receive webhooks on every API request; and then, based on your policies, there are multiple controllers in Kyverno which will take decisions and actions.
B: So the whole idea is that you are now able to customize the behaviors of various API requests. This could be something as simple as when you're updating objects or resources in Kubernetes, or even something as complex as a CONNECT request that comes in when you're trying to exec into a pod — if you want to validate or enforce that, in certain namespaces, exec is not allowed.
B: All of this can now be controlled through customizable policies in Kyverno. The picture we're looking at here shows some of the major components in Kyverno: there's a webhook which receives these API requests, and then there are multiple background controllers which do processing for any policy changes. The policy controller will listen to updates and changes in policies, like when you add a new policy.
B: It also triggers periodic background scans, so for any of your existing resources it will trigger a scan. And, of course, if there's a real-time request coming in through the webhooks, it gets checked against the policy cache, which is in memory for faster processing and faster lookups, and at that point the webhook will respond back to the API server with the appropriate error or allow — whatever the response may be.
B: If it's a validate operation, typically you're either returning a deny with an appropriate error message, or you're saying: I'm going to allow this request to proceed and eventually hit etcd.
B: So that's the basics of how Kyverno works, and we'll see a lot of this in action. In fact, for some of this I'll just switch to a live demo: I'm going to install Kyverno and we'll take a look at these components. In the docs I went into the installation section; there's also a description here of how Kyverno installs and what it looks like.
B: As you can expect, there's a deployment which manages pods, there's a service which the API server talks to and Kyverno listens on, and there are a lot of other resources that either get installed or are automatically created and maintained by Kyverno itself. One of the things that Shuting will cover in detail is that, with our latest release...
B: ...Kyverno is now fully highly available, which means you can run multiple instances for availability and for scale, and Kyverno will then run a leader election and coordinate work across these replicas — across the multiple instances that you have in your cluster. So let's get into it. I'm going to just copy this command over here which, as you can see, is just pulling down some YAMLs from the Kyverno repo, and we'll see how that works.
B: Switching to my terminal, I'm just going to pull down those YAMLs, and I can see a bunch of things got created: there are the webhooks, there's the generate controller — similar things to what we saw in the picture. So now let's look at what exactly happened. By default it goes into a namespace called kyverno; you can customize this with the Helm chart.
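A sketch of the install-and-verify flow shown in the demo; the manifest URL reflects the docs of that era and may differ for your Kyverno version:

```sh
# Install Kyverno from the released manifests (URL is illustrative;
# check the Kyverno docs for your version).
kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml

# Everything lands in the kyverno namespace by default.
kubectl -n kyverno get pods

# Inspect what the controller did on startup.
kubectl -n kyverno logs deploy/kyverno
```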
B: If I do get pods here, I should see the Kyverno pod running, and if I check the logs for it, we'll see what exactly the controller did. A few things you can see: it checked for the webhooks and created them if they didn't exist, and if you're looking over here, you'll also see one of the things Kyverno is doing.
B: It will not say it's ready until it receives a signal back from the webhook which says: okay, now I'm ready to start processing requests. One other thing you would typically see in the logs is that Kyverno also runs a leader election. Here, since we only have one instance, it doesn't really mean much, because that instance will always be the leader.
B: But if you were running multiple replicas, it would automatically run a leader election, and one of the instances would register itself as the leader. All instances will serve API requests, but that one instance will coordinate some of the background tasks across the replicas.
A: Go ahead — sorry, there was a question in the chat, and I think this can be a good time to break in. If a background scan reveals a policy violation, what actions can be taken by Kyverno?
B: Good question. Currently what Kyverno does — and we'll see some of this in Shuting's demo too — is produce a policy report, which is another resource, and it will update that with any violations found in background scans of workloads or resources. So Kyverno doesn't automatically shut down any running...
B: ...workloads right now. There are discussions in the community, in the Kyverno Slack channel, about whether that should be allowed — mutating existing resources, impacting existing workloads — but currently what it does is report: it creates the policy report with the violation, and in the metrics that are created and reported through Prometheus, Kyverno will report that there are existing violations.
A: Great, thanks. There's another question, which is maybe a clarification question: it would be nice if you could say how Kyverno detects a policy violation — that is, by which mechanism.
B: Absolutely. So we talked about how it installs and so on, but what I did not show yet is what exactly a policy looks like. It's a great question, and that's something we can dive into right now. In Kyverno, each policy is a set of rules, and you can, of course, group these policies...
B: ...however you wish, but ideally each policy is a set of rules, and those rules have different match and exclude blocks. So you can match various resources based on kinds, names, label selectors, things like namespaces — you can even select particular namespaces by labels — or even user information...
B: ...that's in the API request. So you could write some fairly complex logic to match and select resources. Now, once you select a resource, you can write validate, mutate, or generate logic in your policy, and you can of course have a single policy that combines some of these for a set of functions.
B: Let's look at some examples. Say I want to do some validation — this is a simple one — and let's say that in my clusters I want to make sure that every namespace has a particular label, or a prefix, something like that. This is how simple a Kyverno validate rule is. Kyverno policies are Kubernetes resources — custom resources.
B: They use YAML for their declaration. Of course, you can create them with JSON, or through the API, or any other way you wish, but ultimately it's a Kubernetes resource, and you can write them as declarative YAML. So what this policy is saying is: if I have a namespace which does not have a label with the key `type` and a value of small, medium, or large...
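As a concrete illustration of the rule Jim is describing, a minimal sketch of such a validate policy might look like this (the policy name and message are illustrative; the `|` alternation in the pattern is standard Kyverno syntax):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-ns-type-label   # illustrative name
spec:
  validationFailureAction: enforce
  rules:
    - name: check-type-label
      match:
        resources:
          kinds:
            - Namespace
      validate:
        message: "Namespaces must carry a `type` label of small, medium, or large."
        pattern:
          metadata:
            labels:
              # `|` alternation: the value must be one of these
              type: "small | medium | large"
```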
B: ...I'm going to block that particular request. So you're requiring now that every namespace has a label with the key `type` and one of those values — and that's all this rule is doing, very simply. But you could, of course, have much more complex rules. For example, this next one is a generate rule, which is saying: I'm going to add resource quotas, things like that.
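A generate rule of that kind, sketched under the same assumptions (the name and quota values are illustrative; this mirrors the published add-ns-quota sample):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-ns-quota   # illustrative
spec:
  rules:
    - name: generate-resourcequota
      match:
        resources:
          kinds:
            - Namespace
      generate:
        kind: ResourceQuota
        name: default-quota
        # place the quota inside the namespace that triggered the rule
        namespace: "{{request.object.metadata.name}}"
        data:
          spec:
            hard:
              requests.cpu: "4"
              requests.memory: 8Gi
```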
B: I could also look at some other things. For example, let's go to a policy where I want to match on a particular selector. In this case I'm going to check and make sure that the selector is specified, and if it's not, again, I can block. So, several things. But you can, of course, also do more complex operations — like in this policy, where we're checking some data from the API...
B: ...from the API server itself. The way this policy works is: it says match ingresses, and then go and check against the configured ingresses. So this is the path — and you can, of course, test all of this through kubectl.
B: I'm going to go and store all my configured host names, and then I want to make sure that for an incoming ingress — if a new ingress is created, so I'm going to filter on the operation — that host name is not already used. This is how simple writing the policy is — just a few lines of YAML.
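A sketch of that lookup-and-deny pattern, close to the external-data example in the Kyverno docs (the urlPath and JMESPath expressions are the assumed parts):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: unique-ingress-host   # illustrative
spec:
  validationFailureAction: enforce
  rules:
    - name: check-single-host
      match:
        resources:
          kinds:
            - Ingress
      context:
        # store all host names currently configured in the cluster
        - name: hosts
          apiCall:
            urlPath: "/apis/networking.k8s.io/v1/ingresses"
            jmesPath: "items[].spec.rules[].host"
      preconditions:
        # only filter on CREATE, as described above
        - key: "{{ request.operation }}"
          operator: Equals
          value: CREATE
      validate:
        message: "The Ingress host name must be unique."
        deny:
          conditions:
            - key: "{{ request.object.spec.rules[].host }}"
              operator: In
              value: "{{ hosts }}"
```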
B: So with a few lines of YAML you have some fairly powerful logic: you can look up data, validate data, or mutate. Kyverno also allows full mutation, through either a JSON patch syntax or an overlay-style strategic merge patch.
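For example, a minimal mutate rule in the overlay style (the label key and default value are made up for illustration):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-label   # illustrative
spec:
  rules:
    - name: add-team-label
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              # the +() anchor only adds the label if it is not already set
              +(team): unknown
```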
B: Yeah, so let's actually try this. I installed Kyverno and it's running, but let's make sure it's working. One of the most common use cases for Kyverno is, of course, around pod security, and I think everyone has probably — or hopefully — heard by now that pod security policies, PSPs, have been marked for deprecation.
B: They're still there — we have until version 1.25 of Kubernetes before they are removed — but if you haven't implemented PSPs and are looking for options right now, you could of course use PSPs as an entry option and then look at upgrading to whatever replaces them.
B: It's in the documentation — you can see what controls need to be validated and verified. This is all now standardized, which allows the use of policy engines like Kyverno.
B: So let's do this. I'm going to run this command to install the pod security policies, and here I'm doing something interesting: I'm using Kustomize, pulling down a bunch of policies, and then, after the customization, applying those policies into my cluster. You can see it pulled down all of these different pod security policies, which are listed over here — everything from requiring runAsNonRoot to checking for host namespaces...
B: ...host paths, things like that. Now, behind the scenes, what just happened — and this shows the flexibility and the value of using Kubernetes resources internally — if I go and look at the repo where I pulled this from, I see there's a kustomization, and it's very simple: it just says, okay, go into the `restricted` base. But I can also go further down, and if I look at the kustomization there, it says these are the policy resources...
B: ...I want to pull, and I'm going to patch these resources to change my validationFailureAction to enforce. By default, these policies are written to audit — to run in background mode — but I just customized them with this one simple command to switch all of them to enforce. Kustomize is fantastic in your DevOps pipeline, or if you're using GitOps; Kyverno just works really well with that, and Kyverno policies, being again Kubernetes resources, fit in nicely with all of this.
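A sketch of what such a kustomization might contain — the listed policy files are illustrative, and the patch field names vary a little across kustomize versions; the patch flips every ClusterPolicy to enforce:

```yaml
# kustomization.yaml (sketch)
resources:
  - require-run-as-non-root.yaml
  - disallow-host-namespaces.yaml
  - disallow-privileged-containers.yaml

patches:
  - target:
      kind: ClusterPolicy
    patch: |-
      - op: replace
        path: /spec/validationFailureAction
        value: enforce
```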
B: So now, to test those policies, I'm going to use this site — we like to use it for demos. It's called Bad Pods, because these are pods which are wide open: things you should not be running in your production clusters, or even in dev and test clusters, for that matter.
B: Let's see if we can get this running in my local cluster. We'll just grab the raw YAML and try to run it — and what Kyverno just did, because I had my pod security policies installed, is immediately say no: don't run something in a host namespace, don't use hostPath, don't run privileged containers, and of course you don't want to run containers as root. So all of these things got blocked.
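The test can be reproduced roughly like this — the Bad Pods repository is github.com/BishopFox/badPods, and the manifest path below is illustrative:

```sh
# Try to create one of the intentionally insecure "bad pods"; with the
# enforce-mode pod security policies in place, Kyverno should deny the
# request at admission time and print the violated rules.
kubectl apply -f https://raw.githubusercontent.com/BishopFox/badPods/main/manifests/everything-allowed/pod/everything-allowed-exec-pod.yaml
```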
B: All of these are based on the Pod Security Standards, and again, that's how in a few minutes you can go from having a fairly open cluster with no security to a pretty secure cluster, just by grabbing the standard pod security policies. And, of course, with Kyverno's design there's a lot of flexibility in how you apply these and how you roll them out — it will not impact your existing workloads, which provides quite a lot of value.
B: Absolutely, yeah — you can. You can match on API versions, kinds, all of that in detail. In fact, one of the policies we just added into our new samples is a policy to check for — as you probably also saw recently — the many different APIs which got deprecated in 1.22.
B: So I'll quickly show this; it's a good policy to have in your clusters. This check-deprecated-apis policy covers all of the API resources and versions that are marked for removal in 1.22 — so not only are they being deprecated, they're being removed. You can put this policy in your cluster, you'll immediately get a report of where they're being used, and you can manage replacing them with the allowed API versions.
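A trimmed sketch modeled on that sample — the kind list in the real policy is much longer, and `deny: {}` rejects every matched request:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-deprecated-apis
spec:
  validationFailureAction: audit   # report, don't block
  background: true                 # also scan existing resources
  rules:
    - name: detect-removed-in-1-22
      match:
        resources:
          kinds:
            # group/version/kind entries removed in Kubernetes 1.22 (subset)
            - networking.k8s.io/v1beta1/Ingress
            - extensions/v1beta1/Ingress
            - rbac.authorization.k8s.io/v1beta1/Role
      validate:
        message: "{{ request.object.apiVersion }}/{{ request.object.kind }} is removed in v1.22; use a supported apiVersion."
        deny: {}
```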
A: Yeah, thanks. All right, I guess we can move on. Did we lose Shuting, or are you still with us?
C: Okay, great. So next I'm going to talk about the details of the high availability of Kyverno and how you run it in your production cluster; then we'll go to how you monitor Kyverno with the Prometheus metrics, and after that I'll switch to the demos I prepared for today. I'll cover the basic mutate and validate policies, I'll show you the policy report, and then we'll talk about Policy Reporter.
C: That's one major feature contributed by Frank, one of the contributors from the community. Okay, so the first topic I'm going to discuss here is the high availability of Kyverno. As Jim quickly showed you before, this is the architecture of Kyverno.
C: The blocks in yellow are the resources managed by Kubernetes directly, the square blocks in gray are the Kubernetes default components, and all the ones in blue are the Kyverno controllers. So Kyverno runs, mainly, three controllers: one is the webhook controller, another is the generate controller, and then the policy controller.
C: The latter two controllers actually run as background processes, so they are not running along with the admission webhook. The webhook controller — or you can call it the webhook server — serves all the admission requests, applies all the mutate and validate policies to your incoming resource, and then writes it to etcd. Now, about HA: before Kyverno release 1.4.0...
C: ...there was really no HA available, so if you were running Kyverno in a large-scale cluster, it put a lot of pressure on the single instance of Kyverno. We saw a lot of requests for this feature, and with 1.4.0 we enabled HA support, with leader election built into the Kyverno controllers. To dig into that implementation...
C: ...once you spin up multiple instances of Kyverno, the webhook server serves all the admission requests — that means the Kubernetes Service distributes the incoming admission requests across the running instances, spreading the workload evenly. As for the background controllers, we only enabled leader election for the generate controller and the policy controller, because for those two, time consumption is not really critical.
C: As background processes, they can process the generate policies and the policy reports asynchronously. So, depending on your cluster scale, you can configure the replica number to three, five, or even more instances to distribute all the admission requests.
C: Okay. Before I switch to the demo, I'll briefly talk about the monitorability of Kyverno, and then I'll show you how it works in a live cluster.
C: All right, let's go to monitoring. As of 1.4.0 we have this monitoring ability, where Kyverno exposes all its metrics to Prometheus. Let me zoom out a little bit.
C: We expose five metrics to Prometheus: you have the policy and rule counts, the policy and rule execution results, the rule execution latency, as well as the admission review latency, and you also get the policy change counts metric. Let's pick one or two metrics and take a close look. Take the admission review latency, for example: as a cluster administrator, you may want to know the average latency of all your admission requests.
C: This admission review latency metric provides you that kind of information. In this case you can see how fast or slow all your admission reviews have been for incoming requests, or you can be alerted that a certain policy takes a lot of time to process admission requests, so that you can tune either the policy or the resource to reduce that time consumption. And with the labels available, you will also be able to build different queries.
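One quick way to eyeball these metrics without a full Prometheus stack is to scrape the endpoint directly — the service name and port here are assumptions, so check your install:

```sh
# Forward the Kyverno metrics port and inspect the admission review
# latency series locally.
kubectl -n kyverno port-forward svc/kyverno-svc-metrics 8000:8000 &
curl -s http://localhost:8000/metrics | grep kyverno_admission_review_duration
```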
C: Based on the different labels, the docs also provide some useful queries: you can check the average latency by resource type, or check the maximum latency an admission request takes, and so on. So that's one of the metrics we expose — the admission review one. Then let's go back and pick another one, say the policy and rule counts metric.
C: This metric provides you the historical data of your previous and existing policies, based on the different metric values. Here you have information about what kinds of policies you had installed in your cluster before, and which policies are available right now. One interesting thing about this metric is that, with kubectl, you are not able to see what type of policy or what type of rule you currently have installed in your cluster — but with the different labels (I believe there is a rule type label), you can query, for example: what are the mutate policies I have installed in my cluster? What are the validate policies, and the generate ones as well? So you can have an overview of the current status of the policies installed in the cluster. Okay — before I move to the demo, are there any questions?
C: Yeah — I think Jim has that link, and he'll probably share it with you as we go. All right.
B: Okay — oh, she's back.
C: Yeah — what I was saying is, to give you some context about my cluster: I'm running a single-node minikube cluster here, and I have already installed Kyverno with three instances.
C: Because I set up all the Kyverno dashboards today, I already have everything installed, but it's really simple to do: you can install Kyverno with Helm, or install the YAML as Jim showed before. All you have to do is use this Helm command and specify your desired replica count, and Helm will take care of the rest.
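A sketch of that HA install — the chart values follow the Kyverno Helm chart of that era, so verify the value names against your chart version:

```sh
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update

# replicaCount=3 turns on the HA setup described above; leader election
# happens automatically among the replicas.
helm install kyverno kyverno/kyverno \
  --namespace kyverno --create-namespace \
  --set replicaCount=3
```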
C: You'll have these policies installed by Helm — the default built-in policies — and all of them are in audit mode, so they won't impact any of your incoming admission requests. Okay, back to our agenda: I want to show you how HA works. Before I start the demo, I want to show how you can determine which Kyverno pod is currently the leader.
C: To do that, you can check the Lease resources created in the kyverno namespace. As you can see here, we have two Lease objects: one is kyverno, the other is the webhook register. If you recall the architecture I showed you before, Kyverno runs three major components: the admission webhook server, or controller, plus the two background controllers — the generate and the policy controller.
C: So the kyverno Lease object here is really for the leader of the background controllers, and the webhook register is the component used to register the admission webhooks. If you do kubectl get mutatingwebhookconfigurations and validatingwebhookconfigurations, you can see those are the webhooks registered by Kyverno — I have some for Prometheus as well — but that's the idea. And if you take a close look at this Lease object — let me clean this up — you will see the holder ID here.
C: It basically has the pod information — the pod name concatenated with a random UID. Let's get the Kyverno pods again and see which pod it is... f4 — okay, the third one is currently the leader for the background controllers as well as the webhook register. But, as I mentioned before, the webhook server doesn't really use this leader, because all the admission requests are evenly distributed to each and every running instance.
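The leader can be checked like this (Lease objects live in the coordination.k8s.io API group):

```sh
# List the Lease objects Kyverno maintains in its namespace.
kubectl -n kyverno get lease

# The holderIdentity field names the pod currently holding leadership.
kubectl -n kyverno get lease kyverno -o yaml

# Compare against the running pods to spot the leader.
kubectl -n kyverno get pods
```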
C: So what I'm going to do here is install a new enforce policy, and I'm going to create a pod that violates this enforce policy; then I'll randomly kill one of the Kyverno pods. It can be the leader, it can be a non-leader — it doesn't matter in this case, because we're talking about the webhook controller here. What I expect to see is that Kyverno keeps blocking my resource creation, no matter whether one or two instances are down.
C: To do that, let me pull up another terminal. I wrote this little script to continuously create 50 pods with the image nginx:latest, and among the cluster policies I have one, disallow-latest-tag, that blocks pod creation if the pod is using the latest tag.
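A minimal sketch of such a disallow-latest-tag policy — the published sample also requires that a tag be present at all; this shows just the enforce-mode core:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: enforce
  rules:
    - name: validate-image-tag
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Using the mutable 'latest' image tag is not allowed."
        pattern:
          spec:
            containers:
              # every container image must NOT end in :latest
              - image: "!*:latest"
```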
C: Let's double-check the policy — it's set to enforce. Okay, great, you can see this policy is now in enforce mode. Let's manually run one nginx pod using the latest image and see what happens. As expected, it blocks my resource creation. So what I'm going to do now is continuously create this resource and kill one of the Kyverno pods, and then the rest of the running instances should keep blocking my resource creation.
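The failover test amounts to something like this — the loop and pod names are illustrative:

```sh
# Keep hammering the API server with pods that violate the policy...
for i in $(seq 1 50); do
  kubectl run "nginx-$i" --image=nginx:latest
done

# ...and, in another terminal, delete one Kyverno replica mid-run.
# The remaining replicas keep serving the webhook, so every request
# should still be denied.
kubectl -n kyverno delete pod <one-of-the-kyverno-pods>
```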
C: ...so it is still blocking — none of the creation requests pass through. Okay, now the pod is deleted; let's check that again. You'll see there's a new pod coming up, and those admission requests are still blocked, no matter whether one or two instances are down. And the same goes for the background controllers — you may wonder about the generate controller: if you kill the leader, will that impact the policy application?
B: And I think — yeah.
A: Yeah — if you see it, Jim, you can jump in.
A: We're going to take a quick questions break. One question we had is: how can a policy affect performance, especially when the policy is user-generated and the user may make some mistake which hurts performance?
B: Yeah, great question. Policies and admission controls, of course, are very critical in a system. There's no easy answer to this, but I think the best practice is treating policies as code and following the best practices we use with any other kind of code. Kyverno also has some integrated testing abilities, so you can of course apply...
B: ...some tests in your CI/CD pipeline to Kyverno policies, and also make sure that you are introducing them through your pipeline just like any other workload. I mean, if your workload has bugs and things like that, you expect some of those to be caught in QA and staging before they hit production.
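For reference, the Kyverno CLI supports this kind of pipeline testing; a sketch, with placeholder file names — check the CLI docs for your version:

```sh
# Lint the policy definition itself.
kyverno validate my-policy.yaml

# Dry-run the policy against a resource manifest, e.g. in CI,
# before anything reaches the cluster.
kyverno apply my-policy.yaml --resource my-deployment.yaml
```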
B: That is true, and Kyverno also doesn't allow one thing, from a security point of view: it doesn't allow calling into arbitrary sites. You cannot call some external application — you can make an API server call, and you can call secure registries that you configure, but you cannot call just any other application. One interesting activity going on in the CNCF Security TAG is looking at the threat model and security posture of admission controllers...
B: ...of this whole model — how do you make sure that your policies themselves cannot introduce vulnerabilities and things like that. That is a concern, of course, and the way we have designed Kyverno is to reduce that attack surface and make sure that you're only calling trusted systems that are configured; your policy cannot make arbitrary calls, and it's declarative.
B: So on some of those things — for example, someone filed an issue, and there's a fix in progress, where if you end up in a loop or something like that, Kyverno itself can validate that and abort. And, of course, the API server has a lot of mechanisms to prevent that; it has throttling.
A: Right, thank you. Another question was about managing exceptions to policies: if we create a policy, we may not want it to apply universally. Can we do that?
B: Yes. There is an exclude block, and in any policy you can exclude based on usernames, labels, namespaces, kinds — there are all sorts of ways to exclude. One common thing to do is to say: I want this policy to apply to everyone except, maybe, cluster admins.
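Sketched, an exclude block of that kind looks like this — the policy details are illustrative, and clusterRoles is just one of the supported exclusion fields:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels   # illustrative
spec:
  validationFailureAction: enforce
  rules:
    - name: check-team-label
      match:
        resources:
          kinds:
            - Pod
      exclude:
        # requests made by cluster admins bypass this rule
        clusterRoles:
          - cluster-admin
      validate:
        message: "The `team` label is required."
        pattern:
          metadata:
            labels:
              team: "?*"   # any non-empty value
```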
B: I think maybe we lost Shuting again, but what she was saying is absolutely a great point: if you want policies to only apply to a particular namespace, Kyverno also supports that, so you can have policies configured only in certain namespaces. In fact, one interesting thing you can do is use policies to generate policies as well — which is a little bit of inception, but...
B: ...if you want to create policies on the fly for a certain workload, you can — because it's all Kubernetes resources, after all. Kyverno can detect, as a trigger, that a namespace got created, and you can install a set of policies for that particular namespace based on some label or things like that.
A: Interesting. Another question: what if you kill all the Kyverno pods? Or, maybe to generalize the question, what's the failure strategy of Kyverno?
B: Yeah, that's a good question. Admission controls have a fail-safe there: when you are configuring an admission control, you can tell the API server, in the webhook configuration, whether it should stop the operation if that admission controller is down, or whether it should still allow it. By default, Kyverno currently installs with failurePolicy Ignore, but if you have policies that you absolutely want to enforce, the recommendation is to change that to Fail in the admission webhook configuration, so the request is not allowed.
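The relevant knob is the failurePolicy field on the webhook configuration. Kyverno manages these objects itself, so this sketch is purely illustrative of where the setting lives, not something you would normally hand-edit:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: kyverno-resource-validating-webhook-cfg   # name varies by install
webhooks:
  - name: validate.kyverno.svc
    # Ignore = fail open (Kyverno's default at the time);
    # Fail = block requests whenever the webhook cannot be reached.
    failurePolicy: Fail
    clientConfig:
      service:
        name: kyverno-svc
        namespace: kyverno
        path: /validate
    sideEffects: NoneOnDryRun
    admissionReviewVersions: ["v1"]
    rules:
      - apiGroups: ["*"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["*"]
```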
B: One thing we're building in the community — it's being discussed for the 1.5 release — is how we dynamically allow that failure mode on a per-policy basis. Kubernetes allows it per API request for a particular webhook server, but the idea one of our community users came up with is: hey, certain policies I want to fail closed, and other policies I don't care if they're ignored.
B: Pod security I don't want to compromise on; but okay, maybe if somebody has missed a label, I'll allow that, and when Kyverno is back up and running it will produce a violation. So with that feature you'll be able to make those granular decisions; today you have to configure the webhook as either Fail or Ignore.
A: Yeah, very cool. Hi Shuting, we have you back. We don't have a lot of time, but there was another section that you wanted to discuss, if you could present it quickly.
C: Yeah, I want to show the Prometheus metrics as well as the Grafana dashboard. Like I described before, we expose those five Prometheus metrics, and you can have external tools consume that Prometheus data to get an overview of each metric. Here is a detailed blog post I wrote before; it tells you how to set up a Grafana dashboard for Kyverno, with very detailed steps, and after all the setup is done...
C: ...sorry — policy violations. Let me switch back to my terminal. I guess I'll skip the mutation and validation policy applications — we have a bunch of sample policies available on our website; feel free to try them out. The next thing I want to show you is the policy report. So let's do kubectl get policyreport -A, since I have all those cluster policies installed and I have a few workloads running in my cluster.
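The report objects are ordinary custom resources, so they can be listed and inspected directly; the report name below is a placeholder:

```sh
# Namespaced reports, across all namespaces.
kubectl get policyreport -A

# Cluster-scoped reports for cluster-wide resources.
kubectl get clusterpolicyreport

# Drill into one report's per-rule results.
kubectl get policyreport <report-name> -n default -o yaml
```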
C: So here, if I look at the status, you'll see the detailed information of each policy application result at the rule level. Let's take an example: I have a StatefulSet running, and I have this policy to disallow privileged containers, with its autogen rule for disallowing privileged containers — here it is for the pod controller. This is the autogen feature that is enabled, and it captures the state on the pod controllers.
C: Here is the success result on the StatefulSet. And if you want to see one of the failed results: I have a DaemonSet, and I don't want this DaemonSet to use the host port — and here, apparently, it does not have the security context defined, so it violates one of the policies. So we have this information available through the kubectl command, but you don't get the overall view of the policy violations or the application results.
C: That's where Policy Reporter helps. One option is to send the policy report information to external targets like Grafana Loki, Elasticsearch, Slack, and so on. The second is that, without those external tools installed in your cluster, you can also bring up the Policy Reporter UI locally, and that gives you dashboards of the policy report information. And the third is the integration with Prometheus metrics.
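A sketch of bringing up Policy Reporter with its UI — the Helm repo URL and value names are assumptions, so see the Policy Reporter README:

```sh
helm repo add policy-reporter https://kyverno.github.io/policy-reporter
helm repo update

helm install policy-reporter policy-reporter/policy-reporter \
  --namespace policy-reporter --create-namespace \
  --set ui.enabled=true

# Expose the UI locally.
kubectl -n policy-reporter port-forward svc/policy-reporter-ui 8080:8080
```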
C: The idea is that it scrapes all the data exposed by Kyverno and generates dashboards based on the policy reports information — here is an example of that. I also have it installed in my cluster, so I can show you the dashboard. Let's look at the applications I have installed here.
C: All the pods are running under this namespace, cattle-dashboards, and I have a Grafana service deployed. What I'm going to do here is forward all the traffic to my local port so that I can see the dashboard from my local browser. Here is Grafana — let's log into it, and you can search for all the Kyverno-related, or policy-related, dashboards.
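The port-forward step looks roughly like this — the namespace matches the demo, while the service name and ports are assumptions for your Grafana install:

```sh
kubectl -n cattle-dashboards port-forward svc/grafana 3000:80
# then browse to http://localhost:3000 and search for the
# policy-report dashboards
```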
C: Here we have three: cluster policy report details, policy reports details, and policy report. Let's take a look at the second one. It gives you the general information of the policy application results — how many rules passed and how many failed — and if you scroll down, it gives you the overall report across the different namespaces. Then here are the details of each rule application result: here are the passed resources — we have a lot of passes — and here are the failed results.
A: Thank you, Shuting — that was a great demo. I don't see any other questions coming up, but I think this was a very good talk about policies in general in Kubernetes, which is fundamental — you have to have something — and I was personally happy to learn more about Kyverno, what's unique about it, and also some advanced scenarios.
C: Yeah — so let me share my screen again and go to the policy report documentation. Here is the information about Grafana Loki. With the policy reports generated in your cluster, Policy Reporter gathers that information and sends it to the Grafana Loki target.
B: Loki is more of a log management tool — it's in the Grafana family, and it shows more log data.
B: So I think that's what you used here, and what Policy Reporter will stream into Loki is policy results as they're occurring. By the way, one thing I just want to call out: Frank Jogeleit, who wrote this tool, has donated it to Kyverno, so we are working with Frank to bring it natively into Kyverno and just make it part of the Kyverno project.
B: So thank you, Frank — great work, and happy to see the community grow with that.
B: So policy execution is, of course, one concern — what always comes up is how much time my policies are taking, like what we were also discussing previously.
B: If there's any kind of lack of efficiency, things like that — certainly that comes up. And then you also want to watch for policies which are not being applied — maybe you missed something in your match and exclude, or perhaps it's too relaxed — those are other things, and metrics...
B: ...that would be fairly critical, of course, in production to always look at. And I see the other question, which was about the integration with Slack, Alertmanager, things like that. Yes — Policy Reporter has some of those, and you would want to create alerts if you're seeing your metrics dip below an expected threshold, or if execution times are starting to grow — things of that nature.
A: If there aren't any more questions — seems like there aren't — I would wrap this up and thank you, Jim and Shuting, for this very interesting session.
A: Yeah, thank you — it's been a pleasure. To the audience, I want to say again: you can stay in touch in the CNCF Slack under the cloud-native-live channel, and we'll see you again next Wednesday.