Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands, from 18-21 April 2023. Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A
Hello everyone, welcome to today's Cloud Native Live with CNCF online programs. I am Libby Schultz and I'll be hosting today. I want to read our code of conduct and then hand over to our speakers: Jim Bugwadia, founder and CEO of Nirmata and Kyverno maintainer, and Chip Zoller, technical product manager at Nirmata and Kyverno maintainer as well. A few housekeeping items during the live stream: you can chat with us and leave your questions in the chat box, so please do so. Tell us hello and where you're listening from, and leave all your questions for Jim and Chip here; we'll get to as many as we can throughout. This is an official live stream of CNCF and, as such, is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please note that this recording will be online on our YouTube channel later today, so you can catch it anytime, or use the registration link that you registered with to reach it as well. With that, I'm going to hand things over to Jim and Chip to take it away.
B
Thank you, Libby, and thanks everyone for joining. I'm going to kick things off with a quick introduction to Kyverno. A lot of you probably already know a bit about Kyverno, but just for those of you who might not have heard of the project: Kyverno is a policy engine that was purpose-built for Kubernetes, and it's designed to operate both inside of Kubernetes clusters as well as a command line tool which you can put in your CI/CD pipelines. It manages policies as Kubernetes resources and serves as a building block for governance and compliance. I'm sharing the Kyverno documentation, and we'll just pull up the introduction section, because it has a nice picture which shows how Kyverno operates; as we look at some of the changes we have put into the newer releases of Kyverno, we'll revisit a lot of these components. Basically, Kyverno works as an admission webhook, which means it registers itself with the Kubernetes API server. It becomes part of your control plane and receives every API request, based on your configured policy set. Kyverno policies are Kubernetes resources; there is no programming language necessary for these policies. They're declarative, written as Kubernetes resources, and you can use these policies to validate and block, enforcing certain checks and rules that you want within your cluster. You can also mutate resources.
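A minimal sketch of what such a declarative validate rule can look like (the policy and label names here are illustrative, not from the talk):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label        # illustrative name
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label `team` is required on all Pods."
        pattern:
          metadata:
            labels:
              team: "?*"          # any non-empty value
```

With the policy in enforce mode, a Pod created without a `team` label would be rejected at admission.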
B
You can generate resources and, with some of the newer features that Chip's going to cover, it also allows cleaning up and other hygiene and automation of resource management within your clusters. So there are a lot of different, powerful things that Kyverno allows, and we'll see some of these features live. Like I mentioned, Kyverno can also be run outside the cluster. This diagram is showing the webhook, but there is a Kyverno CLI, which is extremely handy if you want to either apply your policies in your CI/CD pipeline or test policies; the CLI also has a test command for this.
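As a rough sketch, a CLI test is driven by a small manifest that pairs policies with resources and the expected results (file names and the exact schema fields here are illustrative; check the CLI docs for your version):

```yaml
# kyverno-test.yaml (names are illustrative)
name: require-team-label-test
policies:
  - policy.yaml          # the policy under test
resources:
  - resource.yaml        # sample resources to evaluate
results:
  - policy: require-team-label
    rule: check-team-label
    resource: good-pod
    kind: Pod
    result: pass         # expect this resource to pass the rule
```

Running `kyverno test .` in that directory evaluates the policy against the resources and compares the outcome with the declared results.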
So certainly take a look at that as well; those are the two form factors. In addition, Kyverno will run periodic background scans, and the nice thing here is that even policy reports are produced as Kubernetes resources. So Kyverno policies, as well as the reports containing the results from your policies, are all consumable through Kubernetes APIs. There are no additional tools necessary, and it fits in nicely with your existing way of managing resources, managing the policy lifecycle, and getting updates for your cluster.

So that's a quick overview of Kyverno. There are, of course, several sample policies in our policy library; I think we're up to 265 over here, so it's certainly awesome to see the community contributing a lot of these, as well as the Kyverno maintainers as we're working on different features. This is probably the best source to get started. You can browse policies by category, look at things in terms of the resources or versions that you're trying to write policies for, or just do a search, and you'll see at least a sample which gets you started, and then you can dive in a little bit deeper.

One quick thing to note: we do have a lot of community users coming over from other projects like OPA (Open Policy Agent) Gatekeeper, so here's a quick comparison. I already mentioned that Kyverno does not require a policy language. That's a major distinction between the approaches that the two projects take, and of course OPA has some value there: if you're reusing those same policies, or if the same team is writing policies outside of Kubernetes, using a language like Rego may make sense. But if you're focused on Kubernetes, the goal of Kyverno is to make things as simple as possible.
C
Thanks, Jim. All right, so that was a good overview and introduction of Kyverno. Let's talk a little bit about things that just came with the 1.9 release, which we released at about the beginning of February. I'm looking at the official blog here, and I'm just not going to be able to cover everything; if you want to look at all of the things that are new in 1.9, go and check out the release notes, which have a ton of items. But here are some key things to point out.
C
So
one
of
the
things
that
has
been
met
with
really
great
reception,
so
far,
policy,
exceptions
and
I'm
going
to
cover
this
in
a
demo
in
just
a
minute,
along
with
the
second
new
feature,
but
policy
exceptions,
and
you
can
see
an
example
right
here
is
a
new
resource
that
we
created
in
caverno.
That
allows
you
to
decouple
the
policy
from
the
scope
of
its
application.
So
the
problem
that
we're
trying
to
solve
here
is
avoiding
the
necessity
of
modifying
a
policy
every
time
that
you
want
something
to
not
apply
to
it.
C
Policies
are
typically
as
broad
as
scope
of
as
possible,
but
sometimes
it
doesn't
make
sense,
or
it's
just
simply
not
possible-
to
go
and
modify
a
policy
when
you
need
to
have
exclusions
that
are
legitimate
and
valid
in
your
environment,
apply,
and
also
this
having
a
separate
policy
exception,
allows
you
to
have
teams,
for
example,
that
may
not
be
responsible
for
policy
authoring.
They
may
not
even
be
able
to
see
the
policies
they
don't
need
to.
C
So
here's
an
example
of
what
a
policy
exception
looks
like
and
what
you're
able
to
do
is
Define
the
name
of
the
policy
that
you
want
to
provide
an
exception
for
policies,
wrap
rules,
so
you'll
Define
the
names
of
the
rules
as
well-
and
this
is
the
simple
the
standard
match
and
exclude
block
that
you
already
may
know
about
in
caberno
policies,
which
is
very
flexible
and
has
a
lot
of
different
options.
You
just
select
the
resources
that
you
want,
so
in
this
case
maybe
I've
got
a
rule.
C
That's
disallow
host
name
spaces,
and
that
applies
to
all
pods,
so
maybe
just
matches
on
pods
across
the
entire
cluster.
But
you
want
to
be
able
to
provide
an
exception
for
one
specific
name
pod
in
one
specific
namespace.
You
can
create
this
not
have
to
modify
your
existing
policy
and
once
this
is
in
place,
your
important
tool-
that's
in
the
Delta
namespace,
for
example.
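A sketch of such an exception, assuming a policy named `disallow-host-namespaces` (the rule name, namespace, and Pod name are illustrative):

```yaml
apiVersion: kyverno.io/v2alpha1
kind: PolicyException
metadata:
  name: delta-exception            # illustrative
  namespace: delta
spec:
  exceptions:
    - policyName: disallow-host-namespaces
      ruleNames:
        - host-namespaces          # rule name assumed for illustration
  match:
    any:
      - resources:
          kinds:
            - Pod
          namespaces:
            - delta
          names:
            - important-tool*      # only this named Pod is exempted
```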
C
So that's the policy exception feature, and like I mentioned, I'm going to demo it in just a second and show how you can combine it with a second new feature of 1.9, which is cleanup policies. Kyverno has long had an ability that we call generate rules, and generate rules are one of the seminal and most favorite capabilities of Kyverno. They allow Kyverno to create entirely new Kubernetes resources, yes, including custom resources, based upon a design that you codify in a policy. You can either choose to clone from an existing resource, or you can define the entirety of the resource inline in the Kyverno policy. That was great and went a long way, but one of the gaps we heard about was: hey, there's still a need to be able to remove resources. You've got the creation angle covered, and that's awesome, and we can use that in a lot of different ways.
C
But
it
would
be
super
nice
if
we
could
couple
that
with
being
able
to
remove
resources.
So
we
came
up
with
these
cleanup
policies
and
there's
some
other
tools
that
do
that
as
well.
But
the
nice
thing
about
cleanup
policies
and
having
them
built
into
caverna
as
well
is
that
you
can
get
all
this
stuff
together
and
these
rules
can
complement
each
other.
So
one
of
the
use
cases
for
this
might
be
periodically
removing
crust
from
a
cluster
so
like
in
this
example.
Here
we're
able
to
remove
things
like
bear
pods.
C
So
one
of
the
one
of
the
the
stories
goes.
You
know
we're
having
troubleshooting.
We
need
to
be
able
to
run
some
some
pause
one
time
to
be
able
to
do
ping
tests
or
do
curls
or
something,
but
as
what
typically
happens,
people
forget
about
them
and
they
get
left
around
and
Bear
Paws
aren't
managed
by
a
controller.
So
we
could
use
something
like
a
cleanup
policy
and
here's
an
example
of
one
to
go
and
find
all
those
bear
pods
and
then
remove
them
on
a
scheduled
basis.
So
just
quickly
walking
through
through
this.
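A sketch of such a cleanup policy for bare Pods (the schedule and the exact owner-reference condition are illustrative; check the cleanup policy docs for the condition syntax in your version):

```yaml
apiVersion: kyverno.io/v2alpha1
kind: ClusterCleanupPolicy
metadata:
  name: remove-bare-pods
spec:
  schedule: "0 * * * *"            # run hourly (cron format)
  match:
    any:
      - resources:
          kinds:
            - Pod
  conditions:
    all:
      # a bare Pod has no ownerReferences (no controller manages it)
      - key: "{{ target.metadata.ownerReferences[] || `[]` }}"
        operator: Equals
        value: []
```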
C
As
you
can
see,
it's
very
familiar
policy
contents,
like
Jim
mentioned,
we
try
and
make
caverno
as
easy
as
possible
to
reason
about
and
to
write
policies,
and
so
we're
trying
to
recycle
all
the
same
constructs
that
you
are
already
familiar
with
or
may
be
familiar
with
in
current
final
policies-
and
this
is
this-
is
what
they
are
so
we're
just
going
to
match
on
a
regular
pod
and
then
we're
going
to
use
the
familiar
Expressions
that
kubernetes
already
has
and
caverna
uses
as
well
to
take
a
look
at
the
owner
references
and
without
getting
into
too
much
of
the
Nitty
Gritty
details.
C
However,
you
like,
and
when
that
time
period
elapses
Governor
will
kick
into
action
and
if
the
whatever
resources
match
that
definition
they'll
be
cleaned
up
and
I'm
going
to
show
this,
in
tandem
with
policy
exceptions
on
how
you
can
combine
these
two
together,
to
make
a
really
pretty
cool
system
on
how
you
can
Empower
your
own
local
development
teams
and
and
users
of
your
cluster,
to
get
something
like
this
as
a
service
on
a
time
on
a
Time
expiration
basis,
but
before
getting
there
just
to
quickly
touch
on
some
of
the
other
features
so
distributed
tracing.
C
We
instrumented
caberno
with
distributed
tracing
and
whatnot
1.9,
so
that,
if
you
point
this
to
your
your
collector
it'll
give
you
the
full
execution
that
caberno's
doing
under
the
covers
all
the
policies,
all
the
rules.
How
long
they
took
what
external
calls
were
made.
So
super
duper
valuable
when
it
comes
to
getting
some
of
this
observability
information
can
be
used
for
troubleshooting,
but
alko
also
can
just
be
used
for
things
like
performance
analysis
or
just
to
know
what
it's
doing
so.
C
That's
distributed
tracing
extended
support
for
sub
resources,
so
we
brought
even
more
abilities
for
working
with
kubernetes
sub
resources
to
cover
I
know
now,
they're
easier
than
ever,
and
basically
all
of
them
work.
So
you
can
take
a
look
at
some
of
the
samples
that
we
have
here.
This
one
is
just
showing
how
you
might
be
able
to
do
things
like
when
new
nodes
get
bootstrapped
into
your
cluster.
If
you
want
to
advertise
some
sort
of
a
custom
resource
like
it's
a
dongle
or
fpga,
or
something
like
that,
convertible
can
mutate.
C
Those
nodes
for
you,
which
happens
to
be
in
the
sub
resource
called
status
and
be
able
to
as
part
of
your
onboarding
workflow
for
your
cluster,
be
able
to
advertise
those
resources
to
other
pods
and
things
in
your
cluster.
Without
you
having
to
do
anything
and
the
the
final
one
I'll
just
mention
here
is
config
map.
Caching,
so
caverno
has
the
ability
to
do
things
like
use
contents
of
a
config
map
to
make
policy
decisions.
C
It's
had
that
for
a
while,
but
for
certain
large
clusters
with
a
lot
of
policies
that
may
be
consuming
a
lot
of
config
Maps.
We
wanted
to
reduce
the
impact
on
making
those
API
calls.
So
we
brought
in
this
config
net
caching
feature
that'll,
simply
allow
you
to
assign
a
label
and
convert
a
watch
and
cache
the
config
map
that
will
avoid
some
of
these
API
calls.
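As a sketch, opting a ConfigMap into the cache is just a label on the ConfigMap itself (the label key shown is an assumption based on the feature description; verify it against the 1.9 docs):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: policy-data                      # illustrative
  namespace: default
  labels:
    cache.kyverno.io/enabled: "true"     # assumed opt-in label for caching
data:
  allowedRegistries: "ghcr.io,registry.example.com"
```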
So it has the benefit of reducing some of the load on the API server and also makes policy lookups quicker. There are a lot of other things that you can see here, and I'm not going to go through all of them. I encourage you to read the blog and also take a look at the release notes if some of these things piqued your interest, but I want to flip over real quick and show the first two main features in action. Let me flip over and show the architecture of what we want to be able to do with the policy exceptions feature.
C
In
fact,
you
really
want
to
get
rid
of
that,
because
that
may
allow
things
to
circumvent
your
policy,
which
you
may
not
want.
So
we
want
to
be
able
to
put
something
like
an
expiration
date
on
this.
So
what
we
can
actually
do
is
combine
these
two
capabilities
in
Cabernet
1.9
to
give
you
that
in
a
fully
automated
fashion.
So
here's
what's
generally
going
to
happen.
C
It's
truly
any
resource
that
you
might
have
in
the
cluster
and
policy
exception
is
just
another
resource,
so
we
can
match
on
that
and
then
it
will
automatically
create
a
new
cleanup
policy
that
will
set
the
expiration
date
to
four
hours.
Of
course,
all
of
this
is
configurable,
but
in
just
this
demo
this
is
set
to
four
hours
and
after
that
time
period
elapses.
It
will
clean
up
that
specific
policy
exception
in
the
Falcon
Dev
namespace
and
give
you
peace
of
mind
that,
once
that
period
has
been
expired,
that
exception
is
removed.
C
So any exception that has the pattern I'm showing here will be handled. Just one cool thing to note: Kyverno uses a system called JMESPath, which is a JSON processing language, but we've also built a whole bunch of filters that are endemic only to Kyverno, to add even more value and power to it. One of the things we added is a random string generator. As you can see here, part of the name that's going to be built involves a random string, which I specified, as a regex, to be eight characters long.
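For illustration, such a random suffix can be produced in a rule's context using Kyverno's custom `random()` JMESPath filter (variable names are illustrative):

```yaml
context:
  - name: randomSuffix
    variable:
      # eight lowercase alphanumeric characters, regex-style pattern
      jmesPath: random('[0-9a-z]{8}')
# the variable can then be referenced when building the generated
# resource's name, e.g. "polex-cleanup-{{ randomSuffix }}"
```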
C
This can be a really cool thing, because you can generate things like pod hashes, API tokens, UUIDs, all sorts of things, without having to go down to a low-level programming language; you just write a filter for it. Anyhow, once we have that name, we're going to stash all this information in the generated resource: we're going to populate this cluster cleanup policy. And one of the cool things we can also do in 1.9, which I'll show in the next demo, is take time into consideration.
C
We can get the current time, add whatever amount of time you want to it, and use that to actually build the schedule. The schedule is cron-based, so we're going to take the time right now, add four hours to it, convert it to cron, and that's going to be our schedule.
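A sketch of that calculation with Kyverno's time filters: take the current UTC time, add four hours, and convert it to a cron expression (filter names as documented for 1.9; variable names are illustrative):

```yaml
context:
  - name: expiry
    variable:
      # current UTC time plus four hours
      jmesPath: "time_add('{{ time_now_utc() }}', '4h')"
  - name: cleanupSchedule
    variable:
      # convert the expiry timestamp into a cron expression
      jmesPath: "time_to_cron('{{ expiry }}')"
```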
So once we do that, the cluster cleanup policy is going to get created, and then, after the four-hour time period elapses, it's going to automatically remove the exception. Now, we're obviously not going to wait for four hours; we're not going to have an extended webinar here, but I just want to show the actual process. So here's a policy exception that I'm going to attempt to create, and I'm only going to say that an emergency busybox Pod is allowed for my disallow-host-namespaces policy. Let me go ahead and create first the cluster policy, which is going to provide this automation for us... and actually, I'm wrong.
C
The cleanup policy already exists with the correct contents. I'll note that, although I've already done it, you will need to add permissions to Kyverno in order to do this; there is a blog post that already covers all of this for you, and I believe the documentation mentions it as well.
A
(question from the chat)
C
Sure can, yeah. Kyverno works indiscriminately on any Kubernetes resource. It doesn't matter if it's built in or if it's a custom resource; even if you don't have a controller that reconciles that custom resource (which is probably not common), you could still do it if you wanted to. So yes, it can not only validate and mutate, but also generate, clean up, and do other things with those custom resources. So absolutely, yes.
C
Okay,
so
now
we
see
here,
I
just
had
a
cluster
cleanup
policy
that
was
generated
and
going
back
to
the
name.
So
it
gave
me
the
name
in
the
pattern
that
I
mentioned
and
there's
that
random
string.
That's
at
the
end
and
there's
the
schedule
that
if
we
were
to
take
the
current
time
and
add
four
hours
to
it
and
convert
it
to
cron,
that's
what
we
got
so
again,
not
going
to
wait
for
this
to
actually
run.
But,
as
you
see
here,
this
automation
that
fired
saw
a
policy
exception.
It
automatically
created
cleanup
policy.
C
Then,
once
that
interval
elapses
it'll
delete
the
policy
exception,
your
policies
that
are
provided
for,
in
that
exception,
go
back
to
the
same
way
that
they
were
so
that's
it
for
the
first
one.
Hopefully,
that
makes
sense
and
maybe
think
is
even
kind
of
cool,
and
so
let
me
do
one
more
and
in
this
one
like
I
mentioned,
we
with
the
ability
to
work
with
time.
C
One
of
the
things
that
we
can
do
in
caverna
1.9
is
be
able
to
do
things
like
create
a
Time
window
for
your
policies,
and
this
can
be
super
helpful
because
if
you
have
Ops
teams
that
work
in
different
shifts-
or
maybe
you
open
up
your
cluster
to
like
a
platform-
that's
that
that
has
a
component,
that's
driven
by
non-developers,
like
actual
users.
C
Maybe
you
want
certain
enforcement
or
non-enforcement
behavior
in
things
like
business
hours,
but
another
Behavior
to
occur
outside
of
that
or
maybe
a
separate
Behavior
to
occur
on
weekends
or
whatever
the
case
might
be.
So
since
coverno
now
is
aware
of
time,
we
can
do
things
like
time
bound
whatever
policy
that
you
want,
and
so
imagine
a
case
like
this
I've.
C
Just
taken
a
policy
from
our
part
security
standards,
which
coverino
already
provides
for
you
fully
built
that
conforms
to
the
Pod
security
standards,
and
we
want
to
be
able
to
only
make
sure
that
this
is
enforced
during
business
hours,
eight
to
five
eight
a.m,
to
5
PM
Eastern,
Standard
Time.
In
my
case.
What?
Obviously
you
can
change
this
to
whatever
that
you
want?
But
the
idea
is:
if
it's
within
that
window,
then
this
policy
is
going
to
get
enforced
if
it's
outside
of
that
window,
there's
nothing
that
you
need
to
do.
C
Caverna
will
understand
that
and
not
apply
this
policy.
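One way to sketch such a window is a precondition that extracts the current hour and only applies the rule inside a range (adapted from the documented time-bound pattern; the exact filter chain and hour values are assumptions you should verify for your version and timezone):

```yaml
preconditions:
  all:
    # take the current UTC time, render it as cron, split the fields,
    # and read the hour; only apply the rule between 13:00 and 21:59 UTC
    # (roughly 8 a.m. to 5 p.m. Eastern)
    - key: "{{ time_now_utc().time_to_cron(@).split(@, ' ') | [1].to_number(@) }}"
      operator: AnyIn
      value: 13-21
```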
So imagine that you had a pod like this, a troubleshooting pod, and as you can see, it needs to mount a host namespace; it needs access to an underlying host to be able to do some sort of work on it. Normally, this pod would be blocked by that policy, but if I were to run this pod outside of the window that's defined here, it would be allowed. And actually, I changed the window for the sake of the demo.
C
That
is
the
current
time.
So
let's
go
over
and
do
this,
and
let
me
answer
a
question
here:
can
we
set
the
policy
for
30
minutes
or
10
minutes?
Is
it
allowed
sure
you
can
set
it
for
whatever
time
period
that
you
want?
It's
really
up
to
you
just
for
demo
purposes.
I
I
had
that
in
there
to
kind
of
illustrate
what
it
might
look
like
for
a
real
world
scenario,
but
it's
totally
up
to
you.
You
can
set
it
how
you
want.
C
So
let
me
create
this
cluster
policy,
and
this
is
the
disallow
host.
Namespaces
and
I
have
a
pod
that
violates
this
policy
and
now
since
I'm
in
that
that,
at
that
time
window
it's
currently
12
25
PM
Eastern,
Standard
Time.
This
pod
should
be
blocked
and,
as
you
can
see
here,
the
Pod
was
blocked
because
it
was
within
that
time
window.
So
let
me
simulate
what
it
would
look
like
if
I
were
to
adjust
this
time
window
so
that
it
fell
outside
of
it.
C
That
needs
those
capabilities
and
imagining
that
this
is
outside
of
that
time
window
and,
as
you
can
see
here,
I
was
able
to
create
the
Pod,
and
that
was
the
only
modification
that
I
made
I
did
not
change
the
structure
of
the
policy.
I
just
said
hey.
If
the
time
window
is
much
more
narrow
now
and
where
outside
of
that,
you
can
allow
this
through
so
anyhow,
that
is
what
I
wanted
to
show
from
a.
C
Let's
take
a
minute,
let's
first
of
all
check
any
any
more
questions
out
there,
I
don't
see
any
other
questions
all
right.
So
with
that,
that's
one,
nine!
Let's
take
a
look
at
what
we
have
on
the
roadmap
coming
for
1.10,
all
right,
so
caverno,
one
dot,
10
select
the
right
one.
Here.
We've
got
some
really
cool
things
that
are
coming
and
Jim
is
going
to
show
some
of
these
things
today,
which
I
believe
is
going
to
be
the
first
time
that
that
we've
shown
any
of
these.
C
Some users only want certain components, so we're going to take the first step and decompose Kyverno into separate controllers that allow you to get that. The first thing we're going to do is split the webhook and the background controller, so that you can choose which one of those you want. Now, by default, you'll get both of them and everything else.
C
But
if
you
just
want
one
of
those,
then
you
can
totally
do
that
and
so
that'll
reduce
the
resource
consumption
that
you
that
you're
requiring
and
also
just
give
you
the
capabilities
that
you
want,
so
that
you
have
a
little
bit
better
ability
to
reason
about
what
it's
doing
and
what
it's
not
doing.
So
that's
the
first
thing:
we're
gonna,
we're
gonna
split
this
up,
and
this
is
just
the
first
phase
of
more
phases
that
are
coming,
we're
going
to
decompose
it
further
and
and
try
and
make
it
fully
modular.
C
So
if
you
only
want,
for
example,
generation
in
the
future,
then
we'll
be
able
to
just
give
you
generation,
you
don't
need
everything
else.
If
you
just
want
background
scans,
just
to
be
able
to
generate
the
nice
policy
reports
that
caverno
can
generate,
then
you
should
just
be
able
to
get
that
and
not
have
to
want
to
run
a
web
hook
at
all.
So
that's
the
first
one.
The
second
one
here
is
has
been
a
long-standing
request,
and
so
we're
we're
super
excited
to
be
able
to
deliver.
C
It's the ability to call external services in policies. This is going to be super helpful, because now you can integrate Kyverno with truly whatever you want. As long as it gets some JSON data back (which, at the end of the day, is really what admission controllers are doing, because Kubernetes sends JSON anyway), we think you should be able to do that for any service. As long as you get some data back that it can process, Kyverno will be able to take that into a policy decision, and Jim's going to show a demo of this. The second major one to point out here is Notary v2 support. Today, in Kyverno, we have the ability to do things like verify image signatures and attestations based upon the Cosign project, but tomorrow: we are working on Notary v2 support, which will allow Kyverno to do the same type of thing for the Notary v2 project.
C
So
now
you
can
have
a
choice
of
if
you're,
using
one
or
the
other
project.
You
can
now
do
things
like
verify
images
for
either
one
of
those
and
as
typical
that
we're
trying
to
do
for
everything
in
caberno.
This
is
all
done
in
a
very
simplistic
manner:
we're
not
exposing
a
programming
language,
we're
making
it
very
easy
to.
C
Very
easy
to
switch
between
the
two
if
you
want
to-
and
the
final
thing
here
is-
we're
we're
we're,
making
some
some
pretty
significant
modifications
to
the
generation
ability
of
caverno
to
make
this
an
even
better
experience,
so
we're
we're
creating
some
new
or
enhancing
it
in
some
ways
that
have
been
requested
for
a
while,
giving
it
a
nice
coat
of
paint
and
polishing
it
up
really
well,
so
that
we've
heard
a
lot
of
folks
that
rely
and
and
really
love
the
generation
capability,
because
it's
so
easy
to
use.
C
We
just
want
to
make
it
a
little
bit
more
robust
and
also
bring
it
bring
a
few
enhancements
that
have
been
out
there
for
a
little
while
so
with
that,
let
me
give
it
over
to
Jim
so
that
he
can
show
you
a
couple
of
these
cool
new
features
coming
in
1.10
and
a
live
demo.
B
All right, thank you, Chip. So, just switching over, we'll start with the first feature itself. Like Chip mentioned with the decomposition: Kyverno started as a single deployment, but, as you can see in this picture, there are several different controllers that were packaged in that one deployment. So let me show you how what we have done looks on the command line. If we get pods in the kyverno namespace (I'm running the latest from 1.10), I can see there are four different pods running, and we can also get deployments to see that for each one of the pods we saw there's also a deployment in the kyverno namespace. So now, like Chip mentioned, kyverno itself is the admission webhook, so that runs in full mode.
B
There's the reports controller, which is responsible for reports, and the cleanup controller for the new cleanup policy that Chip demoed. So it's very easy to understand what each of these is doing, and hopefully also easier to now scale Kyverno and make sure that your webhook is properly sized. That's critical, because Kyverno runs as an admission controller: you need to make sure the webhook doesn't go down, doesn't have any glitches, and is performant enough, and, of course, some of the background operations need to be sized correctly for larger clusters as well.
B
So
that's
just
the
you
know,
kind
of
split,
so
it
really
from
a
user
perspective.
No
major
change.
You
just
install
caberno
with
talm
or
with
you
know
the
install
script.
It
will
automatically
install
the
controllers,
but
then
you
have
you
know
flexibility
and
have
your
size
tune
and
operate
Governor
within
your
clusters
itself,
all
right.
So,
let's
kind
of
switch
over
to
the
next
thing.
You
know
on
the
list,
I'll
start
with
notary
and
then
we'll
go
back
to
the
extension
service.
So,
like
chip
mentioned,
you
know
you're
really.
B
The
idea
is
to
start
supporting,
in
addition
to
a
six-store,
cosine
notary
as
assigning
and
verification
format-
and
you
know
for
those
of
you
who
may
not
be
aware
of
this
project.
This
is
a
cncf
project.
It
started
life,
you
know
prior
to
six
store,
cosine
and
and
some
of
the
other
components
in
six
store.
But
then
you
know
I.
There
were
some
changes
made.
There
was
a
notary
eb2
but
which
may
be
renamed
after
notation,
as
this
demo
is
showing.
But
this
is
a
you
know.
B
B
There's
some
there's
a
good
blog
post
out
there,
which
compares
the
two
and
you
know,
I,
guess
the
fourth
governor
we
will
support
the
plan
is
to
support
both
formats
in
terms
of
signing
and
verification,
and
you
know,
depending
on
how
things
you
know
evolve,
we
will
of
course,
continue
to
track
and
support
both
projects.
So
for
those
of
you
who
might
be
familiar
with
image,
verification
and
caberno,
this
was
a
feature
we
introduced
a
few
releases
back.
B
There
is
a
type
policy
type
in
Governor
called
verify
images
right,
and
what
this
allows
you
to
do
is
it
does
a
few
things?
It
will
verify
image
signatures.
Verify
attestations
within
you
know
that
are
attached
to
images
and
again
based
on
the
project.
You're
using
the
attestations
may
show
up
differently
slightly
differently
in
in
the
Registries,
but
the
idea
is
that
you
are
creating
metadata
in
your
CI
CD
systems.
You're
signing
that
metadata,
along
with
your
images
like
provenance
data
right.
Where
was
this
image
built?
Does
it
have
a
scanned
report?
B
Does
it
have
an
s-bomb
and
then
you're?
Attaching
this
information
into
your
oci
image,
which
kivarno
can
then
verify?
So
this
policy
just
kind
of
starting
up
top,
it's
a
validation
policy.
It's
an
enforce
block,
you
know,
and
it
has
some
standard
boilerplate
you'll
see
in
most
policies
it
matches
every
pod
and
give
our
no,
for
those
of
you
are
familiar,
will
auto-generate
rules
for
deployments
and
other
pod
controllers,
but
the
policy
is
written
only
to
match
pod.
B
The
new
thing
here
is
this
type
right
so,
prior
to
1
110,
we
did
not
have
this
type
field
and
cosine
was
the
default.
You
know
image
verification
logic
used
underneath
it
integrated
into
caberno.
So
now
notary
is
also
supported
and
again
this
name
may
change
based
on
how
the
project
you
know
evolves,
but
right
now
we're
calling
it
cosine
and
notary
eb2
and
then
it's
matching
every
image
reference.
So
here
we've
just
kind
of
done:
a
Splat,
a
wild
card
where
it's
going
to
say:
okay,
every
image
reference,
it's
matching!
B
Typically,
you
would
you
know
you
might
have
different,
even
signature
formats
for
different
repositories.
If
it's
a
third-party
image,
maybe
it's
using.
You
know
a
different
spec
so
on.
But
then
it's
saying
if
these
images
have
to
be
signed
and
a
tester
is
an
authority
typically
can
be
a
certificate
can
be
a
key,
can
be
also
cosine
support,
something
like
called
keyless,
so
there's
a
lot
of
different,
flexible
options
in
how
you
verify
the
image.
B
But
ultimately
this
is
the
public
certificate
here
now
in
again,
in
most,
you
know,
production
deployments
you
might
not
want
to
put.
You
could
put
a
search
chain
here.
If
you
just
want
to
you,
know
kind
of
verify
based
on
root
certs,
but
typically
you
would
get
fetch
your
certificate
from
a
KMS
or
some
sir.
You
know
Upstream
system.
It
could
be
walled
some
other
store.
It
can
also
be
fetched
from
a
config
map.
So
again
a
lot
of
flexibility
for
the
demo.
B
B
If
you
put
this
in
your
CI
script,
it's
better
to
get
the
digest
and
sign
with
the
digest
and
then,
if,
of
course,
if
I,
similarly
kind
of
do
a
notation,
let's
yeah,
let's
kind
of
just
verify
this
on
the
command
line,
to
make
sure
it
works
and
I'm
going
to
verify
by
the
digest
right
and
here
the
digest
matches
what
we
did
so
with
those
as
we
expected
work.
So
you
know
just
to
kind
of
demonstrate
what
this
looks
like
from
the
command
line.
Going
back
to.
B
If
I
look
at
my
Azure
registry,
what
I
should
see
is
I
have
a
signed
in
unsigned
over
here
if
I
dive
in
into
the
signed.
The
new
thing
here
is
this
artifact
spec
right
and
you
see
there's
two
signatures
now
attached.
Typically,
you
would
have
one
I
mean
you
could
have
multiple
signatures.
One
could
be
a
team
level.
One
could
be
the
you
know
the
let's
say
if
it's
a
production
cluster,
you
have
another
signature
to
allow
that
cluster
into
production
and
Governor
policies.
B
You could have your own workflow signing and verifying your images that way. Going back here, what's cool is that previously I had also uploaded a scan result, and I had signed that scan result, which you can also verify. But for this demo we're just going to keep it simple and verify the signature using the Kyverno policy. I have two signatures, and I don't really need two signatures.
B
I could delete one, but we'll just go ahead and leave it as is and see how this works with Kyverno. Now, let's first apply this policy, and if I show my policies, this policy should be there: it's in enforce mode and ready to be applied. And now, with kubectl, let's say I want to run this image.
B
You
know
image
and
I'm
going
to
first
test
with
the
unsigned
version
just
to
make
sure
it
gets
blocked
and
then
what
I
want
to
do
is
you
know
so
as
as
obviously
as
expected,
I'm
saying
it
could
not
find
any
signatures,
so
it
got
blocked
now,
if
I
do
B1
I'm
expecting
it
to
get
created,
which
is
just
it
right.
So
again,
this
is
as
simple
as
it
gets.
B
If you start signing your images, you can very easily start verifying them. With all of the rising supply-chain attacks, we highly recommend evaluating one or more of these types of options — but definitely start signing and verifying images. In Kyverno, you can start with an Audit policy, so it will just flag unsigned images, and then you can slowly tighten it up.
B
You can choose which images you want to verify signatures on, and like you saw here, there's a lot of flexibility. I could have said only images coming from a particular repo, or only third-party images — there are lots of different ways, and you can match namespaces however you wish. But that's the simple demo for Notary support.
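For reference, an image-verification policy along the lines of this demo can be sketched roughly as below. The registry path and public key are placeholders, and the exact attestor fields depend on your Kyverno version and on whether you verify Cosign or Notary signatures — treat this as a sketch, not the exact demo policy:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce   # start with Audit to only flag unsigned images
  webhookTimeoutSeconds: 30
  rules:
    - name: check-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "myregistry.azurecr.io/demo/*"   # hypothetical repo to restrict the check to
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...your signing public key...
                      -----END PUBLIC KEY-----
```

Unsigned images matching `imageReferences` would then be blocked at admission, as in the demo, while non-matching images pass through untouched.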
B
It is going to be built into Kyverno, much like with Cosign — very simple, with no need for extensions, etc., for the basic checks. Moving on to the second feature: like Chip mentioned, we often get requests saying, "Kyverno is great, but can it do something else? I have these other systems I want to integrate with. Maybe I want to fetch data from external systems, or perhaps even call some other service." It could be Prometheus — there's a great demo I recently saw from one of our colleagues who was starting to integrate 1.10 with things like OpenCost. The possibilities really become endless: you could start pulling data, or even posting information that Kyverno gets from the admission request to external services, and then responding to that.
B
So for the demo, I have a very simple project — there's a repo, and I'll share the link. It's written in Go, but this could be in any language: JavaScript, Java, Python, whatever your language of choice is. All it's doing is starting up two listeners: one is a web service on port 80, and another is on 443.
B
The 443 listener, of course, has a certificate and a key file; in this demo I chose to use cert-manager, but you can configure that however you wish. All this is doing is saying that every time it gets a request, it has a standard handler for that request, and it's going to parse the namespace based on how the request is sent. If it's a GET,
B
it's going to expect the namespace as a parameter in the request; if it's a POST, it's going to get it from the body. Either way, it extracts the namespace, and once the namespace is extracted, it says: if the namespace is missing or is "default", block the request. You could make this as fancy or as complex as you wish — you could say which namespaces you want to block, things like that. Now, this is a very trivial example, and with Kyverno policies,
B
you can do this all in one policy — but it's just meant to demonstrate the flexibility and the power of this. So that's the web service, and we'll install it. Before we do, though, I want to show the corresponding policy and what it looks like. Here we have a validation policy which is calling this extension service, and it's checking ConfigMap resources.
B
Now, I did not want to make this match Pod resources, because if my extension service is down, then Kyverno will return an error. You obviously have to be very careful with this: if you're running any extensions, they have to be performant, because admission control has a finite window to respond — otherwise you are potentially blocking API requests, depending on your policy configuration, if you put policies in Enforce mode and your failure mode is Fail instead of Ignore. In this demo I'm doing a POST, but as you see in the commented-out lines, you can very easily do a GET.
B
One great example we heard on a community call is a user who wants to do a GET to the Kubernetes API, or a POST to the Kubernetes API for a subject access review. That use case would be extremely interesting: you make an additional call to ask, "Does this user own the namespace they are trying to create a service in?" — or something like that. Whatever data is returned from this call is stored in a field called result, which you can then easily check within your policy block itself.
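A sketch of what such a policy might look like using the external service call feature described here. The service URL and the `result.allowed` response shape come from this demo's hypothetical extension service, not from any fixed Kyverno contract, so adjust both to your own service:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-namespace-service
spec:
  validationFailureAction: Enforce
  rules:
    - name: call-extension-service
      match:
        any:
          - resources:
              kinds:
                - ConfigMap
      context:
        # POST the request's namespace to the extension service and
        # store whatever JSON it returns in the `result` variable
        - name: result
          apiCall:
            method: POST
            data:
              - key: namespace
                value: "{{ request.namespace }}"
            service:
              url: https://check-service.demo.svc:443/check   # hypothetical in-cluster service
      validate:
        message: "Namespace {{ request.namespace }} is not allowed."
        deny:
          conditions:
            all:
              - key: "{{ result.allowed }}"
                operator: NotEquals
                value: true
```

The `deny` condition is what Jim describes next: proceed when the service returns allowed, otherwise reject with the policy's message.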
B
So here I'm saying: if result is allowed, then proceed; if it's not allowed, print an error message saying this is not allowed. Again, pretty simple — not too much complexity in the service — but it's meant to demonstrate what can be done. Of course, you can fetch data from other external resources, and you can even make external calls, but with this we highly recommend
B
limiting yourself to API calls within the cluster. One other thing to show: you can, of course, configure full mutual TLS, or encrypted connections, on both sides.
B
If we have the CA bundle — through cert-manager or some other mechanism — you can then check that within the policy itself and make sure you're talking to the validated service. On the server side, you can use the Kubernetes TokenReview API to make sure that only requests from Kyverno are accepted and all other requests are blocked. That way, some other third-party tool cannot call that service.
B
Let's see this in action. I'm going to switch to the second demo. We'll just delete our existing policies, and let's start with applying the resources — I think I have them in a manifests directory, so I'm going to apply all of these. OK, it looks like they're all in there, and then we'll apply this policy for checking the namespace. So now, if I do kubectl get cpol again,
B
what I expect is just this one policy that we're going to demo. Now, let's say I want to create a ConfigMap. We'll start with creating a ConfigMap without a namespace, and immediately you see it says namespace "default" is not allowed, because we didn't provide a namespace. But if we do something like — I'm just going to use the kyverno namespace, and we'll put it in dry-run because I don't actually want to create this ConfigMap; you can use whatever namespace you want — this should be allowed, and it is. Internally, Kyverno made the API call to the service, got the result based on what we coded up over here, and because it got allowed equals true, the policy does the check and comparison. But this could be any data that you get back from the service, and you could formulate your allow or deny conditions according to that.
B
Notation has its own extensibility and its own trust policies, so we're working with some partners to extend Kyverno to be able to call Notation and its plug-in system and get back a yes-or-no response. This, of course, will integrate with your backend — things like signing systems and code-signing tools, which make an allow-or-deny decision. But as you can envision, you can extend this to anything: if you want a PagerDuty schedule, a scorecard, or any of that data to now be part of your policy decisions, it's very easy to do and very easy to integrate third-party systems — or even cost data, like I mentioned earlier.
B
So that's a quick demo, but hopefully it gives you a sneak peek at what's coming. There are a few other things we need to do to get this feature ready — it's already available in Kyverno main, but as we near production readiness, we will be adding a few other checks and things for it.
B
Yeah — if I understand that correctly, that is referring to the ability that Gatekeeper, or OPA, has to cache some data and make policy decisions on it. For Kyverno, like Chip mentioned, we have the ability to cache ConfigMaps, and we have discussed extending that to cache any resource within the cluster, based on something like labels or some other identification of which resources you want to cache. So, by default,
B
Kyverno does not require that replication, which makes it much more performant, and the memory usage can be tuned accordingly. But we do have pending feature requests, and we have had discussions with community members, on being able to cache and replicate certain data for faster policy decisions. It's a trade-off between memory and performance, of course — so reach out on the Kyverno Slack channel if you have specific use cases; we're happy to discuss more there.
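As a concrete illustration of the ConfigMap caching that exists today, a policy can declare a ConfigMap as a context entry and reference its cached data in a rule. The ConfigMap name and data key below are made up for the example:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-configured-namespace
spec:
  validationFailureAction: Audit
  rules:
    - name: check-blocked-namespace
      match:
        any:
          - resources:
              kinds:
                - Pod
      context:
        # Kyverno caches ConfigMap context entries, so this lookup
        # does not hit the API server on every admission request
        - name: cfg
          configMap:
            name: policy-settings        # hypothetical ConfigMap
            namespace: kyverno
      validate:
        message: "Pods are not allowed in the blocked namespace."
        deny:
          conditions:
            all:
              - key: "{{ request.namespace }}"
                operator: Equals
                value: "{{ cfg.data.blockedNamespace }}"
```

Updating the ConfigMap then changes policy behavior without editing the policy itself — the trade-off Jim mentions is that broader caching of arbitrary resources would cost more memory.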
C
Cool. And then there's another question — I see you've got an answer to one with some links — but: "What's the best way to handle a mutation policy where we're configuring a default image registry for all pods? The pods were created before Kyverno was installed, so the policy didn't exist and we're not able to achieve this." If I understand this correctly, it's a case where you want to be able to change the image on existing pods after you introduce Kyverno.
C
So, one of the cool features that Kyverno has — and this is actually, I guess, a partial answer to one of the other questions, "What's the difference between OPA Gatekeeper and Kyverno?" — is very rich and robust mutation capabilities. OPA Gatekeeper has very, very limited mutation capabilities, essentially only metadata assignment. One of the things Kyverno can do, in addition to more robust standard mutation, is what we call "mutate existing". You can introduce a Kyverno policy, and based upon that policy
C
design, Kyverno will go and mutate — as the name says — existing resources in your cluster, without anything having to flow across a webhook at all. That can be super useful, because you can do things like introducing a Kyverno policy that changes namespaces, for example. We also have the same ability when it comes to the generation capability: you can do the same thing and generate into existing resources. So imagine that you had an existing, brownfield cluster and you wanted to get the benefit of Kyverno,
But
you
already
had
you
know
a
dozen
100
namespaces
that
were
out
there
and
you
wanted
to
be
able
to
do
something
like
give
me
a
resource
quota
in
all
those
existing
name
spaces.
Well,
you
can
introduce
a
new
governor
policy
and
the
coverno
policy
will
generate
into
those
existing
name
spaces,
as
well
as
new
name
spaces
that
you
create
after
that
point
in
time.
So
there's
quite
a
a
big
range
of
flexibility
that
it
has
and
if
I
still
wasn't
understanding
your
use
case
come
talk
to
us
in
the
coverno
channel.
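The "generate into existing namespaces" pattern Chip describes might look roughly like this. The quota values are arbitrary, and the flag that triggers generation for pre-existing resources has been renamed across Kyverno releases (e.g. `generateExistingOnPolicyUpdate` in some versions), so check the docs for your version:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-quota
spec:
  generateExisting: true        # also generate into namespaces that already exist
  rules:
    - name: generate-resourcequota
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: v1
        kind: ResourceQuota
        name: default-quota
        namespace: "{{ request.object.metadata.name }}"
        synchronize: true       # keep the generated quota in sync with the policy
        data:
          spec:
            hard:
              requests.cpu: "4"
              requests.memory: 8Gi
```

New namespaces created after the policy is applied get the quota through the normal admission path; the `generateExisting` behavior covers the brownfield namespaces that were already there.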
C
We are participating in the Linux Foundation mentorship project — it's a great opportunity for those who are trying to get started in open source or with the Kubernetes ecosystem, or who have already started but want to make more of an impact. We have several things we're working on with our mentees, and one of those is ValidatingAdmissionPolicy support in Kyverno. In Kubernetes 1.27,
C
they introduced what's known as ValidatingAdmissionPolicy, which uses CEL, the Common Expression Language, and in Kyverno we're trying to integrate with that in a couple of different ways — we're defining what that looks like. One of the things we'd like to do as part of this is, for example, use the Kyverno CLI to allow people to validate those policies without the need for an API server, which is one of the gaps with the CEL solution.
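For context, a minimal ValidatingAdmissionPolicy as introduced in Kubernetes looks roughly like this — the replica limit is an arbitrary example, and the API group version (`v1alpha1` here) varies by Kubernetes release:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: limit-replicas
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # CEL expression evaluated by the API server itself — no webhook involved
    - expression: "object.spec.replicas <= 5"
      message: "Deployments may not request more than 5 replicas."
```

Because the API server evaluates the CEL in-process, there is no offline way to test such a policy without a cluster — which is the gap the Kyverno CLI integration mentioned above aims to fill.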
C
So that's one of the things we're looking at in 1.11. We're also looking at the ability to use OCI artifacts and references for image validation rules, which would be super helpful because, as many of you probably know, there's a lot of activity around changes going on in the OCI spec. Kyverno policies are now actually OCI compliant: you can upload them to a registry and even pull them down with the Kyverno CLI.
C
So we want some additional integration around that, and some additional integration in Kyverno's pod security support — what we call the podSecurity subrule type. Jim mentioned this earlier: Kyverno has pulled in the same libraries that Kubernetes itself uses for Pod Security Admission, but we allow more flexibility, a bigger feature set, and more robustness around how you can configure it. We want to extend that even further in 1.11, to be able to do things like exclude by specific field paths in those controls, and we're also looking at some subresource support. Also — not shown here — Kyverno policies are now officially supported on Artifact Hub, and Kyverno has, I believe, the largest policy library of any policy engine for Kubernetes. I think Jim showed it was 265; that number grows almost weekly, and it spans the whole gamut of Kyverno's capabilities. So we want to offer that on Artifact Hub and also make it an extensible system, whereby any new contributions that you all or other community members would like to make automatically flow there. Those are the things we're looking at for 1.11.
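The podSecurity subrule mentioned above can be sketched like this. The exempted image path is hypothetical, and the field-path-level exclusions discussed for 1.11 would extend the `exclude` block further:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: pod-security-baseline
spec:
  validationFailureAction: Audit
  rules:
    - name: baseline
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        podSecurity:
          level: baseline     # same levels as the Pod Security Standards
          version: latest
          exclude:
            - controlName: Capabilities
              images:
                - "registry.example.com/legacy/*"   # hypothetical exemption
```

This is the extra flexibility over built-in Pod Security Admission: the same upstream checks, but with per-control, per-image exemptions and Audit/Enforce handled by the policy engine.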
C
And with that, we just have a couple of minutes remaining. I see one other question out there: "Can we configure audit notifications to go to Slack or, preferably, AWS SNS topics?" Kyverno doesn't do that out of the box, but there are other options out there, both open source and commercial, that will let you do exactly that — scrape a policy report and send those findings on to an additional collector, if you like.
C
If you can send to a custom webhook, then you can do that as well, and that's part of the Policy Reporter project, which you can find under the Kyverno organization. It's a very nifty way to get a visual dashboard, among other things, for your Kubernetes cluster, to see how Kyverno is doing. And that's all the questions that I see — anyone else have something to throw at us?
A
Well, thank you all so much. I think you've left all of your contact info, how to get in touch with you both, and a link to Kyverno in the chat, so everyone be sure to click on those before we wrap up. Thanks again, Jim and Chip, for all the great info today, and thank you all for joining us.