From YouTube: Cloud Native Live: What's new in Kyverno
Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands, from 18-21 April 2023. Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A: Hello everyone, and welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Taylor Dolezal, head of ecosystem at the CNCF, where I assist teams as they navigate their cloud native journey. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. In today's session I'm stoked to introduce Charles-Edouard and Jim from Nirmata, who will be showcasing new features and capabilities in Kyverno. That's... openness is a great policy, but not always, so it should be fun to dive into this one. This is an official live stream of the CNCF and, as such, is subject to the CNCF code of conduct. Please don't add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful to all of your fellow participants and presenters; just be excellent to one another. With that, I would love to hand it off to the Nirmata team to kick off today's presentation.
B: Thank you, Taylor, and thanks everybody for joining. This is Jim Bugwadia, co-founder and CEO at Nirmata. To start with, I want to do a quick background and introduction on Kyverno itself, and then we'll dive right into some of the new features.
B: Let me go to the documentation screen for Kyverno now, and we'll look around in there in terms of getting started and how you can use Kyverno if you're not already deploying it in your clusters. First off, Kyverno is a policy engine designed for Kubernetes, and why that matters is that with Kyverno, policies are Kubernetes resources; they don't require a new language to learn. Kyverno can be used to validate, mutate, and generate resources, and even, as Charles-Edouard will demonstrate with 1.9, to clean up resources.
B
So
do
some
garbage
collection
on
your
clusters,
as
well
as
verify
software
supply
chain
security.
So
a
lot
of
growing
use
cases,
feedback
we're
getting
from
community
Unity
from
users
Etc
on
you
know
expanding
that
set
itself
right.
So
looking
at
how
kuberno
works
and
what
exactly
happens
you
know
once
Governor
is
installed
in
the
cluster,
so
kuberno
itself,
it
runs
as
an
admission
controller.
It's
also
available
as
the
command
line
tool.
You
can
run
in
your
CI
CD
pipelines
to
verify
policies
outside
of
clusters
as
well
as
it
does
background
scans.
B
Now,
once
Governor
gets
installed
and
let
me
switch
to
the
latest
version
of
this
docs
I'm
on
version,
one
eight,
so
I
went
to
Main
and
we'll
go
back
and
we'll
kind
of
look
at
the
documentation,
for
you
know
the
architecture
diagram
because
it's
changed
with
1.9.
So
here,
as
you
see,
there's
a
few
more
components.
We're
introducing-
and
you
know
the
process
of
evolution
for
caberno-
has
been.
It
will
still
be
a
single
install,
a
single
Helm
chart.
B
But
there's
the
controllers,
which
were
embedded
into
a
single
binary,
are
now
being
split
and
decomposed
into
separate
processes
separate.
You
know,
which
will
become
eventually
separate
deployments,
which
you
can
manage
and
scale
independently.
So
in
1.9
there
are
two
separate
deployments
but
then
moving
forward.
We
have
plans
to
further
decompose
and
exp.
B
You
know
bring
these
controllers
outside
of
kiberno
itself,
but
once
you
install
kuberno
through
the
through
a
Helm
chart
or
through
the
command
line
yamls
it
registers
with
the
kubernetes
API
server
and
acts
as
a
dynamic
admission
controller,
which
means
it
has
the
ability
to
get
any
API
request
and
to
be
able
to
to.
You
know,
act
on
that.
Api
request
based
on
your
configured
policy
sets,
so
kivarno
can,
for
example,
validate
the
API
request,
block
resources
if
they're
not
compliant
or
can
audit
and
provide.
B
You
know,
reports
on
that,
or
also
it
can
mutate
and
generate
resources
based
on
triggers
that
you
configure.
So
these
could
be
events.
You
know
that
are
used
to
trigger
policies.
They
can
also
be
other
resource,
create
resource
mutate,
type
of
admission
requests
which
can
be
used
to
trigger
policies,
and
then
governor
also
has
some
background
controllers
right.
B
The
reporting
controller,
which
is
responsible
for
generating
policy
reports,
as
well
as
the
a
background
controller
for
update
and
mutate
policies
itself,
so
just
quickly
going
on
the
reporting
section,
and
one
exciting
thing
to
you
know
to
point
out
over
here-
is
the
policy
report.
Inc
that
was
you
know,
created
in
caberno,
is
now
being
proposed
as
a
standard.
There
are
several
other
adapters
that
the
policy
working
group
has
built
on
this
policy
report
and
we're
also
in
process
of
you
know
proposing
this
as
a
standard,
kubernetes
API.
B
So
other
tools
can
also
leverage
this
policy
report
and
write.
You
know,
produce
results
based
on
this,
so
that's
a
quick,
you
know
overview
of
what
kuberno
does
how
it
operates
in
the
cluster
and,
let's
take
a
look
at
some
example.
Policies
before
we
dive
in
into
the
1.9
features
itself
right,
so
I'm
going
to
show
a
very
simple
example
of
a
policy,
and
this
is
just
to
disallow
the
latest
tag.
So
since
the
latest
tag
is
mutable,
it's
not
considered
a
best
practice
to
have
that
control.
You
know
allowed
Because.
B: As you can see, this policy is a few lines of YAML, which is the bulk of the policy body itself; there's just other metadata that we typically put with all of our sample policies to identify the type of policy, a description, etc. Here we're doing a validation with a failure action of enforce, and you can see it's doing a background scan as well. By the way, one thing you might have just noticed is that because Kyverno is Kubernetes native, help and completion are just built in; it works with Visual Studio Code.
B
If
you
are
running
the
kubernetes
extension
for
visual
studio
code
right,
so
pretty
cool
to
kind
of
see,
you
know
help
from
open,
API
V3
schema
just
pop
up,
and
it
will
tell
you
if
there's
a
syntax
error
cetera.
But
here
we're
kind
of
you
know
running
this
in
background
mode,
as
well
as
admission
control.
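For readers following along, a minimal sketch of that sample, based on the disallow-latest-tag policy from the Kyverno policy library (the descriptive metadata is trimmed here):

    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: disallow-latest-tag
    spec:
      validationFailureAction: Enforce   # block non-compliant resources at admission
      background: true                   # also evaluate existing resources via background scans
      rules:
        - name: validate-image-tag
          match:
            any:
              - resources:
                  kinds:
                    - Pod
          validate:
            message: "Using a mutable image tag such as 'latest' is not allowed."
            pattern:
              spec:
                containers:
                  - image: "!*:latest"   # every container image must use a tag other than latest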
B
So
that's
the
structure
of
a
you
know,
kirana
policy
and
we'll
see
a
lot
of
these
in
action,
but
the
basic
structure
is
very
similar
and
then
there's
of
course,
policies
to
mutate
to
do
things
like
image,
verification
and
to
You
Know
cover
some
of
the
other
new
features
that
we'll
talk
about.
So
going
back
to
my
browser.
What
I
want
to
focus
on
is
the
governor
release
9,
and
we
will
demonstrate
some
of
the
key
features
from
here.
B
The
kivarno
typically
puts
out
you
know
minor
releases
every
two
to
three
months.
We
are
already,
as
you
can
see,
have
made
progress
on
1.10
there's
some
interesting
features.
You
know
schedule
for
that,
but
for
the
topic
today
we
are
going
to
cover
a
lot
of
the
new
features.
So
there's
a
release
candidate
for
1.9,
we
run
rc4,
which
we
just
got
published
earlier
today
and
we'll
cover
you
know.
So,
if
you
want
to
try
out
these
features,
you
can
install
that
make
sure
you
use
the
minus
minus
Dev
El
for
development.
B
You
know
flag
on
the
helm,
install
command
if
you're,
installing
through
Helm,
and
you
can
try
out
these
features.
So
Charlotte
is
going
to
talk
about
the
cleanup
controller,
which
is
one
of
the
major
features
we'll
talk
about
distributed
tracing,
which
is
a
extremely
you
know,
with
open
Telemetry,
there's
some
pretty
cool
stuff.
You
can
do
in
terms
of
understanding
how
policies
are
working
in
your
cluster
and
then
I
will
cover
policy
exceptions.
So
these
are
the
three
major
features
we're
going
to
demo.
B
There's
all
you
know
a
lot
of
other
minor
changes,
there's
hundreds
of
bug,
fixes
and
enhancements
which
have
gone
into
the
release,
but
this
is
what
we
were
planning
to
cover
today
in
terms
of
features.
So
with
that,
let
me
hand
off
to
Charlotte
who's
going
to
talk
about.
You
know
the
cleanup
controller
as
well
as
distributed
tracing
and
he
will
introduce
and
demonstrate
these
features.
C: Thanks, Jim. Hi everyone. I'm going to start with the distributed tracing feature, because it doesn't apply only to the Kyverno admission controller; it applies to the cleanup controller too. So it makes sense to start with the tracing feature, and we will discuss the cleanup controller after that. We will see what the cleanup controller actually does by looking at the traces, because tracing is also embedded in the cleanup controller.
C
Documentation
for
the
main
branch
again,
because
it's
not
yet
it's
coming
as
Jim
said
we
released
rc3
yesterday
and
we
are
today
so
it
will
be
available
in
one
hour
or
so,
just
probably
after
the
the
live
stream,
and
one
of
the
features
that
we
added
in
kivano,
one
nine
is
the
capacity
capability
of
tracing,
so
tracing
is,
is
is
inspired
by
by
this
distributed
tracing
actually
currently
in
kivano,
it's
not
a
very
distributed,
because
kivano
is
a
monolithic
application
right
now,
but
things
are
changing
in
110
and
it
will
probably
change
even
more
in
the
next
versions.
C
We
are
introducing
new
controllers,
and
potentially
we
are
also
supporting
HTTP
calls
directly
from
kivano
engine
to
other
services
running
in
the
cluster,
so
progressively
traces
will
become
more
and
more
distributed
by
Nature,
even
if
today,
it's
not
that
distributive
anyway
tracing
is
still
quite
useful.
C
You
can
see
below
a
trace
for
an
admission
of
for
an
admission
request
for
review,
and
we
will
see
that
every
step
that
the
admission
request
goes
through
entering
the
admission
controller,
then
entering
the
engine
and
processing
every
policy
and
every
rule
per
policy
will
be
detailed
and
measured
in
the
in
the
traces.
So,
beyond
the
same
we
are,
we
are
using
open
Telemetry
to
create
the
traces.
So
so
this
is.
C
This
is
the
the
the
the
client.
The
client
is
an
open,
Telemetry,
client
and-
and
we
also,
we
also
instrumented-
all
HTTP
clients,
so
HTTP
clients
and
therefore
kubernetes
clients
will
create
spans
in
the
traces.
C
What
that
means
is
that
it's
available
in
most
of
the
tracing
backends,
so
we
have
added
some
tutorials
in
the
documentation
to
set
it
up
with
graph
Anna
Tempo,
which
is
a
tracing
back
end
developed
by
grafana
and
another
tutorial
to
work
with
jaeger,
which
is
another
backend.
So
today,
I
will
demonstrate
the
graph,
not
Tempo
backend,
because
it's
the
it's
the
easier
to
set
up
for
that.
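As a rough sketch of how tracing gets switched on, the admission controller is pointed at an OTLP endpoint such as Tempo or an OpenTelemetry Collector. The flag names below are assumptions on my part; check the 1.9 Helm chart and docs for the authoritative names:

    # Assumed excerpt of the Kyverno container spec; flag names are assumptions.
    containers:
      - name: kyverno
        args:
          - --enableTracing=true                   # tracing is off unless explicitly enabled
          - --tracingAddress=tempo.monitoring.svc  # OTLP gRPC endpoint (backend or collector)
          - --tracingPort=4317                     # standard OTLP gRPC port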
C: For the demo, I created a simple cluster with just the make commands we have in the Kyverno repository. I deployed what we call the dev lab, which is just an instance of Grafana, Prometheus, Loki, and various tools we use to observe the Kyverno deployments, and I finally deployed Kyverno and a couple of policies in the cluster. I've also created two namespaces, one for tracing and one for cleanup. Right now we will be using the tracing namespace.
C
So
finally,
I
just
have
one
one
graph,
an
hour
running
graph,
an
hour
running
with
a
Tempo
data
source.
This
Tempo
data
source
will
provide
traces
and
when
clicking
on
the
trace,
we
will
be
able
to
observe
the
details,
but
let's
try
to
create
a
couple
of
resource
first,
so
for
the
tracing
demo,
I
created
two
resources,
one
I
called
good
pod
and
one
I
called
bad
pod.
C
The
good
part
is
just
an
engineic
spot
with
nothing
special
in
it,
so
it
should
be
accepted
to
run
in
the
cluster
and
the
bad
part
is
the
same
part,
but
with
us
Network
set
to
true
and
I
installed.
The
policy
that
will
prevent,
if
I
just
list
the
policies
installed
I
installed
a
number
of
policies
and,
for
example,
the
OS
namespaces
should
forbid
the
creation
of
PODS
that
run
in
the
OS
Network.
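For reference, the two test resources can be sketched like this (names and image tag are illustrative); the only meaningful difference is the hostNetwork field:

    # good-pod: a plain nginx pod, expected to be admitted.
    apiVersion: v1
    kind: Pod
    metadata:
      name: good-pod
      namespace: tracing
    spec:
      containers:
        - name: nginx
          image: nginx:1.23
    ---
    # bad-pod: identical except it requests the host network,
    # so the disallow-host-namespaces policy should reject it.
    apiVersion: v1
    kind: Pod
    metadata:
      name: bad-pod
      namespace: tracing
    spec:
      hostNetwork: true
      containers:
        - name: nginx
          image: nginx:1.23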
C: So the disallow-host-namespaces policy refused the creation of the bad pod, because sharing the host namespaces is disallowed, and that's what the policy is supposed to do: reject pods that use the host network. We still have only the good pod running. Now the interesting thing is to find out what happened inside Kyverno.
C
Kivano
engine
in
the
kivano
controller,
and
for
that
we
can
use
a
couple
of
tags
that
exist
on
the
on
the
different
spans
of
our
traces.
So
if
I
look
for
tags,
admission.request.name
equals
good
pod
I
see
one
trace.
This
is
the
trace
that
was
generated
at
admission
at
admission
time
when
the
the
Pod
was
created.
So
we
see
that
we
have
first
an
HTTP
request
and
we
have
a
middleware
that
produces
metrics.
C
We
have
a
number
of
tags
available
to
search
for
traces
and,
for
example,
the
admission.request.name
was
good
pod.
We
have
the
namespace,
we
have
the
operation,
so
it
was
a
creation.
We
have
a
couple
of
informations
related
to
the
kind
of
the
resource
being
admitted.
So
in
this
case
it
was
a
pod
belonging
to
the
API
Group
version,
V1,
etc,
etc.
We
have
informations
about
the
user
that
is
issued
the
call.
So
in
this
case
it's
me
I'm.
The
kubernetes
send
me
in
and
I'm
part
of
those
groups
and
below
that.
C
So
this
Trace
is
about
the
oh
sorry,
is
about
the
the
validation
we
we
have
different
spans
and
all
those
spans
cover
different
policies
installed
installed
in
the
cluster.
So
when
I
listed
the
cluster
policies
installed
in
the
cluster,
we
see
that
we
have
approximately
11
policies
and
we
are
going
to
to
find
those
policies
here
in
the
list
and
Below
each
policy.
C
We
will
have
the
different
rules
so,
for
example,
the
disallow
all
sports
policy
as
three
rules,
one
which
is
called
us
Post
ports,
none
another
one
called
autogen
or
Sports,
none
and
a
third
one,
autogen
Chrome
job
or
Sports
known.
In
our
case.
We
can
see
that
the
first
rule
took
1.74
milliseconds,
the
second
one
took
1.78
milliseconds
and
the
third
one
took
almost
zero
milliseconds
and
we
have
the
details
for
every
policy
in
the
in
the
list,
every
policy
and
every
rule.
C
So
of
course,
most
of
the
policies
here
are
about
are
about
pods,
so
most
of
them
were
applied
to
the
pods
themselves.
C
And
that's
it:
we
can
do
the
same
thing
with
the
bad
pod
admission,
dot
request.
C
And
this
time
we
we
found
only
one
one
policy,
one
one
admission
request
for
the
creation
of
the
bad
pod,
and
if
we
go
and
look
at
the
attributes
this
time,
we
can
see
that,
for
example,
admission.responds
that
allowed
was
false,
and
this
is
consistent
with
what
we
had
here
when
we
try
to
create
the
Pod
and
the
creation
was
rejected
with
the
different
information
yeah,
we
have
all
the
same
information.
Okay,
the
the
result
message
is
truncated,
because
rafana
Tempo
has
a
limitation
on
the
size
of
the
the
tag
values.
C: So those were some validation policies. We can do the same with an image verification policy. I just installed a policy that is going to verify the signature of the image. The image used in this case comes from ghcr.io/kyverno and is named test-verify-image. This is an image we created ourselves to test that the feature is working correctly, and we have different flavors of this image; we have one which is tagged signed-by-someone-else.
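A minimal sketch of such an image verification policy, assuming the key-based verification flow described in the Kyverno docs; the public key below is a placeholder, not the actual key used in the demo:

    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: check-image-signature
    spec:
      validationFailureAction: Enforce
      webhookTimeoutSeconds: 30
      rules:
        - name: verify-test-image
          match:
            any:
              - resources:
                  kinds:
                    - Pod
          verifyImages:
            - imageReferences:
                - "ghcr.io/kyverno/test-verify-image:*"
              attestors:
                - entries:
                    - keys:
                        publicKeys: |-
                          -----BEGIN PUBLIC KEY-----
                          (placeholder: the Cosign public key that signed the image)
                          -----END PUBLIC KEY-----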
C
If
the
image
is
tagged,
guys
is
signed
by
someone
else,
we
expect
the
creation
of
the
Pod
to
fail.
So
in
this
case,
if
I
try
to
keep
caterer
run
the
image,
the
verification
failed,
because
it
says
the
it
didn't
find
a
matching
signature
on
the
other
Ram.
We
have
another
image
signed
with
the
signed
tag,
and
this
one
is
expected
to
run
so
applying
this
one
will
produce
a
bud
being
accepted
and
if
we
go
back
to
Temple.
C: Okay, and we have both calls. The first one is the mutation admission request. This time the first entry point is the HTTP request, we have the middleware creating metrics, another middleware for filtering, and finally we are in the mutation part of the Kyverno engine. The mutation part looks up the verify-image policy, and in our case this verify-image rule calls the verify image signature function in Cosign; Cosign, in turn, will call GHCR.
C: This one is probably to get another token; I don't really know exactly what happens in the Cosign code, but it allows us to dig in and look at the different HTTP calls that are performed by the code, and finally calls are made to fetch some manifests and to actually verify the different signatures. In this case it was the first test, the one using signed-by-someone-else, and admission.response.allowed was false again, because the admission request was rejected.
C: In this case it is true, so the mutating webhook allowed the admission request, and then after that we have the validating webhook for the same pod, and this one was also allowed. With the name tag in the request, for example, we are able to filter and search for every webhook that was applied to this particular admission request. So in the second example we have both the mutation webhook call and the validation webhook call.
C
Okay
and
basically
it
will
allow
users
to
get
better
information
about
what
happens
in
kivano
at
admission
time.
I
know
a
couple
of
users
are:
are
trying
to
understand
better
the,
why
a
policy
can
introduce
a
latency
or
understand
better
what
happens
behind
the
scene,
and
this
is
a
good
tool
to
know
better
what
actually,
what
was
actually
done
by
kivano
we
can.
We
can
clearly
follow
the
links
between
one
admission
request
and
go
to
ghcr,
because
we
were
using
image,
verification
and
cosine
as
to
call
ghcr
and
so
on.
C
It's
not
visible
here,
because
we
don't
have
a
policy
that
calls
the
API
server,
but
if
a
policy
has
a
context
variable
that
is,
that
is
Created
from
a
call
to
the
API
server.
We
will
see
in
the
graph
the
the
call
the
call
to
the
IPI
server,
so
we
will
be
able
to
say
okay,
this
policy
called
the
Epi
server
to
list
ingresses,
and
things
like
this.
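For reference, a context entry of that kind (the urlPath and variable name here are illustrative); the resulting API call would show up as its own span in the trace:

    # Illustrative context entry on a Kyverno rule; when the rule is evaluated,
    # this API server call appears as a dedicated span.
    context:
      - name: ingresses
        apiCall:
          urlPath: "/apis/networking.k8s.io/v1/ingresses"
          jmesPath: "items[].metadata.name"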
A: And I think, if there is any way, I did see a couple of comments in the chat just in terms of making the screen a little bit bigger, if at all possible. I do want to call that out as well for some of the folks that are viewing today.
C
We
now
how
the
possibility
to
clean
up
resources
in
a
cluster,
so
this
is
not,
strictly
speaking,
some
some
validation
or
mutation
of
policies,
but
it's
more
related
to
Automation,
and
we
have
a
automated
tasks
that
delete
things
or
things
like
this.
This
can
now
be
expressed
in
the
form
of
policies.
Those
policies
exist
at
the
cluster
level
or
at
the
namespace
level.
So
you
have
cluster
cleaner
policy
and
just
cleanup
policy,
which
is
the
namespace
namespaced
version
of
the
cleanup
policy.
C: We have match and exclude clauses that can specify which resources are targeted by such a policy, and we also support conditions. Conditions will be evaluated on a per-resource basis to say, for example, that this cleanup policy targets deployments, but we don't want to delete deployments when replicas is below two.
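A minimal sketch of such a cleanup policy, loosely mirroring the demo that follows (the policy name, label, and schedule are illustrative):

    apiVersion: kyverno.io/v2alpha1
    kind: ClusterCleanupPolicy
    metadata:
      name: cleanup-scaled-down-deployments
    spec:
      schedule: "*/5 * * * *"              # cron schedule for when the cleanup runs
      match:
        any:
          - resources:
              kinds:
                - Deployment
              selector:
                matchLabels:
                  canremove: "true"        # only deployments that opt in via this label
      conditions:
        any:
          - key: "{{ target.spec.replicas }}"
            operator: LessThan             # evaluated per matching resource
            value: 2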
C
For
this
Gmail
I
just
created
a
simple
policy
which
is
very
similar
to
what
we
have
here
so
for
now,
I
will
keep
the
labels
and
the
operator.
So
if
we
create
this
policy
and
on
top
of
that,
I
created
two
deployments,
one
of
the
deployment,
as
can
remove
set
to
no
the
other
one
has
can
remove
set
to
true.
C
So
this
one
should
not
be
considered
by
the
policy
we
have
here,
because
this
policy
specifies
that
we
are
only
considering
deployments
that
have
that
can
remove
true
lab
label,
and
this
one
should
be
considered
by
the
by
the
cleanup
policy.
So
let's
apply
this.
C: We now have one cleanup policy running. Of course, for now it's not going to do anything very useful because...
C: It was a very simple example based on the number of replicas, but it could also use the age of the resource. Let's say I don't want to delete resources that are younger than one day or a few hours, but if the resource is older than one month, I want to delete it, or things like that. It's completely possible to have such conditions.
C
You
you
have
a
you,
have
all
the
necessary
functions
in
gems
pass
to
do
that,
so
you
can
get
the
creation
timestamp,
compare
that
to
the
now
timestamp
and
say,
for
example,
if
the
resource
is
older
than
three
days,
okay,
I
I
accept
to
delete
it
and
I
can
combine
it
with
different
other
conditions,
and
the
schedule
is
of
course,
a
nice
solution
to
to
implement
time-based
conditions.
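As a sketch of what such an age-based condition could look like; the time_since function and the duration operator below are assumptions on my part and should be checked against the Kyverno JMESPath documentation:

    # Assumed function and operator names; verify against the docs.
    conditions:
      all:
        - key: "{{ time_since('', target.metadata.creationTimestamp, '') }}"
          operator: DurationGreaterThan
          value: "72h"                     # only delete resources older than three days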
B: A good set of questions. Certainly, whatever can be standardized through the policy working group and other Kubernetes SIGs and working groups, we are proposing for standardization, like the policy report itself. Kyverno, of course, is an add-on in Kubernetes clusters, so it's not likely that the entirety of the policy engine would be standardized, and the idea is to allow flexibility for admission controllers there.
B
You
know
in
terms
of
how
is
it
better
I
think
it
seems,
like
Adam
answered
the
question
himself.
So
if,
of
course,
Rego
has
some
complexity
and
there's
a
learning
curve
for
it,
because
it's
focused
on
kubernetes
offers
a
much
simpler
experience
and
also
a
wider
set
of
use.
Cases
is
like
Charlotte
bar
showed
the
cleanup
policies.
We
have
also
policies
to
generate
resources,
there's
very
powerful
capabilities
of
mutating
as
well.
B
As
you
know,
other
sort
of
things
that
we're
looking
at
in
terms
of
extensions,
so
yeah
I
mean
we're
always
looking
at
expanding
the
use
cases
integrating
as
natively
as
possible,
and
you
know
staying
focused
on
being.
You
know
the
the
sort
of
providing
the
best
experience
possible
for
policy
management
on
kubernetes
awesome.
A: Awesome, thank you, Jim. I saw another set of two questions that came in through the chat, and then I've got a couple as well after that. If you have any questions and you're watching right now, please throw those in the chat and we can get some of those answered. The first of the two questions was: are you planning to support app signatures? For example, port 5432 should accept only Postgres signatures, those kinds of use cases.
B
So
you're
not
sure
if
I
fully
followed
the
question
there,
but
if
that
can
be,
you
know.
So
if
there
are
ways
like
you
know,
whether
through
Network
policies,
kubernetes
or
higher
level,
Network
policies
like
celium
Etc,
if
that
can
be
configured
then
yes,
kuberno
can
verify
those
configurations,
whether
they're
custom
resources
or
native
resources.
Caverno
does
not
intercept.
You
know
Network
traffic
or
do
anything
at
the
you
know,
kind
of
a
layer,
four
or
layer,
seven
request
level.
So
it
it
deals
with
admission
controls
and
configurations,
but
yeah.
A
Gotcha
awesome.
Thank
you.
The
next
question
I
had
that
I
didn't
follow,
but
might
make
more
sense
to
you
was
when
this.
When
will
this
be
released,
be
adopted
by
ACM,
270
or
280
Etc.
B
I
believe
the
ACM
reference
here
may
be
red
hat
Advanced.
Cluster
management,
rackamore
Reddit
ACM,
so
yes,
they
they
ACM
supports
Governor
I
am
not
sure
about.
You
know
on
the
schedule
of
picking
up
1.9,
but
it's
fairly
quick
as
soon
as
you
know.
It's
available
by
the
way
kyverno
is
also
available.
Both
the
Enterprise
distribution
of
caberno
from
nermata,
as
well
as
the
open
source
distribution,
are
now
available
in
the
red
hat
and
openshift
Marketplace,
and
the
operator
Hub
as
well.
A: Awesome, awesome. Moving on to some of the questions I have for you regarding tracing: what are some of the supported backends right now?
C: Regarding tracing, as I said earlier, we are using OpenTelemetry behind the scenes, so any backend supporting the OpenTelemetry protocol is supported: that's Grafana Tempo, Jaeger, Datadog, and probably others. In case the backend doesn't support the OpenTelemetry protocol directly, there's always the possibility to deploy the OpenTelemetry Collector, which we did. The OpenTelemetry Collector will receive the traces in the OpenTelemetry protocol and is capable of transforming them and forwarding them in another format, so it can do the conversion on the fly.
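A rough sketch of that relay pattern, with an OpenTelemetry Collector accepting OTLP from Kyverno and re-exporting it (the downstream endpoint is illustrative):

    receivers:
      otlp:
        protocols:
          grpc: {}                                 # Kyverno sends spans here over OTLP/gRPC
    exporters:
      otlp/tempo:
        endpoint: tempo.monitoring.svc:4317        # illustrative downstream backend
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp/tempo]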
A: Awesome, awesome. And then I had two other quick questions, and then we can shift back to Jim here. Both of these regard tracing as well. What are some of the supported sampling strategies for tracing?
C
Yeah,
currently
we
are
is
tracing
or
not,
so
we
are
some
sampling
100
of
the
traces.
C
This
is
discount,
have
an
impact,
especially
on
cost,
because
sending
all
traces
to
the
back
end
can
be
costly.
So
in
this
case
again
using
open
Telemetry
query
talk
can
be
a
good
option
because
you
can
have
tail
base
sampling
strategy
and
you
can
say:
okay
I'm,
going
to
to
sample
strategy
to
sample
traces
only
if
they
have
Eros
or
things
like
this,
and
this
kind
of
strategy
cannot
be
done
with
the
ad-based
strategy.
So
in
any
case,
we
don't
have
any
tutorial
for
that
yet.
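For completeness, a sketch of what tail-based sampling in the Collector could look like, keeping only traces that contain an error span (configuration values are illustrative):

    processors:
      tail_sampling:
        decision_wait: 10s                 # wait for the whole trace before deciding
        policies:
          - name: errors-only
            type: status_code
            status_code:
              status_codes: [ERROR]        # keep a trace only if some span errored
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [tail_sampling]
          exporters: [otlp/tempo]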
A: Awesome, awesome. Thank you, thank you. That's really illuminating, and really awesome to hear that you can capture 100%, but also to include that cost consideration as well; everything is trade-offs, always a fun problem to solve. In terms of tracing, does it add latency, and if so, can you give us a general sense of what that looks like?
C: Not really. Of course, it takes a small amount of time to create the trace itself, but transmitting the traces happens in the background, so it's very lightweight in the end.

A: Awesome.
C: I mean, again, not enabling tracing is not going to save latency; it will be the same. It's just that in one case the traces won't be transmitted. For tracing you need to instrument the code, so it's not magic; there's some instrumentation going on, and this instrumentation is very lightweight by design. It's just a couple of function calls, and it costs almost nothing.
A
Gotcha,
thank
you
so
much
charlevoir
I
really
appreciate
it
was
that
Jim
I'd
love
to
turn
it
over
to
you
for
the
for
the
final
demo
today,
awesome.
B
Yeah,
so
the
last
feature
we
want
to
showcase
is
the
policy
exceptions,
feature
which
is
new
in
1.9
and
this
by
the
way
was
done
by
Eileen
Yu,
which
who
was
one
of
our
LFX.
You
know
Linux
foundation
mentees
for
the
last
term,
so
very
excited
to
be
able
to
demonstrate
this,
and
thank
you
Eileen
for
all
the
great
work
here.
B
So
this
feature
what
it
does
is
it
decouples
the
life
cycle
of
managing
exceptions
or
how
you
can
exclude
certain
resources
from
policies
from
the
policy
definition
itself.
So
here,
I'm
showcasing,
you
know
typical
kaverna
policy
has
match
and
exclude
blocks,
and
you
can
exclude
based
on
many
factors,
including
you
know,
names
name,
spaces
labels,
but
now
what
you
can
do
at
1.9
is
is
you
can
pull
that
exception
into
its
own
new
custom
resource?
Named
policy
exception
right.
B
So
this
can
be
it's
a
namespace
resource
you
can
put
anywhere
in
your
cluster
and,
as
you
can
see
over
here,
within
the
policy
exception
you
can
say
which
policy
name
which
rules
should
be
excluded,
and
then
you
can
do
a
match
on
any
resource
itself.
Right
and
again,
this
match
has
a
lot
of
flexibility
for
the
demo.
I've
just
done
tests
right.
So
that's
how
simple
it
is
now
to
configure
exceptions
and
of
course
you
can
manage
exceptions
through
our
back.
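A minimal sketch of a PolicyException resource, roughly mirroring the demo (the policy, rule, and namespace names here are illustrative):

    apiVersion: kyverno.io/v2alpha1
    kind: PolicyException
    metadata:
      name: test-namespace-exception
      namespace: policy-exceptions         # must live in Kyverno's configured exception namespace
    spec:
      exceptions:
        - policyName: disallow-host-namespaces
          ruleNames:
            - host-namespaces
            - autogen-host-namespaces
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaces:
                - test                     # exempt only this namespace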
B
You
can
manage
exceptions
to
kuberno
policies,
because
Governor
policies
operate
on
any
custom
Resource
as
well
as
kuberno
itself.
As
a
few.
You
know
knobs
to
make
sure
that
you
don't
misconfigure
these
policy
exceptions
right.
So
the
first
thing
I'll
show
is:
if
I
just
go
to
the
kivano
deployment,
let's
just
edit
the
deployment
and
I'll
I
want
to
show
within
this
there's
a
few
new
Flags.
You
have
to
kind
of
think
about.
So,
first
of
all,
it's
an
opt-in
feature
because
it's
a
new
feature.
B
You
have
to
enable
policy
exception,
it's
not
enabled
by
default
and
then.
Secondly,
you
can
optionally
configure
a
namespace
for
policy
Exception
by
default.
This
is
caverno,
and
but
you
can
put
any
namespace
that
you
wish
and
then
secure
that
namespace.
You
know
through
again
our
back
and
other
mechanisms
for
manage
your
policy
exceptions.
So
that's
how
I
have
this
deployment
configured
and
if
I
look
at
my
you
know
right
now
in
my
cluster
I
have
a
few
pod
security
policies.
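As a sketch, the relevant container arguments look roughly like this; the flag names follow the 1.9 release notes but should be treated as assumptions:

    # Assumed flag names; verify against the 1.9 release notes and Helm chart.
    args:
      - --enablePolicyException=true             # opt in to the new feature
      - --exceptionNamespace=policy-exceptions   # only exceptions in this namespace are honored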
B
So
if
I
just
do
get
C
Paul
I
see
I
have
a
bunch
of
pod
security
policies
configured.
So
if
I
try
to
run
a
names,
just
say,
let's
say
you
know:
nginx
I'm
gonna
try
and
run
it
in
this
namespace
test.
B
It's
gonna
should
deny
that
because
of
my
pod
security
policies,
so,
let's
say
say
for
some
reason.
You
know
I
want
to
create
this
exception
on
my
cluster.
So
what
I'm
going
to
do
is
I'm
going
to
just
say,
group
cuddle,
apply
and
I'm
going
to
create
this
exception,
I'm
going
to
try
and
create
it
on
the
kiverno
namespace,
which
actually
should
be
a
no
op
right
because
well
it
it's
allowed.
But
it's
giving
me
a
warning
saying
that
hey
this
doesn't
match
the
Define
namespace
for
policy
exceptions.
B
So
you
know
that's
a
good
safety
check.
You
want
to
make
sure
you
configure
it
in
the
right.
You
know
namespace.
So
let's
delete
that
and
we
will
create
this
exception
again
now.
In
the
policy
exceptions
namespace
right,
because
that's
what
we
configured
give
or
not
to
look
where
to
kind
of
pick
up
these
exceptions.
B
So
let's
see,
if
we
do
that
so
now,
there's
no
warning,
which
is
good
and
if
we
go
ahead
and
run
that
you
know
same
part
again
on
that
namespace.
What
I'm
expecting
now
is
that
for
that
part
to
be
allowed
with
no
errors
right,
because
in
my
policy
exception,
I'd
requested
that
all
of
these
rules,
which
were
previously
failing,
are
not
checked
for
this
particular
namespace.
B
Now
I
could
make
this
more
granular
by
labels
by
in
a
pod
Name
by
other
kind
of
mechanisms,
but
in
this
case
I
just
choose
to
exclude
this
particular
namespace.
So
that's
the
basics
of
our
policy
exceptions,
work
and
you
know,
as
I
mentioned,
you
can
secure
this
further
using
using
kuberno
itself.
So
one
other
kind
of
thing
I
want
to
quickly
demo
is,
if
I,
you
know,
let's
say
if
I
now
require
that
you
know
Charlotte
was
showed
how
to
do
image,
signing
and
verification.
B
But
let's
say:
if
I
want
to
require
that
policy
exceptions
have
to
be
signed
for
approvals
right,
so
I'm
gonna,
you
know
apply
this
policy
called
required,
signed
exceptions
which
I'll
show
you.
What
that
looks
like
that
policy
is
checking
and
making
sure
that
every
exception
I
have
configured
actually
is
signed
by
a
particular
key
and
of
course
you
can
associate
these
keys
to
identities
and
things
like
that
for
an
approval,
workflow
right.
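A sketch of what such a policy could look like, assuming it uses Kyverno's signed-manifest validation; the rule structure and placeholder key are illustrative, not the exact policy from the demo:

    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: require-signed-exceptions
    spec:
      validationFailureAction: Enforce
      background: false
      rules:
        - name: check-exception-signature
          match:
            any:
              - resources:
                  kinds:
                    - PolicyException
          validate:
            manifests:                     # verify the signature over the whole YAML manifest
              attestors:
                - entries:
                    - keys:
                        publicKeys: |-
                          -----BEGIN PUBLIC KEY-----
                          (placeholder: key authorized to approve exceptions)
                          -----END PUBLIC KEY-----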
B
So
at
this
point,
if
I
try
to
now
run
that
same,
you
know,
let's
say
part
again
or
or
you
know,
because
so
let's
say:
I'm
gonna
delete
my
policy
exception
and
then
I
I'll
try
and
recreate
the
same
exception.
B
Let's
see
if
you
know
that's
allowed,
so
let's
delete
this
exception
in
the
exceptions,
namespace
and
then
what
we'll
do
is
we'll
apply
that
same
exception
back
with
this
new
policy
and
what
I'm
expecting?
Is
it's
not
going
to
allow
the
unsigned
exception
to
be
configured
right?
So
you're
telling
me
that
hey
this
exception
requires
a
signature,
so
you
can't
do
that,
but
I
do
have
a
signed
exception,
of
course,
for
the
demo.
B
So
if
I
now
go
ahead-
and
you
know
instead
of
the
unsigned
exception-
let's
do
exceptions
signed
yaml
and
this
one
should
be
allowed
right.
So
if
I
go
ahead
and
configure
this
now
it
allowed
that
to
be
created-
and
my
exception
is
created,
one
of
the
cool
thing
is:
if
I
go
and
you
know
kind
of
try
to
edit
this
policy
exception
because
it's
signed
it
will
not
allow
you
know
any
tampering
of
that
policy
exception
itself
right.
So
let
me
let
me
clear.
B
My
screen
and
I'll
go
back
to
the
top
and
if
we
do
group
cuddle
edit
and
here
I'm
editing
the
policy
exception
in
that
namespace,
which
should
be
my
signed
exception
right.
So
you
see
some
of
the
some.
You
know
signatures
up
on
top,
but
let's
say
now
for
some
reason:
I
want
to
allow
you
know
this
instead
of
namespaces
test.
Let's
say
I
put
test,
you
know
and
I'm
trying
to
you
know
kind
of
tamper
with
this
policy
inside
my
cluster.
So
what
happens
right
away
as
soon
as
I?
B
Try
to
save
it
is
caverno
checks
and
says:
hey
the
you
know,
because
you
made
this
change.
The
signature
of
the
sign
manifest
does
not
match
the
signature
I'm
expecting
and
it's
going
to
reject
that
change
and
not
allow
that
policy
exception
to
be
configured
right.
So
a
lot
of
interesting
possibilities.
Now
by
decoupling
policy,
exception
management
from
the
life
cycle
of
the
policies.
You
can
store
exceptions
in
a
different,
git
repo.
You
can
build
your
approvals
workflow
with
Git
Ops.
B
You
can
can
sign
your
yamls
and
make
sure
that
give
our
noise
Make
only
allowing
trusted
policy
exceptions
to
be
configured,
and
you
know
there's.
Of
course
you
can
also
use
our
back
as
well
as
other
additional
kuberno
policies
for
how
you
want
to
do.
Governance
and
compliance
on
these
policy
exceptions.
So
this
is
a
first
release
of
this
feature,
we're
very
interested
in
further
feedback
for
their.
You
know
refinements.
So
please
do
try
it
out.
Let
us
know
what
you
think,
and
you
know
how
we
can.
B
You
know
continue
to
improve
and
enhance
this
feature
itself
so
and
but
it
should,
you
know
immediately
also
start
solving
a
number
of
use
cases
that
were
previously
raised
as
challenges
with
managing
this
kind
of
a
exception.
Management
for
policies.
B
All
right,
so
the
last
thing
I
want
to
kind
of
talk
about,
is
you
know
also
a
little
bit
about
what's
coming
next
in
caberno
and
I'll
also
give
a
few,
you
know
kind
of
hints
on
how
you
can
join
the
community
and
provide
feedback
so
Governor
110
is
our
next
release.
We
have
a
few
other
additional
major
features.
In
fact,
the
bulk
of
this
release
is
going
to
be
internal.
B
You
know
kind
of
decomposition
and
re-architecting
caverno
for
more
scalability
across,
especially
around
the
background
controllers
right,
because
we
see
those
with
cleanup
with
mutate
and
generate
on
existing
resources.
There's
a
lot
of
you
know
background
activity
which
we
want
to
decouple
from
the
WebEx
and
you'll
be
able
to
scale
those
independently
other
key
features
so
inter-service
like
service
API
calls.
So
kivarno
can
now
delegate
some
you
know,
processing
to
another
service
in
your
cluster
and
can
also
look
up
data
for
policy
decisions
from
other
services.
B
So
this
brings
a
lot
of
flexibility
and
pretty
excited
about
that
feature,
and
then
notary,
V2
support
So.
As
a
lot
of
you
may
know,
and
software
supply
chain
security,
notary
V2
is
another
emerging
standard,
kiberna
110
will
support
notary
V2
as
well,
as
you
know,
being
able
to
run
notation
based
plugins
through
this.
You
know,
external
service
API
calls
features.
B: Lastly, the last thing I want to quickly mention is: if you go to the Kyverno docs, go to the community page. We're very active on our Slack channel, we have weekly contributor meetings, and we are going to be kicking off either a set of office hours or some end user meetings. So please do pop in, give us feedback on this, or, if there are any other things you need from Kyverno, feel free to reach out.
B
There's
also,
you
know
in
terms
of
folks
looking
at
contributing
a
VR
kind
of
you
know.
We
have
a
lot
of
documentation.
We
have
created,
you
can
go
to
the
caverno
repo
and
even
look
at
the
you
know:
development
markdown
file
there
to
get
started
and
we
will
be
you
know,
kind
of
also
continuing
to
enhance
that
experience
to
get
new
contributors
new
new
folks
into
the
project.
B: With that, let me hand back to Taylor and see if there are any final questions before we wrap up.
A: I took a look through the chat and didn't see anything that was super urgent, it looked like, and I think we're just about at time, but I really want to thank both of you for coming on today and chatting with us about Kyverno. I'm really excited for everything in 1.9 and what's coming in 1.10. So thank you again.