A
Hello everyone, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie, and I'm going to be your host today. I'm a CNCF ambassador as well as a senior product marketing manager at Camunda.
A
Every week we bring new presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions. You can join us every Wednesday to watch live, as you possibly are doing now, or watch it afterwards. This week we have a few great speakers talking about protecting software supply chains using Kyverno. And as always, a reminder for everyone: KubeCon + CloudNativeCon Europe is next week, so I'm really looking forward to that one.
A
I'm excited for that, so check it out, as you saw in the banner at the beginning as well. And as always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow participants and presenters. With that, I'll head over to our speakers to kick off the presentation today.
B
Thank you, Annie. Hi everyone, this is Jim Bugwadia. I'm one of the maintainers of Kyverno, and a co-founder and CEO at Nirmata. I want to chat today mostly about the new features we're introducing in Kyverno 1.7 around software supply chain security. I'll share my screen and pull up a few sites as we talk, and I'll start with some background on software supply chain security.
B
Why does this matter, and what are some of the basic concepts? I think we've done a previous live stream where we covered this at a very high level, but just to lay the foundation again, let's go over some of the basics.
B
In the last 12 to 15 months, it seems like we've all seen the headlines about software supply chain attacks, about breaches happening in the software supply chain, and it's very interesting if you look behind the headlines and dig into what's going on. As we've become better at protecting production systems, and as we're leveraging more managed services with secure defaults and security settings that are easier to set up, production systems seem to be better protected.
B
But
now
you
know,
and
and
as
that
happens,
attackers
are
naturally
sort
of
looking
for
you
know.
Other
ways
to
you
know
find
find
vulnerabilities
or
find
ways
to
get
into.
You
know
systems
right.
B
So
the
other
other
trend
that
we
have
been
also
seeing
is,
of
course,
with
continuous
delivery
becoming
more
and
more
popular
with
workflows
like
gitops,
you
have
ci
cd
systems
which
now
have
credentials
to
deploy
into
production
systems,
and
the
cicv
systems
in
many
cases
are
deploying
things
you
know,
perhaps
dozens
of
times
or
hundreds
of
times
even
some
day
a
day
you
know
could
be
as
they're
pushing
things
to
production.
So
you
combine
those
factors
and
what's
happened
is
you
know?
B
Attackers
have
found
that
software
supply
chains,
like
our
build
systems,
are
a
weak
spot
and
by
taking
advantage
of
that
by
you,
know
attacking
those
they
have
ways
of
you
know
being
getting
access
to
production
systems
and
getting
you
know,
malicious
code
or
even
you
know,
other
things
deployed
into
production.
B
So
how
do
we
you
know?
How
can
we
protect
against
this
and
what
you
know?
What
are
some
of
the
you
know,
things
happening
in
the
open
source
and
other
communities
there's
a
lot
of
activity
in
in
within
cncf
projects
on
this.
In
fact,
kubernetes
with
1.24
just
announced
that
they
have
adopted
six
store
and
tools
like
cosine
to
sign
all
of
the
kubernetes
binaries.
B
And
there
are
you
know,
projects
also
leading
communities
into
software,
or
you
know,
salsa
3
compliance
salsa
is
an
emerging
standard,
so
it's
actually
it's
an
acronym
that
stands
for
supply
chain
levels
for
software
artifacts
and
it
provides
a
way
of
you
know,
measuring
or
checking
for
or
different
things.
You
would
want
to
do
within
your
ci
cd
system
within
your
build
systems
to
be
able
to
protect
those
right.
B
So
I'm
not
going
to
go
into
a
lot
of
details
on
those
but
they're,
just
very
briefly:
there's
four
levels
of
salsa
protection
and
each
level
kind
of
does
things
about.
You
know
for
across,
like
whether
it's
the
build,
whether
it's
the
code,
whether
it's
the
artifacts
created
to
be
able
to
do
certain
checks
right.
But
if
you,
if
you
look
at
salsa,
it's
very
much
focused
on
the
ci
cd
system.
B
The
way
you
know
one
of
the
ways
of
thinking
about
these
concepts
and
I'm
pulling
up
a
blog
post
that
that
was
posted
on
the
cncf
site,
is
to
kind
of
break
down
the
process.
The
build
process
into
different
you
know
steps
right.
One,
one
way
of
thinking
of
it
is
every
build
system
is
going
to
produce
artifacts
whether
it's
binaries,
whether
it's
container
images
other
things.
The
other
thing
you
know
in
addition
to
artifacts
you'll,
also
have
metadata
which
gets
produced
right.
B
So
when
you
produce
metadata,
these
could
be
like
whether
they're
vulnerability
scan
reports,
there's
software
build
of
materials
or
s-bombs.
All
of
this
can
get
produced
in
the
build
system.
So
now,
with
with
the
salsa
levels,
what
you
can
do
is
you
can
create
signed
attestations
which
are
basically,
you
know,
verifying
those
that
metadata
and
the
provenance
data,
which
means
the
build
system,
is
a
trusted
system.
B
You
are
you
know,
configuring,
the
source
of
that,
and
all
of
this
can
be
now
using
tools
from
six
store
can
be
pushed
into
oci
registries,
which
is
fantastic.
So
now,
once
you
have
this
data
in
oci
registries,
the
final
step
to
all
of
this
is
policies
right
and
that's
where
kiverno
comes
in
and
that's
what
we're
going
to
talk
about
mostly
today,
so.
A
And the audience is already really fired up and asking questions, so we can actually take an audience question here — which, by the way, called this a great agenda.
B
Yes — we have worked with the Flux community, and I believe there was a similar question from Argo CD. Kyverno policies do have a ready state, so GitOps controllers can check that state and, based on whether the policy is ready or not, delay other artifacts from being introduced. So certainly reach out on the Kyverno Slack; we can help you with that.
B
This was an issue that I recall had been addressed, but any other issues can certainly be handled there as well.
B
Yeah, so let's briefly introduce the existing Kyverno functionality. I'm just going to look at image verification in 1.6, so we understand what that policy looks like, and then we'll start looking at some of the newer features and what is possible. The very basics of an image verification policy are checking for signatures and attestations, and I'll explain the structure of what we have.
B
Things are coming in 1.7 to simplify this, but in 1.6 we had this verifyImages rule. Here we're checking for a pattern which matches certain images, and then verifying that the image is signed using a public key. So that's a very simple policy to check that your container image is actually signed. Internally, what will also happen is that the tag will be replaced by a digest, for additional security, because tags can be mutable.
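A minimal sketch of what such a 1.6-style rule might look like — the image pattern is a placeholder and the key is your own Cosign public key, not anything shown on the stream:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image
spec:
  validationFailureAction: enforce
  rules:
    - name: verify-signature
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        # placeholder pattern: verify only images from this repo
        - image: "ghcr.io/myorg/*"
          key: |-
            -----BEGIN PUBLIC KEY-----
            <your Cosign public key here>
            -----END PUBLIC KEY-----
```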
A more complex example is, in addition to checking for the signature, to also check for attestations. Attestations, again, are signed metadata which you produce with tools like Cosign. It could be any JSON blob — it could be an SBOM, it could be a vulnerability scan report — and you check for it in this policy.
A
Yeah — and the sharing, I think, is part of the issue. While we get the share working — I think you'll want to share some screens — here's a question: is Kyverno similar to Open Policy Agent?
B
There are similarities. I've stopped my sharing — Chip, maybe you want to take this one.
C
I'll be glad to take that one. So yes, Kyverno is similar to Open Policy Agent in a lot of respects, in that they're both admission controllers, and they both have the ability to validate. But the great thing about Kyverno is that it doesn't ask you to learn a new language, and it doesn't bring additional technical debt with it. You manage policy, you write policy, and you reason about policy in the same way that you do today: using YAML.
C
You can do the same things with policy, and do them very effectively, but also powerfully and simply. So that's a long-winded answer in saying yes.
C
Kyverno also brings additional capabilities that OPA does not, and some of those we'll probably share here. For any more details, you can hit us up on Slack, and also go to the documentation and look at features like generation, which allows you to create new resources that don't previously exist, based on a host of triggers and in a variety of ways. That, by the way, is getting enhanced in 1.7 with some great new features that we might talk about here.
B
Okay,
you
know
if
the
policy
wasn't
visible.
I
just
reshared
my
screen,
so
this
was
the
the
simpler
policy
that
I
was
showing
and
then
I
had
scrolled
down
to
kind
of
show
a
more
complex
policy
with
some
attestations
here
with
the
custom
code
review.
So
again,
as
you
can
see
like
chip
was
mentioning.
This
is
very
declarative
in
terms
of
yaml.
It's
fairly
simple,
to
understand
what
we're
doing
over
here
is
checking
that
the
image
attestation
is
assigned
using
a
public
key
and
then
there's
certain.
B
Then certain data is checked using a JMESPath expression, which is just a common way of querying JSON. We're checking that the review was actually done on the main branch, and that there were two reviewers from a given set — which, of course, you can externalize, and that would be a best practice, but here we're just showing it inline.
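A hedged sketch of such an attestation check. The predicate type, field names, and values here are illustrative placeholders, not the exact policy from the demo:

```yaml
      verifyImages:
        - image: "ghcr.io/myorg/*"        # placeholder image pattern
          key: |-                          # placeholder Cosign public key
            -----BEGIN PUBLIC KEY-----
            <your public key here>
            -----END PUBLIC KEY-----
          attestations:
            # hypothetical code-review predicate attached with `cosign attest`
            - predicateType: https://example.com/CodeReview/v1
              conditions:
                - all:
                    # the review must have happened on the main branch
                    - key: "{{ repo.branch }}"
                      operator: Equals
                      value: "main"
                    # and at least two reviewers must be recorded
                    - key: "{{ length(reviewers) }}"
                      operator: GreaterThanOrEquals
                      value: 2
```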
C
So, for example, that is a policy that's possible today. As a way to augment our supply chain security, we want to do some basic things even before that — like saying: in your images, you're not able to set a root user unless the image comes from, perhaps, an internal corporate registry. In this policy we're doing exactly that, and Kyverno has this nice ability to talk to an image registry, wherever the container image is located.
C
So
what
we're
basically
saying
is:
go
and
get
the
user
parameter
for
this
image
and
if
it,
if
the
image
does
not
come
from
ghcr,
which
is
github
you,
obviously
you
can
put
whatever
registry
that
you
want
in
there
deny
it.
So,
let's
see
how
this
actually
works
in
process
just
make
sure
I
don't
have
any
policies
added
here.
C
All right. On defining a user when defining an image: we'll take a look at one that makes a good example. In this case we're just going to fetch a Kyverno test image that's out there and look at its configuration. We can see that there's a user being set to empty, and by not having a user, or setting the user to an empty string like this, you're effectively saying: hey, I want to run this as root.
C
Well, our policy said that's fine, but it has to come from GHCR — which this one does. But if we look at an example of a bad one, we have a similar situation. This one is just a general Redis image coming from Docker Hub, but it's the same type of thing: it sets a root user, but it doesn't come from our specified registry. So let's see what happens if we go and create these — first, let's do it with the bad one.
A
We're
we're
working
on
getting
the
share
up
and
working
just
give
us
a
few
things.
Oh
now,
it's
working
immediately.
C
Oh,
okay,
all
right,
so
there's
a
little
bit
of
a
delay
there
yeah,
so
so
I
applied
in
the
in
my
example
pod
here
I
just
went
and
put
that
redis
image
that
you
saw
a
second
ago.
So
this
is
an
image
that
does
set
root,
but
it
comes
from
a
registry
that
we
haven't
deemed
safe.
So
I
tried
to
apply
that
and
caverno
immediately
blocked
that
it
reached
out
to
the
registry.
C
It
was
able
to
decode
the
configuration
and
see
that
this
is
specifying
root,
but
our
policy
said
that
it
has
to
come
from
that
registry.
Well,
it
didn't
come
from
that
registry,
so
we
blocked
it.
So,
by
contrast,
if
we
give
it
an
image
that
also
sets
root,
but
does
come
from
an
image
registry
that
we
bless
and
again
this
is
ghcr.
C
But
imagine
you've
pointed
this
to
your
internal
registry
or
maybe
or
a
an
existing
registry,
but
in
a
repo
that
you
do
allow,
it
should
be
able
to
understand
that,
and
indeed
it
let
us
create
so
caverno
was
able
to
do
the
same
thing
but
decode
that
and
allow
that
to
pass.
C
So
that's
something
that
caverno
can
do
today
and
you
know,
as
you
can
see
from
this
this
policy
declaration,
it's
fairly
simple.
I
mean
the
meat
of
this
is
only
really
in
a
little
over
10
lines
here
and
there's
no
programming,
that's
required,
so
we're
just
simply
saying:
go
and
get
everything.
That's
inspect
containers
go
and
pull
the
the
image
data
from
the
registry
for
that,
and
then
in
our
deny
conditions,
take
a
look
at
two
different
things.
All
of
these
has
to
be.
All
of
these
have
to
be
true.
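A sketch of the rule shape Chip describes, using Kyverno's documented `imageRegistry` context entry inside a `foreach` validation; the registry name and message are examples, not the exact demo policy:

```yaml
      validate:
        message: "Images running as root must come from an approved registry."
        foreach:
          - list: "request.object.spec.containers"
            context:
              # fetch image metadata from the registry for each container image
              - name: imageData
                imageRegistry:
                  reference: "{{ element.image }}"
            deny:
              conditions:
                all:
                  # an empty User means the container runs as root
                  - key: "{{ imageData.configData.config.User || '' }}"
                    operator: Equals
                    value: ""
                  # root images are only tolerated from the blessed registry
                  - key: "{{ imageData.registry }}"
                    operator: NotEquals
                    value: "ghcr.io"
```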
C
Does
the
user
is
the
user
running
his
root,
which
is
empty
there,
and
also
does
it
come
from
ghcr
if
it
doesn't,
if
none
of
those,
if
all
of
those
things
are
true,
then
it'll
block
it.
So
let's
look
at
another
example,
but
this
time
using
some
capabilities
in
one
seven
and
there
are
a
couple
of
them
base
images.
So
when
we
build
a
container
image,
we
can
specify
a
lot
of
different
ways
to
build
that
and
one
of
the
popular
ways
of
doing
it
is
to
specify
a
base
image
from
an
existing
image.
C
That's
out
there
well,
in
many
cases
what
you
find
in
the
the
community
in
real
life
is
people
want
to
create
images
from
like
ubuntu
latest
out
from
docker
hub?
Not
only
is
it
a
huge
image,
but
there's
a
bunch
of
stuff
inside
of
that
that
you'll
know
exactly
what
it
is,
what
it
came
from,
but
it's
also
not
secured.
C
I
really
want
to
take
a
look
inside
of
those
and
see
what
was
the
base,
even
though
my
application
code
may
be
good,
even
though
my
pipeline
process
is
good,
I'm
starting
from
an
image
that
may
not
be
good,
and
so
one
of
the
things
that
we
can
do
in
caverno
and
in
one
seven
we
have
some
enhancements
to
an
existing
ability.
Is
we
can
do
exactly
that?
So
one
of
the
things
to
point
out
is
we're
doing
a
similar
type
thing
here
in
this
policy
declaration
that
we
were
doing
before,
which
is
hey.
C
Now
you
notice
that
we're
calling
this
variable
here
image
data,
it's
just
a
way
to
refer
to
whatever
this
this
data,
it
got
back
we're
going
to
call
it
image
data,
but
in
one
seven
we
have
the
ability
to
chain
that
variable
to
new
variables,
so
I'm
declaring
another
one
here
and
I'll
show
what
this
is
in
just
a
second,
but
we're
looking
at
we're
parsing,
whatever
the
the
contents
of
this
previous
variable
in
a
new
one.
So
this
is
a
new
feature:
that's
in
1.7
and
also
in
1.7.
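One plausible shape for that chaining, using the 1.7 `variable` context entry to derive a second variable from the first — the variable names and JMESPath are illustrative, so treat the exact expression as an assumption:

```yaml
            context:
              # first variable: raw image metadata fetched from the registry
              - name: imageData
                imageRegistry:
                  reference: "{{ element.image }}"
              # 1.7: a new variable derived from the previous one —
              # here, pulling the recorded OCI base-image annotation
              - name: baseImage
                variable:
                  jmesPath: imageData.manifest.annotations."org.opencontainers.image.base.name" || ''
```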
C
We
are
able
to
look
at
any
of
the
data,
that's
in
the
image
when
it
comes
to
fields
and
annotations
and
configurations,
and
not
just
those
that
may
pertain
to
standard.
So
what
we're
basically
saying
here
is
there
are
four
possible
locations
from
which
a
base
image
can
be
specified.
C
Now
it's
important
to
point
out
that,
just
because
you
build
a
an
image
with
a
docker
file
that
has
a
from
statement,
doesn't
necessarily
mean
that
that's
going
to
be
recorded
because
you
have
to
make
explicit
steps
in
order
to
record
what
your
base
image
is,
and
there
are
various
ways
to
do.
Those
and
we've
captured
four
of
those
here.
So
one
of
those
is
if
you're
using
docker
build
kit,
which
is
a
popular
option.
This
is
the
docker
build
x
command
if
you're
familiar
with
that
for
multi-stage
builds.
C
So
let's
take
a
look
at
what
that
would
look
like
here's.
What
nothing
looks
like
so
I've
just
got
an
example
of
an
image
that
I
built
a
while
ago,
not
built
according
to
best
practices.
Really
nothing
that's
specified
in
here.
This
is
all
the
configuration
data
we
would
expect
to
see
the
base
image
show
up
somewhere
in
here
now.
By
contrast,
let's
take
a
look
at
what
if
we
specified
it
in
an
oci
annotation,
which
is
the
pretty
much
the
standardized
way
of
doing
this
today.
C
We
do
have
a
base
name,
and
in
this
case
the
base
is
the
gcr
the
static
image.
So
this
one
specified
but
build
kit's,
also
really
popular
as
well,
and
so
caverno
has
the
ability
to
go
and
parse
into
that
as
well,
and
I'm
going
to
paste
a
more
complex
command
here.
But
one
of
the
things
to
point
out
is
that
we're
actually
using
the
caverno
cli
to
do
the
parsing.
Now
you
can
use
other
tools
as
well
and
I'm
piping
it
to
jq
just
to
give
it.
A
Yeah, I think it's starting to get better, and I trust that the audience will let us know if someone still can't see it, but I think it works now. Okay.
C
All right — what we're showing here is that we're able to get the same type of information, in this case from the Kyverno image itself, which is based on golang, and we can get all this information out of it. So let's go and apply this policy, then try an image that does not specify a base and one that does. I'm just going to try to run busybox, which doesn't specify it.
C
Right
so
caverno
has
blocked
this,
and
it's
also
the
other
policy
that
I
had
in.
There
also
blocked
it.
So
we're
not
specifying
a
a
base
image,
and
so
it's
blocked
that
one
and
now,
if
we
flip
over
and
show
an
example
from
one
of
the
images
that
I
just
checked,
this
is
a
demo
image,
but
it
uses
a
base
image
specified
in
an
oci
annotation,
we'll
try
that
and
caverno
lets
us
create
the
pod.
So
it
was
able
to
go
in
and
look
at
the
base
images
and
make
sure
that
something
was
specified.
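A sketch of the deny condition for that presence check — the OCI annotation key is the standard one, while the surrounding variable names are illustrative:

```yaml
            deny:
              conditions:
                all:
                  # block images whose manifest records no base image at all;
                  # org.opencontainers.image.base.name is the standard OCI annotation
                  - key: "{{ imageData.manifest.annotations.\"org.opencontainers.image.base.name\" || '' }}"
                    operator: Equals
                    value: ""
```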
C
So
last
thing
to
show
won't
show
a
demo
on
this
before
kicking
it
back
over
to
jim,
but
you
know.
Imagine
that,
like
having
a
base,
image
is
great
and
caverno
is
able
to
do
that
and
one
seven
there's
some
enhancements
that
allow
you
to
to
do
that
even
easier.
But
ultimately,
you,
probably
as
an
organization,
want
to
start
building
a
catalog
of
allowed
base
images,
not
just
any
base
image
and
that's
the
that's
a
good
first
step.
C
But
you
want
to
say:
hey
you
know,
I've
got
a
list
of
maybe
eight
or
something
that
my
teams
or
my
entire
organization
is
able
to
create
these
gold
images,
and
only
those
are
allowed.
Well.
Caverno
can
do
that
as
well,
and
so
what
I'll
show
here
real
quickly
is
imagine
that
you
wanted
to
build
this
index
in
your
environment
and
you're,
using
a
git
ops
you're
using
a
git
ops
flow.
C
Well,
caverno
can
read
this
from
example,
for
from
a
configmat,
so
I've
got
a
platform
namespace
that
I
want
to
have
my
platform
team
curate
a
lot
of
these
sort
of
cluster
or
global
variables,
and
in
this
case
it's
just
a
config
map
that
says
that
has
a
key
set
that
says:
allow
base
images
so,
as
you
can
see,
it's
just
an
array
of
strings,
a
mapping
of
all
of
the
allowable
base
images
that
that
I'm
I'm
going
to
permit
to
be
pulled
into
this
cluster,
and
so
I
can
have
a
caverno
policy
that
goes
and
fetches
that
looks
at
the
base
image
and
then
looks
at
that
list
and
says:
is
this
base
image
that
you
declare
in
that
list?
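The setup Chip describes might look roughly like this — the namespace, ConfigMap name, and image list are illustrative placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: base-images
  namespace: platform          # curated by the platform team
data:
  # a JSON array of the approved "gold" base images
  allowedBaseImages: '["gcr.io/distroless/static:latest", "docker.io/library/golang:1.18"]'
```

In the policy, a `configMap` context entry pointing at this map would expose it as `{{ baseimages.data.allowedBaseImages }}`, and a deny condition could compare the detected base image against that list with the `NotIn` operator.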
A
There's
actually
a
question
from
the
audience
as
well,
so
maxim
asks
it
tells
you
all
the
policies
it
fails.
Question
mark.
C
If
you
apply,
if
you
have
multiple
policies
and
any
resource
that
you
submit
violates
any
of
those,
yes
it'll
show
you
all
of
the
ones
that
it
violates
and
that's
actually
what
you
saw
here.
I
had
the
first
policy
that
I
created
with
one
rule
and
then
the
second
policy,
and
so
the
previous
resource
that
I
tried
to
submit
it
violated
both
of
them.
C
When
I
was
trying
to
submit
the
busybox
pod,
because
not
only
was
it
trying
to
set
root
and
not
from
the
registry
that
I
blessed,
but
it
also
didn't
have
a
base
image
declaration,
so
it
it
violated
both
of
those
and
it
showed
you
both
of
those.
C
So
hopefully
that
answers
the
question
and
and
so
just
to
wrap
up
here.
You
know
this
policy
is
doing
what
I
mentioned
looking
at
the
base
image
ensuring
if
it's
from
a
trusted
list,
if
not
it
blocks
it
and
again,
like
everything
that
we
try
and
do
in
caverno,
it's
fairly
simplistic.
I
mean
that's
a
a
pretty
powerful
capability,
but
it's
written
very
simply
not
that
many
lines
of
yaml
go
and
get
the
config
map.
C
That's
in
that
that
platform
name
space
and
we'll
save
that
into
a
variable
and
then
go
and
get
the
registry
data
like
we
saw
on
the
two
previous
policies
and
we're
going
to
look
for
the
base
image
in
this
one
which
we're
we're
picking
up
whatever
was
got
whatever
was
taken
from
our
image
registry
and
then
we're
just
going
to
dive
into
it
and
look
for
that.
Annotation
and
we'll
steal
the
value
out
of
it
and
then
we'll
just
say:
hey
is
the
base
name.
C
Is
that
not
in
the
allowed
base
images
from
that
config
map,
no
block
it
so
pretty
simple,
but
pretty
powerful
stuff
that
allow
you
to
augment?
Or
if
you
don't
have
it
today.
This
is
a
great
step
to
getting
to
that
software
supply
chain
security
and
to
getting
some
security
into
your
cluster
with
capernaum.
So
with
that
I'll,
kick
it
back
over
to
jim
and
happy
to
take
any
other
questions
that
come
up.
B
The next level is signing images, as well as verifying signatures and attestations, or metadata, for images. Before I do that, I want to quickly explain — let me just go into present mode here — what a Kyverno policy looks like. You saw some examples already, but every Kyverno policy has a set of rules. It must contain at least one rule, and rules can match and exclude different things.
B
You
know:
resources
different
name
spaces
you
can
match
exclude
by
who
the
user
that
created
so
there's
a
lot
of
flexibility
in
how
you
apply
rules
to
admission
requests
or,
to
you
know,
existing
resources.
Once
you've
decided,
you
know,
once
the
policy
decides
that
the
rules
should
be
applied,
then
each
rule
can
either
mutate
resources.
So
you
can
change
things
in
your
existing
configurations.
B
The
oci
config,
like
chip
just
showed,
including
whether
the
base
image
was
built
correctly,
whether
the
image
includes
a
root
user
or
non-root
user
things
like
that,
as
well
as
signatures
and
attestations
that
I
will
show
you
can
validate
resources
right
so
for
proper
settings
for
best
practices.
Things
like
that
you
can
also
generate
new
resources.
So
when
a
new
namespace
is
created,
if
you
want
to
generate
secure
defaults,
if
you
want
to
you
know,
actually
even
you
know
trigger
generating
based
on
like
when
a
service
is
created.
B
Perhaps
you
want
to
create
an
istio
network
policy
right,
or
things
like
that
can
be
now
automated
through
kiberno
fairly
easily,
in
fact
with
1.7
we're
also
introducing
the
ability
to
mutate
and
generate
on
existing
resources,
so
that
opens
up
a
whole.
You
know
different
set
of
use
cases
which
have
been
you
know,
requests
from
the
community
as
well.
So
that's
kind
of
the
structure
of
a
policy
but
diving
more
into
the
image
verification
part
of
it
itself.
C
Yeah — like I mentioned, if this is being managed in a GitOps process, or even if not, you're probably going to create that ConfigMap in advance with the allowed registries, the allowed base images, or whatever source you want in there. So if you need to add a new one, yes, you typically add it to that ConfigMap. But that's not the only possibility: you can also declare them in the policy.
C
Yeah — Kyverno is pretty vast; there aren't a whole lot of things it can't do, and even complex use cases are comparatively simple in Kyverno. But Kyverno is built for Kubernetes. That's not an oversight; that's a specific strategy. So if you're trying to use Kyverno with other things, there are other tools out there — OPA is a great tool as a more general-purpose engine — but you're not going to be able to use Kyverno for that.
C
Caverno
is
going
to
be
a
great
fit
for
validating,
mutating
generating
and
even
performing
a
lot
of
these
image
verifications
that
that
we've
shown
in
that
jim's
going
to
show
later
on.
So
that's
really
where
it's
a
great
fit
and
there's
a
lot
of
use
cases
that
can
accomplish
within
that,
but
it's
built
for
kubernetes.
That's
one
of
the
reasons
why
not
only
you
know
we're
able
to
get
such
power
out
of
it,
but
it's
incredibly
easy
and
flexible
to
get
started.
B
Yeah, and just to add to that: if you're running any admission controller, there are important things to remember. Kubernetes has made it very easy to add admission controllers, but that also comes with challenges. It's not simple to secure, scale, and manage admission controllers, and they can cause problems in clusters if you misconfigure them.
B
We've taken fairly great pains within Kyverno to make it, first of all, secure by default, and secondly, to have it configure itself in an intuitive, smart way based on the cluster settings. But there are a few gotchas that you need to be aware of as you put any admission controller into production.
B
And
of
course
you
know,
one
of
the
anti-patterns
I've
seen
is
to
some
end
up
with
perhaps
too
many
admission
controllers,
which
could
end
up
also
creating
challenges
right.
So
things
like
that,
you
do
need
to
be
aware
of
the
documentation
at
kiberno.
Take
a
look
at
you
know
the
the
installation
page,
the
security
page.
It
goes
through
that
in
quite
a
lot
of
detail.
B
Right
so
back
to
you
know
the
the
policy.
I
was
explaining
kind
of
the
structure
of
the
kiberno
policy,
so
this
is
just
any
any
policy,
but
then
I
want
to
dive
in
a
little
bit
deeper
into
what
a
verify
image
look.
A
rule
looks
like
in
1.7
right
so
in
1.7,
the
major
change
that
we
introduced
was
to
allow
flexibility
of
multiple
attesters
which
could
be
signatures.
B
You
know
you
think
of
those
as
authorities
for
saying
yes,
this,
you
know
attestation,
or
this
image
is
good
and
those
attesters
can
be.
You
know,
specified
as
public
keys,
public
certificates
or
even
using
you
know
something
known
as
keyless,
which
just
like
serverless
doesn't
mean.
There
are
no
servers.
B
Keyless
doesn't
mean
that
there
are
no
keys,
but
in
fact,
what
it's
doing
is
it's
using
almost
disposable
keys
on
demand
underneath
and
then
you
know
specifying
or
taking
information
from
that
signing
event
and
putting
it
in
a
transparency
log
which
is
part
of
the
sig
store,
tooling
right.
So
that's
a
more
advanced
use
case,
but
in
lots
of
cases
if
you're
using
keys
certificates
or
and
keyless,
and
you
can
by
the
way
I
have
a
combination
of
these
right.
B
So
you
could
sign
an
image
you
know
with
you
know,
let's
say
a
one
key
or
a
and
one
certificate,
or
with
a
set
of
keys
and
keyless
things
like
that.
So
previously,
kivernor
had
some
limitations
in
1.6
in
terms
of
allowing
the
flexibility
of
these
multiple
adjusters
and
now
I'll
show
you
a
couple
of
examples.
Why
that's
important?
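A sketch of the 1.7 `attestors` structure with a key entry plus a keyless entry — the image pattern, key, subject, and issuer are all placeholders:

```yaml
      verifyImages:
        - imageReferences:
            - "ghcr.io/myorg/*"            # placeholder image pattern
          attestors:
            - count: 1                      # any one matching entry satisfies this attestor
              entries:
                # option 1: a static Cosign public key
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <your Cosign public key here>
                      -----END PUBLIC KEY-----
                # option 2: Sigstore keyless, tied to an OIDC identity
                - keyless:
                    subject: "https://github.com/myorg/*"
                    issuer: "https://token.actions.githubusercontent.com"
```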
B
The attestation format is in-toto, a CNCF project that is also focused on software supply chain security and on managing metadata for images or any other artifact. What I'll show is how you can attach almost anything as an attestation, from software bills of materials on.
B
We saw the example of a code review, and you could even attach vulnerability scan reports and things like that as signed attestations, and update these for your image. That creates a very powerful use case, because now you can periodically check and see which images might not be compliant with new vulnerabilities or with any changes in your environment. There are other changes in 1.7 to this image verification rule as well.
B
Previously, if you used a glob or a wildcard, certain images would be matched but other images would not be checked. Now you can very easily say that every image in your cluster needs to be verified before it's allowed to be deployed.
B
You
can
also
now
control
on
a
granular
basis
how
you
are
verifying
digest,
so
you
can
enforce
a
global
policy
which
says
that
every
tag
must
be
converted
to
a
digest
before
you
know
it
is
admitted
and
caverno
can
do
this
in
a
couple
of
different
ways.
It
will
you
know,
leverage
the
signing
and
cosign
for
that
or
it
will
fall
back.
B
If
that's
not,
you
know
specified
in
the
policy,
it
can
also
fall
back
to
do
an
oci
look
up
and
get
the
digest
and
make
sure
that
the
tag
is
replaced
by
the
digest
during
admission,
and
that's
so
that's
the
second
part.
The
mutate
digest
right.
So
all
of
this
kind
of
leads
to
a
lot
of
flexibility
and
a
lot
of
interesting
scenarios
and
use
cases
that
you
could
now
create
and
apply
in
terms
of
your
governance
and,
overall,
you
know
security
posture
that
you
want
right.
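Those controls appear as fields on the 1.7 rule — a sketch, with the key lookup as a placeholder:

```yaml
      verifyImages:
        - imageReferences:
            - "*"                  # match every image in the cluster
          required: true           # images that cannot be verified are rejected
          mutateDigest: true       # rewrite tags to digests at admission time
          verifyDigest: true       # insist that a digest is present after mutation
          attestors:
            - entries:
                - keys:
                    # placeholder: key resolved from a ConfigMap lookup
                    publicKeys: "{{ keys.data.production }}"
```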
B
So say you have, for example, a global key — maybe per cluster, or shared across clusters. In this case we called it production, and the policy requires that, first of all, your image is signed with that production key; in addition, I want to make sure my image is also signed by a namespace-specific key. By the way, if you noticed: because Kyverno uses an OpenAPI v3 schema, all of the help and completion is available in VS Code.
B
It
makes
it
super
easy
to
kind
of
look
at
policies
and
understand
the
structure.
So
here
you
know
going
back
into
the
policy.
We
can
also
start
from
the
top
there's
a
few.
You
know
sort
of
global
settings
that
we're
putting
in
here
we're
matching
every
pod
and
then,
as
we
match
a
pod,
we're
saying
you
know,
pull.
B
First
of
all,
I'm
going
to
again
look
up
data
from
a
config
map
and
then
I'm
going
to
verify
any
image
that
matches
this
pattern
over
here
and
I'm
going
to
do
two
things
to
that
image
of
the
first
thing
I
want
to
do
is
check
that
that
you
know
it's
signed
with
my
production
key
and
then
based
on
the
namespace.
That's
in
the
inbound
request,
I'm
going
to
look
up
from
my
config
map,
another
key
and
make
sure
that
that
image
is
signed
by
that
right.
B
So I have this ConfigMap with production, app1, and app2, and I have my three keys. As I add new environments and new apps, I can have more keys added to this ConfigMap and managed through CI/CD, in a very Kubernetes-native manner. Now you can dynamically control not only that everything is signed by your common, global key, but also that certain images can only go into certain namespaces.
B
I want to make sure I have the latest versions, so I'll actually delete these policies and then add them back. I delete those, and then let's apply this one — multi-attestors is the policy that we just looked at. That's what I added, and if I now look at it, the policy should show that it's in enforce mode and that it's ready, and that it doesn't do background scans. That's configurable; I just set it to false.
B
It
could
be
set
to
true
so
now,
with
this
policy
in
place.
If
I
do
a
coupe
cuddle,
you
know,
and
let's
say
I
want
to
run
app
one,
but
let's
say
I
first
thing
I
do
is
I
don't
specify
a
name
space
right
so
immediately.
What
caverno
is
saying
is
that
hey
there's?
No,
you
know
key
for
default,
so
you
can't
run
this
because
I
can't
verify
this.
You
told
me
that
you
need
two
public
keys,
but
I'm
not
able
to
look
up
the
second
key,
so
I'm
not
going
to
allow
this
right.
B
So
that's
why,
given
a
block
that
if
he
did
not
specify
a
namespace,
so
let's
see
now
what
happens?
If
I
specify
you
know
an
incorrect
namespace
so
notice
over
here,
I'm
running
app
v1,
which
I've
signed
with
my
key
for
that
application
and
the
production
key
and
I'm
running
it
in
in
namespace
app2
right.
So,
ideally,
what
I
would
want
to
see
here
is
that
kiverno
actually
detects
that
and
says
that
hey
you
pat
you
have
your
production
key,
but
notice.
B
Here
it's
saying
entries
one,
and
if
we
go
back
into
a
policy,
we
can
correlate
that
to
saying
entries.
One
is
so
it's
indexed
by
zero,
so
entries
one.
Is
this
namespace
key
right,
so
it
could
not
verify
using
that
key,
although
it
passed
entry
zero
and
it
was
allowed
with
that
right.
So
this
is
an
example
again,
where
now
you're
enforcing
that
specific
applications
can
be
signed
with
like
a
global
key
as
well
as
a
group
key
or
a
team
team
based
key
right.
So
just
to
kind
of
finish
that
use
case.
B
Might
you
know
what
one
common
thing
we're
seeing
with
customers
as
we
work
with
several
organizations
on
this?
Is
that
maybe
they
want?
You
know
each
of
their
environments
in
their
pipeline,
like
perhaps
they
want
a
different
key
for
dev
test
and
one
for
staging
and
one
for
production
right.
So
if
the
production
team
has
signed
off
or
if
your
sre
team
has
signed
off,
then
the
image
gets
signed
with
the
production
key.
Otherwise
it
has
the
staging
keys,
but
it's
not
allowed
in
production
right.
B
It's just demo-java-tomcat under my name, jimbugwadia, on GitHub, and I want to show you the pipeline; I was just playing around with different things here. Through GitHub Actions, what it's doing is building an image from the Java app, scanning the image, generating an SBOM, and then signing all of this as attestations and uploading the data, right?
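As a sketch, such a pipeline could look roughly like the GitHub Actions workflow below: build and push the image, scan it, generate an SBOM, then sign everything keylessly as attestations. The tool invocations and predicate types are assumptions for illustration (the demo repo's actual workflow may differ), and the image reference is hypothetical.

```yaml
name: build-sign-attest
on: push
permissions:
  id-token: write      # required for keyless (OIDC) signing
  packages: write
  contents: read
env:
  IMAGE: ghcr.io/${{ github.repository }}:${{ github.sha }}
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build and push image
        run: |
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
      - uses: sigstore/cosign-installer@v2
      - name: Generate SBOM (CycloneDX) and scan report
        run: |
          syft -o cyclonedx-json "$IMAGE" > sbom.json
          trivy image --format json --output scan.json "$IMAGE"
      - name: Sign image and attach attestations (keyless, via GitHub's OIDC token)
        env:
          COSIGN_EXPERIMENTAL: "1"
        run: |
          cosign sign "$IMAGE"
          cosign attest --predicate sbom.json --type https://cyclonedx.org/bom "$IMAGE"
          cosign attest --predicate scan.json --type custom "$IMAGE"
```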
B
So
all
of
this
I
can
do
just
for
github
actions
and
because
github
actions
has
the
oidc
support,
which
was
a
really
nice
feature
that
was
introduced
recently.
All
of
this
can
be
done
in
a
keyless.
You
know
manner,
which
means
I
don't
have
to
configure
so
github.
We
know
when
we
trust
the
identity
of
it
and
we
can
then
rely
on
that
identity
in
a
policy
right.
So
let
me
show
you
what
that
policy
looks
like,
so
the
one
I
want
to
check.
Is
this
one
with
attestations
right?
B
So
let's
expand
the
screen,
so
there's
there's
more
yaml
here,
but
there's
some
pretty
interesting
things
we're
checking
in
here
right.
So
not
only
am
I
checking
that
this
okay,
so
the
image
I'm
matching
is
demo
tomcat
the
same
registry,
I'm
checking
the
subject
that
was
used
because
I'm
using
keyless
signing
it
was
the
actual
workflow
that
created
my
image
right.
So
I
can
precisely
identify
the
workflow
and
I
can
make
sure
that
it
was
signed
using
github.
So
if
I
trust
github,
I
want
to
make
sure
that
you
know
the
issuer
of
that.
B
You
know
that
the
the
certificate
that
is
embedded
in
the
image
is
github
and
I
can
check
even
down
to
the
sha.
So
this
is
the
commit
id
of
my
workflow,
the
workflow
name
I
can,
and
if
the
workflow
comes
from
a
different
repo
like
your
global
repo,
I
can
check
that
right.
So
all
of
this
verifies
now
that
the
image
is
you
know,
trusted
and
once
I
do
that
and
notice,
I'm
not
using
any
any
certificate
or
key
here.
B
I'm
using
this
keyless
option
to
trust
this
image
right
and
once
I
do
that,
I'm
checking
a
few
other
things.
I'm
checking
that
that
image
has
an
s-bom
in
cyclone
dx
format,
I'm
checking
that
the
image
was
scanned
here
I
happen
to
use
trivi
as
the
image
scanner
and
in
the
scan.
I
want
to
make
sure
that
the
scan
was
done.
You
know
in
in
the
last
15
days,
so
I'm
enforcing
that
and
then
I'm
checking
that
for
the
score
right.
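Pieced together from this description, the verification rule might look roughly like the sketch below. The subject and issuer values, predicate types, and JMESPath field names all depend on how the attestations were actually produced in the demo, so treat every one of them as an assumption and consult the Kyverno verify-images documentation for the real schema.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-attestations
spec:
  validationFailureAction: enforce
  rules:
    - name: verify-tomcat-image
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/jimbugwadia/demo-java-tomcat*"
          attestors:
            - entries:
                # keyless: trust is rooted in GitHub's OIDC identity, not a key
                - keyless:
                    subject: "https://github.com/jimbugwadia/demo-java-tomcat/.github/workflows/*"
                    issuer: "https://token.actions.githubusercontent.com"
          attestations:
            # require an SBOM in CycloneDX format
            - predicateType: https://cyclonedx.org/bom
              conditions:
                - all:
                    - key: "{{ bomFormat }}"
                      operator: Equals
                      value: CycloneDX
            # require a recent scan with an acceptable score
            - predicateType: cosign.sigstore.dev/attestation/v1   # hypothetical type for the Trivy report
              conditions:
                - all:
                    - key: "{{ time_since('', '{{ metadata.scanFinishedOn }}', '') }}"
                      operator: LessThanOrEquals
                      value: "360h"   # 15 days
                    - key: "{{ score }}"   # hypothetical field carrying the scan score
                      operator: LessThanOrEquals
                      value: 10
```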
B
So
all
of
this
can
be
allowed
and
if
you
for
those
of
you
who
might
be
paying
close
attention,
you
might
have
noticed
here.
I've
allowed
10,
which
is
actually
not
a
good
thing
to
do,
but
takers
10
is
the
you
know,
sort
of
highest
or
the
most
relaxed
score
that
I
did
that,
because
my
image
has
vulnerabilities
and
I'll
show
what
happens
if
I
switch
this
to
a
lower
score
right.
So
with
all
of
this
now.
A
There's
an
audience
question
as
well,
so
is
there
an
easy
way
to
write,
slash
test
policy
without
spinning
up
okay,
3d
or
something
to
run
kubernetes,
for
example,
in
a
pipeline.
B
Yeah
absolutely
so
caverno
has
a
command
line
tool
which
allows
exactly
that
and
it
allows
you
know
you
to
test.
You
can
even
write
like
unit
test
cases.
You
can
have
inputs,
outputs
success,
failure
cases.
All
of
that
so
check
out
the
command
line
too,
and
the
command
is
given
like
coop
cuddle
cabernet,
and
then
you
would
do
test
and
specify
your
test
cases.
There.
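For example, a test definition for the CLI might look like this; the file layout and the policy, rule, and resource names here are hypothetical. Running `kyverno test .` in the directory evaluates each sample resource against each policy and checks the expected results, with no cluster needed.

```yaml
# kyverno-test.yaml
name: verify-image-tests
policies:
  - policy.yaml          # the policy under test
resources:
  - resources.yaml       # sample Pods to evaluate
results:
  - policy: multi-attestors
    rule: check-keys
    resource: signed-pod
    kind: Pod
    result: pass
  - policy: multi-attestors
    rule: check-keys
    resource: unsigned-pod
    kind: Pod
    result: fail
```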
B
All right, so continuing with that demo flow. What I'm going to do is run this version of that Tomcat image, which was built with all of the attestations and so on, and my policy is going to check again for the scan report and the SBOM, as well as the signature verifying that it was built using that GitHub Action, right? So, actually, oh, my other image policy kicked in, so let me just delete that policy.
B
Let's
delete
that
again
and
then
we
will
check
for
attestations
over
here
and
now
we
will.
Let's
try
that
one
more
time
right
so
in
this
policy.
What
I
would
expect
is
now
because
it's
checking
just
for
the
tomcat
image
and
just
to
show
the
policy
again.
We're
gonna
check
the
identity
of
the
image
and
then
we're
going
to
check
for
certain
attestation
data
in
here
and
what
I'll
do
is
once
this
runs.
B
Which is what we really want to see here. I'll just delete the pod, and if we run it again, at this point it would have created the pod. So now that I've changed the score, I'm just going to reapply that policy, and let's see what happens if I try the same thing again. In this case, it should block it if the image doesn't comply with that score; if I recall correctly, it did have some high severity vulnerabilities, which were flagged in the latest run, and sure enough.
B
It's
saying
that
because
of
the
trivia
aqua
secure
scan
which
which
came
in
and
it
you
know,
reported
those
new
vulnerabilities,
it's
been
blocked
right
so
again,
simple,
but
powerful
example
of
how
you
can
you
know,
integrate
these
type
of
scans
and
these
type
of
different
attestation.
Data,
like
you,
could
also
check
within
the
s-bom
and
s-bombs,
tend
to
be
fairly
large,
but
it's
they're
all
in
json
format,
and
let
me
show
you
very
quickly
so
you're
with
every
build
we're,
including
a
scan
report,
an
s-bom
and
the
provenance
data
right
so
with.
B
If
I
kind
of
go
into
this
s-bom,
it
is
in
cyclone
dx
format
and
it
will
be
in
a
json
format.
Data
which
will
you
know,
show
me
exactly
so.
This
was
built
using
sift.
It's
showing
me
the
container
data,
so
you
can
verify
all
of
this,
including
which
packages
again
you
want
to
allow
where
the
dependencies
are
and
check
this
in
the
policies
right.
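A sketch of that idea: since the SBOM attestation is just JSON, a rule can reach into it with a JMESPath expression, for example to reject images whose CycloneDX SBOM lists a GPL-licensed component. The field path follows the CycloneDX component/license structure, and the license IDs and operator usage are assumptions for illustration.

```yaml
          # fragment of a verifyImages entry; assumes a CycloneDX SBOM attestation
          attestations:
            - predicateType: https://cyclonedx.org/bom
              conditions:
                - all:
                    # fail if any component declares a GPL license
                    - key: "{{ components[].licenses[].license.id }}"
                      operator: AllNotIn
                      value:
                        - GPL-2.0-only
                        - GPL-3.0-only
```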
B
You
can
also
check
for
certain
licenses
right
here,
for
example,
if
you
don't
want
to
allow
gpl
write
a
policy
for
it
right
and
you
will
immediately
know
if
any
images
were
built
using
an
s-bomb
or
which,
with
any
package
which
depend
on
it
on
that
license
all
right.
So
that's
all
we
had
prepared
to
demo-
and
you
know
just
kind
of
in
conclusion-
certainly
check
out
you
know.
If
you
haven't
tried,
caverno
definitely
go.
B
Try
it
out
if
you
go
to
kievano.io
like
chip,
was
saying
or
go
to
our
you
know,
kind
of
github
page
there's
a
lot
of
information
there,
the
documentation,
you
know.
If
you
just
go
to
documentation
and
the
introduction,
it
will
explain
all
the
basics
there's
a
getting
started,
which
will
you
know,
kind
of
help
you
just
with
the
in
installation,
there's
a
helm
chart
also,
if
you're
kind
of
just
in
installing
on
your
local
cluster,
you
can
use
the
yaml
approach.
So
it's
a
one
line
to
kind
of
do
that.
B
You
know
just
kind
of
from
if
you
are
not
using
any
admission
controller
and
if
you
have
production
kubernetes,
definitely
make
sure
that
you,
you
know,
figure
out
how
to
install
your
pod
security
policies
either
through.
You
know
the
like
kivano
and
there's
built-in
there's
a
good
library
of
pod
security
policies.
A
Great, really great presentation, and great that we had a lot of questions already that we answered throughout the presentation as well. So we have a few minutes for final questions; this is actually also the last call for questions for today. Let's see if the audience has anything more to ask; there has been a lot already. But thanks for the great presentation, it was really nice.
A
Yeah,
of
course,
so
let's
see
if
any
any
questions
come
up,
but
then
before
we
see
if
there
is
any
from
the
audience,
I
would
like
to
maybe
ask
you
what
is
the
most
common
question
that
you
guys
get
about
the
cure
burnout
project.
B
I
think
it's
comparing
you
know
like
some
of
the
comparisons.
The
trade-offs
like
like
we
saw
from
the
audience
right.
Of
course,
those
are
fair
questions
and
good
to
start
out
with,
and
one
of
the
others
that
we.
C
That
we
tend
to
hear
a
lot
is
well
this.
This
looks
great
for
you
know
if,
if
you
have
very
simple
needs,
and
also
if
you're
just
operating
on
your
your
core
kubernetes
constructs
like
pods,
but
it
can
it
do
anything,
that's
more
complex
and
can
it
work
on
custom
resources?
The
answer
to
both
is
absolutely
yes
and,
as
you
saw
from
some
of
those
policies,
you
know
there
are.
C
Cases
that
are
there
but
they're
still
accomplished
in
a
few
a
few
more
lines
of
yaml
and
it
is
all
gamma
there's
no
programming
language
to
expose,
so
caverno
works
the
same
way
on
custom
resources,
the
way
as
it
does
on
existing
resources
and
there's
no
difference.
So
if
you
want
to
write
a
policy
that
operates
on
a
pod,
and
you
want
to
write
a
policy
that
operates
on,
let's
say
you
know:
a
certificate
from
cert
manager,
you're
using
the
same
style
you're
using
the
same
language
as
the
same
constructs.
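For instance, a validate rule against cert-manager's Certificate custom resource uses exactly the same constructs as a Pod rule; only the matched kind changes. This is an illustrative sketch, not a policy from the session.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-cert-duration
spec:
  validationFailureAction: audit
  rules:
    - name: check-duration
      match:
        any:
          - resources:
              kinds:
                - Certificate   # cert-manager CRD, matched like any built-in kind
      validate:
        message: "Certificates must set an explicit duration."
        pattern:
          spec:
            duration: "?*"      # any non-empty value
```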
A
Yes,
there's
three
that
came
very
last
minute
of
the
week.
We
can
try
to
make
it
through
all
of
them.
Let's
see
so
when
should
a
community's
users
start
thinking
about
policies.
B
So, by default, I believe it's around 40 to 50 MB in terms of memory. Of course, as you scale the cluster, you will need more, and one of the things we will do in the docs is have some best practice guidelines. We have tested with clusters which have even hundreds of namespaces and thousands of resources, or clusters which have thousands of pods.
B
So
it's
not
that
kiverno
breaks
it's
just
that
you
need
to
be
aware
of
the
behavior
of
admission
controllers,
especially
if
some
they're
kind
of
writing
on
each
other's
resources
right.
So
you
can
have
multiple
admission
controllers,
just
be
aware
of
what
they're
doing.
C
That's a general Kubernetes concern; it isn't restricted or somehow endemic to only Kyverno. Whenever you're running admission controllers in general, and you have instances where your cluster may be down, you have the same problem. So Kyverno is an admission controller; it isn't exempt from any of those, although we are taking additional steps, both on the documentation side and also on the Helm chart side in the upcoming 1.7, to make it even easier to prevent you from accidentally doing things like that. So look for those enhancements coming soon.
A
Great
really,
really
we
speed
through
the
other
three
questions.
So
perfect,
we're
good
on
time,
but
the
discussion
can
continue
in
the
cloud
native
live
slack
channel
as
always,
if
there
is
anything
more
but
that's
it.
So,
let's
start
wrapping
up.
So
thanks
everyone
for
joining
the
latest
episode
of
cloud
native
live.
It
was
great
to
have
a
session
about
protecting
software
supply
chains
using.