From YouTube: CNCF Security TAG 2021-07-07
A
And for existing members, go ahead and make sure that you list your name as well. You'll notice some changes to the doc: we're asking everybody, as a reminder, to put your company name, along with any updates for any working groups or other memberships that you may have within the CNCF. It helps new members, as well as existing members, make connections.
D
Good day, Emily. A quick question before things get underway: I had previously spoken with Ash and Brandon about joining the TOC meetings twice a month and just sort of taking notes and checking in here. It's pretty much just, in this case, a quick copy-paste of what happened the other day. Would that be relevant for today's check-in or no?
A
That's fine. Try to keep them on topic to the presentation. We've got a lively bunch of contributors here, and we don't want to derail or take away any time from all of the other fabulous work that we have going on.
A
Awesome, thank you, Shapad. Okay, and then for existing members and working group reps: remember to include your organization and company, along with the working group you're involved with, in the update. Because we have a presentation today, we won't be reading out updates. So, some quick announcements to go over: the CFP is open for CloudNativeSecurityCon through July 25th.
A
As a reminder, this is a hybrid event, so you can go in person to Los Angeles or attend online, virtually. If you're interested in the cloud native serverless security paper, the North America meeting coordination poll is available in the tag-security serverless white paper Slack channel. If you're part of the Supply Chain Security working group, which is developing a reference architecture: there are some user stories that are being broken out into more defined requirements.
B
All right, just in terms of time: do we have about 15 minutes? 30 minutes? How much time should I plan for?
B
I'm a co-chair in the Policy working group and also contribute in the Multi-Tenancy working group, so happy to come and present and talk a little bit about what we're doing with Kyverno. Today I'll just go through a few background slides, we'll also look at a live demo of how Kyverno works, and we'll leave some time for Q&A. So first off: why policies? Why does any of this matter?
B
Of course, and I think some of this will be very familiar. As folks approach Kubernetes, we know the two main personas, the two main roles: your operators, the cluster admins, and then the developers and other product teams using the cluster. And policies, within configuration management, can serve as a contract to bring together these two roles.
B
A lot of the complexity that is usually discussed around Kubernetes comes from declarative configuration management, which tends to be quite verbose and quite complete in what it can do. Policies can help in aligning and managing some of that complexity, to make things easier for developers and users of the cluster, but also, importantly, to give operators a way to manage common configurations, to enforce defaults, and to apply configuration security across their clusters.
B
So that's, I guess, the "why" for policies itself. Then, just quickly introducing Kyverno: we'll talk about some of the key concepts and the motivations behind Kyverno, and then go into the features. The whole goal of Kyverno, as we started the project, was to design a policy engine specifically for Kubernetes and Kubernetes resources. Having that focus provides a lot of flexibility in adapting Kyverno to work really nicely within Kubernetes.
B
It leverages all of the underlying patterns and idioms: for example, knowing what an owner reference is, or understanding how pod controllers work. Also, in terms of the overall experience for both of the personas we've talked about, developers and operators: making this very native and familiar to Kubernetes, using tools like kubectl, Kustomize, etc., which are available to Kubernetes users.
B
The whole idea is to make it simple to apply and manage policies, and also to get policy results within clusters, and to help secure and manage Kubernetes configurations in an easier manner. By the way, one question we often get asked is: what does Kyverno mean? Why is it named Kyverno?
B
Kyverno is the Greek word for "govern," so we thought it would be a nice term, fairly applicable to the problem it's trying to solve. All right. One other question which always comes up right away is: well, doesn't OPA do this? Can't we do the same with Rego and Gatekeeper, and with the OPA project itself? And yes, of course, OPA is fantastic for managing policies. But, going back to the previous slide, OPA is more of a general-purpose policy management tool, as I'm sure most or all of you know. The main challenge we were trying to address, as we spoke to users and looked at what Kubernetes admins as well as end users wanted, is making policies a lot simpler to write, to manage, and to apply (I'll talk about some of the differences and features), but also to be able to get results and reports.
B
So here's just a quick comparison of what the same policy would look like in Rego, which is the language that OPA uses, and then how Kyverno defines that same policy.
B
This policy is just validating, within a pod, that the root file system is marked as read-only. Interestingly, if you look at Rego, it is a declarative language, and Kyverno is also declarative, but it matters what you're trying to declare and do. Rego takes the approach of being efficient at processing data sets and JSON data, whereas Kyverno is more geared towards understanding what a Kubernetes resource definition looks like, and making policies as familiar as possible to folks who already know what a Kubernetes pod, or any other resource definition, looks like. The language here is YAML, but what the language is isn't so important: if you're generating or managing policies through external tools, you can use any API, or just the Kubernetes APIs, because the policy itself is a Kubernetes resource.
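As a rough sketch (names and fields are illustrative, modeled on the common Kyverno samples, not the slide shown in the talk), the read-only root filesystem check described here could look like this as a Kyverno ClusterPolicy:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-ro-rootfs
spec:
  validationFailureAction: audit
  rules:
    - name: validate-readOnlyRootFilesystem
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Root filesystem must be read-only."
        pattern:
          spec:
            containers:
              # every container must set readOnlyRootFilesystem to true
              - securityContext:
                  readOnlyRootFilesystem: true
```

The pattern mirrors the shape of the Pod spec itself, which is the point of the comparison: the policy reads like the resource it validates.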
B
So the important aspect here is: there's no separation of the policy definition and the enforcement. Well, the enforcement happens through Kyverno as an admission controller, but there's no separation in defining the policy within Kubernetes: there's no constraint language or anything like that, which you would use with Rego and Gatekeeper to drive parameters. It's all embedded in that one resource, and that resource, just like any other Kubernetes resource, you would manage across your fleet of clusters.
B
You would manage it in your CI/CD pipeline; Kyverno has support for a CLI as well, so you can apply those policies before they hit any cluster, then at admission control, and then, within clusters, Kyverno does periodic background scans to provide validation checks. If you don't want to auto-enforce certain rules, you can validate them inside your cluster itself.
B
So those are some of the key aspects: making policies native resources; defining policies in the same manner, to mimic the structure of what a Kubernetes resource would look like; and simplifying the overall management of policies and policy results across clusters.
B
Some of these points I just covered, but the main other thing to point out: I mentioned some of the features and differences. One of the things we looked at within Kyverno is this: configuration security is, of course, a big use case for policies, but it's not just about configuration security, and there's one other use case that we feel is pretty... excuse me, let me grab a quick sip of water.
B
...where Kyverno can, based on certain triggers (a classic one is when namespaces are created), automatically generate all of the configurations required for that namespace: things like role bindings, network policies, and other defaults for quotas, etc. You can automate that, and that's one example. You can even trigger generate rules based on other object creation, or when objects are mutated, like with labels.
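A minimal sketch of that namespace-automation idea (resource names are illustrative): a generate rule that creates a default-deny NetworkPolicy whenever a Namespace is created.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-network-policy
spec:
  rules:
    - name: default-deny-ingress
      match:
        resources:
          kinds:
            - Namespace
      generate:
        kind: NetworkPolicy
        name: default-deny-ingress
        # place the generated object in the namespace that triggered the rule
        namespace: "{{request.object.metadata.name}}"
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
```

The same shape extends to the role bindings and quotas mentioned above, one generate rule per resource kind.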
B
With things like annotations, you can also trigger Kyverno policies based on that. So that expands the use case of policies beyond configuration security to things like automation. Of course, this all plays into the broader security realm, but it allows easy automation of configurations required for certain sets of users or certain resources. It also allows doing interesting things like: if you create a namespace and it has a label "pci"...
B
...maybe you want a node selector to be injected with certain labels, to put pods or workloads in that namespace on certain resources. Things like that can now be automated through policies, again going back to this concept of using a policy as a contract between dev and ops. And Kyverno supports custom resources: because it inspects all the custom resource definitions, it can understand the structure of CRDs installed in the cluster.
B
It supports all resource types, including custom resources, and will do validation and things like that on those, based on the schema definitions provided within your cluster itself.
B
So, a quick view of how Kyverno works. As you can probably envision, it installs as an admission controller, as a webhook, so any mutating or validating request in the API path will be forwarded to Kyverno based on the policies configured. Internally, Kyverno maintains an in-memory cache of policies; all of those are resources.
B
There's a generate controller, which does the generation of resources based on certain triggers, and there's a policy controller, which is responsible for the reporting and also for maintaining the policy cache, things of that nature. Both of those operate more in the background, whereas the webhook is in the API path itself. Then there's a monitor component, which will automatically create the webhook registrations, the certificates, and the other things required to install Kyverno within the cluster itself.
B
There are some other CRs which Kyverno internally uses: there's the generate request, which is what it uses to trigger the generate rules; there's the policy CR, of course, which is the definition that it watches and manages; and then, on the reporting side, whenever a report needs to be updated, there are change requests created for that. So today, with 1.4, the latest release, Kyverno supports full HA.
B
You can run all of this as multiple replicas, and it does leader election, of course, to coordinate work across its replicas. All of that works fairly easily and in a straightforward manner, in terms of how it gets configured and how you would scale up Kyverno for larger clusters.
B
Just in terms of the physical architecture view: when you install Kyverno (and we'll see this live), it installs into a namespace. It's got, as you can imagine, all of the standard fixtures and resources within Kubernetes.
B
More importantly (and some of this also relates to the conversations John and I are having on the security self-assessment for Kyverno), it will create the right validating and mutating webhooks, and it creates the right roles. These are customizable based on the permissions you want to allow, things of that nature. All of these resources will get installed, and of course, if you're removing Kyverno or configuring Kyverno as an admin, these would be important to note.
D
You've got two questions in chat; I'll read the first one to get going. Okay, so, Coach Maddis: "When you go back to thinking about validation, what if I write a policy that violates security? Where 'violates' means either, one, it breaks existing policies, or two, it violates higher-order constraints, for example a compliance or regulatory objective."
B
Yeah, so that's a pretty interesting question, and I guess it is possible to write bad policies. It brings up another question: how do you know that the policies running in your cluster are trusted, that they are policies delivered from the right sources? One of the things we're investigating (and I think there's some work going on in other communities as well; I had seen a project which was looking at signing Kubernetes resources) is:
B
Is it possible to establish some trust chain for policies itself? To make sure that, at least, there's no sort of malicious policy, or other malware delivered as a policy, within Kubernetes.
B
The other aspect to it is anything inadvertent: let's say you end up writing a policy which accidentally causes some breakage or some other problems. For that, just like with any critical infrastructure or code, it really requires proper testing and rollout, as you would with other resources. So yes, policies of course can do harm.
B
It will be interesting to explore that and see how to restrict what can be done, and to make sure at least the policies that are delivered are, in fact, the ones you expect to be in your cluster itself.
D
And
there's
two
more
coming
in
it
is.
We
were
looking
forward
to
having
you
come
in
and
talk
about
this
there's,
probably
a
lot
of
questions.
What
I'm
gonna
do
is
go
read
this
one.
Then,
let's
keep
going
then
we
can
bring
some
more
at
the
end
as
we
have
time,
but
this
is
sort
of
related
to
what
we're
just
talking
about
around
audit
and
those
policies.
How
do
you
test
them?
You're,
probably
reading
this,
but
I'll
read
it
for
the
sake
of
it.
D
"Is there an audit mode which is able to do either audit versus enforce, thinking about it both from R&D and CI/CD, as well as migration?"
B
Also, the way Kyverno is designed, we have been fairly careful about not impacting existing workloads, but reporting violations on them. In fact, we're now looking at features for when you do want to enforce policies on existing workloads: how do you do that, for example when you're rolling out a new policy?
B
So even if you write a blocking policy, we've tried to make it as foolproof as possible to not impact existing workloads. If a pod gets rescheduled, you don't want to block it if it was working correctly but now happens to violate a new policy. Kyverno will report these and let the admin take action on them.
B
Okay, so yeah, let's continue, and we can pause again for more questions a little bit later. Just to quickly explain the policy structure itself and what Kyverno policies look like: it's fairly straightforward. A policy is a set of rules. Rules are ordered, so they will apply in order. Policies themselves are not ordered, and policies cannot override other policies or anything like that; each one is evaluated completely, in isolation and in an atomic fashion.
B
Once a rule matches (there's a bunch of different criteria to match or exclude rules), each rule will have the logic to either mutate resources, validate resources, or generate resources. You can combine these within a higher-level policy, and policies have certain fixtures, like the validation failure action we talked about, audit or enforce. You can have comments, you can have metadata, things like that, within the policy itself.
B
If they apply, then the policy matches, or the resource is validated. So here we're trying to make sure that runAsNonRoot is set within a pod security context, either at the container level or at the pod level. That's all this policy is doing, and then there's a message and other things that you can add.
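As a sketch of what that pod-level-or-container-level check can look like (modeled on the common Kyverno samples; treat the details as illustrative rather than the exact slide from the talk):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: audit
  rules:
    - name: check-containers
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Running as root is not allowed. Set runAsNonRoot to true."
        # anyPattern: the resource passes if it matches at least one pattern
        anyPattern:
          # either the pod-level securityContext sets it...
          - spec:
              securityContext:
                runAsNonRoot: true
          # ...or every container sets it individually
          - spec:
              containers:
                - securityContext:
                    runAsNonRoot: true
```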
B
In this case, we don't have the validation failure action, but if you add that at the policy level, it will control the audit or enforce behavior. In the validate pattern, there's full support for wildcards, there are operators, things like that, and it works in a fairly logical and reasonable manner. Just within the value field of your YAML, you can combine some of these operations to check that something should be non-empty, and so on. By the way, there's also full regex support, which you can combine with JMESPath expressions, so the policies can get more complex than some of these examples, and I'll show how that can even be done with API server lookups, etc.
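For instance (an illustrative sketch, not one of the talk's slides), a validate pattern can mix wildcards and range operators directly in the value fields:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-resource-limits
spec:
  validationFailureAction: audit
  rules:
    - name: memory-limits
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Memory limits are required and must be at most 2Gi."
        pattern:
          spec:
            containers:
              # "*" wildcard matches any container name;
              # "<=" is a Kyverno range operator applied to the quantity
              - name: "*"
                resources:
                  limits:
                    memory: "<=2Gi"
```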
B
Another example of how to mix in some conditionals. These are mutate policies, and by the way, for mutate, Kyverno supports both JSON patches as well as a strategic merge patch, and that works the same as it would with Kustomize. So if you're familiar with how Kustomize patches work, Kyverno patches work the same way; or, if you prefer, you can use a JSON patch, the RFC 6902 patch syntax. The first example, up top, is showing the JSON patch syntax...
B
...where it's just an operation and the value you want to insert in your patch, and the second example is showing more of an overlay pattern, which Kyverno also supports. That's more declarative, because you're saying: if the port is defined and it starts with "secure", then you want a certain port number. It's a bit of a contrived example, but it shows the conditional logic.
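A sketch of the two mutate styles just mentioned (the label, port names, and numbers are illustrative placeholders, not the actual slide):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: mutate-examples
spec:
  rules:
    # RFC 6902 JSON patch style
    - name: add-label-json-patch
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchesJson6902: |-
          - op: add
            path: /metadata/labels/managed-by
            value: kyverno
    # strategic-merge overlay style with a conditional anchor:
    # (name) means "apply only where a port with a matching name exists"
    - name: set-secure-port
      match:
        resources:
          kinds:
            - Service
      mutate:
        patchStrategicMerge:
          spec:
            ports:
              - (name): "secure*"
                port: 6443
```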
B
The third example on this is showing: if the port is not present, then you want to insert it; but if a value is there, you just leave it alone, don't do anything. That's how you would do some simple conditionals, and there are other ways to do more complex operations, but this is just an example of how mutate would work. And finally, generate is also fairly straightforward. This is showing an example where you have inline data to generate.
B
You can also reference data from existing resources. Let's say you have an existing role defined in your cluster: you can clone that, and you can also tell Kyverno to keep the copies in sync with the clone. So a pretty cool thing you can do now, for managing things like certificates, or even secrets for your image registries: you can write a generate policy to clone these from a common source. Every time a namespace is created, Kyverno will generate the image pull secrets, and when you rotate your credentials, you change them in one place and Kyverno will propagate them to all the existing namespaces where that was cloned.
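A minimal sketch of that registry-secret pattern (the source namespace and secret name are illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-registry-secret
spec:
  rules:
    - name: clone-regcred
      match:
        resources:
          kinds:
            - Namespace
      generate:
        kind: Secret
        name: regcred
        # copy into the newly created namespace
        namespace: "{{request.object.metadata.name}}"
        # keep the generated copy in sync when the source changes
        synchronize: true
        clone:
          namespace: default
          name: regcred
```

With `synchronize: true`, updating the source secret (for example, after rotating credentials) propagates the change to every cloned copy, which is the rotation workflow described above.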
B
So, pretty straightforward, and it works as you would expect for generating and managing resources and resource lifecycles as well. All right, and finally, on the policy report itself: one of the things to quickly mention is that the policy report we're using here is the custom resource as defined in the Policy working group. I see Robert is online as well; Robert and I are the co-chairs there. With the CRD, the custom resource definition:
B
The idea is to have a general policy report which various tools can support. There's work in progress: a kube-bench adapter was completed, there's work in progress for Falco and Trivy and some other tools, and we'd like for other projects to also adopt this policy report, to create a common way of reporting violations and policy results within clusters, independent of which tool is managing the policies. This is just a quick example of what that looks like within the cluster itself.
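For reference, a trimmed PolicyReport object might look roughly like this (the API group/version and exact fields depend on the Policy WG release in use, so treat this as a sketch):

```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: polr-ns-default
  namespace: default
summary:
  pass: 10
  fail: 1
  warn: 0
  error: 0
  skip: 0
results:
  - policy: require-run-as-non-root
    rule: check-containers
    result: fail
    message: "Running as root is not allowed."
    resources:
      - apiVersion: v1
        kind: Pod
        name: nginx
        namespace: default
```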
B
I know we have not much time left, about five more minutes, so just going through some of the use cases. The obvious use cases are pod security, general configuration security, applying fine-grained RBAC and automatically generating roles for various things like namespaces or workloads (Kyverno can enforce that, also based on other conditions), doing things for multi-tenancy, for namespaces-as-a-service, managing labels, and some dynamic configuration, like we talked about, based on various rules. A new use case, and I'll quickly show a demo of this (it's still alpha; it's not yet in one of the Kyverno releases), is that we're integrating with Sigstore Cosign to do image verification, by verifying the image signatures against your configured registries, etc.
B
So with that, let's quickly go to a demo, and then, based on time, we can come back for more questions; certainly I'll also be available on Slack to answer anything else. All right. What I'll do is install Kyverno; I'm just going to go to the Kyverno docs. By the way, our documentation is pretty complete in terms of getting started, getting things going.
B
Actually, in fact, let me start with the image verification demo, just because it's something new and it's pretty cool to look at. What I'm going to do is install Kyverno using this YAML (there's a Helm chart available too, but I'm just going to install it with this), and we'll then install the policy which I have, and I'll show that while it's coming up. The policy I'm going to use is one which works with Cosign.
B
I want to verify that the image is signed with this signature. Of course, if you want to deny everything else you can, but here I'm allowing everything else, and only checking images from this registry. All right, so Kyverno should have been installed; let me just quickly look at what the installation did.
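The verify-images rule being described can be sketched like this (the registry pattern, key, and names are placeholders, not the demo's actual values):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-signatures
spec:
  validationFailureAction: enforce
  rules:
    - name: verify-signature
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        # only images from this registry/repo are checked;
        # everything else is allowed through unverified
        - image: "ghcr.io/example/*"
          key: |-
            -----BEGIN PUBLIC KEY-----
            <cosign public key here>
            -----END PUBLIC KEY-----
```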
B
Okay, so it took this policy. The next thing, and I'll just go back to this document (happy to share that later too): the first thing I want to do is just run a container which has been signed with the private key that matches that public key. That should work, and should be allowed to run within my cluster. The other thing which is interesting is what Kyverno does next.
B
Okay, and we'll try it again; let's make sure that this is okay. Good, all right, so it got created. The other thing I wanted to show, once it's created: this policy is kind of interesting, because it does both a validation of the image based on your public key and, once it validates, it also gets the digest from the Cosign response that's coming back from the registry and mutates the workload with that particular digest.
B
That's why, when I'm looking at my pod definition now, it automatically replaced what I had: I just gave it my registry image, but now it has the digest, which is just an added security check to make sure that the image becomes immutable. So that's a quick demo of what you can do for the image verification piece. The other thing I can quickly highlight is pod security.
B
It's very, very straightforward with Kyverno. If you go to the Kyverno docs, you go to the policies link and you go to pod security: I'm going to apply a number of different policies, and these policies follow the Pod Security Standards. So I'm just going to apply those to my cluster.
B
What it's doing is: it will look at that repo and it will kustomize it, and this customization is just switching the policies to enforce mode. So I'm going to demonstrate what it looks like if we now inject a bad pod, which has some things which are not allowed by these policies.
B
This is the set of policies it will apply, and again, these follow the Pod Security Standards, so you can choose whether you apply them at baseline or restricted, which are the two security levels. Once that's installed, what I'm going to do is this: I like this for demos; there's a site, if you haven't seen it, called Bad Pods, in the Bishop Fox repo. They have some pretty interesting pods which you can use to test different things.
B
What I'm going to do is apply this pod YAML, which is allowing everything; like it says here, there are a lot of problems with this pod definition. We'll go ahead and apply that. By the way, right now, if I look at my cluster policies ("cpol" is the short form for that), it shows me all the policies: we had the image verification and all of these other policies.
B
So let's try and run that pod and see what happens. As expected, Kyverno spits out a bunch of issues with that pod, and most of these are fairly clear: it's telling us don't use host namespaces, don't use hostPath, make sure you're not running as privileged, and runAsNonRoot is required.
B
So those are the four policies it violated, which is what we're showing here. Again, a very quick demo of what can be done. If you go back to the Kyverno site, there are a lot of different samples (about 60-plus, I think, and growing), all contributed through the community, and there's a pod security section which you can use to install, just like I showed, what can be done with this library.
B
Yeah, and thank you, Emily, for that reminder. I did have a slide, and I want to give a shout-out to John; he's been working with me on this. We do have a self-assessment here, and some of the items we're working on are the role definitions and the overall security processes for our project. We're going to create, in the docs itself, a new security documentation section, and we'll add another couple of interesting things which go beyond Kyverno.
C
That was great, very interesting. One quick question I had was around policies: are there plans to have versioned policies for where the Kubernetes API changes? I'm thinking, like, in 1.19 seccomp changed: it went from being an annotation to being part of securityContext.
C
One other one: do you see any customers running Kyverno (and this kind of goes to your point around the attack model for admission controllers) outside of the cluster that's being protected? So, essentially, running it as a workload in another cluster, so that if I'm an attacker and I get access to one node that happens to be running the admission controller, I can't just essentially delete it or bypass it.
B
As an admission controller, of course, it would have to be inside the cluster you're trying to protect. You can operate Kyverno via the command-line tool, so that would be applicable for scanning or validating configurations externally. But running it outside: maybe I didn't completely follow the idea, but it wouldn't help enforce policies within those target clusters.
B
That's interesting; the question there would be the allowed latencies and things like that, because now your webhook is running externally. I think another interesting discussion there would be, and I know that there's another project called Kubewarden which is trying to use Wasm, to see how that can help with security.
B
But that's certainly worthy of discussion. I think the question becomes: can you tolerate that external call and its latencies within each admission request, and how would that scale?
B
Okay, yeah, one interesting thing (I've just been thinking about this; I haven't really explored it in any detail): today, admission controllers are supported as webhooks. It would be interesting if there were a more native mode to run tools like Kyverno, using Wasm, just like can be done with Istio, for example, or with other solutions, and to have policy controls embedded.
A
That's a great idea, Jim. If anybody is actually interested in leading that effort, I would recommend a proposal be submitted as an issue within the Security TAG repo; that way we can determine who's interested and maybe get that added as a project in the future. Or, if not interested in leading, maybe make a suggestion in the issues and we'll see if we get interest later on.
B
Yes, so what we recommend, at least for our customers at Nirmata, where we're running Kyverno: they do also run Kyverno in the CI/CD pipeline, just to validate and give feedback as part of the build tools itself. Developers then see the violations there before anything goes into any cluster; it's just a quicker way to give that feedback to somebody submitting a YAML. So all the same policies can be run, and the only differences are...
B
Yes, for this group, of course: the work that John and I are doing around the Kyverno self-assessment would be one very interesting area. Also, in Kyverno itself, some of the new extensions, like the image verification I demoed.
B
We are on the Kubernetes Slack (and this was confusing to me for a long time): on the Kubernetes Slack we have a Kyverno channel; we're not on the CNCF Slack. So if you want to reach out and interact with the Kyverno community, that's the best place to find us.
C
Amazing. And you run a company that supports the project; we know we often struggle with how to get compensated for doing open source work. So perhaps not the channel to announce that, but I do want to give a shout-out for building a company around an open source project, and encourage folks to check out your website and see if there are any positions open.
A
What other questions does anybody have? This has been a great presentation from Kyverno. I know I have one for them: how did you find the Security Pals process to date? Was the engagement good? Did it allow you to think about security in a different manner? What parts of your process in developing Kyverno have you changed as a result?
B
Yeah, good question. It's certainly been very helpful. Going through the self-assessment and preparing for it raised a lot of questions, at least within our team itself, and working with John has been good. Most of the challenges have been scheduling and timing on our side. And there were a couple of feedback items that John and I discussed which would be interesting.
B
Disclosures, things like that, reporting: rather than having each project invent their own, if there were a template or a standard which most CNCF projects adopt, then it could be done earlier in the project life cycle.
D
I think that's it. To add a bit of color on that: the idea we're thinking of, and I've got it in my notes but haven't sent it back upstream yet, Emily and team, is could we basically put together a sort of stack of documents?
D
So when you come into the CNCF as a new project, a sandbox project, and obviously, as we found with Security Pals, you're not experienced in this area: here's a starting point, here are templates, and then the project can go through and modify those as necessary.
D
I think the standard comment we've had, and we've been talking about this a little, is that people have time constraints, right. How can we most easily make this both understandable and easier for you to adopt and get into while you're trying to run a project, get it adopted, and, in your case, also run a business? That's the sort of thing we're trying to figure out: how to make this as easy as possible.
A
So standard docs is an interesting topic of discussion. One of the things that the CNCF strives to do is not be overly prescriptive in how individual projects govern themselves.
A
I could probably venture a guess and say that you need to have something along those lines in order to graduate. But I think that's a really good idea, and it would be a prime topic for a suggestion or another project or proposal in the future, something this group can certainly contribute to.
C
Sorry, okay, I was just going to say there are a few open issues that cover this. The one that's being acted on, being kicked off in the next few weeks, starts with the really, really basic: going through and reviewing everybody's CII docs and the security-related things that follow from that, and then maybe creating an automated dashboard. Then we could layer on top of that, where what we could do is say, okay:
C
Well, if you name your files such-and-such, then you'll show up in the dashboard automatically; otherwise you have to submit a PR. Those types of things could, I think, work towards standardization in a way that makes things consumable for the people interested in using the projects. My take on standard docs is actually different: it's not related to project governance itself.
C
It's actually related to the CNCF processes themselves, which have undergone many iterations, be it the incubation process itself or the review process within the SIGs and the TAGs. It feels like whenever we go to find examples, we find 20 different styles and have to know which one applies, because they've all evolved over time and people have taken very different approaches. So even just for the CNCF processes themselves, having standard inbound documents for projects to submit would be a huge help.
C
I also had a question for Jim, if nobody else does. You did a great job of explaining the difference between this and OPA, and your policy expression versus Rego.
B
It's right around there, right. One challenge we saw, of course, and I don't know who, but somebody had expressed this very well: in any kind of interesting domain there are typically three types of solutions. There's a native solution, there's a DSL that's created to solve the problem, and then there's a general-purpose programming language type of solution. I think we're seeing all three play out for Kubernetes policy management.
B
So the challenge we see in the working group is that perhaps the language is not the right thing to standardize, but at least can we standardize things around it? Or should there be something for necessary policies like pod security?
B
Should there be other tools created to know that those policies are enforced and applicable? Those are all great discussion points; I don't know the answer. Language discussions always seem to get into pros and cons, and there are always trade-offs, so it tends to come down to what the different roles prefer. But I totally agree with your idea about how you at least make sure that the right policies are running, and whether there's a tool for that in the working group.
B
What we figured is that the easiest possible thing to start with is perhaps the policy output, the report. That's what we're looking at standardizing first. Or maybe, again, "standardize" might be the wrong term, but at least having a recommendation there to say: hey, here's a common format, see if you can output in this format.
B
Then it's very simple to see any policy result through kubectl or other standard Kubernetes tools, right through the API server.
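The common report format being discussed resembles the Kubernetes Policy WG's draft PolicyReport CRD. A sketch of such a report, viewable through kubectl like any other resource, might look roughly like this (all names and results are illustrative, and the API version may differ):

```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: polr-ns-default      # illustrative report name
  namespace: default
summary:
  pass: 1
  fail: 1
results:
  # Each entry records which policy and rule produced which result
  # for which resource, regardless of the engine that evaluated it.
  - policy: require-run-as-non-root   # hypothetical policy name
    rule: check-containers
    result: fail
    resources:
      - kind: Pod
        name: nginx
        namespace: default
```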
C
Yeah, and I understand there are people who love YAML or JSON or XML, God forbid, sorry to editorialize there, or an actual programming language. But I think there's an opportunity to standardize on the basic object models, right? If I have something in OPA that says something about a port or an IP address, and something in Kyverno that says the exact same thing.
C
I should be able to compare those things without writing multiple parsers. Likewise, if I'm going to supply really basic policies with my service, which is not a security service, then it would be great to express those in ways that either OPA or Kyverno could ingest. That would be normalized rather than standardized.
B
Yeah, that would be an interesting topic. It certainly warrants more discussion and thought in terms of whether it's possible to have the ability to introspect and see what those policies are doing, and to compare the actual outcomes or desired behaviors. Thanks.
A
All right, we've got about three minutes left. Let me just put a quick reminder out there that Kyverno is on the Kubernetes Slack, not the CNCF Slack. So if you're interested in chatting with them about some of these topics, or maybe contributing, go over to the Kubernetes Slack; kyverno is the channel name, and you can jump in and provide contributions.
A
Jim, I would like to thank you so much for your time today, for presenting and for answering all of our questions. As security professionals, we don't often get to have presentations from the community, so this has been a real pleasure. And John Kinsella, thank you so much for working with Kyverno in the Security Pals effort; we're very interested in seeing the results that come out of it. Does anybody have any last-minute questions before we let everyone go?
A
Yep, and some last-minute reminders: CloudNativeSecurityCon and KubeCon. If you have not registered, go ahead and register; details are in the tag-security channel. Thanks again, everyone, and have a wonderful.
A
Unfortunately not; we're working on maybe having virtual codes, but I'm not sure yet, so stay tuned. Thanks, everyone.