From YouTube: What's new with Open Policy Agent Gatekeeper
B: Thanks for joining us today, everyone. Welcome to today's live webinar with CNCF: What's New with Open Policy Agent Gatekeeper. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Jaydip and Nilekh, both software engineers at Microsoft. A few housekeeping items before we get started: during the webinar you're not able to speak as an attendee, but there is a chat box down the right-hand side of your screen. Please feel free to drop your questions there and we'll get to some of them.
B: We'll leave those for the end. This is an official webinar of CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io, under Online Programs; they're also available via your registration link.
D: Cool, hello everyone. We are delighted to have you all here with us for the CNCF webinar. Today's focus is on the Gatekeeper project and the exciting new features that our community has been working on since our last update. I will try to share the new features that we have added. My name is Nilekh Chaudhari; I work as a software engineer at Microsoft Azure, and I'm also involved in contributing to Kubernetes and other projects in the ecosystem.
D: We have a few topics on our agenda for today's presentation. Firstly, we will delve into the Gator CLI and its practical application. Next, we'll explore how we can use the external data feature to interface with various external sources using the provider-based model, and we'll see how it works and how we can use it. We'll then examine the validation of workload resources and how the expansion template works. Additionally, we will discuss the concept of mutation in Gatekeeper as an update.
D: Okay, remember the Agile Bank from the last Gatekeeper webinar? I'm not sure whether you attended the previous webinar or not, but we have this Agile Bank that was using Gatekeeper as a policy engine, and they're back, and they've got some exciting news. As the developers at Agile Bank, we have decided to implement a new policy that requires a valid label on any Kubernetes resource.
D: So if you are trying to deploy any Kubernetes resource, we want to make sure that it has a valid owner label, and guess what: we're going to use the latest features available in Gatekeeper to make it happen. So let's take a deeper dive into the details.
D: We'll first look into the Gator CLI. If you're familiar with Gatekeeper, you may have wondered whether it's possible to test constraints before applying them to your Kubernetes clusters, or how to incorporate these policies into your CI/CD process. For example, you have your existing CI/CD pipeline, and many developers on your team are developing policies; is there a way to test them before we put them into the Kubernetes cluster?
D: Gator helps you do exactly that. The tool allows you to perform shift-left validation testing by verifying policies prior to their deployment to the Kubernetes cluster. You don't have to always deploy policies to the cluster and then see how they behave or how they work, or whether you want to make any tweaks to those policies. With the Gator CLI, you can validate policies with ease and ensure a smoother deployment process.
D: The Gator CLI has many different subcommands, but I want to focus on the two subcommands that are essential for any developer and user of Gatekeeper.
D: These subcommands are called `gator verify` and `gator test`. We'll come to `gator verify` a little bit later. Let's take the example scenario from before, where we need to create a policy that requires an owner label. As a seasoned developer at Agile Bank, I know how to write a policy, and as you can see from the directory structure, I have the constraint template and the constraint in place.
D: If you're familiar with Gatekeeper and its policies: it has a constraint template, which defines the Rego, essentially what the policy is, and then you define a constraint specifying which resources the policy applies to and what parameters it will have. The typical directory structure looks like this, and by the way, the owner-label example that I'm using here is already present in the Gatekeeper library.
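A minimal sketch of what such a template-and-constraint pair can look like, loosely based on the `k8srequiredlabels` sample in the Gatekeeper library (metadata names and the label list here are illustrative):

```yaml
# Constraint template: defines the policy logic in Rego.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          required := {l | l := input.parameters.labels[_]}
          provided := {l | input.review.object.metadata.labels[l]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
---
# Constraint: applies the template to pods and supplies parameters.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: all-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    labels: ["owner"]
```

The constraint's `match` section chooses which resources the Rego runs against, and `parameters` is what the template reads as `input.parameters`.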
D: So we have this directory structure with the policies. However, I still need to check whether the policy that I have written actually works, and this is where `gator test` comes in handy. You can take the policy that you have written, run `gator test` against your resource files, and you will immediately receive an error if the pod doesn't have an owner label.
D: If you see here on the right-hand side, I have the pod YAML, which is about the simplest pod definition there could be, and it doesn't have the owner label. So if you run `gator test` against this resource with the policies that we have written, you will get an error, and we will see the demo in action in a bit.
D: `gator verify` helps us validate whether the constraint template and constraint we have written are correct. As a good developer, it's always important to write tests for your code, right? This is where the test-driven-development side of the Gator CLI comes in handy. If you look on the right-hand side, we have something called a suite.yaml; the suite.yaml file serves that testing purpose here.
D: If you look at it, we define a few different things: what our template is and where it lives, which constraint we are trying to test, and some assertions, such as whether you are expecting violations or not. With this suite.yaml file and `gator verify`, you can make sure that whatever policies you have written are correct.
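A suite file along these lines wires the template, the constraint, and the allowed/disallowed example resources together with assertions (the file names and case names here are illustrative):

```yaml
kind: Suite
apiVersion: test.gatekeeper.sh/v1alpha1
metadata:
  name: required-labels
tests:
  - name: owner-label
    template: template.yaml        # the constraint template under test
    constraint: constraint.yaml    # the constraint under test
    cases:
      - name: allowed-pod
        object: samples/allowed.yaml
        assertions:
          - violations: no         # expect no violations
      - name: disallowed-pod
        object: samples/disallowed.yaml
        assertions:
          - violations: yes        # expect at least one violation
```

Running `gator verify ./...` from the policy directory evaluates every case and reports pass or fail per assertion.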
D: So with `gator test`, you can test whether a resource that you are going to create will be allowed or not. With `gator verify`, you can test whether the policy itself that you have written is correct or not, using the allowed and disallowed examples that we have provided in the directory structure.
D: We can use the `gator verify` command to ensure that the policy we have written is correct. And if you see at the end here, when you run `gator verify`, it will tell you whether it's passing or not.
D: So let's quickly look at a demo of these commands.
D: We'll first see the `gator test` demo and see how it works. As I was mentioning, we have this directory structure with the policies that we have written: `samples` holds the allowed and disallowed examples along with the constraint, the suite.yaml file, as I was mentioning, defines the tests for this whole policy, and the template is nothing but your actual Rego template.
D: Let's see what we have in the resources. There is a pod YAML file, and I intend to deploy this pod into my cluster. I want to make sure that this pod has an owner label, and as you can see, it does not have one, so we should expect an error.
D: Let's see what happens. When we run `gator test` and provide the policy path and then the resource file that we are going to test against, we get an error saying that all pods must have an owner label.
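The pod being tested could be as simple as this sketch (the name is illustrative; the point is that no `owner` label is present, so a violation is expected):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # illustrative name
  labels: {}              # note: no owner label here
spec:
  containers:
    - name: nginx
      image: nginx
```

You would then run something like `gator test -f policies/ -f pod.yaml` (paths are illustrative); gator evaluates the policy locally and prints the violation message from the constraint template without touching a live cluster.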
D: Without the Gator CLI, you would typically have to deploy this policy onto the cluster first and then actually try to create a pod, after which you would get an error similar to this. What Gator allows us to do is essentially shift left and make sure we get everything right even before we deploy the policies.
D: Now let's take another example, where we have a pod which has the owner label. As you can see here, the only difference between the previous YAML file and this one is that it has an owner label; it says the owner is nilekh.agilebank.demo.
D: When we run this resource file against our policies, we should not get an error.
D: That means this particular policy will allow the resource, or pod, that we are going to create in the cluster. This way we can be quite sure that our policy works as intended.
D: Let's quickly also look at `gator verify`. Remember, `gator verify` lets you verify the policy itself: it's mostly about writing tests for your policies and making sure that whatever new policies you're creating have been tested correctly.
D: Again, let's start with the directory structure. It's the same directory structure that we had; let's quickly see what we have in the suite.yaml file.
D: It's the same file that we saw on the slide. What it basically does, again, is let you define what you're going to test, which constraint you are going to test it against, and write some assertions about whether you are expecting a violation or not.
D: Now, if we run `gator verify` against this entire directory, it will tell you whether it passes or not. This is very helpful for defining what the policy will be and checking whether my policy is going to work or not.
D: Cool, so that's all about the Gator CLI, but let's look into one of the other features that we implemented recently. Continuing with the same required-label policy example that we saw earlier, let's explore how we can leverage the external data feature in this scenario.
D: But before we delve into that, let's briefly discuss what external data is. Gatekeeper offers various ways to mutate and validate Kubernetes resources, but in many cases the data involved is either built-in, static, or user-defined. With the external data feature, Gatekeeper can now interface with different external data sources, such as image registries, or any other data that is not available within the cluster, and we do so with the provider-based model.
D: Returning to our previous example: suppose a user specifies an owner label on a resource. We want to validate whether that specified owner actually exists at Agile Bank. The user can say, here is the owner I have specified in the file, but that owner could be literal garbage, garbage.agilebank.demo or whatever, and that garbage user is not an actual, real user.
D: We want to make sure that it is actually a valid user, and since this data is external to the Kubernetes cluster, an external data provider can assist us in validating it. There is no way we are going to have all the users, or all the employees of the company, present in the Kubernetes cluster, or have that data available inside the cluster.
D: If you look at the last line in this example on the right-hand side, that is the line where we invoke the external data provider and provide it with the owners' information. In the line before that, we are grabbing all the different owners that we may have and passing them to the external data provider.
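In Rego, the call described above looks roughly like this. This is a sketch: the provider name, the way owners are collected, and the message are all illustrative, and the response-handling assumes the documented `external_data` response shape with an `errors` field:

```rego
package k8sexternaldata

violation[{"msg": msg}] {
  # collect the owner label from the incoming object
  owners := [o | o := input.review.object.metadata.labels.owner]

  # ask the external data provider to validate the owners
  response := external_data({"provider": "my-owner-provider", "keys": owners})

  # surface any errors the provider returned
  count(response.errors) > 0
  msg := sprintf("invalid owner(s): %v", [response.errors])
}
```

Gatekeeper sends the keys to the named Provider resource over its configured endpoint and hands the provider's verdict back to the Rego rule.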
D: So let's quickly see how it looks in real life.
D: Let's first see what we have in the constraint template. It's the same template that I was showing on the slide earlier, and as you can see here, we are grabbing the owners and then sending the request to the external data provider. This is what our template is going to do when this policy is in effect and being applied against any resources.
D: Let's also quickly look at the constraint that we have defined for this. It's the typical Gatekeeper constraint, where we are saying that the enforcement action is deny and that I want this policy to run against pods, basically.
D: Now, what we're trying to do here is create a pod, and as you can see from the command here, the owner label that we have defined is notnilekh.agilebank.demo. So although we are defining a label, notnilekh is not an actual user or employee of Agile Bank.
D: Ideally, when this policy is executed, it will grab this owner, notnilekh.agilebank.demo, and send this string, this label, to the external data provider. The external data provider will then validate whether or not that user is present in the Active Directory, and if it is not, it will send back an error accordingly. So let's see that in action.
D: And the request is denied, because that owner does not exist in Agile Bank's directory. So this way we are bringing in real information, or validating against dynamic information, for that matter.
D: Now let's try to run the same command with a proper owner label. This time we have the owner as nilekh.agilebank.demo, and we know that Nilekh exists at Agile Bank as an employee, so Active Directory will have that information and it will allow the resource to be created.
D: And there you go. It just prints JSON because I'm running in dry-run mode, but what it says is that it validated that the owner exists, so we are going to allow this resource.
D: There is another real-life example of an external data provider: the Ratify project, which focuses on container image verification. If you know about Ratify, it's a verification framework which makes sure that your container image is signed properly and things like that. So a real-world example could be: whenever you, let's say, try to deploy a pod, and you specify some image.
D
Is
this
XYZ
image
right,
but
you
also
want
to
make
sure
that
that
image
is
signed
or
not
right,
so
the
ratify
external
data
provider
can
come
in
handy
and
this
is
this
provider
is
maintained
by
the
community
and
it's
one
of
actually
several
external
data
providers
that
are
available,
including
there
is
one
for
for
the
cosine.
So
there
is
a
cosine
provider
which
is
also
maintained
by
the
community,
and
all
this
information
is
by
the
way
available
in
our
gatekeeper
website.
D: Go to the Gatekeeper website and there is a section for external data which talks about the various external data providers that are available today. There is also a template on GitHub which lets you create your own external data provider by creating a new repository from that GitHub template repo.
D: Ratify will check whether the image is signed or not, and then it can send the results back to Gatekeeper, saying accordingly that this is allowed or this is not allowed, and things like that. This gives a really nice interface for interacting with dynamic systems, which helps us write meaningful policies, far better policies than what we could do with static data.
C: Good, so next in line is validation of workload resources. A workload resource is basically a resource that creates another resource, like a Deployment or a Job. Gatekeeper can now be configured to reject workload resources that would create a resource that violates a constraint or a policy. For example, we could configure Gatekeeper to immediately reject Deployments that would create a pod that violates a constraint, instead of rejecting the pods.
C: This feature can be enabled via the enable-generator-resource-expansion flag. To achieve this, Gatekeeper creates a mock resource for the pod, runs the validations on it, and aggregates the mock resource's violations onto the parent resource.
C: To use this functionality, we need to create an expansion template that tells Gatekeeper what to mock and which resources to expand into mock resources. Any resource configured for expansion will be expanded by both the validating webhook and audit. Note that this feature will only work if an expansion template is created for a targeted resource that exists on the cluster. For example, the expansion template on the right tells Gatekeeper to expand Deployments and ReplicaSets into pods.
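The expansion template shown might look roughly like this sketch, following the alpha expansion API (the metadata name is illustrative):

```yaml
apiVersion: expansion.gatekeeper.sh/v1alpha1
kind: ExpansionTemplate
metadata:
  name: expand-deployments-replicasets
spec:
  applyTo:
    - groups: ["apps"]
      kinds: ["Deployment", "ReplicaSet"]
      versions: ["v1"]
  templateSource: "spec.template"   # where the pod spec lives in the parent
  generatedGVK:                     # what the mocked resource should be
    kind: Pod
    group: ""
    version: "v1"
```

Gatekeeper takes `spec.template` from the matched parent, builds a mock Pod of the stated GVK, and evaluates pod-targeting constraints against it.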
But there are catches.
C: This is because metadata of that kind won't be present until the resource is actually created, so those kinds of policies cannot be enforced accurately. Now, I am a developer at Agile Bank, right, and why is this feature useful to me? We have talked about the owner-label policy. With that policy in place, suppose I'm trying to create Deployments that will create pods on my cluster that do not have the owner label, and I create those Deployments without any expansion template in place.
C
Gatekeeper
will
accept
those
deploy
deployments
without
any
without
giving
me
any
errors,
but
admission
web
hook
will
reject
the
ports
and
I
would
need
to
look
at
the
deployment
and
replica
set
to
see
what
went
wrong.
But
with
this
feature,
a
nipple
gatekeeper
can
mock
the
port
that
will
be
created
by
the
deployment
that
I'm
creating
and
see
that
the
port
that
is
getting
created
will
not
have
owner
level
and
it
rejects
the
deployment
right
away.
So I don't need to look into the Deployment and ReplicaSet to debug what went wrong. Now let's look at the demo.
C: Cool, so in the demo I'm going to walk through what is there in the gatekeeper namespace. It's all the things that are created with the installation of Gatekeeper: the Gatekeeper audit manager and other pods. We are making sure the expansion feature is enabled with the flag displayed here. This is the constraint template that I'm using, which is the same one we deployed earlier.
C: And this is the constraint that enforces the constraint template, basically indicating that every pod created in the target namespace must have an owner label. Now let's apply those and look at the Deployment. This Deployment here will basically create a pod which will be missing the owner label, and so far we have not created any expansion template that uses this feature.
C: So now let's try to create this Deployment.
C: It was accepted, and it will be in violation of the policy. So now I'm going to check whether the pod is up or not, and as we can see, there are no pods running for this Deployment, so I'm going to describe the Deployment and see what went wrong.
C: I see that the ReplicaSet is scaled up to the desired count, which is one, so I am now looking into the ReplicaSet to see what went wrong, and there I can see the message that the admission webhook denied the request because it was missing the required owner label. So now I'm deleting the Deployment.
C: Cool, so next in line is mutation. This feature allows Gatekeeper to modify Kubernetes resources at request time, based on customizable mutation policies. Mutation policies are defined using specific CRDs called mutators. There are four types of mutators available with Gatekeeper, for different purposes. First and foremost is AssignMetadata, and we will get to the example of it on the right-hand side very soon. AssignMetadata is a mutator that modifies the metadata section of a resource.
C: The metadata section is a very sensitive piece of data, and certain mutations could result in unintended consequences, such as updating the name or namespace. So this mutator has been limited to only modifying labels and annotations, and even there, the AssignMetadata mutator does not have the capability of modifying an existing label or annotation; it only has the capability of adding them. The next one is the Assign mutator. It is useful for making any changes outside of the metadata section, such as setting the image pull policy for all containers.
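As a sketch, an Assign mutator that sets the image pull policy for every container of every pod could look like this (the metadata name is illustrative):

```yaml
apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: always-pull-images
spec:
  applyTo:
    - groups: [""]
      kinds: ["Pod"]
      versions: ["v1"]
  # name:* matches every entry of the containers list
  location: "spec.containers[name:*].imagePullPolicy"
  parameters:
    assign:
      value: Always
```

The `location` path is where the value is written, which is the "intent of the change" described below.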
C: All of these mutators have three different sections that are part of the specification. The first is the extent of changes, which defines what needs to be modified: essentially the match section of the spec. Then there is the intent of the changes, defining where the changes should happen: that is the location part of the spec. And then there are the conditions under which the mutation should be applied.
C: That is the parameters part of the spec, and together those define the whole mutator spec. On the right-hand side, we can see that this AssignMetadata mutator will be applied to any pod that has the nginx label on it, and it will add the label `owner` and assign it the value `admin`.
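The mutator described here can be sketched like so (the label selector and assigned value follow the slide's description; names are illustrative):

```yaml
apiVersion: mutations.gatekeeper.sh/v1
kind: AssignMetadata
metadata:
  name: add-owner-label
spec:
  match:
    scope: Namespaced
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    labelSelector:
      matchLabels:
        app: nginx            # only pods carrying this label get mutated
  location: "metadata.labels.owner"
  parameters:
    assign:
      value: admin
```

If the incoming pod already has an `owner` label, AssignMetadata leaves it untouched, per the add-only restriction above.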
Now let's look at the next slide.
C: This is an example of what happens with the previous mutator in place. On the left-hand side we can see the pod before mutation; there is no owner label there. Then, once you apply this pod with the mutation in place, Gatekeeper will add this owner label.
C: Before we do this demo, let's define the use case: why is this feature useful to me, working at Agile Bank? I'm trying to deploy resources on the cluster, and we have the same owner-label policy: there has to be an owner label on the resources we create. I'm using shared YAML files, and I'm really annoyed that I have to keep modifying those YAML files.
C: Every time I try to create resources, I have to edit them to add this owner label. So now I can use mutation to create my own mutation policies that add the required owner label to the resources that I create. Now let's look at the demo.
C: Cool, so this is the Deployment I'm going to create, which doesn't have the owner label, or rather, which will create a pod that is missing the owner label, so we get rejected.
C: This is the same AssignMetadata that we saw earlier; I'm going to apply it.
C: And I'm going to try creating the same Deployment once more to see what happens.
C: The next one is the pub/sub model. This is a very recent change we made, which allows consuming all of the audit violations using a pub/sub model. Right now, Gatekeeper uses the constraint to bubble up audit violations: you can find the violations on the status of a constraint and get information about which resources are in violation of your policies. But due to etcd limits on how large an object can grow, Gatekeeper is capped at reporting a maximum of 500 violations per constraint on constraint statuses.
With this feature, Gatekeeper is now able to publish all the audit violations over a channel as and when violations occur. Since messages are not stored on a Kubernetes object, there is no cap, and Gatekeeper can bubble up all the violations using this pub/sub model; consumers can subscribe to those violations by subscribing to the channel where Gatekeeper is publishing them.
C: On the right-hand side there is an example of a ConfigMap that essentially provides the information on how to initiate the connection and which provider to use. The ConfigMap on the right says that pub/sub audit should use the Dapr provider, and then, in the config section, it defines all the necessary information to open and maintain the connection used to publish the messages.
messages
onto
the
next
slide.
C: Right now we have the Dapr example here, because we have created a driver that works with Dapr and can be utilized to use Dapr's functionality to publish the messages. But the interface is extensible, so ultimately the solution supports any pub/sub tool, such as RabbitMQ or Kafka; you just need an appropriate driver to use those tools. Let's look at the high-level architecture, which remains the same for all the tools, and see how this feature works.
C: So I have a Dapr runtime running in the Kubernetes cluster, and I have a Gatekeeper audit pod with the right configuration, so a Dapr sidecar is injected into the audit pod. Whenever audit finds a violation, it will publish that violation.
C: The sidecar will publish that violation on behalf of audit to a channel. A subscriber app declares its intent to subscribe to that particular channel, and Dapr injects a sidecar into the subscriber app as well. Whenever there is a message on that channel, the sidecar container will notify the application and forward that message to it, so the subscriber receives all the violations that are published by the audit pod. Cool. So now, what is the format of these violations? What information will the subscriber get?
C: Cool, so I'm going to walk through the whole setup of this pub/sub and what is there in each of the related namespaces. For our purposes, when we are using Dapr, we have four namespaces that are of interest. First and foremost is dapr-system.
C: In dapr-system, the Dapr runtime is running; there are five pods for Dapr's different functionality. Then, in the default namespace, I have Redis running, which is essentially a broker that will act as a queue for messages.
C: Then we have a subscriber subscribing to the channel where audit will be publishing messages, and we can see there are two containers running in the subscriber pod: one is the application container, and the other is the one with the Dapr sidecar. Then, in the gatekeeper namespace, nothing changes, except that Gatekeeper's audit pod is now running with two containers, one of them the injected sidecar, along with a service that is created by Dapr to publish the messages.
C: Cool, so on the right-hand side I am tailing the logs of the subscriber application, and as you can see, right now there is nothing there, but if violations occur and are published, I will see those violations received by the subscriber application.
C: On the left-hand side I'm trying to create the resources: the same policy and constraint that we have been talking about, the owner-label one, and then we will see audit running and us getting the violations in the subscriber in a moment.
C: So yeah, audit ran in the background, and then on the right-hand side we can see that we got the messages, or violations, for the pods that were in violation. Cool, and that was it for this feature. For the next one, I'm going to hand it over to Nilekh.
D: Good, so let's discuss multi-engine support. Kubernetes 1.26 introduces an alpha feature called ValidatingAdmissionPolicy; this feature enables declarative, in-process validation of policies against admission requests. The motivation here is to help us understand when to use what, and what the difference is between the two.
D: What's the difference between when we would want to use ValidatingAdmissionPolicy and when we would want to use Gatekeeper? Firstly, ValidatingAdmissionPolicy is an in-tree, native policy mechanism, eliminating the additional hop required by the typical admission webhooks.
D: This has the benefit of reducing request latency and enhancing reliability and availability. With the absence of the extra hop, you can now implement a fail-closed approach. This addresses a significant issue with admission webhooks, where the requirement of an extra hop can impact requests and results in many webhooks failing open. Ensuring policy enforcement while maintaining cluster availability is crucial, and ValidatingAdmissionPolicy allows you to fail closed without concern about availability.
D: The burden of operation is also reduced, since there is no additional webhook to maintain, and CEL is the embedded language used by ValidatingAdmissionPolicy itself. So that's cool; that's what ValidatingAdmissionPolicy does. Now let's take a look at Gatekeeper. You might be wondering in which scenarios you would actually need to use Gatekeeper.
D: Well, Gatekeeper provides the audit functionality that ValidatingAdmissionPolicy does not. Just now we saw Jaydip's demo, where we even added this pub/sub feature which allows us to swiftly consume all the violations. That functionality is currently missing from ValidatingAdmissionPolicy; in theory, you could go to your API server and examine the logs, but with Gatekeeper audit you can have all the violations conveniently accessible, and as I was saying, pub/sub makes it even more convenient.
D: If you have integrations with audit, you can generate reports and compliance reports for the cluster operator. Another aspect to consider here is referential policies. What do I mean by referential policies? Well, let's say you have a policy that needs to ensure the uniqueness of ingress hosts, for example. Achieving that uniqueness requires examining the incoming request and then comparing it against everything already present in the cluster; this type of policy needs referential data, which Gatekeeper supports.
D: Furthermore, when it comes to external data, as we saw earlier, there is a high probability that you have a data source located outside the cluster. While ValidatingAdmissionPolicy focuses primarily on the data within the cluster, the inclusion of external data provides additional capability for scenarios which require information from external data sources. This expanded functionality offers greater flexibility for your specific needs.
D
Additionally, Gatekeeper not only assists in validating policies but also facilitates mutations. We just saw the demo of how mutation works, and the same goes for the Gator CLI. The Gator CLI enables shift-left validation, as we were seeing, so you can exercise all the functionality that Gatekeeper provides even before you actually deploy those policies into the cluster.
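The shift-left workflow is typically driven by a test suite file consumed by `gator verify`. A minimal sketch, with all file paths and names hypothetical (the Suite schema here follows the Gator docs as I recall them; check your Gator version):

```yaml
apiVersion: test.gatekeeper.sh/v1alpha1
kind: Suite
tests:
  - name: required-labels
    template: template.yaml          # the ConstraintTemplate under test
    constraint: constraint.yaml      # an instance of that template
    cases:
      - name: missing-owner-label
        object: samples/ns-missing-label.yaml
        assertions:
          - violations: yes          # expect at least one violation
```

You would then run something like `gator verify suite.yaml` locally or in CI, long before anything reaches a cluster.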
D
Moreover, OPA offers remarkable capabilities, allowing you to define highly intricate rules that surpass the limitations of Validating Admission Policy. With Rego, you can define types of expressions and conditions that may not be possible with Validating Admission Policy today. Furthermore, numerous community policy libraries are readily available, offering a wide range of pre-existing policies that can be effortlessly deployed in your cluster, and we have the whole Gatekeeper library that you can browse and just use.
D
So this eliminates the need to create policies from scratch. And with Gatekeeper's multi-engine support, you can leverage OPA and other engines, which allows you to write policies in Rego or in other policy languages.
D
So that's cool; they both serve different purposes. But you might still wonder how you can achieve the best of both worlds, or whether there is a way to accomplish that. Next slide, please.
D
So yes, there is a way to get the best of both worlds, and that's where the concept of multi-engine comes into play. We are currently working on this concept to enhance Gatekeeper. As of now, Gatekeeper relies on the constraint framework that we have in OPA.
D
However, the goal is to create an abstraction layer that simplifies the user experience. This abstraction layer allows users to write policies in their preferred language, while the operators deploying those policies follow the same deployment process. In essence, multi-engine enables multi-language, multi-target policy enforcement: you can use languages like Rego or CEL, targeting different platforms such as Kubernetes admission or Terraform, or any other such platform you can come up with.
D
So we believe that this approach can be beneficial, because Gatekeeper and OPA are significantly more mature compared to Validating Admission Policy, which is still in the alpha stage.
D
So the goal here is to bridge the gap and provide a solution that combines the power of Gatekeeper and OPA with Validating Admission Policy. By utilizing Gatekeeper and the Gator CLI, you can not only obtain the audit capabilities but also perform shift-left validation for the new Validating Admission Policy, with no additional cost; it literally comes free with all the existing features that we have in Gatekeeper.
D
So this integration also allows for comprehensive policy enforcement and validation throughout the development process. This is the vision that we are going with: we want to make sure that both of these worlds can come together and coexist, and provide a good end-user experience, whether you're developing policies or deploying them in your cluster for compliance or other purposes.
D
So this gives you an idea of what we are doing in terms of CEL and multi-engine support and things like that. Before we wrap up, we do have some common updates that we want to talk about, so I'll let Jaydip talk about those updates. Okay.
C
Thanks, yeah. So we have some other updates and features to share. First and foremost is namespace exemption by suffix: namespaces can now be exempted based on a suffix, via the exempt-namespace-suffix flag. This is useful when namespaces are in the form of tenant-something, and in cases where you would like to exempt certain namespaces for all tenants. The next one is the OpenCensus and Stackdriver exporters.
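As a sketch of how the suffix exemption might be wired up, the flag would be passed on the Gatekeeper manager container. The flag name follows the talk's description and the suffix value is hypothetical; confirm the exact spelling against the Gatekeeper docs for your version:

```yaml
# Hypothetical excerpt from the gatekeeper-controller-manager Deployment
containers:
  - name: manager
    args:
      - --operation=webhook
      # exempts every namespace ending in "-sandbox",
      # e.g. tenant-a-sandbox, tenant-b-sandbox, across all tenants
      - --exempt-namespace-suffix=-sandbox
```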
C
Then the third one is emitting events in the involved object's namespace. There are now two new feature flags, one each for admission and audit: emit-admission-events and emit-audit-events.
C
With these, admission and audit violations can be emitted as Kubernetes events. The flag is in the alpha stage and set to false by default. There are also other flags that control whether the emitted events should go into the namespace of the object that is responsible for the violation, or into Gatekeeper's namespace; for cluster-scoped resources, it's always Gatekeeper's namespace. And then the last one is the ability to validate sub-resources.
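A hedged sketch of how these event flags could be set together, again on the manager container. The two involved-namespace flag names are as I recall them from the Gatekeeper docs (they were not named explicitly in the talk), so treat them as assumptions to verify:

```yaml
# Hypothetical excerpt from the Gatekeeper webhook/audit Deployments
containers:
  - name: manager
    args:
      - --emit-admission-events=true   # alpha, default false
      - --emit-audit-events=true       # alpha, default false
      # route events to the violating object's namespace
      # instead of gatekeeper-system (cluster-scoped resources
      # always go to gatekeeper-system regardless)
      - --admission-events-involved-namespace=true
      - --audit-events-involved-namespace=true
```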
C
Now, sub-resources such as pods/log or pods/eviction, replica set scale, and node proxy can also be validated with Gatekeeper. And that's it, really.
D
Yeah, thanks, Jaydip. Let us know if you have any questions; we can try to answer them. I would also like to take this opportunity to thank all the Gatekeeper maintainers and contributors who helped implement all these features. Feel free to drop into the Slack channel: we have a Slack channel called gatekeeper on the Open Policy Agent Slack workspace, so feel free to drop in there and say hi.
D
We have our community meetings every week on Wednesdays, so come there and say hi to us, or feel free to open an issue on GitHub.
B
Awesome, thank y'all very much. Everyone still with us: if you have any questions, go ahead and pop them into the chat now for Jaydip and Nilekh.
B
I think you have both put your handles into the chat so that, if anyone wants to follow up with you, they can find you, and I know they're at the end of your slides. If one of you can send me that final deck right after we hop off, I'll make sure it's attached when we load the recording. But otherwise.
B
Thank you so much, Jaydip and Nilekh, for a great presentation, and everyone, join us next time for another live webinar with CNCF. Have a great weekend and rest of your week.