From YouTube: Data Protection Guardrails using Open Policy Agent (OPA)
Libby: And here we go. Hi everyone, thanks so much for joining us today. Today's CNCF live webinar is Data Protection Guardrails Using Open Policy Agent. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Joey Lei with Kasten by Veeam and Anders Eknert with Styra. A few housekeeping items before we get started during the webinar, if we have time. This is an official webinar of the CNCF, and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io, under Online Programs. They're also available via your registration link, and the recording will also be available on our Online Programs YouTube playlist.
Joey: Thank you very much, Libby, and good morning or good evening, everybody, depending on where you are. My name is Joey Lei; I'm one of the product managers with Kasten by Veeam. I'm joined here by Anders Eknert, who's a developer advocate from Styra, and we're going to be talking to you about a new concept called data protection guardrails, which is focused on helping you catch misconfigurations in your data protection environment. So think of backups, think of disaster recovery solutions, and we're going to talk about how to do that.
Anders: Yeah, thanks Joey. Hey, I'm Anders, and I work as a developer advocate for Styra, which is the creator of the Open Policy Agent project. So yeah, today we'll talk about using OPA for the purpose of data protection; I'll be covering the OPA side of things, and Joey will talk more on the topic of data protection. So, just to get started, we'll see here what's going on. Okay. So before we get started on OPA: what is policy as code, or what is even policy? OPA is an open source, general purpose policy engine, but before we dive into what that entails, it's good to remind ourselves what policy is and what policy as code is.
C
So
basically,
policy
is
a
set
of
rules,
and
these
rules
could
be
anything
from
organizational
rules
permissions
and
for
authorization,
kubernetes,
admission
control,
given
rules
on
what
can
be
deployed
or
not,
infrastructure
policy
and
infrastructure
rules.
What
kind
of
what
kind
of
instance
types
should
we
allow?
What
kind
of
security
configuration
and
so
on
build
and
deployment
rules
and
policy
is
something
we're
gonna
cover
today
as
well.
C
We
will
show
how
we
can
use
like
policy
as
part
of
our
cicd
pipeline
to
enforce
things
that
that
apply
to
the
data
protection
space
data
filtering
is
another
item
for
policy
and
there's
so
much
more
so,
basically
anywhere.
You
have
rules,
that's
pretty
much
where
oppa
shines
so
that's
policy,
and
why
would
we
treat
this
as
code?
C
The
simple
answer
is
basically
that
creating
policy
as
code
kind
of
provides
all
the
benefits
of
creating
anything
as
code.
So
we
can
work
with
our
policy,
I.e
our
rules
in
a
collaborative
manner,
so
we
can
work
with
pull
requests.
We
can
test
our
policies
in
isolation.
C
We
can
work
with
tooling
like
an
static
analysis,
linters
and
so
on.
So
no
more
pdf
documents.
Our
policies
should
be
code
just
as
anything
else,
and
when
we
talk
about
policy
as
code,
we
often
talk
about
decoupling,
meaning
that,
just
as
we
can
decouple
storage
from
our
application
and
move
that
into
a
dedicated
database,
we
kind
of
want
to
treat
policy.
C
The
same
way
so,
ideally
our
applications
and
business
logic
should
not
need
to
deal
with
authorization
or
perhaps
not
even
users,
but
that
should
be
treated
by
a
separate
system,
so
that
kind
of
decouples
and
removes
responsibility
from
our
applications
and
into
a
dedicated
entity
like
opa
all
right,
so
that
that's
policy
and
that's
what
a
policy
has
code.
So
what
is
open
so
opa
is
kind
of
an
implementation
for
all
these
ideas.
Anders: It's an open source, general purpose policy engine, and as of February last year it's a graduated CNCF project. It offers a unified toolset and a framework for working with policy across the whole stack. So even if the topic of today is data protection, OPA is general purpose, again; it's meant to cover all of these places where we might need rules.

One thing to note is that OPA separates the actual decision from enforcing it. So when you query OPA for a decision, OPA is going to give you a response, but it's still up to you to do something with that response. That is the actual enforcement, and that is, of course, highly dependent on the kind of context you're working in. One type of enforcement, which we're going to look into later, is, for example, in a build pipeline, where you might want to say that you cannot merge a PR unless the manifest of a resource passes all the policy checks. These policy checks are written in a declarative language called Rego, and that's kind of the glue that binds all these diverse and different kinds of policies together. We'll look into what that looks like later too. So OPA, again, is an open source project.
Anders: It's got a huge community and a big ecosystem of tools. There are currently over 250 contributors to the project and over 70 integrations listed, and that is everything from Java applications, PHP, databases, Kafka and whatnot, pretty much anywhere. There's the Gatekeeper project, which is OPA applied to the Kubernetes admission control space. There are editor integrations for VS Code, IntelliJ and so on. So it's a big ecosystem, and just to summarize the benefits OPA brings, I like this quote from Kelsey Hightower: "The Open Policy Agent project is super dope. I finally have a framework that helps me translate written security policies into executable code for every layer of the stack." So that's kind of what OPA does and what OPA is.
Anders: So if that is what OPA does, how does it work? I think there are two aspects that I tend to focus on, and those are the policy decision model and Rego, the policy language.

If we start with the policy decision model, this is kind of key to how OPA can integrate with all these projects that really weren't built with OPA in mind, and of course they are enormously different from each other. So it's a very, very heterogeneous tech stack here, with everything from Linux PAM modules to Kafka or databases.
Anders: When a request is received in the service, the service passes that request to OPA in order to have OPA make a policy decision, and this policy query is basically just any JSON value. OPA looks at that query and, based on the policy and based on any data it has loaded, it makes a policy decision. This policy decision is also just JSON, so pretty much any service that understands JSON can communicate with OPA.
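As an illustration, a policy query sent to OPA and the decision that comes back are both plain JSON; the field names inside "input" here are made up for the example:

```json
{
  "input": {
    "user": {"name": "alice", "roles": ["developer"]},
    "action": "delete-backup"
  }
}
```

POSTed to a path like /v1/data/policy/allow on OPA's REST API, this would come back as another JSON document, for example {"result": false}; nothing else about the calling service needs to be shared between the two.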
Anders: The next aspect which makes OPA work for all these different technologies, I'd say, is Rego. It's a declarative, high-level policy language which is generic enough to work with pretty much any JSON or YAML data, while still being tailor-made for the policy space, and it allows you to describe policy across the whole cloud native stack. And again, just like a real-world policy, a Rego policy is just a number of rules.

These rules commonly return true or false (should the user be allowed or not?), but they could return any type of data available in JSON, so you could return strings, lists, objects and whatnot. OPA includes a testing framework, so you can run tests directly on your policy, and you can build confidence in your policy just as you would with any other code. It's a well-documented project, so check out the official docs, and there's also a playground, so you can try policy authoring without even having OPA installed.
Anders: So with that, I'm going to hop over here to an editor and just show you the basics of Rego. I hope you can all see my screen here. I'm going to create a Rego file called policy.rego. The first thing to do is to create a package, and this is similar to a namespace in other languages. I'm going to call it policy, and now let's write our first rule.

The first thing we do for a rule is to provide a name; in this case I'm going to name my rule allow. The way rule evaluation works is basically: you have the rule head here, which includes the name, and optionally we provide a return value. So in this case we might want to say that allow is equal to true if all the conditions, or assertions, provided in the body here are also equal to true.
Anders: So if we do something silly like "one is equal to one", yeah, that's obviously true. So if we now go and evaluate the allow rule, we should hopefully see that this rule evaluates to true. I'm going to run opa eval here, which is a simple command line tool to evaluate any value from a policy. I'm just going to say data.policy.allow, and we can format that a little nicer, just going to say pretty, and we can see that, indeed, this is true. So if we change this to something that's not true, we can see that the result is now undefined.
Anders: So when a rule does not evaluate, OPA simply says this is undefined. If we want to ensure that there's some value returned, we can provide a default one: we can say that, by default, allow is equal to false. And now we see that we will always get something back; we have either true or false, so the user is either allowed or not.

What we might want to do here is, of course, have something more realistic. So remember that when we query OPA, we can provide any type of value, any type of JSON document. So I'm going to add an input.
Anders: ...and use "in", which is part of the future keywords, like that. So if the user is in the admin group, or if they have the admin role, this should be allowed. And we can see... oh, right, because we did not provide the input file here. So I'm going to pass input.json, and we can see now that, providing this, OPA evaluates the allow rule and returns the provided return value. And remember here that this could be any JSON value.
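Put together, the policy built up in this demo might look roughly like this; the file layout and the exact shape of the input document are assumptions based on the walkthrough:

```rego
package policy

import future.keywords.in

# Provide a default so a query always returns a defined value
# (true or false) rather than undefined.
default allow := false

# Allow when the admin role appears in the input document.
allow {
    "admin" in input.user.roles
}
```

With an input.json such as {"user": {"roles": ["admin"]}}, this can be evaluated with opa eval --format pretty -d policy.rego -i input.json 'data.policy.allow'.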
Anders: So if we change the groups here, or the roles, let's say this is a developer, and we evaluate this again, we'll see that the decision is now false, because the user is no longer part of the admin group. So that's basically a super simple policy, the anatomy of a policy and the anatomy of a rule, in a few minutes. All right, I'm going to head back to the presentation here.
Anders: Oh yeah, for sure. So pretty much anything that exports anything as JSON or YAML, or that can be transformed to do so, is definitely a viable integration. So for Azure AD, or any Active Directory, yeah, you can definitely do so. I think the way you'd commonly do it is to either provide the data from your Active Directory in an access token or something similar, maybe a JSON Web Token, that is included as part of the query. But you can also provide data beforehand to OPA. OPA has an in-memory store, so you could kind of mirror parts of your Active Directory, with any data that might be relevant for policy evaluation.
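A sketch of what that might look like in Rego, assuming group membership has been mirrored into OPA's data store ahead of time; the data layout here is hypothetical:

```rego
package policy

import future.keywords.in

# data.groups is assumed to be loaded into OPA beforehand, e.g. as
# {"admin": ["alice"], "developer": ["bob"]}, mirrored from the directory.
allow {
    input.user in data.groups.admin
}
```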
Joey: All right, you guys can see that? Okay. So let's talk about the guardrails for data protection. The one thing to learn about data protection is that it's all about the application data itself, and the backups of application data. Historically, data protection and data management solutions have focused on availability: the quick recovery, the disaster recovery, the business continuity of application data. But there's more to it than that. The security attacks these days are very much full-compromise types of attacks, so things like exfiltration, things like ransomware.

The concerns that customers have been raising to us have largely been full spectrum: confidentiality issues, with the theft of data and the privacy of data, and integrity issues, which really affect whether the data the application is presenting is legitimate, or manipulated by an adversary, or corrupted in the case of an integrity attack, in addition to the availability requirements. And so we need to start thinking about the data protection objectives much earlier in the process.

Historically, it's a day-two operation, right? Let production be deployed, let it be provisioned, and hand off that responsibility to a data protection team or an operations engineering staff. There are definitely techniques to get the business to start thinking about the data protection objectives as well, early in advance, early in the dev cycle, and these are a few things that we want to show you now. Historically, there are two ways to control and enforce access to different objects.

We have role-based access control. That's more of a granting and authorization technique, to say: hey, this admin or this unprivileged user can read the PVCs, the stateful data, the services, the secrets. But sometimes we want to actually enforce or deny behaviors, and this is going to be more specific to the data protection objectives.
Joey: So the first example we have is if you're a developer and you're using a technique called GitOps. GitOps is where we check our infrastructure as code into a Git repository, and that represents the source of truth for how a new production environment should be stood up, or a new autoscaled node should be stood up. And that includes, of course, your production infrastructure as code: your StatefulSets, your PVCs, your secrets, your network policies.

You can see in the example we have here on the right, we have some example policies and an example backup target. In that policy you would want to look for things like 3-2-1. 3-2-1 is where we are ensuring that we have multiple copies of data, we've exported it to an offsite cloud location for disaster recovery purposes, it's meeting our compliance for retention (maybe it's seven years), and it has an RPO that actually meets the mission-critical objective. It might be minutes, it might be hourly.

That also means, of course, deploying the right backup policies. In the bottom example here, on the backup target, we've got a bug where the object immutability was accidentally left out. I commented it out here so I can actually show you the bug, but in a lot of cases we might just stand up a backup policy and kind of just forget about these advanced configuration settings, and that, ultimately, is very dangerous, especially in the case of a ransomware attack.
Joey: So how does this work exactly? Let's say you're using a GitOps workflow, and some of you are doing that, because that's great. Say you have a developer who wants to create a new tier-zero, mission-critical data protection policy for their app. They write some prod code; they write their secrets, their policies, their StatefulSets, the provisioning of those PVCs, and they write the data protection policy and the data protection backup targets. They commit it all to a new GitHub repo, a new branch, and then they initiate a pull request.

Once all those reviews are signed off, and once the OPA engine has actually validated that the code passes the policy, once you merge it back into the main branch you're now going to have secured protection and production infrastructure code to deploy to any region, to any new clusters, from day zero and day one. Now, the second way to leverage the guardrails, maybe for more mainstream audiences, is if you have a dedicated cloud team or a dedicated data protection team who is writing this code to hand off to an application developer. And the application developer may not necessarily be a seasoned expert in writing data protection policies, but they don't need to be. They could easily consume, for example by name, our gold standard policy, or our standard bronze or silver level policies. Make it easy for them.
Joey: You would achieve this through the Gatekeeper project, and this is helpful if you want to apply your protections a little bit closer to staging or production, where you actually want the Kubernetes API server to not only do the authorization and authentication according to RBAC, but to add an extra layer that actually ensures that, for example, that backup policy or that backup target is actually meeting the data protection objectives. That's something called admission control, where it's going to send a request to OPA, using the Rego language, and it's going to give a decision that says: hey, this API request, this code, meets or does not meet our compliance requirements.

So let's talk about some example policies you should be thinking about if you're responsible for deploying data protection in your environment and you want to leverage infrastructure as code as a way to accelerate and get to market faster. Here are some of the policies that we recommend deploying. A basic primer on data protection: it usually centers around two things, the recovery point objective (RPO) and the recovery time objective (RTO), to put it simply.
Joey: Usually this is negotiated with a general manager, a business leader, somebody who owns the application. It's very important to get that alignment with your business stakeholder, of course, because this ultimately affects revenue. It affects adoption, it affects retention and churn and the brand loyalty of your customers. So as we're codifying these data protection policies, make sure they align with the business, and that's going to help you when you start to author these directly in code, because you can actually show that back to your stakeholders and say: hey, does this meet your business objectives?

So the first example here is exactly that: it's talking about the recovery objectives. In this pseudocode example we have a backup policy that's tuned to do hourly backups, and that's great for, sorry, mission-critical workloads; a general purpose workload might accept a daily backup, 24 hours.

But in this case we want to also make sure that there's availability of that data, so that we have a copy of that data off on a third-party cloud, array, object store, or NFS target, somewhere that, if production fails, gives you a secondary target, of course. That's what this is looking for. And then, depending on the backup software that you use, you might have different objects you'd want to target. So there are policy objects, scheduling objects, cron job objects, and you're typically going to want to look for that RPO and validate: is it lower than or equal to what you expect from a data loss perspective? You can see it here in the Rego code. Maybe Anders can explain a little bit what the Rego code is actually doing here.
Anders: Sure. Okay, so we have an allow rule here, pretty much like the one I showed in the example before, and in this case we're just going to check the type of the input. In this case the resource is of the type Policy. It's a little confusing, because it's actually a backup policy: this doesn't check a Rego policy, it verifies a backup policy. So we just check that the kind of the request is Policy and the group is backup.io, and on the third line here it references another rule. Rules are composable, so one rule can reference any number of other rules, and if they all evaluate to true, then the allow rule will evaluate to true. So, in this case, allow is only going to be true if the has_backup_policy rule evaluates to true, and the has_backup_policy rule below is kind of simple.

We just take the spec from the input, and I should say here that the input in this case is a Kubernetes AdmissionReview object, which is what the Kubernetes API server sends to OPA for validation. So from the object we use "some action in", and I think we're missing something here, but we're iterating over all the actions provided, and if there's one that is equal to hourly, we say that we have an acceptable backup policy.
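Reconstructed from that walkthrough, the rule pair might look roughly like this; the backup.io group, the spec.actions layout, and the "hourly" value are illustrative and vary by backup product:

```rego
package policy

import future.keywords.in

# Allow only backup Policy resources that schedule at least hourly backups.
# input is the AdmissionReview document the Kubernetes API server sends,
# so the resource itself sits under input.request.object.
allow {
    input.request.kind.kind == "Policy"
    input.request.kind.group == "backup.io"
    has_backup_policy
}

# True when any declared action runs on an hourly frequency.
has_backup_policy {
    some action in input.request.object.spec.actions
    action.frequency == "hourly"
}
```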
Joey: All right. And kind of paired with that, the other most common thing to definitely look for, and this is particularly important because your data is typically a target: the adversaries, especially ransomware operators, are looking for sensitive data that is valuable, that has purpose, that has business value, and they're looking to hold it hostage, of course, by encrypting that data. So ransomware protection is also known as immutability, or air gap, and it really depends on your backup product, and whether you're in a private cloud or a public cloud scenario.

Of course, everything is always online, so they typically have a write-once-read-many object store; S3 calls this an object lock bucket, for example. And the backup software, if it supports object lock, is going to enable writing those backups to that object lock store so that it can create a property of immutability. So, depending on how that code looks, you might have an advanced configuration value.

In this example we have a configuration value called objectLock: true. This might set up the bucket to configure it using object lock and then, of course, send the backups to that same object store. And I think the Rego code here is also pretty straightforward: it's looking for that specific action, or advanced configuration, that says: hey, is this object lock being enabled? If it doesn't find this...
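A hedged sketch of such a check; the Profile kind and the objectLock field path are assumptions about the backup product's resource schema, not a known API:

```rego
package policy

# Deny a backup target that does not enable object lock (WORM) immutability.
deny[msg] {
    input.request.kind.kind == "Profile"  # hypothetical kind for a backup target
    not object_lock_enabled
    msg := "backup target must enable objectLock for ransomware protection"
}

object_lock_enabled {
    input.request.object.spec.objectStore.objectLock == true
}
```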
Joey: The third example is related to the availability of your backups. A common disaster recovery policy would have your production, your copy of production, which is sometimes a storage snapshot, and then the ability to copy that storage snapshot to an off-site cloud location, and that's what we would call 3-2-1.

You can extend that concept further if you want to include the air gap or immutability property, with something called 3-2-1-1-0. And again, it varies by data protection software. If it supports immutability, you can have it check for that. If it supports data verification, verification that those application backups can be restored, you'll of course want to write your policies to ensure that it's able to kick off that test. And again, the Rego here is very straightforward: it's looking for two actions in the actions array.
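One way the backup-plus-copy pairing could be expressed; the action names and the spec layout are illustrative:

```rego
package policy

import future.keywords.in

# A policy that declares a backup action must also declare a
# backup-copy (export) action to a secondary, off-site target.
deny[msg] {
    input.request.kind.kind == "Policy"
    has_action("backup")
    not has_action("backup-copy")
    msg := "Policy resource must include both backup and backup-copy actions"
}

has_action(name) {
    some action in input.request.object.spec.actions
    action.action == name
}
```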
Joey: So if, for example, your data protection software is compromised, or access to your data protection software has been compromised, there's potential for an attacker to use the restore functionality to exfiltrate the data itself. And so, depending on how the data protection software creates jobs, backup jobs or restore jobs, in Kubernetes, they would typically be another resource, and you can actually track that resource using OPA. You can write Rego, for example, that says: hey, for any restore job that's created, let's look at where it's going.

The beauty of the Rego language here is that it's very flexible in how you want to author it, and so in this case, for exfiltration protection, you might want to write a rule that says: if we see a restore job going somewhere we don't expect, shut it down. Don't let that job actually kick off, and that's going to prevent any exfiltration of data coming through the backup application.
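A sketch of that exfiltration guardrail; the RestoreAction kind and the targetNamespace field are hypothetical names chosen for illustration:

```rego
package policy

import future.keywords.in

# Namespaces into which restores may legitimately land.
allowed_namespaces := {"production", "staging"}

# Block any restore job aimed at an unexpected destination, closing the
# restore-as-exfiltration path through the backup application.
deny[msg] {
    input.request.kind.kind == "RestoreAction"  # hypothetical kind name
    target := input.request.object.spec.targetNamespace
    not target in allowed_namespaces
    msg := sprintf("restore into namespace %q is not allowed", [target])
}
```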
Anders: Right, great. Yeah, so now we have some basic knowledge of Rego and what the rules look like, and, as Joey showed, we also know what these kinds of backup policies look like. And again, given how OPA is kind of an agnostic, general purpose framework, we can work with any type of JSON or YAML data.

One thing to note here, though: in this demo repository we have manifests, which are the YAML files that we want to deploy to Kubernetes, and we have some policies, where we have all these rules, like "any backup action must be accompanied by a backup copy" and so on. So we have a bunch of manifests and we have a bunch of policies.
Anders: So, for example, let's check something here, like the ransomware one, where we're checking, as Joey demonstrated, that for any AWS backup target we should deny that deployment if it does not have either an object lock or an air gap type. Or, for the recovery scenario, we're just going to check that for any policy there needs to be a backup. And for the restore job, we want to ensure that the restore can only be performed in one of our allowed namespaces, so we want to protect ourselves from the exfiltration scenario.
Anders: One thing to note here when we look at all these manifests: the format of these manifests is going to be a bit different from what we'll actually be working with when they are presented to us by the Kubernetes API server. The way the Kubernetes API server presents these, again, is in the form of AdmissionReview objects. So we could, of course, spin up a Kubernetes cluster, have OPA installed as the admission controller, and test things that way, but in the context of a CI/CD pipeline we're going to want some form of quicker feedback.
Anders: So in this case I've written a little tool called kube-review, which simply takes any Kubernetes manifest, like a deployment in this case, and turns it into an AdmissionReview object. In that way we can kind of shift the testing left: we create AdmissionReview objects, we can push those into OPA directly in the CI pipeline, and run our tests on those objects.
Anders: The way we've done it here in this demo repository: if we check the PR workflow here, you'll see this applies to pull requests. We're just checking out the manifests and checking out the policies; we download OPA using the setup-opa project, which is basically just a downloader for OPA in the context of GitHub Actions; then we download kube-review and we run the validate script. And of course, if this fails, the PR is going to fail, or it's going to have errors.

If we check the validate script here, there's some scaffolding to check that the kube-review tool is installed and so on. What we do here is iterate over all the manifests in the manifest directory, and for each of these, using the kube-review tool, we create an AdmissionReview object, which is identical to what the Kubernetes API would present to us. So we can run the same policies here as we would in a later deployment step, but we'd rather have fast feedback, and we'd rather not have to run a Kubernetes cluster in the build step.
Anders: We will obviously want that later, in the deployment, but for quick feedback and quick iteration we just want to run something to verify that the manifests are fine. So again, you might remember the opa eval tool, which is a simple way of just evaluating a policy. What we do here is, for each of these manifests, we pipe that into opa eval, and we run that over all the policies in the policy directory. In this case we have one rule that kind of aggregates all the results from all the rules, and the check fails if it's not an empty array. Meaning, if we check the rules over here, or the policies, you can see that we're calling all of these deny, and these rules look a little different.
Anders: So if we were to change something here, say we remove the backup copy here, and I run the validate script, I'm going to see that, oh, when validating the recovery YAML file we have a policy violation, because a policy resource must include both backup and backup copy actions. So, very fast feedback.
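The deny style described here might be sketched like this; every rule named deny contributes a message to the same set, so one query collects all violations for a manifest (the fields checked are hypothetical):

```rego
package policy

# Each guardrail adds a human-readable message to the shared deny set.
deny[msg] {
    not input.request.object.spec.retention  # hypothetical field
    msg := "policy must declare a retention period"
}

deny[msg] {
    input.request.object.spec.frequency == "daily"  # hypothetical field
    msg := "mission-critical policies must back up at least hourly"
}
```

A validate script can then run something like opa eval -d policies/ -i review.json 'data.policy.deny' and fail the build whenever the result set is non-empty.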
Anders: I can modify my manifests and try things out, and then just run the validate script to have OPA evaluate the manifests. And of course, if I commit this to a feature branch, where I now have one changed file, I'm going to add that commit, say "remove backup copy action", and push that.

So now, hopefully, the GitHub Action is going to run the PR checks, which will run the same validate script as we just ran locally, and we're hopefully going to get some pretty fast feedback, or get to know what went wrong here. So we see that, okay, OPA did not validate the resource. We can check the details here, and we'll see pretty much the same output as we did locally. And GitHub will now not allow us to merge this pull request. So a pretty simple way to verify manifests.
Anders: There are obviously more elaborate things, like Conftest and so on, but for the purpose of the demo I wanted to keep it basic and show how you can use simple tools like opa eval as part of the build process. And yeah, I think that's pretty much it, if there are any questions.
Joey: In that scenario, you know, I'm a platform developer, I'm writing code for deploying into a new region, or a new mission-critical environment, and I don't necessarily have to wait for other people to give me feedback, right? OPA is going to give me feedback and I can fix the bug right there, without any human interaction, potentially.
Anders: Exactly. So GitHub is going to tell you: this PR cannot be merged due to the validation errors that OPA returned.
Anders: So, of course, we could do these checks in admission control, but that would mean we wouldn't know, at the point in time when we created the PR, that this is not going to work. We'd have the PR merged, and then, when we deploy that code, we'd see: oh, this cannot be deployed, because it's in violation of this or that policy.

Yeah, exactly. So the same kind of validation steps that you run locally are run in the CI, or in the build pipeline, which I think is ideal, because then you can use pre-commit hooks or whatever you want, to ensure that you probably don't even create that PR in case there are violations. And OPA will provide you with detailed messages telling you which resource is in violation, and for what reason.
Joey: Okay. So does anybody have any questions about anything you saw, any clarification that we need to provide? I've got a few questions, if not.
Anders: Oh yeah, for sure. So again, the official docs are great, and in addition to that there's also the Styra Academy, which is a learning resource with things like video-based content, tests and so on. So I can definitely recommend that.
Joey: Okay, I have a question that I saw from a developer forum, on Reddit's r/devops. It was about policy as code, and one of the developers commented that we're writing declarative code, you know, like YAML, and policy as code is also declarative. That developer's opinion was that that's a little bit redundant; he would rather see this used with a declarative-plus-imperative concept, so an imperative script trying to deploy something specific, and a declarative policy to validate what that imperative script is actually doing.
Joey: Yeah, I think it has to do with trying to meet an outcome, right? A declarative language defines the outcome you want, and policy as code is enforcing the outcome you want, so it will protect against typos and things we forget in code. But if they're just trying to test the outcome, in theory both of those approaches will meet the outcome if written correctly; the imperative one, though, is not outcome-based, it's action-based scripting.
Anders: Yep, I think I understand. But the benefit that I would emphasize is having the rules decoupled from the actual specifications: the manifests in this file are separate, and the policies could live in an entirely different repository, or be managed by a different team in some cases. Of course, if it's just a simple check, sometimes you might just want to add an if-then or if-else as part of the build script or something.

But once you start to scale up, and you have hundreds of resources and maybe tens or hundreds of teams, how do you know what policies are applied to this type of resource in a unified way across your whole organization, and how do you audit that, and things like that? So sure, for the simple use case you can definitely add some simple checks as close to the actual data as possible, but once you go past that, you're going to need something more organized.
Joey: I definitely think this is kind of the technology you would use in a much more distributed environment. You have a distributed team, distributed applications; maybe there's one central cloud platform team that's responsible for production, also responsible for data protection, and you've got a series of application stakeholders. There are not enough meetings in the day... well, you don't want that many meetings in the day to coordinate things.
Anders: Oh yeah, definitely. And OPA provides several ways to handle that. Now, we were just running opa eval, which kind of just takes a file on disk, but there are more advanced ways of fetching data and policy. There's a concept of remote bundle servers, where OPA can go and get policies and so on. So there are many ways of doing this; I just wanted to keep it very basic for the purpose of this demo.
Joey: Okay, well, I just wanted to thank everyone for attending today. I hope it was helpful, and hopefully we can provide you some guidance on a use case that you have. I see a question here: are there any use cases for using OPA against serverless?
Anders: Yeah, sure. So for OPA itself, the common mode of operation is having OPA run as a service, so you'd have an actual HTTP server that serves requests. There are ways to run OPA in more of a stateless context, which is suitable for serverless, so that can definitely be done. But I think another pretty common way of doing it is to have OPA run somewhere and then have something like that...
Anders: Nah, I'm actually not sure about OpenFaaS; I've never used that, so I'd love to know too.
Libby: Thank you so much. Thank you, Anders and Joey, and thank you everyone for joining us. You know where to find them, and we will see you next week for another edition of CNCF live webinars. Thank you all so much for coming.