From YouTube: 27 04 pro Piotr Jablonski Prisma Cloud Admission Control: 3 ways to control K8s deployments
Description
KCD 2022 CZ & SK virtual - The first CNCF cloud native conference in the Czech and Slovak region. kcd.live
A
Let me continue in English and introduce Piotr Jablonski. He's a solution architect at Palo Alto Networks covering cloud native security. He has designed and deployed multi-cloud solutions from the perspective of service providers, data centers, virtualization and application layers, and he is currently leading security projects related to public cloud, Kubernetes and DevOps.
A
This is built on the Open Policy Agent (OPA). In a console you can manage and compose rules in Rego, which is OPA's native query language. Rules can allow or deny pods based on their settings. The other method will be presented using Checkov. You will also see how to set up a policy where you trust the images you want and block untrusted images.
B
Yes, thank you for the introduction. Hello, everyone. My name is Piotr Jablonski, and let's get started. So what is the agenda for today? We have three main points. The first is admission control with the Checkov engine, then the OPA engine. So there are two engines, and there might be different implementations as well, but OPA is one of the most regarded options right now.
B
Kubernetes uses admission controllers internally, and there are different types of admission controllers: they may be validating, mutating, or both. A mutating admission controller essentially modifies related objects; a validating admission controller checks specific settings and allows or denies these settings from being deployed or used. Essentially, an admission controller is a piece of code which takes your request (the request sent to the API server) before the YAML file with the specific configuration is realized, is deployed into the environment, is written to etcd as a persistent object.
B
First of all, the request is authenticated, authorized and verified against settings. With a validating admission controller you can verify any settings you send via the request's YAML file: it can be CPU limits, security privileges, labels, any settings which are part of the YAML statement. So it's a very useful way to apply configuration best practices, also in terms of security.
B
By default, a validating admission controller is not utilized; you need to turn it on by applying a webhook, which is deployed, and you will see in a second where the admission controller fits in the overall security landscape. As you know, security threat vectors are present at every stage of the software life cycle.
B
We have three main stages of the life cycle: build, deploy and run. If you are thinking about security overall, you should apply your security measures at every stage. As a developer or a DevOps engineer you should think about deploying security even starting from the IDE: in Visual Studio Code you can run plugins, and these plugins can apply IaC (Infrastructure as Code) checks.
B
You can also apply the admission controller in the deploy stage, to verify the configuration once again, maybe to verify whether the container image is trusted or not. This can limit the risks associated with intentional or unintentional attacks, or with vulnerabilities which may come with specific packages.
B
The admission controller is specifically useful for protecting containers and hosts, because containers and hosts, at the end of the day, can be compromised if they are vulnerable, simply because of overly permissive privileges. So, for example, a container running as root, or with the NET_RAW or NET_ADMIN operating system capabilities, is allowed to execute low-level operating system commands, to gain visibility into or even execute commands on the operating system, or to block or intercept the traffic between different containers.
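The inverse of the overly permissive setup just described can be sketched as a pod that refuses root and drops those capabilities; the pod and container names here are illustrative, not from the demo:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: least-privilege-demo        # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      securityContext:
        runAsNonRoot: true               # refuse to run as root
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["NET_RAW", "NET_ADMIN"] # no low-level network access
```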
B
Here in this picture I put just a single icon for the API request, but I focused more on the webhook side and on what is happening beneath, at the later stages. In the first step, the administrator sends the YAML file, executing `kubectl apply -f` for example, to do the deployment of the pod; this sends the API request to the API server.
B
The policy enforcement stage is represented by a service (svc). This is the second step, and behind the svc there might be one or multiple containers or processes which actually verify these settings, because at the end of the day someone must verify: compare the JSON statement, the manifest, with the configuration which is the best practice, or which is the expected configuration in your case.
B
So in step number three the svc redirects this API request to one of the pods, because it works like a load balancer.
B
So just one of the pods will receive this request in step number three, and within the pod there must be either a script, a part of the engine, or a binary which is running. After receiving the API request it takes the JSON statement, because at the end of the day the API server transforms the YAML file to JSON format, and it simply compares our best practices against the requested file, and based on the settings...
B
...it sends the deny result or the allow result in the fourth step, the visible action step here. The dashed line means that the pod sends the result, at the end of the day, to either the user or the deployment. It's not a direct response because, of course, the API request is sent by the API processing in Kubernetes. So actually the pod responds to the API request first of all, and the API server responds to the user. This is a simplified view of what is happening.
B
The logical view of what is going on: if the analyzing process is successful, is allowed, then on the right-hand side we see that the pod sends the logical result "allow", and the deployment phase is realized by instantiating the pod. So let's have a look right now how it goes in the environment. I have a small GKE cluster, a very simple one with just one node.
B
I had a cluster with four nodes, but I broke it just before this event, due to some testing, so I will utilize a different cluster here right now. And in this cluster...
B
You will receive in the presentation all the reference links where you can find these settings and these webhooks; everything is available in the GitHub repository, so you can download it. I would like to focus right now on the functional perspective of this validating webhook admission controller. Okay, once we have found the webhook we can have a look at what is inside, so `kubectl describe`...
B
...plus the name of the webhook. This webhook is almost a very basic one; I mean, there are many empty parameters, as you can see. We have a certificate which authenticates the components to each other. What is more important, we have the name of the svc: the namespace, bridgecrew, and the identification of the service itself.
B
And do we have something more? An important thing is the failure policy, because what happens when the webhook is not working or is broken, or the svc does not work, or the communication is blocked, or the pods behind the svc are broken or not running anymore? Then the API request sent to the webhook will not get a response...
B
...or the response will be malformed; anyway, it won't work as expected. Then the failure policy is used to make a decision: if there is a failure on the API request, should we fail the deployment or not? This is the setting failurePolicy equal to Fail, "fail closed", and a failure of the component will result in fail closed, which means the execution of any API request via kubectl will be failing as well if there is a failure. You can change this failure policy to Ignore.
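The pieces described so far (the service reference, the certificate, and the failure policy) map onto a ValidatingWebhookConfiguration roughly as follows; this is a sketch, and the webhook name, service name and path are placeholders rather than the exact values from the demo:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validation-webhook           # placeholder name
webhooks:
  - name: validate.example.com       # placeholder
    clientConfig:
      caBundle: <base64-encoded CA>  # authenticates the components to each other
      service:
        namespace: bridgecrew        # namespace seen in the demo
        name: validation-svc         # placeholder service name
        path: /validate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    failurePolicy: Fail   # fail closed; set to Ignore to fail open
    sideEffects: None
    admissionReviewVersions: ["v1"]
```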
B
Yeah, and we can notice the front-end IP address and the front-end TCP port; the back-end TCP port is 8443/TCP, and we have just one endpoint sitting behind it. So this is one of the pods which was visible previously, and this pod has the specific binaries, the Whorf engine, the Checkov engine, and the template with the checks; and these checks are compared with the statement or manifest, the Kubernetes manifest...
B
...which we would like to execute in our environment. So these are the components: the webhook, the svc and the backend, either pods or processes; in the Kubernetes world, as you see, it is pods. So we are ready to test this validating admission controller. How can we test it? We can first of all think up what we would like to achieve.
B
So, for example, if we would like to run containers as root, this rule will not allow us to run containers as root.
B
And right now, if I look at the Checkov config, these are the identifiers of the checks which will be used to fail the configuration, the deployment. So there are hundreds of rules, but only these rules will fail the deployment. Each of the best practices has an ID, and these IDs are visible here, for example. So if we talk about, let's say...
B
This is a very simple YAML statement with just one pod, just for the sake of simplicity and clarity. We have the security context privileged set to true. This is definitely not the best practice, from the security and configuration standpoint, to run such a pod. So if I run `kubectl apply -f` with this privileged pod right now, let's see what happens: our request should be denied by the Checkov engine.
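A minimal sketch of the kind of manifest being applied here, a single pod with `privileged: true` in its security context (the names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod   # illustrative name
spec:
  containers:
    - name: nginx
      image: nginx
      securityContext:
        privileged: true   # the setting the admission controller should deny
```

Applying this with `kubectl apply -f` should come back with a denial from the webhook rather than a created pod.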
B
For a somewhat similar reason there is also "container should not run with allowPrivilegeEscalation". The default namespace should not be used; this can be changed, and in some environments using the default namespace is nothing wrong and it can absolutely be used, but in this case it's also a reason why the execution of this pod is blocked. "Minimize the admission of root containers" is also related, to IDs 16 and 20, but in total there are 18 issues.
B
You can set up a free account because, for small environments, Bridgecrew offers free-of-charge accounts, so you can get access without any specific charges, and you can get the full JSON statement with each of the configuration best practices. Some of them are low-risk, low-severity, so that's why they are not contributing to the decision.
B
So, with the security context privileged set to false, this pod should be deployed. Actually it won't be deployed because, again, other statements are also in place. So we need to either turn off 20, 21 and 23 from the Checkov checks, which can be done...
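Turning checks off can be expressed in Checkov's configuration; a sketch assuming the standard Checkov config-file keys, with the CKV_K8S_* IDs that I take to correspond to the numeric checks mentioned (20 allowPrivilegeEscalation, 21 default namespace, 23 root containers):

```yaml
# .checkov.yaml (sketch; assumes the standard Checkov CLI config format)
framework:
  - kubernetes
skip-check:
  - CKV_K8S_20   # allowPrivilegeEscalation should not be set
  - CKV_K8S_21   # the default namespace should not be used
  - CKV_K8S_23   # minimize the admission of root containers
```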
B
There are 20, 21, 23; okay, we need to change the configuration which is applied. Let's leave it as of now. Of course we need to modify the settings, but nevertheless, if you want to change the policy, as you can see, you need to configure the YAML statements, which is less convenient. So let me show you a bit more convenient way how you can manage the policies. Instead of running this Checkov engine, by the way...
B
...I will apply a new webhook, which can be associated with an additional tool, in this case Prisma Cloud. Why am I talking about Prisma Cloud? Because in this case you can also utilize additional features like verification of images and verification of other settings, which is normally not possible with standard OPA.
B
All right, so it should work. Let's have a look how this works against our privileged pod, compared to Checkov. First of all, OPA is a different engine; the engine is written in Go, while Checkov is written in Python, and the rules in OPA are written in the Rego language. You might not be familiar with the Rego language, but essentially it's very simple.
B
I mean, we take the YAML statement in the JSON format, which is executed by the API server, and we compare the parameters from the statement, from this manifest, against the specific parameters we want to check. So in this case we define that we want to react upon creation of the pods, and we need to check the security context: whether the security context is privileged.
B
If yes, then, because all the statements are combined together from the logical perspective (there is a boolean AND between them), the message sent is "privileged pod created", plus the effect is block. We can use a different effect, like allow, alert or block, but in this case we will use block.
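The rule being described can be sketched in Rego roughly as follows. This assumes the admission-rule shape Prisma Cloud uses (a `match` rule over the AdmissionReview document in `input.request`) and is not the exact rule from the demo; note that all expressions in a rule body are implicitly ANDed:

```rego
package policy   # assumed package name

# Block creation of pods that request privileged mode.
match[{"msg": msg}] {
    input.request.operation == "CREATE"
    input.request.kind.kind == "Pod"
    input.request.object.spec.containers[_].securityContext.privileged
    msg := "privileged pod created"
}
```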
B
And you can see the message is different, because a different engine is validating, and there is the denied request "privileged pod created", which means this is the message which is configured here. The beauty of this solution is that we don't have just one block of all the checks in the same rules statement; we have separate rules, rule by rule.
B
We can manage them separately, which is sometimes more convenient. Also convenient with this approach, compared even to standard open source OPA, is that you can manage in one repository all the admission controllers across different clusters, across different environments, Kubernetes releases, or in a hybrid environment as well. So this provides a single dashboard for monitoring and policy enforcement.
B
What was created nine seconds ago: privileged-nginx. And the last step for today is to add, on top of this enforcement of configuration best practices, also verification of the image, which is not an easy thing, because by default with OPA we don't have such an option in the Rego language. Although we can add, for example, a SHA digest comparison: you can write down a policy which compares the digest against your static entries, like I did here, where I am comparing...
B
...signatures. You simply compare the signature which is used in the manifest, the Kubernetes YAML file, with the signature which is trusted and located in the database. So in that case you can also utilize OPA with, for example, Prisma Cloud to execute the policy and enforce the comparison between these digest entries.
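A digest comparison of this kind can be sketched as a static allowlist in Rego; the package name and the digest value below are placeholders, and in practice the trusted entries would come from your registry or Notary database:

```rego
package kubernetes.admission   # assumed package name

# Placeholder allowlist of trusted image digests.
trusted_digests := {"sha256:0123456789abcdef"}

deny[msg] {
    input.request.kind.kind == "Pod"
    image := input.request.object.spec.containers[_].image
    not trusted(image)
    msg := sprintf("untrusted image: %v", [image])
}

# An image reference pinned by digest ends with "@<digest>".
trusted(image) {
    some digest
    trusted_digests[digest]
    endswith(image, digest)
}
```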
B
So you have the script visible here, and it is also accessible to you for download in the GitHub repository. But there is a drawback of this approach, because with this approach you need to operate on the digest level. With modern solutions, what you can utilize is the ability to first of all find out which images are risky, because based on the digest alone you don't know.
B
Maybe today this image is not vulnerable and is trusted, but after a month there is a new vulnerability; so based on static entries, even in a Notary database, you cannot tell what the risk is. With a tool which combines vulnerability scanning and threat detection with OPA, you can do this, and this is available with the compliance service "trusted images".
B
I need just three minutes more and I will show you the example here. So I added trusted registries with repositories; one of them is docker.io/library/nginx, because I would like to run my nginx here in this environment. You can also state trusted images by layers and digest if you want, and you can also define untrusted images, so if you know them you can define them; but the overall fallback effect is block, so any image which is untrusted should be blocked as well.
B
It requires the correct settings. This time, Alpine: I have an additional statement for Alpine. This is a very simple pod with just the alpine image, but the alpine image is not added as trusted in the configuration. So even in this case we should expect that this pod won't be deployed.
C
I will have to interrupt you because we are running out of time. So thank you for your presentation. We have several questions here; I will probably ask you one or two of them. Mr Francis Ferenchik has put maybe three or four questions, but we probably have time for just one, so I'll choose just one: is this solution competitive with, for example, Aqua Security, or with Carbon Black for containers (Octarine)?
B
Yes, it's a competitor to Carbon Black and Aqua Security, and you can contact us to get more of a comparison in terms of CSPM, in terms of CWPP, in terms of all the security features available. Yes.