A: Okay, hello everyone. Good morning, good afternoon, good evening, depending on where you are in the world today, and thank you for joining today's CNCF webinar, "Preventing Kubernetes Misconfiguration: Static Analysis and Beyond". I'm Kristy Tan here from the CNCF, and I'll be moderating today's webinar. We would like to welcome our presenter today, Matt Johnson, Developer Advocate Lead at Bridge Crew. A few housekeeping items before we get started: during the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop in your questions, and we'll get to as many as we can at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all your fellow participants and presenters.
B: Hey, thank you very much. Thanks for the introduction, Kristy. Hi everyone, thank you very much for joining today. I'm based in Manchester in the UK, so whether you are ahead of me and it's quite late, or you are California, America way and it's very, very early, I appreciate you getting up and attending.
B: There we go. So just briefly, and these slides again will be available later on today on the CNCF website: we're going to look at common security configuration issues that arise with deployments and use of Kubernetes, looking a little bit at the state of configuration issues we see from public resources, and at the common issues we see with customers or publicly available Kubernetes samples.

B: We'll look at how policy-as-code, tied into DevOps and CI/CD, can go some way to making security more of an enabler than a chore for your development teams, and also look at what options we have for post-deployment and runtime analysis of security issues within Kubernetes as well.
B: So, without further ado: if you want to ask me questions that you forget during this webinar, if you wake up tomorrow night and go "I wish I'd asked that question", I'm @metahertz on GitHub and Twitter, and I'll put my email address at the end of the talk as well. Obviously we have the Q&A box as well within Zoom, so please do use that, and we'll save some time for questions at the end. A little bit about me: I'm an all-round cloud security geek.

B: I like breaking cloud-connected IoT devices in my spare time, the bigger the better (cars and planes, if I could get my hands on them), so I'm a bit of a wannabe pen tester on that side of things, which I think just means I like breaking things. I'm also super intrigued by the idea of really lightweight Kubernetes clusters at edge sites, for consistent APIs across cloud and remote, survivable locations. So, a little bit about me.
B: If any of those interest you, by all means give me a shout. So, a little bit of a story to kick things off; I like a bit of context. As an engineer, I either want to move fast or I am expected to move fast, with the advent of DevOps and GitOps, multiple deployments to production a day. And when I am moving fast, for whatever the reason, I do not want to break things. So we will generally track what we're doing; we will track what changes we need to make.

B: We will manage our sprints, we will manage our backlog in some way, which leads most engineers, whether it's this or another ticketing system, to have a love-hate relationship with Jira. And you know, it's good to track what we need to do. It's good to think about security. I know that a lot of the time, security historically has been a blocker to fast-paced teams.
B: So we will get these wide-affecting tickets, like "do not permit root containers". For a little bit of context for those maybe newer to Kubernetes: whatever the user ID is within a container maps to the host's user table. So for a container running as root within the container, if there were any file systems mapped into the container, or any security issues which allowed container escape for that process, a hacker could use those to become root on the underlying host if they break the container isolation. And so, generally speaking, we don't want to run root containers. But this ticket is potentially a single engineer going through all the manifests, learning how to make sure you're absolutely telling Kubernetes not to do this through configuration, through YAML.
B: You know, this one ticket actually represents a lot of work if it's just in ticket form. Another one: "do not allow containers to share the host network namespace". Again, a bit of context: each pod in a Kubernetes cluster gets its own IP address of some form through CNI. If you allow the container to have the host network namespace, it can suddenly affect things for the whole host: multiple containers could potentially affect routing, see all the traffic, or just generally, you know, it's not a good idea for most deployments. So again, a very simple security baseline, something we shouldn't be doing with most of our Kubernetes deployments. But again, that's a lot of work to go and find them, work out how to stop them, and make sure you're staying in compliance. And just because it's there in Jira doesn't necessarily mean it magically becomes a solved issue.
B: And a third example: let's make sure SSH is not open to the world at the cluster level or in the security groups. So what we end up with traditionally from security, especially when you have a security team correctly enforcing these kinds of best practices (from CIS benchmarks, for example) onto a developer team, is that you just end up with a lot of tickets.
B: This is the first of two times we will defer away from the Star Trek theming for our gifs in this session. There will be another, but straight back to it: Picard tells us that resources are always limited, whether time, expertise, etc. So, when we've been tasked to solve real problems, let's not waste our time. Let's look at the problems we actually need to solve, and to do that we'll start with some Data.
B: I told you we'd be going back to the Star Trek thing. Anyway, it's not just the Kubernetes manifests themselves. It's not just your pods having maybe root access, or maybe being allowed access to the host network namespace. Or, you know, the CIS benchmarks and guidelines suggest you should have CPU limits set, so you're not going to swamp a node, or allow a single pod to take down a node, or annoy your scheduler, or prevent other resources getting scheduled.
B: It's not just the things running on top of Kubernetes. You also know, if you've ever deployed a Kubernetes cluster, that Kubernetes doesn't just happen. You don't just have the Kubernetes cluster (or maybe you do, because another team's deployed it, but they're still having to think about the underlying infrastructure that that Kubernetes cluster is deployed on). Whether it's a cloud provider or bare metal or wherever, there are still going to be VMs.

B: There is still going to be infrastructure; there's still going to be routing and security and firewalling and IP addresses and storage attached, and a lot of things that again come with their own set of security best practices that need to be considered. So by the time you're getting to the point where you're deploying Kubernetes manifests, you have the security of the pod or containers you're about to deploy to consider, and someone, either in your team or in your organization, has the security posture of the Kubernetes cluster and supporting services itself to consider.
B: And so infrastructure-as-code is absolutely great. It gives us a way to checkpoint, to go back, to do drift detection, to see where we were and where we are, and really do that historical audit if we need to, because we're tracking our infrastructure changes just like we track our codebase changes. But it does create new learning requirements.
B: And, as another example of what we're talking about within the pods: DevOps engineers aren't going to spend their full time being security engineers. Even if you're implementing DevSecOps and you have security engineering working within your teams, there are so many best practices. There are so many standards. There are so many guidelines. There are so many things.
B: If you read a "security basics in Kubernetes" book, that's just for Kubernetes. The same then applies to Amazon or Azure or your favorite cloud provider: guidelines for securing their components, whether it be a cloud-hosted EKS or just the VMs and the individual components needed to build your own. And there's no one way to solve a problem, as you can see here.
B: We have a securityContext to make sure that the containers do not run as root, but then that can be overridden by a single container lower within that pod description. So there are a lot of things that are quite hard to catch if you keep this as a manual task, even if you have security-focused people on the team. And so this is why we created Checkov. Checkov is open source.
B: It was released in December 2019, so just before we all had a wonderful year at home to spend time learning new things. Checkov is designed to statically analyze for known best practices in lots of infrastructure-as-code formats: for example Kubernetes manifests, Terraform, AWS CloudFormation, Serverless Framework. The idea being that Checkov will allow you to scan those manifests and find common security misconfigurations before that security issue becomes real.
B: You know, an issue in a Kubernetes manifest or in a Terraform manifest isn't a problem until it's actually deployed on a Kubernetes cluster, or creates some, for example, AWS objects. Those are the things that are then vulnerable to attackers, not the actual manifests themselves. But the manifest is a great place to do that first sweep, just like you'd lint, just like you test your code, just like you'd run integration tests and unit tests, etc.
B: The idea behind Checkov is that it's not only the scanner; it's the policies that the scanner comes with out of the box. The whole point is it's a single tool, immediately usable and useful across all of those different infrastructure-as-code styles: Kubernetes, Terraform, and so on (I'm going to stop repeating them and just focus on Kubernetes at this point). Those checks are built into the Checkov codebase; therefore your policies are version controlled and peer-reviewed, and we accept pull requests.
B: We have a vibrant community of over 50 continuous contributors across a range of companies and different industry sectors, all contributing their own checks. And if your checks are private to you, you can still source control them, because Checkov allows you to pull in a specific git repo of custom checks to run at the same time.
B: So if you do need private checks that you don't want to expose to the world, that's supported as well. And it's written in Python. A lot of people ask us, especially in the CNCF community, where Go is definitely the language du jour, "why Python?", and the answer is that it's the right tool for the job.
B: We find a lot of security engineers have spent a considerable amount of time automating security operations centers, automating reactive scanning of existing environments, back before the whole DevSecOps movement, when maybe there were two separate teams. A lot of security engineers are already very familiar with Python; they're very happy to read through, for example, this: the Kubernetes CPU limits check. And, as you can see, we're using inheritance.
B
We're
inheriting
a
base
case
check,
there's
a
load
of
inherited
definitions
which
are
really
really
simple
to
go.
Okay.
Well,
if
I
have
a
resources
section
which
has
limits
which
has
cpu,
I
have
some
cpu
definitions
limits.
If
I
don't,
let's
fail
the
check,
because
we
want
to
check
that
the
manifest
has
cpu
limits
and
it's
very
easy
to
read.
It's
very
easy
to
understand.
B: It's very easy to add your own checks, and we find that whether you're an engineering team or a security team, Python is a really low-barrier-to-entry language for both teams to collaborate on policies, rather than, for example, having to learn a new domain-specific language to write policies and then write adapters for your infrastructure. So that hopefully answers the "why Python?" question, but yeah, we do get a lot of industry contributions because of it.
B: So at this point I'm going to give a shout out to a project called Kubernetes Goat. You can find it here on GitHub. Kubernetes Goat is purposely vulnerable Kubernetes infrastructure for learning and training purposes. So, if we're looking for some Kubernetes manifests that are definitely going to have horrifically bad security defaults, Kubernetes Goat is a good place to go. And we can install Checkov a number of ways. It's on PyPI: you can do `pip install checkov`.
B: We also have it on brew, so `brew install checkov`, or just go to checkov.io for all the docs and information.
B: So I already have Checkov installed, version 1.0.629, and what we can basically do is tell Checkov to scan this directory recursively with `checkov -d .`. As you can see, it's going to go through, it's going to find a load of Kubernetes YAML, which we all love, and it's going to highlight that there are a lot of security misconfigurations, based on the built-in Kubernetes checks in Checkov.
B: So we can see here: secrets as files are better than secrets in environment variables; readiness probes should be configured; read-only filesystems for containers where possible. All suggestions from the Kubernetes CIS guidelines.

B: We can see all the Kubernetes-specific checks that we're checking against when running Checkov.
B
Mini
demo
done
so
that's
kind
of
the
you
know
the
easiest
kind
of
intro
to
check
off
way
of
using
chekov
kind
of
almost
pre-commit.
You
can
also
configure
this
easily
to
run
as
a
pre-commit
hook,
like
all
cicd
style
tools.
Chekhov
will
exit
with
a
return
code
of
one.
If
there
are
any
violations,
you
can
also
specif
specify
a
flag
to
just
say
just
show
me
the
violations,
don't
show
me
all
the
past
checks
as
well.
B: So this is a fine example of a developer just using Checkov on their local machine, before they push to GitHub, before they do anything like that. But obviously that's not ideal; that's still manual work. It's less manual than going through all those manifests yourself once you get that Jira ticket, but we can do better. So, instead of pre-commit, let's look at some more automation.
B: So for exactly that reason we have a Checkov GitHub Action. This is pre-packaged. You are able to specify directories, or just recurse as normal. You can display only failed checks, like I was saying. You can scan for only a specific framework, if you know this repo, for example, will only contain Kubernetes manifests. And you can even skip certain noisy checks if necessary. And with that, we can see the results on a given repo.
B: So, for example, here on TerraGoat, which is a purposely vulnerable set of Terraform manifests, just like Kubernetes Goat is for Kubernetes (this one's written by us here at Bridge Crew), you can see we have a GitHub Action set up to just run that Checkov action. Whenever we receive a pull request on this repo, we will get some lovely output and block that build, and highlight to that pull request that checks have failed, just like you would with any other codebase, so that you can make decisions about not allowing more vulnerable infrastructure-as-code, or more vulnerable Kubernetes definitions, into your infrastructure-as-code repo.
B: So that, in a nutshell, is the way we would ideally use Checkov. There's nothing wrong with doing it pre-commit, but it is better doing it in CI/CD, especially with all these wonderful git repo hosting companies now giving us free compute. We might as well run Checkov against pull requests, and against commits to your main branches, to block the build, just like you would with any other style of test or security test.
B: And then the second thing I wanted to talk about isn't just your own Kubernetes, your own manifests, your own pod definitions, your own containers. Moving fast, as we talked about earlier, generally involves using other people's dependencies: standing on the shoulders of giants, using the work that came before you, copying and pasting from Stack Overflow (which is one of my favorite fake O'Reilly books). We all know that at some point we are going to be using third-party Python modules, third-party Go modules, third-party npm modules.
B: Just like with code, we're probably going to use existing modules for Kubernetes, be it Helm, be it Kustomize, etc. So, going back to the idea of looking for infrastructure checks, like the graph from before, when we were trying to work out what Checkov needed to be based on data, based on what issues we were seeing with people's configuration in the wild versus the security guidelines, we have the same thing for Kubernetes.
B: So what was Helm Hub is now the open source CNCF Artifact Hub, and there are a great number of Helm charts on there. Realistically, if we ask what the equivalent of a Terraform module is for Kubernetes, a way of sharing Kubernetes manifests to achieve a specific goal, it would be a Helm chart. So rather than look at individual Kubernetes manifests we find online, or customers' Kubernetes, we looked at Helm charts to work out what the commonly bad security configuration is.
B: So what we've ended up with: we took a very small sample, and we're going to carry on working on this data, produce some insights from it, and see if we can work out a way to tie this back into Checkov. Because if you're familiar with Helm, you'll know that a Helm chart regularly depends on other Helm charts.
B: So it'll be good to know, right back up the stack of Helm charts, what the security posture looks like of the thing you are about to deploy, just like you want Checkov to scan your own handwritten Kubernetes manifests. And so we took these Helm charts to see if this was also something that needed investigating, and as we can see, there are a number that don't match the security requirements based on the CIS Kubernetes guidelines.
B: There are charts that are privileged, or running on "latest" or a blank tag rather than a specific version, which means your manifests aren't as repeatable as you might expect them to be; or secrets in environment variables; or containers sharing the host network namespace, which we've already covered. And obviously people will take examples, like we do with code, like we do with Stack Overflow, like we do with Terraform modules.
B
People
will
take
base
home
examples
that
are
very
clearly
marked,
as
kind
of
you
know,
not
production-ready
testing
only
and
these
things
will
end
up
accidentally
getting
to
production,
especially
if
it's
a
dependency
of
a
dependency,
because
it's
that
security
versus
usability,
so
just
being
aware
of
these
things
automatically
in
your
ci
cd
pipeline
before
your
kubernetes
objects,
actually
become
real
by
by
scanning
in
the
manifest
stages.
B
You
know
it's
just
an
extra
positive
step
that
you
can
automate
just
to
give
you
that
visibility
of
of
what
you
might
be
exposing
yourself
to
and
there
just
for
the
the
slide
deck
is
a
list
just
focusing
on
the
top
failed,
rather
than
a
mixture
of
failed
and
past.
B
So,
based
on
that,
as
I
said,
we're
working
on
some
improvements
to
check
of
that
you'll
see
over
the
coming
weeks
to
actually
make
this
kind
of
easy
within
check
off,
but
right
now
again
using
ci
cd.
It's
pretty
easy
to
get
this
kind
of
helm
security
posture,
just
like
we
saw
with
the
previous
example
with
your
own
kubernetes
manifests.
B: So this link here goes to a blog post where I wrote up a slightly custom version of the Checkov GitHub Action workflow, which specifically goes and finds Helm 3 charts, templates those out, and then also runs Checkov against those. So if you do have charts as part of your infrastructure-as-code, as well as just your own hand-rolled Kubernetes, you will be able to check your security posture across those as well.
B: Now, by default I set this to quiet, but obviously you can see from the output here that this one passed. You could easily set this to exit with a one, like a normal Checkov run, so that you don't progress into your deploy pipeline.
B: Or you can have a manual step to go and investigate whether the security posture that's been found in those modules is acceptable in terms of risk, for whatever the project or whatever the environment. Whatever your current security response in your automation and pipeline is, Checkov, with this simple little Helm CI/CD action, will make a nice contribution to that. And so, with that, those are the main things we care about when we're looking at infrastructure-as-code security with Kubernetes, certainly from a pre-runtime perspective, when you're still in the manifest stages. We're caring about not only your infrastructure, but your Kubernetes.
B: I'm just going to take a moment to look at the infrastructure itself. To do that, I'm going to go back: as I said, Kubernetes Goat is one vulnerable set of Kubernetes manifests, but we also have TerraGoat. If you're running infrastructure-as-code, they might be in different repos, because one might be your base configuration for multiple teams, from a service team that is providing the Kubernetes namespaces or providing the infrastructure. But just to highlight: everything is related here, and Kubernetes clusters don't just appear, even if you're using a managed Kubernetes service from your favorite cloud provider.
B: If we go in here, we can see that we have multiple Terraform modules for different cloud providers, and if we go into AWS, for example, we have multiple Terraform files, including, for example, a Kubernetes cluster.
B: So if we go and look at this, what we'll see is IAM roles, a VPC, all your regular building blocks of AWS, and then a definition for your EKS cluster itself. And then what we can do, again, is run Checkov here: instead of on a directory, we can just run it on a specific file, and we can find things that again affect your Kubernetes security posture. It might not be your developers deploying pods, but it's still the same tool, because we can scan multiple infrastructure-as-code styles.
B: It's still something you should be mindful of, and if you are provisioning clusters, you should be taking it into account. So, for example, we should have control plane logging enabled, so we can go back and see what's going on. We should make sure the Amazon EKS Kubernetes endpoint is not open to the world.
B: We should make sure secrets are encrypted at rest in our environment. We should make sure the public endpoint is disabled. And then we have some passed checks as well, around IAM policies not being too widely open and not being allowed to be assumed by principals we don't expect, and all that regular Amazon access control stuff.
B: So again: the infrastructure is one thing, the Kubernetes deployments are another. You may be abstracted a further step through Helm and its dependencies, but using a bit of CI/CD magic, we can get a good view of all of those issues, and the remediations necessary, pre-deployment with Checkov.
B: Now, the last thing I want to show is these guides. For every output you will get a guide link, and this, in most cases, is a bit of context. So if you have this in your CI/CD, and you have developers who are focused on getting their job done, focused on reaching next week's deployment release, and not specifically security-minded, and they see these CI/CD issues blocking a deployment, this gives a nice bit of context for each issue.
B: It's also quite nice to know that, the way the Checkov action is written, we now integrate with some new features in GitHub that give us these lovely annotations, so you can really easily see the outputs rather than having to scroll through the CI/CD output. And that really brings us to the end of build time. But that's not the end of the story, because once you have your tested, certified Kubernetes manifests, be it Helm, be it handwritten, you're still going to deploy those.
B: Yes, the pods themselves, the containers themselves, may not be long-lived; Kubernetes might be handling the lifecycle management of those. But that actual deployment manifest is going to be on your cluster, potentially for a long time. And not only that: there might be things that haven't been deployed via infrastructure-as-code.
B: There might be things, whether in your cloud provider or actual Kubernetes objects, that were deployed before you had your latest round of automation, or before this team was really set up to automate its deployments. The greenfield infrastructure-as-code we appreciate is a luxury that not every team has, especially if you inherit other projects.
B: So the final piece of the puzzle is runtime analysis of a Kubernetes cluster. Now, historically, and with most configuration types, this is not something Checkov is trying to tackle. You would go for a different tool, like the Bridgecrew platform, or open source tools like Prowler, to go and look at your runtime configuration and go: hey, this isn't now just a manifest describing an insecure Kubernetes deployment or an insecure Amazon S3 bucket; this is an actual insecure Amazon S3 bucket.
B: Checkov cares about scanning your manifests. However, with Kubernetes you can very easily request a running object in the same format that its deployment manifest is in. Whereas with cloud provider objects, and with things like a Terraform plan file if you're familiar, that's not the case: the deployment-state definitions are very different from the running-state definitions. With Kubernetes they're one and the same, which actually allows us to really easily create a Kubernetes job which effectively is just Checkov. You can find this in the kubernetes directory of Checkov, in bridgecrewio/checkov, and within there you will find a deployment manifest to effectively install a Checkov job onto your cluster, and that does exactly what you would think it does.
B: It basically asks the Kubernetes cluster for a JSON definition of all your currently deployed pods, services, etc. (all the objects you can think of), and runs those against the Checkov checks. So using this, you can not only prevent issues via pull requests, or via scanning on any commit, basically using CI/CD with Checkov, making sure that not only your underlying infrastructure-as-code but also your Kubernetes deployments are not obviously vulnerable to any of these checks.
B: You can do that pre-deploy, but then you can also use the same set of checks to validate that those secure defaults in your infrastructure-as-code are actually staying secure within your Kubernetes cluster, and that no one's come along, manually done a `kubectl edit`, and changed some things to get it working. Checkov will then be able to flag up that you actually have an issue in production now.
B: Maybe that wasn't insecure at deploy time, like I've just said, but through that you get simple drift detection. You go: hang on a minute, this wasn't what I deployed, because it wasn't showing a violation at deployment, but it is showing a violation at runtime. So using the same policy definitions across different points of your pipeline actually gives you this wonderful scenario of continuous analysis: pre-commit (whether you're just running Checkov locally or as a pre-commit hook), continuous integration, and then also comparing those violations against the output from your running Kubernetes listener.
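With identical checks at both points, drift detection really is just a set difference. A toy sketch of that comparison, using hypothetical (name, check) finding tuples:

```python
# Toy sketch: with the same policy identifiers at deploy time and at
# runtime, drift is simply the violations that appear only at runtime.
def drifted(deploy_findings, runtime_findings) -> list:
    return sorted(set(runtime_findings) - set(deploy_findings))

deploy = []                                  # clean at deployment
runtime = [("web", "no_host_network")]       # someone edited the live object
print(drifted(deploy, runtime))              # [('web', 'no_host_network')]
```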
B: And this is really what we're trying to get to at Bridgecrew with our open source tools. We want infrastructure, whether it's your low-level AWS infrastructure or higher-level abstractions such as Kubernetes, to be developed and secured in the same place, and we want that to be part of the regular DevOps pipeline. We saw with ops getting integrated with dev that it makes a lot more sense; not throwing things over the wall makes a lot more sense, and Bridgecrew is trying to do the same through these tools with security.
B: Your security posture, the things you are willing to make exceptions for, should all be source controlled.
B: It should all be in your CI/CD, and your developers should be aware of it, rather than security being bolted onto the side. We are trying to get to the point, as you've seen with blocking in CI/CD, where issues are automatically prevented from being deployed. And all of this should allow any security resources you may have within your team, or within your organization, to go and work on bigger-ticket items.
B: Looking at data trends, looking at future threats, looking at those things that aren't just day-to-day firefighting, because firefighting should become a non-event; it's just built into your DevSecOps pipeline.
B: So before we get to any Q&A, of which we have plenty of time for, the takeaways here are really: keep your manifests secure, and do that with open source automated scanning. Know your imports, know your dependencies, know what you're pulling in: Helm, Terraform, third-party modules. Just like you would need to be careful of that with any other codebase, treat infrastructure-as-code like a codebase: know your imports, and again use automated scanning, like we just demonstrated with Helm, to do that.
B: Have a fast feedback loop on configuration changes. So do block and annotate pull requests; do highlight that information to developers with the pull request Checkov action.
B: You can very easily just scan the changes in that pull request, so you're not going to get a million violations from an existing vulnerable repo. Maybe you just start by making sure you're not introducing new issues, by taking violations highlighted in pull requests really seriously, across both Kubernetes and your base infrastructure. That makes the technical debt of then fixing issues in your existing main repo, in your existing codebase, a lot easier, because that number of issues is never going to get higher: you've turned off the tap, which allows you to slowly prioritize and tackle existing issues.
B: Build time and runtime: as I said, we don't have open source tools for runtime in general, but it is important to do both. And, as you saw with Checkov, we actually do have a tool for runtime for Kubernetes, because it allows us to speak the same language at both build time and runtime, which is a benefit in Kubernetes that we don't have in Terraform.
B: Things like that. And obviously, version control your policies. This is done for you with the built-in policies of Checkov, because they come with a specific release of Checkov; they're built into the codebase.
B: If you go to checkov.io and get into the contributing section in the docs, you'll see another video of mine and step-by-step instructions for writing your first Checkov check. It's super simple; give it a look. I've got my VS Code environment setup documented on there as well. If that interests you, please take a look, or reach out to me on Twitter. So yeah: version control your policies. That's super easy to do if you're using just the Checkov built-in policies, because they're already there.
B: If you're not using the Checkov built-in policies, we support loading in policies dynamically from another git repo when you run Checkov. So again, you can keep your policies versioned in a way that suits you. And that is the end of my happy little Kubernetes, Helm, Checkov, and all-things-infrastructure-security-scanning webinar. I'm really looking forward to any questions you have. As I said, if you want to have a conversation about writing checks, if you want to get more involved in codified security, we are available.
B: I'm available there on Twitter, there's my email address, and there's slack.bridgecrew.io: we have a codified security Slack channel, which is the developers of Checkov, contributors, and just all general codified security chats and topics. So please do reach out to me with any further questions after this event. Thanks very much, everyone.
A: So far we don't have any questions submitted through the Q&A box. Just a reminder that if you have a question for Matt, there is a Q&A box at the bottom of your screen; feel free to submit a question through there or via the chat, and we'll just give folks a few seconds here to see if any questions come through.
A: Oh perfect, we've got one right now.

A: No worries, I know you kind of were on a roll there. Somebody's asking about your Twitter handle, Matt, and it's right there on the screen: it's @metahertz. I'm sorry if I'm saying your name wrong, but the Twitter handle's right on the screen there for you.
A: No worries. All right, well, going once, going twice... okay, no questions, but again, feel free to connect with Matt via Slack; his Twitter is here, and also his email address. I'd like to thank everyone for joining us for today's CNCF webinar. A reminder that the recording and the slides will be posted later today to our CNCF webinars page. Hopefully everybody stays safe out there, continue to wear a mask, and we'll see everyone real soon. Thanks again.