
From YouTube: DevSecOps Lunch and Learn (West)
A: All right, so let's get started. I'm Tim; I run field and partner marketing, and I'll be moderating today's virtual lunch and learn. Thanks again for joining our virtual event. Today I have Max with me. He's our Regional Sales Manager for the West, and he'll be giving an overview of StackRox before he hands it off to Eric, our solutions engineer, to give a live demo. Max, let me hand this off to you. Let's get started.
B: Awesome, thanks Tim. Just a reminder for everyone, too: if you want to communicate or ask questions throughout this, respond through the Q&A section and we'll make sure to field those. I want to run everyone through just a high-level view of who StackRox is, our key takeaway from this, and our main differentiator. I promise this won't be too long-winded.
B: We're going to spend the majority of the time in the demo with Eric to actually show you what the solution looks like, so just bear with me for a few minutes as I get through this. Like I said, there are some important details that I'd like to cover. But starting from the top: we're StackRox. Basically, we're the only Kubernetes-native container security solution on the market.
B: Now, you'll hear Eric and myself talk about that a lot — Kubernetes-native, Kubernetes-native — and we're doing it because we're trying to drive home the fact that what we're doing is really different from any other vendor on the market. Now, saying "Kube-native" is kind of a marketing buzzword today, and it wasn't a couple of years ago, so it's interesting to see that evolution. But how we enforce security in a native Kubernetes fashion is really the takeaway that, like I said, you should be taking away from this meeting, and we'll get into that.
B
Gen
one
is
recall
that
take
a
container
centric
approach
to
container
security
and
to
where
Sakharov
spits
and
where
we're
taking
a
kubernetes
native
approach
to
container
security
and,
like
I,
said
we'll
elaborate
more
on
those
differences,
but
that
is
the
kind
of
a
key
differentiator
for
us
being
that
too
made
of
approach
to
to
security
in
containers.
This
is
a
full
lifecycle
solution,
so
we
will
secure
your
containers
throughout
the
entire,
build,
deploy
and
run
faces
of
the
lifecycle,
and
it
is
a
fully
featured
security
solution.
B
So
these
are
eight
use
cases
at
a
high
level,
but
will
provide
visibility,
will
do
vulnerability.
Management
kind
of
you
know
the
classic
image
scanning
almost
almost
commodity
now,
but
we
really
go
beyond
image
standing
by
scanning
for
vulnerabilities
in
kubernetes,
and
then
we
provide
comprehensive
workflow
capabilities
and
we'll
highlight
that
in
the
demo,
compliance
something
that
other
vendors
offer
as
well,
but
something
that
we
really
excel
at.
We
work
with
a
ton
of
companies
that
have
strict
auditing
requirements
Rather than just repurposing, say, CIS benchmarks and the like — PCI and NIST — we actually comb through all those current standards and bake them into the product to make sure your deployments are meeting those benchmarks. Network segmentation: the ability to view active versus allowed traffic and then apply updated policies in the YAML files to reduce your attack surface. Risk profiling: this is taking into account all the things that encompass risk in your environment, and then we stack-rank those risks based on severity, from most risky to least risky, so you know what to focus on and what to prioritize.
B: Configuration management is the classic shift-left mentality, right? A lot of vulnerabilities stem from misconfigurations that occur in the build and deploy phases, and that's something we certainly prevent against — as well as threat detection and runtime detection. We're also one hundred percent API-driven, so anything you'll see in the demo with Eric is something you can do directly through the API, and we have really deep integrations with all your existing DevOps systems.
B
So
you
know
these
are
just
some
examples
of
those
DevOps
tools,
but
really
any
DevOps
tool
that
you're
using
will
integrate
with
so
makes
a
really
seamless.
Operationalized
stack
rocks
yeah,
whether
you
have
you
seedy
tools
or
Bob
notification
tools.
Whatever
we
integrate
with
all
that,
but
going
into
a
little
more
detail
about
differences
between
that
gen1
container,
centric
approach,
security
and
Gen,
2
or
saccharides
fits
in
so
the
Gen
1
solutions,
the
container
centric
approach,
essentially
in
order
to
be
compatible
across
all
orchestrators.
They
have
to
do
a
policy
enforcement
at
the
container
level.
B: only — at the container level only. The downside of that on Kubernetes is that the orchestrator really has no idea what the security tool is doing. Kubernetes is declarative in nature, and that gives us an opportunity to look at security in the same way. If you're enforcing security at the container level, then you're really missing the opportunity to embed security at the Kube level and be part of that desired declarative state, right? Container-level enforcement gives you, like I said, compatibility across all orchestrators, not just Kube — but that's all it gives you.
B
If
we
find
any
security
risk
in
the
environment,
we're
going
to
take
that
information
and
shift
it
left
to
prevent
it
from
becoming
a
persistent
issue,
essentially
continuously
hardening
your
environment.
So
this
is
basically
getting
a
vantage
point
for
security
at
the
Clube
layer
right.
It's
almost
like
running
security
is
code.
C: So if you were to do a POC with us, you would be able to pull down the images from our registry once you have access, so that all the different components of StackRox can be deployed directly to your Kubernetes clusters, either as a set of YAML files and bundles or as Helm charts — either method works. There are a few core components to StackRox; when you install it, you get all of them, but I'll explain the important ones. The collector runs as a DaemonSet on every single node of each cluster
C: that you want to manage. Essentially, the collector is just a privileged container with read-only access to the host's file system. That's what's gathering all the syscalls, the network connections that are established, and any executables run inside the containers — basically the data collection layer. Then we have what's called Sensor, and there's a single deployment per cluster — just one deployment that also runs in the cluster being managed — and Sensor serves a couple of different purposes. One, it's aggregating all the data from the collectors.
C: It's also pulling in declarative information from Kubernetes directly — things like your RBAC configuration, secrets, service accounts, users and groups. It's pulling in your declarative network data, any active network policies, things like that. And third, it can also be enabled as a native admission controller for Kubernetes. So when we talk about deploy-time policies — preventing deployments with certain profiles or configurations from being deployed — we can natively block those deployments, or deny them at the API server level inside Kubernetes. That's where that can be enabled.
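To make that admission-control path concrete, here is a minimal sketch of the response a validating admission webhook returns to the Kubernetes API server to deny a deployment. The `AdmissionReview` response shape is standard Kubernetes; the denial reason below is invented for illustration, not StackRox's actual wording.

```python
# Minimal sketch of a validating admission webhook's "deny" response.
# The AdmissionReview response shape is standard Kubernetes; the
# denial reason is invented for illustration.
def deny(uid: str, reason: str) -> dict:
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,                     # must echo the request UID
            "allowed": False,               # reject the object
            "status": {"message": reason},  # surfaced to the kubectl user
        },
    }

resp = deny("abc-123", "image has fixable CVEs with CVSS >= 7")
print(resp["response"]["status"]["message"])
```

Because the denial happens at the API server, the message above is exactly what a developer sees when their `kubectl apply` is rejected.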
C: The last component is what we call Central, and that's really just the UI and the API to StackRox. That's where most of the user interaction happens: defining policies and notifiers, turning on enforcement, and setting up the different things that you want to view. Usually, customers will run this in a tooling or management cluster, or somewhere
C
That
basically
has
access
to
the
environments
that
they
want
to
manage,
and
then
the
the
sensor
and
collector
bundle
and
each
individual
cluster
sensor
is
communicating
outbound
to
central
via
port
443
and
mutual
TLS.
So
it's
just
simple
and
service
the
service
communication
to
basically
send
the
information
up
to
central.
If
you're
running
in
eks,
gke
aks,
you
typically
expose
this
over
an
e
lb
or
google
load,
balancer
or
natural
it
balancer
or
you
can
do
it
via
ingress
nginx.
You
can
also
do
it
via
a
note
port.
C
However,
you
want
to
basically
expose
the
dashboard
you
can
you
can
do
it
that
way,
and
then
we
have
a
default
scanner
that
gets
deployed
it's
based
on
Clair,
as
well
as
some
other
proprietary
things
we've
added
to
it,
but
you
can.
You
can
also
add
your
own.
You
can
use
your
own
scanner
as
well
and
we'll
get
into
that,
but
that's
the
basic
architecture.
B: Awesome. And then just the typical salesy NASCAR slide here of some of our publicly referenceable customers. This could actually be updated with some great new logos — Sumo Logic, for one — but either way, the idea here is that it doesn't really matter what type of customer you are, what vertical, or what size; we have a big federal presence as well. As long as you're running Kubernetes and that's the environment you're looking to secure, it really is a great fit for StackRox.
C: Okay, perfect. So, yeah, we'll go through each of the use cases that were listed in the Eventbrite invite. I'll take you through the technical demo of all these use cases, and if we see any questions along the way, we'll answer some of those — we'll pick some out — and, like Tim mentioned, we'll get back to you with answers on all of them by the end of the call. So what you're viewing right now is the dashboard of Central. I currently have this deployed over two clusters in GKE.
C
So
you
can
see
at
the
top
there's
a
summary
of
the
clusters
that
I
have
under
management.
Two
clusters,
six
nodes
about
64
deployments,
43
images
and
about
twenty-two
secrets
are
then
on
the
left-hand
side.
These
are
going
to
be
each
of
the
functional
tabs
for
the
different
use
cases,
so
network
graph
will
have
to
do
with
visualizing
the
network
details
in
the
traffic
across
your
thoughts
and
clusters.
C
Violations
will
have
to
do
with
runtime
compliance,
vulnerability,
management
and
and
so
on,
and
so
the
main
dashboard
is
basically
going
to
summarize
the
high-level
information
from
each
of
those
tabs
where
I
usually
like
to
begin
as
I
actually
like
to
begin
in
the
in
the
policy
tabs.
So
I'll,
just
I'll
just
quickly
show
you
right
now
how
this
works,
so,
basically,
so,
basically,
what
we?
What
we've
done
is
we've
we've
used
security
as
a
declarative
model
and
kubernetes
meaning.
C: In this case, we'll begin with the build policies. Build policies are pretty much analogous to anything that you'd want to fail a build on inside your CI/CD pipeline — when the initial image is being built and things are being deployed with tools like Jenkins, CircleCI, Travis, or any CI/CD tool that you're using. Those are going to be what we call the low-hanging fruit. It has a lot to do with: do we detect any vulnerabilities that are critical in the environment?
C
Do
you
have
package
managers
installed
inside
that
image
oftentimes?
The
first
thing
attackers
are
going
to
do
is
they're
going
to
see
what
existing
binaries
are
existing
or
running
inside
the
the
container,
but
they're
also
going
to
try
to
leverage
the
package
managers
that
already
exist
and
install
additional
binaries
additional
executables,
and
so
my
default
docker
usually
compiles
all
those
things.
It's
a
good
best
practice
to
remove
those
if
the
application
that
doesn't
actually
need
that
to
run
inside
kubernetes
we're
also
looking
at
things
like
have
you
exposed,
you
know,
4:22
in
the
image.
C: Are you deploying this from an approved registry? Things like that. These can be directly integrated into your CI/CD pipelines. So what I'm going to do is take one of my existing policies — this one is around fixable CVEs with a CVSS score of seven or greater — and I'm just going to go ahead and turn on enforcement at the build and deploy stages.
C: We're going to come back to this later and show you what that looks like. Then, when we get into deploy-time policies, this really has to do with some of the native admission controls that we elaborated on in the introduction. How are things actually configured, right? It's not just about the image and the runtime aspects, but really how you're packaging that and running it in your environment. What is the service exposure? What are the privileges and RBAC in Kubernetes? What additional things are you doing inside the deployment's configuration that could potentially lead to bad actors and bad things happening very quickly?
So we'll get into things that are very specific to Kubernetes — like this one: mounting a secret as an environment variable. This is actually unknown to a lot of people who are developing on Kubernetes, because the Kubernetes documentation has a whole section on how to mount secrets as environment variables.
C: The problem is that it's really just base64-encoded text, so you'll basically be able to read all of your secrets directly from there — and attackers know this and will take advantage of it. So it's an improper way to do it, and we flag that. We're going to actually look at how you're leveraging and sharing secrets. Are you doing things like that? Are you mounting sensitive host directories?
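The "readable secrets" point is easy to demonstrate: Kubernetes Secret values are base64-encoded, not encrypted. A minimal sketch (the secret value here is made up):

```python
import base64

# Kubernetes stores Secret values base64-encoded, not encrypted; the
# value below is invented. Anyone who can read the manifest or the
# injected environment variable can recover the plaintext.
encoded = base64.b64encode(b"s3cr3t-db-password").decode()
print(encoded)                             # the "opaque" stored value
print(base64.b64decode(encoded).decode())  # trivially recovered
```

Encoding is a transport convenience, not a security boundary — which is why flagging secrets exposed as environment variables matters.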
C
Are
you
using
a
detainer
with
readwrite
route
file
systems?
Do
you
have
require
owner
labels
on
it,
for
example,
and
so
these
things,
if
you
were
to
turn
on
enforcement,
basically
you
can
prevent
that
from
being
deployed
into
the
environment
and
then
for
runtime
level
activities.
We
automatically
baseline
all
of
the
processes
across
each
of
your
containers
in
your
deployments,
and
we
come
out
with
a
automated
baseline
on
what
is
normally
executed.
So
anything
that's
normally
executed
within
the
image
binaries
that
the
application
needs
to
perform
its
duty.
C
That's
typically
fine,
then
we
automatically
flag
any
abnormal
process.
Executions
that
happen.
What
we've
actually
done
above
and
beyond.
This,
though,
is
we've.
We've
actually
defined
very
specific
runtime
policies
that
aren't
just
like
going
to
notify
you
of
every
single
abnormality
alone.
We
want
to
actually
give
you
very
specific
things
that
we
see
happening
on
kubernetes
environments,
so
this
specific
runtime
things
that
we
look
for
to
basically
allow
you
to
enforce,
enforce
these
very
quickly
or
things
like
this.
Do
I
have
a
shell
being
spawned
by
an
application.
C
As
someone
trying
to
use
the
check,
config
manager,
service
manager
or
people
trying
to
manipulate
password
or
login
binaries
are
people
trying
to
run
Network
execution
and
and
again,
one
of
the
first
things
that
attackers
do
in
addition
to
seeing,
if
there's
any
vulnerabilities,
they
can
exploit
they're
gonna
run
something
like
an
map
or
telnet
and
they're
going
to
try
to
see
what
else
they
can
get
to
what
other
pods
they
can
access
to
compromise.
So
we're
automatically
looking
for
that
at
a
very
specific
level.
C
Again,
the
more
specific
Cooper
Nettie's
things
are
you
do
you
have
processes
targeting
the
kubernetes
api
server?
Do
you
have
processes
targeting
the
kubernetes
stats,
endpoint
and
we'll
get
into
all
these
things?
Creating
your
own
policies
is
very,
very
easy.
You
can
create
any
criteria
of
policies
across
any
of
these
life
cycles
and
we'll
get
into
that
a
little
bit.
C
Let's
begin
with
vulnerability
management,
that's
typically
where
most
people
begin
and
we'll
explain,
we'll
explain
how
this
works
for
more
of
the
build
level
policies
and
more
of
the
build
time
enforcement
that
you're
looking
to
be
looking
to
do
so.
One
of
the
most
important
things
for
vulnerability
management
is
to
actually
be
able
to
correlate.
Is
that
vulnerability
at
risk
in
the
probability
of
it
actually
being
exploited
and
as
well
as
the
danger
if
it
were
to
be
exploited?
What
is
the
surface
attack
area
in
order
to
do
that?
C
You
really
need
a
deep
kubernetes
context,
like
the
one
that
stock
rocks
provides.
You
can't
just
provide
developers
with
a
list
of
containers
and
image
vulnerabilities
or
a
UI
link
to
login
to
the
security
tool
and
have
them
have
to
really
chase
down.
Okay,
which
deployments
is
this
running
in
which
clusters
are
production
versus
test?
What
other
configuration
issues
do
I?
Have
that
make
this
arrest?
We
automate
that
analysis
for
you,
so
what
you're?
C: What you're looking at in the top left of this view is a ranking of your top risky deployments by CVE count and CVSS score. On the x-axis are my numbers of CVEs — my vulnerabilities — and on the y-axis is my CVSS score for those CVEs, and immediately, at a glance, I can see my top three riskiest deployments.
C
So
we
already
analyzed
the
images
which
deployments
they
said
that
in
which
game,
space
and
cluster,
how
many
of
those
are
fixable
and
we're
bringing
that
up
now,
let's
look
at
it
from
a
cluster
perspective,
so
you'll
notice,
the
other
views
I,
have
top
Russkies
images,
most
common
vulnerabilities
vulnerabilities
inside
kubernetes
or
inside
sto
based
components
as
well.
So,
even
if
they're
not
running
in
containers,
we
actually
analyze
the
images
for
that
are
being
used
by
those
system
level
components.
C
But
if
I
look
at
my
cluster
I
want
to
check
out
production,
so
I'm
gonna
go
to
production
and
now
I'm
gonna
get
a
very
rich
experience
with
all
my
kubernetes
level
details
so
immediately,
I
can
see
all
of
my
policies
that
are
failing
across
this
cluster
I
can
see
my
top
riskiest
images
in
this
cluster.
My
top
riskiest
deployments
in
this
cluster
and
I
can
see
things
like
namespaces
components,
anything
that
I
have
I'm.
C
Gonna
click
on
my
top
riskiest
namespace,
so
immediate,
now,
I'm
narrowing
this
down
to
ok
production
is
what
I
care
about.
My
front-end
is
in
the
top
right
of
this
graph.
I
want
to
see
now
what
my
top
riskiest
deployment
is.
My
entire
environment
and
my
quickly
looking
at
this
I
can
narrow
down
that
asset
cash
in
my
front-end
namespace
is
the
most
risky
and
when
I
look
at
this
now,
I
can
immediately
see
all
the
policy
violations
that
are
that
this
is
better
failing.
I
can
see
the
CVE
data
inside
this.
C
Actually
I
can
begin
to
actually
define
policies
around
this.
So
if
you
remember,
I
had
enabled
that
build
time
policy
earlier
around
cbss
greater
than
7.
What
I'm
actually
going
to
do
is
I'm
going
to
take
a
few
examples
of
ones
that
are
fixable
and
I
want.
To
specifically
add
it
to
that
Paula
so
directly
from
here,
I
can
go
ahead
and
I
can
add
that
or
I
can
create
a
new
policy
and
I'm
going
to
add
it
to
an
existing
policy.
That's
enforced,
so
I've
just
added
that
to
a
policy.
C
At
the
same
time,
there
are
certain
things
that
are
just
not
fixable
I,
don't
care
about
I
want
to
go
ahead,
I
want
to
use
those
those
vulnerabilities,
I,
don't
I
really
don't
want
to
see
them
and
I
can
export
this
information.
I
can
do
this,
but
this
is
kind
of
a
quick,
a
quick
way
to
view.
You
know
each
of
these
things
the
direct
links
to
those
CBE's.
C
What
the
environmental
impact
is,
where
it's
impacting
your
environment
and
we
have
all
the
links
to
those
CDs
directly
in
the
UI,
so
you
can
see
the
fixable
fixed
version
and,
and
you
can
quickly
diagnose
it
from
from
the
main
view
you
can
diagnose
it
from
the
top
down
like
I
did,
which
I
prefer
or
you
could
analyze
it
from
an
image
perspective,
and
you
could
work
your
way
backwards
and
say:
okay,
this
image
has
these
vulnerabilities,
which
deployments
does
it
currently?
So
then?
What
else
do
I
have
in
that
cluster?
A: [inaudible audience question]
C: Great question. So, yeah — as we were saying with those policies before, I actually have Jenkins set up, directly integrated with this. One of the things you can do is make a simple API call to StackRox during the pipeline build to see whether the build violates any existing policies that have been created. So, for example, I now have this integrated with my Jenkins pipeline.
C: Bringing that data directly to the developer is really important to us. We don't want them to necessarily have to log in to StackRox; we give them all those details directly in the CI/CD solution. And this will work with any CI/CD solution that you have, again, because it's just an API call to StackRox.
C: Great question — yes, you can use other scanners in the environment. We support all the registries you would want to scan, and we also support other scanners: if you wanted to use GCR's default scanner, or Quay, Clair, Tenable, or Anchore, those are options for you, where, instead of using our default scanner, that one gets used instead. On average, our scanner scans every four hours and updates the CVE vulnerability databases about every hour.
C: What I want to do now is get a general idea of what my top riskiest deployments are in my environment and why those are the most risky. What we do is compare the vulnerabilities, the runtime data, and all of the policies that are turned on — in notify-only mode to begin with, as we showed you at the beginning of the demo — and we analyze all the different attack vectors for those deployments and prioritize which one you should focus on.
C: First, in this case, it's my visa-processor deployment, which is at the top of the list. The reason this is the most risky is, if you notice, I have over ten best-practice policies that it's violating, and I also have very specific things that attackers are trying to do: I can see that an attacker is trying to install netcat and actually execute a shell in this environment. And along with the risk indicators from the vulnerabilities and those runtime events, we're also looking at what this could potentially mean.
C
So,
in
addition
to
those
vulnerabilities
in
the
runtime
executions,
I'm,
seeing
things
like
my
secret,
my
SSH
keys
are
being
used
inside
the
deployment,
so
SSH
keys
are
readable.
I
have
a
sidecar
container,
that's
deployed
in
privilege
mode
to
this.
To
this
deployment,
port
22
is
exposed
in
a
very
open
way.
Port
8080
is
exposed
to
the
Internet
in
the
cluster.
My
image
is
over
390
days
old
and
on
top
of
that,
I'm
using
a
service
account.
That's
been
granted
cluster
admin
privileges
to
the
environment.
So
pretty
much.
C
If
someone
were
to
compromise
this,
not
only
would
they
be
able
to
take
advantage
of
existing
vulnerabilities,
they
would
be
able
to
get
to
any
rolling
pod
in
the
in
the
cluster
with
root
level.
Privileges
they'll
be
able
to
get
to
my
hosts
they'll,
be
able
to
execute
against
my
SSH
keys
they'll,
be
able
to
get
out
to
the
Internet
to
install
additional
things.
So
when
we,
when
we
boil
this
down,
this
is
really
the
most
dangerous
entry
points
of
the
cluster.
C
For
those
reasons,
and
what's
nice
about
this,
this
is
where
those
deploy
time
policies
become
really
important.
So
one
of
the
things
that
we
can
do
is
we
can
bring
all
of
that
all
that
metadata
on
the
deployment
and
its
policy
violations
directly
to
the
developers.
So,
let's
say
I'm
a
developer
at
my
workstation
and
I
want
to
just
actually
call
this
tak
Roxas
API
I
want
to
check
a
gamble,
configuration
and
just
see
okay
before
I
deploy.
C
This
am
I,
gonna,
run
into
any
issues,
and
so
we
can
actually
call
it
a
stack,
Rox
API
and
we
can
actually
bring
in
all
of
that
information
directly
to
the
developer.
So
we
can
give
them
all
the
details
about
what
is
wrong
with
their
deployment.
What's
going
to
make
this
more
of
an
open
configuration,
that's
dangerous
and
how
they
should
fix
it,
so
I
can
see
I
have
an
abuti
I
have
a
failed
policy
where
my
environment
variable
contains
a
secret
I'm
using
the
boo
to
package
manager'
inside
the
image.
This
is
the
command.
C
C: I need to run to remediate that. I also have a policy that I've set up to enforce resource requests and limits, so that a container that goes rogue can't basically hose my host. Now, this can be integrated into a GitOps model — we can scan multiple YAML files inside a GitHub repo — or it can be done on a one-off basis, but the key here is that the developer should have all of the data, as early as possible, on how to fix these things.
C
And
that's:
this
is
the
information
they
would
see
if
you
had
admission
control
turned
on
and
they
tried
to
actually
deploy
it.
We're
actually
going
to
tell
them
why
it
can't
be
built
and
that's
that's
a
key
difference
between
other
container
based
solutions.
If
a
solution
is
using
a
proprietary
mechanism
to
block
deployments
outside
of
kubernetes
kubernetes
has
no
idea
what
that
security
tool
is
doing.
If
a
developer
really
is
going
to
deploy
something
and
have
it
be
denied,
it
should
be
inside
kubernetes,
it
should
be,
it
should
give
them
the
information.
C: I'm just getting into my Kubernetes environment, and what I'm going to do is run a temporary shell. I just want you to pay attention to the Violations tab on the left. Immediately you'll see that I spawn that temporary shell, and you can see how quick this is to react. Let me just change the configuration — the Violations tab is really where we track all the runtime data.
C
So
you
can
see
that
the
temporary
shell
we've
already
that
nothing
has
happened,
but
we
see
we
see
that
a
temporary
shell
got
spawned
and
we're
already
flagging
it
for
configuration
based
issues,
preventative
ly,
we're
saying:
hey,
you
have
a
policy
where
you
don't
want
a
bootie
package
manager
on
this.
Here
are
someone
someone.
We
noticed
that
this
deployed
its
actively
running
and
now,
if
I
try
to
do
things
like,
let
me
let
me
try
to
run
an
nmap.
I
can
see.
C
Nmap
isn't
found
so
I'm
gonna
run
app
to
install
nmap,
doesn't
have
it
installed.
Curl
can't
find
curl
when
we
do
an
app
update
and
now
you'll
begin
to
see
that
that's
actually
showing
up
showing
up.
Now,
on
the
left,
so
I
updated,
updated
the
package
manager
now
I'm
able
to
install
nmap
in
the
environment,
and
now
I
can
run
I
can
run
an
map,
and
so
now,
as
we're
getting
to
see
this
now,
you'll
see
that
we're
detecting
each
of
those
individual
things
based
on
the
security
incidents.
C
Let
me
go
ahead
and
I'm
actually
going
to
turn
on
this
policy
at
runtime
and
I'm
going
to
enforce
it,
and
now
what
this
is
gonna
do
is.
This
is
actually
going
to
go
ahead,
and
this
is
going
to
this
is
going
to
basically
enforce
it
until
kubernetes
to
enforce
it
I'll
get
into
why
that's
advantageous.
C
So
now,
if
I
go
back
to
my
temporary
shell,
and
let
me
just
try
to
run
app
update
again
and
what
I
should
see
immediately
is
that
that
got
that
deployment
got
deleted
and
I
see
that
it
was
enforced
one
time.
So
here's
a
key
difference,
stat
rocks
everything
is
done.
Natively
in
kubernetes
we're
not
putting
any
security,
wait
for
enforcement
in
a
proprietary
way.
All
we're
doing
is
telling
Cooper
Nettie's
to
go
ahead
and
actually
kill
that
kill
that
deployment
and
kill
that
running
pod.
C
Now
a
lot
of
other
traditional
tools
who
actually
try
to
kill
the
container
and
know
what
forced
you
to
do
is
run
agents
on
the
hosts
that
instrument
the
hosts
and
basically
send
a
send
a
request
to
kill
the
container
the
problem
is:
kubernetes
has
no
idea
what
you're
doing
it's
declarative.
It's
immutable,
so
you're,
making
an
in-place
change
to
something
that
you're
not
supposed
to
be
doing
that
in
the
first
place,
and
so
from
a
developer's
point
of
view.
C
If,
if
you're
trimming
on
enforcement
in
that
way,
not
only
does
it
become
to
become
very
hard
to
track
from
a
security
perspective,
but
from
a
developer
perspective,
they
have
no
idea.
Why
they're,
why
their
container,
why
their
deployment
isn't
coming
up?
It'll
show
something
like
zero
out
of
two
pods
running
in
the
environment
or
zero
out
of
X
running.
If
you
go
for
an
upgrade,
the
app
doesn't
come
off
because
the
container
keeps
getting
killed.
C
They
have
no
metadata,
they
have
no
information
on
why
that's
actually
happening,
and
the
difference
here
is
that
we
want
to
use
kubernetes
as
a
single
source
of
truth.
If
you,
if
you
do
this,
then
developers
understand
it's
logged
in
kubernetes.
They
understand
why
it's
happening,
they're,
able
to
tie
that
directly
to
the
policies
and
want
and
communicate
that
and
it's
much
easier
to
scale
and
it's
much
it's
a
much
lower
performance
weight
on
the
environment.
To
do
it
this
way,
so
we
believe
it's
the
right
way
to
do
it
in
the
environment.
C: That's a great question. I would say it doesn't happen overnight. Usually, customers notify at first: they'll tie this to a SIEM technology to notify and bring alerts to the right teams. Eventually, people get comfortable enforcing these things, and they begin to turn on enforcement for some of the low-hanging fruit — very specific things they don't want to happen.
C
They're
gonna
turn
on
enforcement,
and
that's
that's
really
where
it's
almost
impossible
to
turn
on
enforcement
with
a
native
technology
that
doesn't
have
a
powerful
policy
engine,
because
if
you're
killing
everything
that's
abnormal,
your
environment
is
going
to
be
going
haywire
right,
you
don't
necessarily
want
to
enforce
every
runtime
event.
You
want
to
enforce
the
ones
that
are
particularly
compromising,
and
you
want
to
be
able
to
do
this
in
an
intelligent
way.
C
As
much
so,
they
shouldn't
be
having
a
very
big
laundry
list
of
runtime
events,
if
they're
really
following
the
configurations
for
the
deployments
and
the
images
themselves,
and
that's
really
where
we
see
most
customers
begin
in
the
building
the
deploy
stage,
they
begin
to
actually
take
advantage
of
sack
rocks
to
configure
guardrails.
If
you
will,
that
and
in
these
narrow
gates,
that
developers
have
to
have
to
go
through
so
that
the
developer
knows
how
to
build
their
application
correctly
from
the
beginning
and
they're
less
likely
to
have
a
compromised
after
the
fact.
A: Cool. I know, Eric, you mentioned best practices — just something for the attendees to know: if you want to check out some best practices, subscribe to our blog; our in-house developers write a lot of great content there, and there's a lot of best practices. Let's get one more question in: "We're using Splunk — can we send notifications to Splunk?"
C
Yeah
great
great
question,
so
spunk
is
actually
a
customer
of
ours.
Tsumo
logic
is
as
well
so
spunk
suma
logic
both
use
us
for
their
internal
security.
So
we
have
deep
integration
with
those
as
well
as
others.
So
we
have
direct
integration
to
Splunk.
All
you
have
to
do
is
set
up
an
HTTP
Event
collector
inside
Splunk,
and
we
can
send
all
this
information
there
and
then,
once
you
do
so
any
of
the
system.
Policies
that
you
want
to
send
a
slunk.
C
It
will
show
up
as
a
notifier,
and
you
can
select
that
from
the
notification
drop-down
to
be
able
to
send
that
to
Splunk,
and
we
also
integrate
to
anything
via
a
generic
web
hook
as
well.
So
if
you
have
running
ELQ
stack,
if
you're
running
anything
else,
everything
can
be
configured
via
a
generic
web
hook
to
send
information
anywhere.
You
want,
but
these
are
some
of
the
default
ones
we've
created
ago
before
so
things
like
JIRA,
page
or
Duty,
slack,
Microsoft
teams,
etc.
C: Yeah, you got it — we'll roll through this a little quickly towards the end. So the next thing I'll want to do — after I've checked build-time policies, begun to manage vulnerabilities, gotten a sense of where my risky deployments are, and configured deployment-based analysis to prevent wide-open configurations for my developers — is begin to lock down my perimeter and really focus on: what are my different port configurations, the directionality, and the network flows?
C
What
my
different
port
configuration
is
directionality
and
on
the
network
flow
so
where,
where
am
I
talking
to
and
what
it's
allowed
and
then
based
on
this,
we
take
this
information
and
we
actually
Auto
generate
Auto
generate
Network
policies
that
live
in
kubernetes,
so
you
can
be
using
whatever
CNI
back-end
you
want
and
we're
going
to
be
able
to
actually
generate
and
create
the
ingress
and
egress
policies
for
your
cluster
at
the
click
of
a
button.
So,
basically
you
could
come
into
an
environment.
You
haven't
done
any
network
policies
on.
C
Let
us
analyze
it
for
for
a
couple
days
and
then
we'll
come
in
and
we'll
actually
create.
We
can
actually
create
that
policy
will
create
all
the
match:
labels,
the
policy
types
for
ingress
and
egress
and
we'll
lock
it
down
to
exactly
what
the
application
needs
to
actively
to
actively
function
as
an
application
and
we'll
cut
out
anything
from
the
outside
world
or
anything
else
that
shouldn't
be
talking
to
and
then
I
can
share
this
to
Splunk
I
can
share
this
to
my
scene.
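A generated policy of the kind Eric describes — a pod selector with match labels, plus ingress rules restricted to observed flows — might look roughly like this sketch. The app labels, namespace, and port are invented, and this is not StackRox's actual generator:

```python
def network_policy_from_flows(app_label, namespace, observed_flows):
    """Build a minimal Kubernetes NetworkPolicy manifest (as a dict) that
    allows only the ingress flows actually observed; once a policy
    selects the pods, all other ingress to them is denied by default."""
    ingress_rules = [
        {
            "from": [{"podSelector": {"matchLabels": {"app": src}}}],
            "ports": [{"protocol": "TCP", "port": port}],
        }
        for src, port in observed_flows
    ]
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": f"allow-observed-{app_label}", "namespace": namespace},
        "spec": {
            "podSelector": {"matchLabels": {"app": app_label}},
            "policyTypes": ["Ingress"],
            "ingress": ingress_rules,
        },
    }

# Hypothetical observed traffic: only "frontend" talked to "payments", on 8080
policy = network_policy_from_flows("payments", "prod", [("frontend", 8080)])
```

Because the output is a plain NetworkPolicy object, any conformant CNI plugin can enforce it.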
C
So
so
other
competitors
they
provided
Network
details,
I,
don't
think
the
uniqueness
is
in
the
data,
because
it's
very
easy
to
see
network
details.
What
things
are
hitting
what
the
network
flows
are.
The
problem
is
when
you
actually
go
to
enforce
it,
so
a
lot
of
other
technologies
will
run
proprietary
firewall
again.
It's
all
proprietary,
so
they'll
run
it
as
an
inline
proxy.
C
They
requires
read/write
privileges
to
your
hosts
and
basically
they're,
going
to
intercept
all
traffic
through
that
proxy
and
which
is
particularly
dangerous
in
production.
If
you're,
if
you
have
a
very
highly
transactional
environment-
and
you
have
a
very
dynamic
environment
having
all
your
network
be
filtered
through,
that
is
actually
very
dangerous,
both
for
performance
as
well
as
security,
because
if
something
goes
wrong
with
that
security
tool
and
that
goes
down
you're
pretty
much
left
wide
open,
you
have
no
protection.
The
difference
here
is
that
we
don't
want
to
do
it
in
a
proprietary
way.
C
We
want
to
do
the
analysis
in
an
intelligent
way,
that's
unique
to
us,
but
in
terms
of
actually
enforcing
it
that
policy
is
going
to
live
as
a
network
policy
object
in
kubernetes.
That's
the
way
that
kubernetes
expects
it
that's
the
way
the
orchestrator
understands
it
and
it's
going
to
function
a
lot
more
natively
and
smoothly
with
your
applications.
C
If
it's
working
inside
the
orchestrator
and
as
a
result
of
that,
you
don't
have
a
lot
of
weight,
that's
put
on
the
environment,
you
know
I've
been
a
lot
of
bulk,
that's
added
to
that
which
makes
it
which
makes
it
a
lot
better.
In
our
opinion,
great
and
so
I'm
gonna
move
on
to
I'm
gonna
move
on
to
compliance.
C
So
from
the
perspective
of
compliance,
this
is
also
a
very
unique
aspect
of
what
Stax
does
in
kubernetes
environments.
So
a
lot
of
a
lot
of
times,
people
will
just
repurpose
CIS
benchmarks,
it'll,
create
very
abstract
and
subjective
ways
of
measuring
compliance.
One
of
the
things
that
we
do
pretty
uniquely
is
we've
actually
done
done
the
homework
and
done
a
lot
of
hard
work
internally
on
actually
defining
each
of
these
benchmarks,
with
with
active
control
guidances
for
kubernetes.
So
the
problem
is
in
the
market.
C
A
lot
of
these
haven't
caught
up
with
kubernetes
and
a
lot
of
people
are
confused.
Well,
how
do
I
prove
I'm
actually
compliant
for
things
like
PCI
and
HIPAA,
on
an
environment
where
you
know
compliance
is
still
viewing
it
from
the
legacy
vm
world
and
we
saw
we've
dig
it.
We've
done
our
best
to
take
a
stab
at
that
and
develop
some
thought
leadership
around
that
and,
of
course
we
do.
The
things
like
see
is
docker.
We
do
very
detailed
analysis
where
you
can
actually
export
this
information
and
for
auditing
purposes
with
things
like
CIS.
C
It's
pretty
easy
to
do,
because
it's
super
specific
and
will
actually
give
you
the
exact
evidence.
Why
is
something
passing?
Why
is
something
failing
down
to
the
containers
and
images
across
each
of
the
standards
which
number
of
standard
it
is
which
cluster
it
is
in?
So
we
give
you
exhaustive
data
for
auditing
purposes,
but
then,
when
we
get
into
things
like
PCI,
for
example,
a
lot
of
this
becomes
pretty
subjective.
Where
you
know
you're,
saying:
okay,
I
want
to
limit
inbound
traffic
to
my
IP
addresses
within
the
DMC.
C
Well,
the
way
to
do
that
in
kubernetes
is
through
through
in
Brown
inbound
to
ingress
network
policies
to
be
considered
as
compliant.
The
problem
is,
if
you
have
no
network
policies,
then
there
is
nothing
preventing
ingress
to
those
pods
and
those
namespaces
which
allows
people
to
basically
do
that.
C
So
by
directly
using
our
network
segmentation,
we
can
actually
begin
to
enforce
these
things,
for
example,
and
you'll
actually
notice
I'm
at
81%
right
now,
but
I'm
going
to
go
ahead
and
I'm
going
to
scan
the
environment
again
because
I
know
I
applied
a
couple
policies
to
my
environment
and
so
I'll
actually
see
minus
score,
went
up.
My
PC
I
went
from
81
to
86
percent.
C
I
can
actually
see
because
I
did
I
created
an
entire
ingress
policy
for
my
production,
cluster
I
went
from
32%
on
this
to
a
hundred
percent
and
then,
if
I
explore
the
details,
obviously
that
Network
policy
will
be
provided
as
evidence
inside
inside
something
like
PCI,
so
it'll
actually
show
why
I'm
passing
or
failing
based
on
the
existence
of.
Basically,
the
existence
on
that
pass
deployment
as
ingress
same
thing
for
HIPAA
same
thing.
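The jump from 81% to 86% is just the ratio of passing checks to total checks, recomputed after the new policy lands. A toy sketch (the control IDs and the 50%/75% numbers are invented for illustration, not Eric's figures):

```python
def compliance_score(results):
    """Percentage of compliance checks that pass.

    `results` maps a control ID to True (pass) or False (fail);
    the IDs used below are made up for illustration."""
    passed = sum(1 for ok in results.values() if ok)
    return round(100 * passed / len(results))

before = {"pci-1.1.4": False, "pci-1.2.1": False, "pci-1.3.2": True, "pci-2.2": True}
# ingress policy added -> one failing control flips to passing
after = dict(before, **{"pci-1.1.4": True})
```

With these invented inputs, the score moves from 50% to 75%; the mechanism is the same regardless of how many controls a standard has.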
C
The last thing I want to end with, before we get to final questions, is configuration management. Configuration management is really things like RBAC and secrets: visibility into my environment, how things are being shared, how things are configured for permissions, and my roles and groups. So immediately I can begin to look at things in a very specific way. If I go to, say, my production cluster, this is where you're going to get a lot of visibility into the cluster and its configuration.
C
So
if
I
look
at
my
production
cluster
now,
I
can
see.
Okay,
which
deployments
are
violating
which
policies
at
a
high
level
I
can
see.
All
of
my
CIS
controls,
whether
I'm,
passing
or
failing
those
I
can
look
at
all
my
secrets
in
this
cluster,
which
deployments
those
secrets
are
tied
to
what
is
hosted
within
those
secrets.
C
What's
the
public
certificate,
what
was
the
issue
or
what
was
the
day?
I
can
look
at
the
images
my
namespaces
I
can
look
at
users
and
groups,
so
I
can
see.
Okay,
which
users
and
groups
have
cluster
have
been
enabled.
So
I
can
see
that
my
service
accounts
have
cost.
Sir
I've
been
stable.
What
rule
is
on
that?
What
service
accounts
are
tied
to
this?
Which
deployments
are
those
service
accounts
tied
to
what
are
the
specific
permission
levels
inside
this
that
make
this
particularly
dangerous?
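Finding which service accounts are bound to cluster-admin, as demoed in the UI, boils down to scanning ClusterRoleBindings. A minimal sketch over plain manifest dicts — the example bindings are made up, and real tooling would read them from the API server:

```python
def cluster_admin_subjects(bindings):
    """Return (namespace, name) for every ServiceAccount bound to the
    built-in cluster-admin ClusterRole. `bindings` is a list of
    ClusterRoleBinding manifests represented as dicts."""
    found = []
    for binding in bindings:
        if binding["roleRef"]["name"] != "cluster-admin":
            continue  # only the highest-privilege role is of interest here
        for subj in binding.get("subjects", []):
            if subj["kind"] == "ServiceAccount":
                found.append((subj.get("namespace", "default"), subj["name"]))
    return found

# Invented example bindings:
bindings = [
    {"roleRef": {"name": "cluster-admin"},
     "subjects": [{"kind": "ServiceAccount", "namespace": "kube-system", "name": "deployer"}]},
    {"roleRef": {"name": "view"},
     "subjects": [{"kind": "ServiceAccount", "namespace": "prod", "name": "reader"}]},
]
risky = cluster_admin_subjects(bindings)
```

Mapping each risky service account back to the deployments that mount it is the step that turns this list into something actionable.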
C
Absolutely, custom policies are really easy to do. I can create a new policy, let's say for RBAC. I want to say: I want minimum RBAC permissions like these inside this cluster. I also want to prevent writable host mounts, and I want to prevent writable volumes. And I want to apply this specifically to my production cluster, in my Payments namespace (with a capital P). I'm going to send it to Slack, pick the security best practices category, set the severity level to high, then deploy it, and I'll just tag it "RBAC policy". And that's it.
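A custom policy like the one just built is essentially structured data: a scope, a set of criteria, and notifiers. A hypothetical representation and a toy evaluation check — the field names are invented and are not StackRox's actual policy schema:

```python
custom_policy = {
    "name": "RBAC policy",
    "severity": "HIGH",
    "categories": ["Security Best Practices"],
    "scope": [{"cluster": "production", "namespace": "Payments"}],
    "criteria": {
        "writable_host_mount": False,  # disallow writable host mounts
        "writable_volume": False,      # disallow writable volumes
    },
    "notifiers": ["slack"],
    "lifecycle_stages": ["DEPLOY"],
}

def violates(deployment, policy):
    """Toy check: does a deployment's config trip the writable-mount criteria?

    `deployment` is a simplified dict of observed settings, not a real
    Kubernetes Deployment object."""
    crit = policy["criteria"]
    return (
        (deployment.get("writable_host_mount", False) and not crit["writable_host_mount"])
        or (deployment.get("writable_volume", False) and not crit["writable_volume"])
    )
```

Keeping policies as data like this is what makes them scopeable per cluster and namespace, and routable to different notifiers.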
A
C
Great question. We have the resource requirements for each of the components in our documentation; it's very, very lightweight. We set requests and limits for everything. For Central, which is the UI, it's about 1.5 cores and 4 GB on request, with a 100 GB database. The scanner is 0.5 cores and 500 MB. Sensor, which runs one per cluster, is 0.5 cores and 500 MB, and then the collectors run as a DaemonSet.
C
Great question. For the network segmentation portion, it really works as complementary to Istio. They provide zero-trust security at layer 7, a lot of great things with application endpoints at layer 7, and then we provide the L3/L4 firewalling. With the additional information we get from Istio, we're also able to specify better network policies: not just match labels for pods, but actually application endpoints specifically. So we can get a little bit more nifty with the ingress and egress policies by incorporating application endpoints, since we have that description in the network policy itself.
A
C
Migrate
does
every
four
hours
we
run
a
scan
of
environment
depending
on
how
many
images
and
deployments
you
have,
that
might
be
taking
a
little
bit
longer
or
less.
We
have
customers
earning
up
to
fifteen
hundred
nodes
and
ten
thousand
you
know
deployments
and
customers
running
20
nodes
and
100
deployments,
so
it
just
really
depends
on
the
size
of
your
environment,
but
every
four
hours,
okay,.
A
Cool
and
then
can
you
pull
out
the
next
step
slide.
Eric
I
just
want
to
cover
this,
since
we
have
about
30
seconds
left
thanks
again
guys
for
joining
our
virtual
event
today
will
be
hosting
a
series
of
these
to
look
out
for
different
topics
and
we're
always
looking
for
additional
topics,
something
to
look
out
for
join
our
need
up
group.
We
just
started
this
a
couple
days
ago.
A
It's
just
to
get
the
Cates
community
together,
it's
our
West
Coast
kubernetes
and
cloud
native
security
needeth,
and
this
is
where
we're
gonna
have
some
virtual
events
and
a
series
of
events
that
are
not
specific
to
stack,
rocks
but
more
specific
to
just
kubernetes
and
cloud
native
security
as
a
conversation
piece.
So
for
next
steps
we'll
be
sending
all
of
you
guys
a
$25
doordash
gift
card
electronically.
So
look
out
for
that
in
your
inbox,
that
that
should
hit
your
inbox
tomorrow
and
then,
let's
schedule
a
meeting.