From YouTube: StackRox Overview and Demo Ali Golshan (StackRox) and Kirsten Newcomer (Red Hat) | OpenShift Commons
Description
StackRox Overview and Demo
Ali Golshan (StackRox) and Kirsten Newcomer (Red Hat)
OpenShift Commons Briefing
hosted by Karena Angell (Red Hat)
Link to Slides: https://github.com/openshift-cs/commons.openshift.org/blob/master/briefings/slides/StackRox%20Overview%20Feb%202021.pdf
https://commons.openshift.org/events.html
02/02/2021
A: Hello, and welcome back to another Commons briefing. I am your host today, Karena Angell; I'm one of the OpenShift product managers, and we have a very special guest. Today we have Ali Golshan, who is co-founder and CTO of StackRox, and I'm sure you've heard by now, hopefully, about the intent to acquire StackRox by Red Hat. StackRox has been a leader in the container security market and a great partner to Red Hat.
B: Great, thank you. Okay, great. So I'll do a quick, high-level overview of what StackRox does and who we are. I have a few short slides just to set the context, and then I'm going to jump into a product demo and do a deeper dive into use cases and how we think about solving customer problems, and, you know, DevOps and developers and security teams' problems. So: StackRox at its core has been built around cloud-native security.
Our premise is that you have to be able to secure the entire lifecycle of an application. That's from the moment the build has happened, to when you're deploying things, to when you're running things, and the core concept for StackRox is Kubernetes-native. So what we focus on is: how do we leverage the infrastructure? How do we leverage the underlying Kubernetes constructs and OpenShift constructs to be able to apply a lot of the things that, in security, have typically been bolted on versus built in?
So at a high level, when we talk about the benefits of a Kubernetes-native approach to security, these are really the three large pillars we constantly think about.
One is lower operational cost. That really centers around the fact that a lot of the typical policies, languages, or even rules written or built by security products typically either get built in some proprietary language or get implemented using the tool itself, through a third-party implementation.
However, for us at StackRox, when we think about Kubernetes-native and how we lower the operational cost for users, it's: how do we align developers, DevOps, and security teams to be able to use a common language? And the common language is really all these components built into Kubernetes. So as an example, and I'll show you this later on in the demo itself: when we build micro-segmentation rules or firewalling rules, we use things like network policies in YAML, and while we develop them, we actually enforce those through Kubernetes itself.
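As an illustrative sketch of the kind of network policy described here (the names and namespace are hypothetical, not taken from the demo), a Kubernetes NetworkPolicy in YAML might look like:

```yaml
# Hypothetical example: restrict ingress to a backend deployment so
# that only pods labeled app=frontend may reach it on TCP port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: backend          # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # the only allowed client pods
      ports:
        - protocol: TCP
          port: 8080
```

Because this is a standard Kubernetes object, it is enforced by the cluster's own network plugin rather than by a bolted-on agent, which is the point being made above.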
The other part of it is reducing the overall operational risk. If you build things into your infrastructure, you're inherently building some trust into OpenShift and Kubernetes to handle this entire layer for you. So reducing operational risk means leveraging the already existing components of your infrastructure, for example the network segmentation rules that come with network policies, or admission controllers, or your existing CI/CD, to implement a lot of these particular operations.
So not only does it become native, but it doesn't break your existing workflow, and because it's already using your infrastructure to implement it, the risk overhead is substantially lower than using a third-party tool that could fail or cause a conflict with your existing first-class citizen, which is potentially Kubernetes or some other tooling. And then, lastly, there's this notion of increasing developer productivity.
The way we think about this is that there is an abundance of knowledge you can gain by monitoring applications and infrastructure at runtime, but being able to condense that into actionable insights, as data or code, and merge it back into the development process is a very key component of increasing developer productivity. At the same time, not breaking the developer's workflow is a very key concept.
This is another area where, for example, we implement things as part of the CI or CD, whether it's vulnerability checks or vulnerability scans, or policies that prevent particular deployments through admission controllers; all this information gets fed back to the developers. Now, we do this in many different forms. We can plug into your typical ticketing or messaging solutions, whether it's PagerDuty or Jira or Slack, but the other part of it is that, through StackRox's own CLI, we actually inject ourselves into the CI process.
So when a developer is working through their natural flow, we can give that feedback in the CI itself, in the CLI itself, and have the developers see it as part of their native workflow. These are very core concepts to how we develop, and the key concepts are: we want to make sure we're not breaking infrastructure; we're leveraging the tools and capabilities that already come with your infrastructure; and we ensure developers see this feedback in their existing workflow, as early as possible.
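As a rough sketch of this kind of CI hook, assuming StackRox's `roxctl` CLI and a generic GitLab-style pipeline (the endpoint, token variable, and image reference below are placeholders, not values from the demo):

```yaml
# Hypothetical pipeline job: check the freshly built image against
# StackRox build-time policies and fail the job on a violation.
image-check:
  stage: test
  script:
    - export ROX_API_TOKEN="$CI_ROX_TOKEN"   # placeholder CI secret
    - >
      roxctl image check
      --endpoint central.example.com:443
      --image "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

A non-zero exit from the check fails the pipeline, which is how the policy feedback lands in the developer's existing workflow rather than in a separate security console.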
That way they can merge it, and basically combine it back with their code base, to inherently harden applications and build more secure applications and infrastructure, versus this disjointed process where you take actions at runtime and developers and DevOps teams don't really understand what is actually happening. Now, with that, there's also the higher-level question of what it is that StackRox actually offers.
A good way to think about StackRox is in really three big buckets: securing the build, the deploy, and the run. The way we functionally think about that is: StackRox helps you secure the supply chain, then it helps you secure your infrastructure, and then finally it secures the workloads that are running on top of that infrastructure.
This is an area where StackRox cares deeply about operationalizing vulnerability management and workflows, versus the scanning itself. That's why StackRox does provide its own best-in-class scanner, which looks for packages and tools and languages and layers and all these sorts of things, but we also integrate with other existing vulnerability scanners, whether it's Clair, Quay, Anchore, Tenable, or other tools. What you see at the bottom here is a sampling of tools; I'll show you a more complete list of the tools we integrate with directly from the product demo itself.
We also integrate with all the registries that are out there, so very standard: whether it's Red Hat's, or Amazon's ECR, or Artifactory, or IBM Cloud Container Registry. And then, at the deployment stage, we focus on securing your infrastructure. Securing infrastructure is about things like applying the right CIS benchmarks for Docker and for Kube, implementing things like least privilege and best practices for security, and ensuring things like RBAC controls are properly configured.
For example: segmentation rules are properly applied to your pods and deployments and namespaces, and the infrastructure that is now ready to have workloads deployed on top of it is properly configured, so you have all the best preventative and hardening measures already in place. The reason for this is that it creates a funneling effect for us: if you build securely, ensure your supply chain is hardened, and ensure your infrastructure itself has preventative measures in place and is hardened, then the scope of the attack surface at runtime naturally reduces. So what we can focus on from a runtime standpoint is not only highly efficient but very scalable and highly automated, and this feedback loop that we talked about, gathering insight at runtime and feeding it back into build time, compounds this value over time, continuously reducing the attack surface and allowing you to really focus on the things that matter most to you at runtime.
Overall, StackRox, as we mentioned, is Kubernetes-native, but this means any Kubernetes distro. It doesn't really matter what the wrapper is around Kube; we can deploy on it, whether it's managed services like EKS, AKS, GKE, PKS, whether you're rolling out your own vanilla flavor of native Kubernetes, or whether you're using OpenShift. And on top of that, it doesn't matter where you run it: public cloud, private cloud, or hybrid cloud, which seems to be the most emerging component we see all the time. We run anywhere, and we can protect anything under the Kube umbrella.
Now, architecturally, I'll just show you how StackRox deploys, and then we'll jump into the actual product demo. StackRox has really three main components: StackRox Central, StackRox Sensor, and StackRox Collector. StackRox Central is one per customer. Central is really the brains: it's the UI, it's the API server, and it's where the scanning happens and the data analytics and analysis happen. This is also where all the third-party tools and other technologies and solutions integrate with StackRox, so this is where we can output alerts or do webhook integrations.
Then we have the API, and the StackRox Sensor. Sensor and Collector deploy as DaemonSets, and they're Kubernetes-native. They run with read access only; they don't have permissions to write to your clusters. As a result, they don't need high levels of permission, and that naturally reduces the attack surface that exists in your infrastructure.
Sensors also act as mutating webhooks and as admission controllers, so they can have policies enforced cluster-wide, whether those policies are written by the customers themselves or are the 60-plus out-of-the-box policies that we produce. And then the Collector itself is one per node. The Collector can run in two different modes. Our preferred way leverages the extended Berkeley Packet Filter (eBPF) to collect system information, processes, and network information.
This is the piece that contributes to detection, response, and forensics. But there are also customers of ours that have more dated versions of the Linux kernel that don't have eBPF, and we allow the Collector to run as a kernel module if needed. The way this works is that Collectors gather information from every node and send it back to Sensor.
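As an illustrative sketch (all names and the image are hypothetical, not StackRox's actual manifest), a per-node collector like the one described is what a Kubernetes DaemonSet expresses: one pod scheduled on every node.

```yaml
# Hypothetical sketch: a per-node collector deployed as a DaemonSet,
# so exactly one collector pod runs on each node in the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: collector
  namespace: stackrox
spec:
  selector:
    matchLabels:
      app: collector
  template:
    metadata:
      labels:
        app: collector
    spec:
      containers:
        - name: collector
          image: registry.example.com/collector:latest  # placeholder
          securityContext:
            readOnlyRootFilesystem: true  # matches the read-only posture described
```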
Sensor correlates it across the entire cluster and sends it back to Central, and then Central collects and correlates information across all the clusters and pushes back down things like rules and permission requirements and controls. All of this information is pushed back into Kubernetes for enforcement or action. So, as an example, if there is potentially malicious activity, anomalous activity, or an attack on a particular pod or deployment, we try not to intercept the system call table and kill system calls or processes.
What we do instead is instruct Kubernetes to tear down that particular pod and just spin up a new one. However, because the Collector runs on that particular node, all of the forensics and investigation data you need, from processes and system calls and network information, is collected, stashed in Central, and exportable into your SIEM or other tools. So you can still go back, do investigations and forensics on what happened, and build policies on top of that, yet we're not actually conflicting with your first-class citizen: your business logic and your uptime requirements.
Okay, I'm going to jump in. I'm assuming that everybody can see my dashboard; if not, please let me know. A couple of quick things to mention when you see the StackRox dashboard, as there are two things to take into consideration. First of all, everything you see here is presented as Kubernetes constructs. We don't talk in the notion of images or containers; we talk about pods and deployments and actual applications that are bundled together in a particular namespace.
Next, everything you see here is exportable through APIs. There's no black box and no magic under the hood: every bit of data you see in our UI, you can export through the API. And the last part is that everything can be parsed, so you can say, well, I don't really want to see everything under the sun; I'm, for example, only interested in my clusters that run in production.
The same sort of filtering can be applied to our API, so if you want to do very fine-grained searches or exports of data, you can use the same sort of flags and sets as far as the export through the API goes. So when you land on the dashboard, the first thing you see is an overview of everywhere StackRox is deployed. You can see the overall violations, which we'll jump into, and you can see your compliance overviews.
These are all categories of pre-built policies that come out of the box from StackRox, configured and ready to go, and we'll get into a little more depth on what each of these particular things means and how they actually get implemented. From a general workflow standpoint, I'll walk through how StackRox typically works with customers and how customers operationalize us, from the standpoint of vulnerability management, starting at build time, through deployment, all the way through runtime. Customers typically start from a standpoint of integration.
It's very simple: if you know how to deploy a Kube app, it's that simple to deploy StackRox. There's a little bit of nuance depending on how much scale you want to deploy us to, but at a standard scale, when customers typically want to do a proof of value, it takes StackRox anywhere between 15 to 20 minutes to deploy. And it's important to mention that StackRox is an on-premises solution; this is not a SaaS service.
This entire product deploys on your own premises, regardless of whether your premises are public, private, or hybrid. Typically, what customers do is put our Central in a dedicated namespace, or in a cluster by itself, and then everything else, the Sensors and Collectors, deploys as Kubernetes DaemonSets.
As part of this, the first place customers typically start is around integration, for example with some image integrations. These are the out-of-the-box integrations we have, and it's very simple how quickly you can get started: you give the integration a name and a type, you point it at the endpoint, and we can test and validate that the integration works, very quickly.
These are the plugins for notifications: Slack, Jira, output into email, PagerDuty, Splunk, Teams. We also have our own generic webhook, as well as AWS Security Hub and syslog. You can also do backups to S3 or Google Cloud Storage, or use our own APIs to create tokens and scoped access controls. These are the typical ways customers get started, and once they get started and integrate us into their workflow, the vulnerability management side is where they typically begin.
Now, there's a lot of depth here, and I'll showcase some of the unique components of StackRox. First, StackRox doesn't just show you vulnerabilities that exist in your images; we also look for vulnerabilities that exist inside Kubernetes and Istio themselves. The premise is obvious: if your infrastructure and your source of truth are insecure and somebody can own them, it doesn't really matter what your containers and applications are doing and how hardened they are. If someone can take over everything as root and admin, there's a lot more at risk.
Here you can say, for example, that you care about risk based on a particular cluster, and you can see production is obviously at the top. You can jump into just your production cluster, and from there you can see different things: you can see policies that have been built against vulnerability management, and at the same time you can see the fixable vulnerabilities that exist in what is currently deployed, loaded for you to fix. That's typically where customers start.
So these are the categories we typically focus on for configuration management, based on policies that StackRox itself produces, and I'll show you what some of them mean: covering CIS benchmarks for Docker and Kubernetes, and viewing all things related to admin roles, as well as secrets. And you can slice and dice configuration management by clusters, namespaces, nodes, deployments, images, or secrets.
Where is SSH misconfigured? I can see the deployments that are misconfigured: my jump host, which may not be as critical, and my visa-processor, which is highly critical. So from here I can jump into the visa-processor and see what other misconfigurations exist on this particular deployment. I can see I have Apache Struts, I have privileges that have not been revoked, and here's the SSH issue we talked about. So if I want to fix this now, I can just say "edit this policy."

Here StackRox allows you to walk through the policy workflow, and then all you have to do is simply turn on our policy enforcement. Again, because of our native integration into Kubernetes, enforcement and security are extremely simplified: we actually look at collisions, we determine the best place for you to enforce this policy, and all you have to do is turn it on. This is where StackRox, using webhooks, becomes an admission controller and essentially blocks any future releases or deployments with this particular misconfiguration. As long as you save that, it is now enforced. So that's a quick walkthrough of how quickly you can go from misconfiguration to resolution.
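As a rough sketch of the mechanism described (not StackRox's actual manifest; the service name and path are hypothetical), an admission controller is registered with the Kubernetes API server through a webhook configuration like this:

```yaml
# Hypothetical sketch: register a validating admission webhook so the
# API server consults the sensor service before admitting deployments.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-enforcement
webhooks:
  - name: check.deployments.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: sensor              # placeholder service name
        namespace: stackrox
        path: /admissioncontroller
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    failurePolicy: Ignore  # fail open so the webhook cannot block the cluster
```

The `failurePolicy: Ignore` choice mirrors the "don't conflict with your first-class citizen" point made earlier: if the security component is unavailable, workloads still deploy.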
Compliance, for us, is actually a differentiated function, and I'll explain why. Unlike a lot of the tools in the industry that treat compliance as a subset of CIS Docker and Kube, we built best-in-class support for each control under every one of these compliance workflows. We took all the HIPAA controls that apply to Kube and Docker and built them into controls, and the same with NIST 800-190 and 800-53, as well as PCI.
So the way you can orient your compliance workflow, rather than getting all this overwhelming information, is to say: well, I either care about particular clusters, like production, or I might care about a particular namespace. For example, I care about my payments namespace; I can jump into this payments namespace and see exactly what compliance issues I have in this environment. I can see that overall I'm at 34% compliance, and I have PCI, NIST 800-190 and 800-53, and HIPAA running.
If I wanted to dive deeper into PCI, I can see the entire standard here: the controls that are passing and the controls that are failing. Looking at the controls that are failing, I can see, for example, requirements for firewalls that are not in this particular zone and need to be implemented, and I can see which two particular clusters are impacted by this.
The other part that is very important for StackRox is that, because we have that Collector running on every node, we actually check the path for every one of these controls. So if you export this, not as PDF but as CSV, we give you proof of audit for every single one of those controls. This is a highly automated process, and I'll break a cardinal rule of what not to do during a demo, which is to do a live run of our scan. You can see we're running 67 deployments here.
It's not massive, but the point I'm trying to make is: if you run a scan here (you can see I just hit run), you can see how long this takes. And that's it; it's done. The reason this is important is that even if you scale this up by orders of magnitude, you're still talking about tens of minutes.
The reason this becomes very important is that our customers can run compliance checks on a nightly basis, or at least a weekly basis, so they can find drift in their compliance and their violations, versus doing this once a quarter or once every six months and having a large volume of fixes they have to go do. This is an area where we've invested a lot of time: we've taken a lot of very remedial, manual work, made it highly automated, and attached proof to it.
Okay. Now, typically, these are all the things you need to make sure you've hardened your supply chain and that your infrastructure is properly configured and ready for deployment. So what happens when you actually deploy workloads on top of this environment? What ends up happening is that StackRox draws you a visualization of everything you have deployed. At a very high level, you can see we're looking at our production cluster. You can also look at other clusters if you choose to, but let's focus on production here.
Everything in this shaded box that you see is our production environment, and then we also show you egress and ingress to external nodes and what they're talking to. Every one of these boxes you see here is a namespace, and if I zoom in, inside these namespaces is where you see the actual deployments. Now, if I hover over these deployments, I can see ingress flows, egress flows, what ports and services are listening, and the type of TCP processes and services that are actually active.
These are all very relevant; they all get logged, and you can export all of this to validate things. The reason this is very important is that, as we look at how you build your images, how you deploy, and the requirements and dependencies of your applications, we build what is called your active connections map. The active connections map is every path and service your applications actually need in order to conduct their normal operations.
On top of this, we also show you what is allowed. If I switch back and forth, you can see things that are blue versus things that are red. "Allowed" shows you the entire permissive attack surface: these are all misconfigurations, services that are missing their network policies or firewalls and, as a result, are reachable without you really knowing, or wanting them to be reachable.
The reason this becomes very important is that once we lock that baseline in, you have the ability to add other services, or other potential communication paths that may not be part of the baseline, to your baseline if you choose to. So it's not an automated process where everything is either locked or not: you have the ability to determine whether you want to add additional flows to your baselines, and obviously this is very important for fine-tuning and for adding other services that may not be part of your manifest, or that might be edge use cases you have. Now, the typical question becomes: okay, if I have all this permissive attack surface and StackRox is aware of it, what do I do about it? This is where you can look at StackRox and say: StackRox, take the past week as the baseline that you understand, and generate and simulate network policies for me.
This is where StackRox generates an entire YAML policy for you; what we chose here is the entire cluster. You can scope this down if you want: you can say I only care about, for example, my front end, and let's say my back end, and if you went through the same workflow you would only generate policies for those. But for the sake of this particular demo, we'll just do it for the whole cluster, and you can see it once you generate it.
The reason we simulate it is that we treat your infrastructure as code; these are all declarative policies. We can check all of our policies against the communication paths and the policies that already exist inside Kube, and ensure this segmentation and firewalling policy does not break any of your existing communications. We also simulate it for two reasons. One, our preferred way is to allow you to share this back through your change control, integrating it into Slack or any other Git process you have, so your developers can merge this policy on the next build. Or you might be under a particular attack, or have a breach, where your team might legitimately need to apply this policy in real time to your infrastructure, to prevent something that is happening. Based on that, this is how StackRox helps you lock down your environment, and now that I've shown you "Allowed", you can see that all these other application deployments have now turned blue, to state the obvious.
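For reference, a common starting point for this kind of generated lockdown (a sketch, with a placeholder namespace, not the demo's actual output) is a default-deny ingress rule, on top of which specific allow rules like the earlier example are layered:

```yaml
# Hypothetical sketch: deny all ingress traffic to every pod in the
# namespace unless another NetworkPolicy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}      # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress          # no ingress rules listed, so all ingress is denied
```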
We don't actually go tamper with anything in Kubernetes itself, but everything that is your applications on top ends up getting locked down. Now, this is sort of our entire workflow process to build in best practices and put hardening and preventative measures in place. At the end of this, naturally, some things potentially get through: you have attacks, you have potential exploits, or you have vulnerabilities.
This is where the detection and response part comes in for StackRox. This is the standard workflow for how we detect and alert you, and everything in this particular workflow can be exported. But let me just show you something: for example, say we want to find something that is running with process UID 0, because it doesn't actually have anything dedicated. First of all, what we do is capture the entry point.
We capture the container ID and the argument that was passed and written, and if there's even more information here, we capture the entire forensics data for you, so you can go back, correlate this, and see if this particular information exists anywhere else in your environment. Again, StackRox does not act as a SIEM or a data lake; we have native integrations, so you can merge and export all this information into your environment. The reason StackRox is very proficient at doing this is that, under the hood, StackRox is really an asset inventory management and tracking tool. From the moment code becomes an image, we track it, based on its dependencies, its requirements, where it's deployed, where it's actually loaded and running, and all the other dependencies and communications it has. So when we alert on something, we give you the entire asset information around this particular deployment: all the way from the commands it's running to its arguments, even whether it had volumes or secrets or additional security context that you needed. And then this is the policy that gets generated and sent into your Slack or any other tool you have. As an example, if this was integrated into your build time, let me just show you a lifecycle alert that is for build.
What ends up happening here is that for anything at build and deploy, this information around rationale, description, and remediation can be directly output into the CLI, so the developer can immediately see it if it's violating something. This is, again, how we ensure high velocity for developers when they're interacting with this particular tool. Now, finally, when you have all these components in place, the culmination of all these outputs becomes what we call our risk metrics and risk ordering.
This is where we stack-rank all the applications you've deployed, from most risky to least risky. There's nothing that has zero risk; it's just a question of what risk you're willing to take on. So let's take a look at our visa-processor, which is most risky. Why is that? Well, we can see these are all the policy violations that are causing this particular deployment to be risky.
So when we actually run these services and these containers, we build an entire baseline, and one of the unique, differentiated things StackRox does (and this is where we leverage Kubernetes-native constructs) is that, rather than just baselining on a container, we use what we call pod consensus uniformity: all the pods should be uniform and run the same way.
So if there are edge cases or anomalies, we use that conformity to drive out noise and really be able to signal on a particular process or function that is truly malicious. The process baselining allows you to add all these baseline components yourself if you want to; StackRox does this automatically and flags anything that is captured outside the baseline.
You also have, as I mentioned, the ability to add these to your allowed list or remove them from your disallowed list. So if you said, well, I actually want to allow somebody to run bash, that's fine: you can add it to the baseline. At the same time, you can remove something like sudo from your baseline and say, I don't actually want anyone to run it. And since I just brought this up: even if I expand this, I can see whether there was a command run.
So this is not blind to historic events, either. And then, finally, if you look at the entire set of processes, you can see the entire execution table, which is a time-series view of what executed where. This is very useful when you're doing forensics and incident response, and if I were to click through this, you can see that every different asset here has different requirements and different output as to why something was flagged as very suspicious. From this workflow you can jump back into the network side and implement network policies if you choose to; at this point there are no recommendations, because obviously we implemented segmentation and firewalling across the whole cluster. The last couple of quick things I'll show you: these are all the system policies that come built out of the box in StackRox. Currently there are 63, but obviously we allow our customers to create new ones, or duplicate the ones that are here and modify them.
You can modify things here directly, by adding or changing the severity, which changes the risk scoring. You can determine what lifecycle stage you want to apply this to, the rationale, and the categorization, and you can even restrict the scope, or exclude a particular scope from this particular policy. When you hit next, the other thing you can do is add other policy criteria. So you can say: I want to add, for example, an understanding of CVSS score to this particular policy, or, because this is Shellshock, I want to understand whether there's process activity correlated to this, by process name. This is very simple: you can combine criteria as an AND or as an OR and add them to a different component, so it's a very simple drag-and-drop policy workflow. Once you've modified your policy, when you hit next, StackRox will audit your workloads and tell you which deployments are actually impacted if you apply this policy, so you're not going in blind, not knowing what will really be impacted if you enforce it. And then, finally, StackRox will tell you where it's best to apply this. Your view might be: well, I'm actually okay letting developers build with something that has Shellshock; I just don't want them ever to deploy it in production, and I want to prevent that. So you can mix and match how you do that.
B
Finally, all of our API references are built into the UI. This is for you to be able to automate all the workflows and all the use cases we talked about directly from the product itself. Samples, blueprints, and snippets of code are built in. Our entire help center, which is a getting-started guide, is built into the dashboard as well, because it takes 15 to 20 minutes, as we discussed.
B
Most of our customers have gone down this track of getting things up and running themselves. And then the last thing I'll show you is the search capability. Because we have all this inherent knowledge under the hood, we allow you to actually query on top of it. So you can say: well, you know, nothing you showed me was what I was looking for, but I'm interested in understanding
B
if, somewhere in my production cluster, I have a CVE, for example from 2020. This way we actually parse all the data we have seen across your infrastructure: we tell you what violations are associated with this search, what secrets are associated with the specific search term. And we're actually expanding this to make it boolean, so you can even add processes or functions or even more specific indicators. For lack of a better word, we're allowing you to basically write a search query on the enumeration of your infrastructure that we have presented. And with that,
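A toy version of that kind of Key:Value search over an enumerated inventory, with invented fields and records, might look like:

```python
# Minimal sketch of searching an enumerated infrastructure inventory with
# "Key:Value" terms, roughly in the spirit of the dashboard's global search.
# The inventory fields and records are invented for illustration.

inventory = [
    {"cluster": "production", "deployment": "visa-processor",
     "cve": "CVE-2020-8554", "secret": "payments-tls"},
    {"cluster": "security", "deployment": "asset-cache",
     "cve": "CVE-2017-5638", "secret": "cache-creds"},
]

def search(query, records):
    """Return records where every Key:Value term matches as a prefix."""
    terms = [t.split(":", 1) for t in query.split()]
    return [r for r in records
            if all(str(r.get(k.lower(), "")).startswith(v) for k, v in terms)]

# "Do I have any 2020 CVEs in my production cluster?"
hits = search("Cluster:production CVE:CVE-2020", inventory)
print([h["deployment"] for h in hits])  # ['visa-processor']
```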
B
That concludes the overall presentation for StackRox. So thank you very much, I'll pause there, and I'm happy to take any questions.
A
I don't know. Kirsten, she is the... what do you cover?
D
Security. Security for OpenShift, DevSecOps for OpenShift, so security throughout the stack. I collaborate with other members of the product management team. And I did notice, Karena may have already been getting ready to ask you these, Ali: we've been trying to answer questions as we could as we went, but there were a couple here that we thought were for you. One of which was: how would you compare Prisma Cloud / Twistlock with StackRox?
D
What makes StackRox stand out? And in particular, this is Tim, he's asking about Notary service and CI/CD deployments.
B
We leverage Kubernetes-native constructs, so we don't collide with infrastructure. As an example, we're not becoming an inline proxy, and we're not shimming the runtime engine and taking actions. That's the component that allows us to have substantially lower operational risk and better scaling, and as a result it ends up having substantially less overhead in CPU and memory utilization. That's one part of it. The other part is that we also tend to be much more open.
B
So obviously our integrations into the CI/CD tools are a lot more developer focused, whereas Twistlock tends to be, I think, a little bit more security-operations oriented. So we are a little bit more developer friendly, and we have a richer integration component of tools and services as part of our CI/CD process. And then there's one piece that we didn't talk about, which is another core differentiator: a component which is our plugin called KubeLinter, which is actually an open source tool.
B
You can search for KubeLinter from StackRox. It's a linting tool that you can download as a binary, or point to, for example, your Git repos, and it gives you dozens of best practices out of the box. So you can lint your Helm charts or YAMLs for best practices, and the direction of this is to then eventually be able to plug it into the StackRox platform and get application-specific linting.
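For example, pointing the binary at a manifest like the one below (`kube-linter lint deploy.yaml`) trips several of the tool's default checks; the image and names are made up:

```yaml
# deploy.yaml -- a deliberately sloppy Deployment that kube-linter's default
# checks would flag, e.g. unset-cpu-requirements, unset-memory-requirements,
# latest-tag, and no-read-only-root-fs. Names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels: {app: demo-app}
  template:
    metadata:
      labels: {app: demo-app}
    spec:
      containers:
        - name: demo-app
          image: example/demo-app:latest   # mutable tag: flagged
          # no resources block and no read-only root filesystem: flagged
```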
D
Awesome, thank you. And I'm going to say out loud, I know this has been put in chat, but folks clearly are aware of the announcement that Red Hat and StackRox will be working more closely together in the future. The deal still has not closed, and so we can't answer any questions about anything that might be post-acquisition or post-close. Today we're independent companies, and we'll answer as independent companies,
D
just so everybody knows. Another question that I thought was for you, Ali, is from Doric, I hope I'm pronouncing that properly: does StackRox have ML and/or AI capabilities? And then related to that, I guess, is interest in alerts for anomaly detection without creating rules.
B
Sure. So at this point we're not doing anything I would consider to be AI, because I consider AI to be more predictive. We are doing some simple correlations; again, just to be very transparent, I wouldn't go as far as calling them true machine learning. And the reason for it goes back to what I mentioned early on, which is dealing with infrastructure as code and declarative policies.
B
It doesn't really create a lot of dimensionality or complexity, and you don't need a lot of cardinality between your data, so it allows us to be more decisive and definitive about what we produce. The policies we create right now are rule-based, and the suggestions we create, because they're based on network policies or segmentation rules, are, for lack of a better word, heuristics. As a result, we haven't actually seen the need to build specific AI or ML.
B
Now, that having been said, it is part of our roadmap to think more about how we can layer additional analytics on top, so we can create more insight into detection, response, and forensics, and better recommendations overall. And this maps to some of our thinking about how we're expanding our footprint. One of the things we didn't get into is that we have integrations into service mesh and Istio, so in some of those areas we'd be able to correlate data points that are different or have high dimensionality and cardinality.
B
So we've made that conscious decision to tailor towards automation and scalability with low overhead, versus an overabundance of data collection. This is also why we export our data from our API into your existing SIEMs or data lakes, versus trying to become the tool that holds your data.
D
Awesome. And it looks like, let's see, Chris, do you want to relay some of the questions, or do you want us to do that, the ones you've kind of put in chat for us?
A
So, I mean, there are so many great questions. Let's go back to the top. StackRox is cross-platform, right? So it goes from AWS to OpenShift.
B
A
Awesome, thank you. And I know this one has been answered, but it'd be good to answer it live. So, regarding CNI: there's Multus to provide network isolation, but what about security with Multus? Sometimes we receive comments that it's a third-party solution that provides network isolation, and that it'll weaken security in OpenShift.
D
There you go. So I think that's an OpenShift question, so I'm going to tackle that, thanks Karena. So, right, one of the purposes of the Multus plugin that we ship by default with OpenShift is to allow you to do SDN chaining, and also to have other options that support different ways of working with networking in an OpenShift cluster.
D
I think what the question might be referring to is the ability to use other networking features, such as SR-IOV, DPDK, or macvlan, that bypass the SDN. In those cases they do create a different, you might say, a different attack vector, and they need to be used with caution, and you want to evaluate the appropriateness of using that feature versus the risk.
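For context, a secondary macvlan network of the kind being discussed is typically attached through Multus with a NetworkAttachmentDefinition along these lines (the interface and names are illustrative); traffic on it bypasses the cluster SDN, which is exactly the trade-off being described:

```yaml
# Illustrative Multus secondary network: pods that request this attachment
# get an extra macvlan interface on the host's eth1, outside the SDN.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-fast
  namespace: payments
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }'
```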
D
As Ali said earlier, right, there is no such thing as zero risk. And so, when we think about this from a security perspective, you decide whether you need to use those features and are willing to accept the risk, and what mitigating controls you might provide. Those features are typically used in low-latency environments where performance is critical.
A
B
Yeah, so that is DaemonSets, per host: the collector piece, and as we mentioned, it runs as a DaemonSet with read-only privileges. So, unlike traditional agents that run and can write to and tamper with the host, we read only, and then we correlate that information per cluster and instruct Kubernetes to take action. So we leverage Kubernetes as the control plane, and then, for example, the sidecars that come with a mesh, or the kernel itself, as the data plane to be able to enforce things.
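A generic sketch of that pattern (not StackRox's actual manifest; names and image are invented) is a DaemonSet whose container mounts host state strictly read-only:

```yaml
# Illustrative per-node collector: one pod per node, observing host state
# through read-only mounts so the agent can watch but never modify the host.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: collector
spec:
  selector:
    matchLabels: {app: collector}
  template:
    metadata:
      labels: {app: collector}
    spec:
      containers:
        - name: collector
          image: example/collector:1.0
          securityContext:
            readOnlyRootFilesystem: true
          volumeMounts:
            - name: host-proc
              mountPath: /host/proc
              readOnly: true        # observe, never write
      volumes:
        - name: host-proc
          hostPath: {path: /proc}
```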
A
B
So we typically do our performance testing on medium-sized cloud boxes, quad-core, 16 GB RAM, and our overall overhead is considerably lower than most solutions we've seen out there. We're talking about somewhere between one and a half to two percent CPU and memory utilization, if you run everything as-is on a relatively hygienic box, meaning you're not overloading it with a number of containers and crushing the I/O on the runtime engine itself.
B
To this date, we have yet to have performance or overhead come up as an issue via customer feedback. This is something, transparently, that we worked very heavily on two or three years ago, but over the last year and a half to two years it's actually been one of our core differentiators against other products in the market.
A
B
Sure. So I think generally the core differentiator for us, as sort of a common denominator, is that we tend to be a declarative policy enforcement tool, meaning that we leverage that Kube-native understanding, its constructs, and we're able to understand the relationship between objects from when they're being deployed all the way to runtime. So when you build a policy, we're the only tool where you can apply it from build and have a correlated policy for deployment all the way through runtime. That's
C
B
the core, main differentiator we have against everyone. Now, if you break it down to each specific tool, it becomes a little bit different. Obviously the Prisma/Twistlock one we talked about a little bit. In the case of Sysdig: I mean, Sysdig is a great tool, and I'm a huge fan of Falco.
B
They have really great open source projects, and they've done really well. But at the end of the day, it's mostly a monitoring tool that pivoted into a security tool, and as a result they're leveraging that underlying collection model to do security. Consequently, they're strong on the runtime side, but they have substantial gaps on the build and deployment side, in being able to have that full-lifecycle policy coverage and enforcement and preventative coverage. They're also missing a lot of those pieces, like configuration and posture management, that we do. So it all really depends on what the core use cases or problem sets are.
B
A
Thank you. And again, there are a lot of comments in chat saying just thank you for taking the time to go over all of this and answer questions. There's one that says: what steps does the community go through to deliver operators? I'm trying to pull out the question.
A
Let's see, I'll read it. "What caught my attention is the recommendations it provides to operators; it's a compelling concept. What steps does a community go through to deliver operators with the right information here and not mislead? False positives can stir the value of a project, for example: we can parse..." Chris, do you want to maybe get a clarification on that one too? And then also: can you deploy and run in a fully air-gapped environment?
B
I'm assuming the air-gap question is for me, and then I'll turn it to the team for the operators part. So the short answer is yes. We actually have large government customers that we work with, from intelligence to the U.S. Air Force to the DoD, that have been public references, and those customers do run us in fully air-gapped environments. So yes, you absolutely have the ability to take updates and policies and rules out of band, update the product, and run the product itself in a fully air-gapped environment.
D
Thank you. So, regarding the false positives question: absolutely, that's a complex scenario, and anyone who delivers vulnerability scanning solutions, or consumes the results of vulnerability scanning solutions, is aware that there can be challenges there. I'll try and keep this a little short, because I know we've only got a few more minutes left.
D
Red Hat does produce its own set of vulnerability data, its own vulnerability feed, which can be consumed by vulnerability scanners. We do that in part because our product security team evaluates newly discovered vulnerabilities in the context of what we ship in Red Hat, and what we enable and don't enable. So sometimes our severity rating for a CVE is different in Red Hat-shipped content than in an upstream project, and also sometimes there's a mitigation in place in a Red Hat solution that's not available
D
obviously, if you were just using vanilla or upstream. And so we make that feed generally available, and we've done a lot of work over the last, gosh, 6 to 12 months to improve that data feed to help reduce false positives. And we launched a project, called the container scanning vulnerability project, to work with all of our partners to help them more easily consume that new feed and to help them provide better data for Red Hat solutions. So that is a work in progress, and it will continue.
A
Thank you. Can you also address the compliance operator, and how StackRox fits in with that?
D
Sure. And Ali does a great demo; I love seeing all of the compliance checks that StackRox offers. One of the things that's also been in progress for a while, and I'm really keeping my fingers crossed we can get this out in February: the CIS Kubernetes benchmark is designed for vanilla Kubernetes. CIS recently has started adding distribution-specific benchmarks, and so we're very close to publishing a CIS OpenShift benchmark in the near term, because, you know, OpenShift is operator driven.
D
We manage the configuration settings differently than vanilla Kube. And so, in the OpenShift Compliance Operator, we also recently shipped compliance checks for the CIS benchmark; we call it "inspired by the CIS benchmark" until we get the CIS OpenShift benchmark published. So if you're interested in scanning for CIS compliance with OpenShift, the Compliance Operator is your better bet for right now; it's available with any OpenShift subscription.
D
The Compliance Operator will also scan at the RHEL CoreOS layer, and we do a number of checks that are subsets of the NIST 800-53 controls, so you might find using the two together to be valuable.
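As a concrete example, a Compliance Operator scan is typically kicked off by binding a profile to scan settings; something along these lines runs the CIS-inspired OpenShift profile (the resource names follow the operator's documented conventions, but treat this as a sketch):

```yaml
# Illustrative Compliance Operator binding: run the CIS-inspired OpenShift
# profile using the operator's default scan settings.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-scan
  namespace: openshift-compliance
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-cis
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
```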
A
Thank you, thank you very much. And I know you'll have more after the acquisition closes that we can't address right now, but in a future briefing, I'm sure. Let's end on this last question: can StackRox be installed in one place, either standalone or as part of a cluster, and scan and secure dissimilar systems? "I'd love to have one instance to be able to scan ECS, EKS, OCP, vSphere 7 with Tanzu, and also a standalone Quay image repository."
B
Yeah, so I just want to make sure I understand the question correctly. When we're talking about standalone, there is a way to do that. The way to do it is you would have to have standalone deployments of Central, because the scanner is part of Central itself. Now, we're working on how we actually decouple that. The short answer is yes, but there are some nuances about how you go about doing it, and it's not natively correlated across multiple instances.