From YouTube: DevSecOps Lunch and Learn (Federal)
A: Hi everyone, and thanks for attending today's virtual lunch and learn. My name is Tam and I'll be moderating today's virtual event. I have Trace Vance from Red Hat with me today; he's the Director of Strategic Partnerships. We also have Brandon Wood with us today; he leads the federal practice at StackRox, and we'll be getting a live demo of the StackRox Kubernetes security platform. All right, before we hand this off to Trace, I wanted to start off with a polling question.
A: So, let's see, first one: are you running Kubernetes in production today? Just a reminder, OpenShift is another flavor of Kubernetes. I'll leave this up for a couple of seconds and then we'll show the results here.
A: All right, let's end the poll and, wow, about 80% of you are definitely running Kubernetes in production, so that's great. All right, let's kick it off to Trace.
B: Hello, everyone. My name is Trace Vance, I'm from Red Hat, and I am currently what's known as a hyperscaler partner leader.
B: So, you know, I think that this is a really great thing to do, and I also appreciate the effort towards the American Red Cross. I think those are going to be very important things for us going forward, and it's a spirit of collaboration that we at Red Hat share, so we definitely thank StackRox for doing that. Next slide.
B: So when we talk about Kubernetes and containers, what we're really talking about is the evolution of the idea of DevOps and security, and bringing those things together to create entrenched value both for customers and for the organizations that you may be supporting. The reason those things are so important is that containers allow us to put just what we need into a vessel, the container, so we can build applications and solve complex problems with things that are easy to use, highly replaceable, and can be orchestrated to achieve a greater goal.
B: So we can take a smaller set of tools, in this case the containers, and build something really big. It's analogous to the way we treat building blocks: I can do a lot with very few building blocks; it's the order that I put them in. And when I think about that order, I think about something that manages it, and that's really what Kubernetes does for us. Kubernetes is a way to orchestrate, maintain a record of, and scale containers, and all of those are key to the idea of hybrid cloud.
B: So, for us, hybrid cloud means that it can run anywhere you have compute, whether that's at the edge dealing with things like the Internet of Things, or IoT; within a traditional data center; or in the cloud as well. The big thing there is that we want to be able to move very quickly, and to take workloads and use them where appropriate. So it doesn't matter if it's something that starts on-prem, or starts on a developer laptop, and ends up in the cloud.
B: When we think about practical applications, especially today, living in a world that has COVID-19 and its implications for society, we think about the speed of treatment. Medical care in particular is very important, and as we look at some of the use cases for OpenShift and containers, one of the things that comes to mind is the work being done with HCA Healthcare.
B: So the solution here was that HCA was able to use predictive data analytics, built on top of an open platform, on top of OpenShift, to treat that condition and give themselves a tremendous head start, and a five-hour head start really does save lives. So that was a very important example of practical AI being done in an open nature. Next slide.
B: So what are we seeing today? Why are customers choosing OpenShift, and why are they choosing Red Hat? They're doing it primarily based on the pace of innovation. Pace of innovation is something you hear about often from a lot of organizations, but Red Hat has taken this approach for the last 25 years.
B: They've taken the approach of working with the open source community to turn projects into open products, so those products have been tried, tested and scaled. One of the important differentiators for Red Hat is that we actually do upstream contribution first. What that means is that, as we discover things in practice and at our scale, we contribute back to the upstream projects, and that eliminates the need to fork, rebase or re-platform. We're sharing back with the community so that there can be this faster flywheel of innovation, and that's what builds not only Red Hat Enterprise Linux, which is a trusted Linux platform, but also Kubernetes.
B: So that is the difference, in some ways, between scanning a container image and watching the runtime behavior of a container. You'll hear from Brandon about some of the things StackRox can do to address that, but that is something that is inherent to OpenShift as a platform: the ability to plug in modules such as StackRox to do that security, provide visibility into the underlying containers and container runtimes, and do that reporting out to work with external microservices.
B: So when we think about vanilla Kubernetes, the idea of it is extremely hard. There are a lot of operational practices that you have to take on, and a lot of things that you continually have to integrate. Doing those integrations, making sure that you have trusted container solutions, making sure that you have the correct types of network policies: those are extremely difficult, and those are some of the things that OpenShift addresses and that Red Hat, as an organization, is working with the community to continually improve upon.
B: So enterprises have identified those things, whether it's the idea of having an operating system that you have to continually patch, or permissions that you have to set in etcd, and so on. Those are the typical blockers to enterprise adoption, and those are some of the things that we'll talk about on the next slide.
B: So what is OpenShift? OpenShift is a platform. It's a platform that includes Kubernetes as a core engine, but also significant projects from the Cloud Native Computing Foundation, to make a comprehensive, production-ready, enterprise-grade open platform. Several years ago, actually right at the beginning of the release of Kubernetes, Red Hat became involved.
B: We are still involved in the working groups, leading 21 out of the 43 working groups related to Kubernetes and making contributions back to the project itself. What that means is that each release of OpenShift is fully Kubernetes certified, so you can leverage the OpenShift tools or you can leverage the Kubernetes-native tools; either one is fine, it's a matter of preference. But what you get in an enterprise product like OpenShift is multi-year lifecycle management, so you can really make determinations about what types of tooling and process you're going to invest in and how your transformation looks over time. You know that what you've started down the path of doing is going to be continually supported, with both security updates and feature updates as well. So it really is the culmination of hundreds of performance fixes, tweaks and changes, and it's one of the things that makes the compelling argument for using it.
B: As you're moving into the cloud, whether that be one cloud provider or another, the ability to do something consistent, in an open and transparent methodology, is very important.
B: One of the next things to talk about with OpenShift is security. Security is baked in at each level, and it's extremely important when you think about libraries. For the federal customers out there, when I think about the requirement for FIPS 140-2 validated modules: we have those in Red Hat Enterprise Linux, and they can be leveraged inside of OpenShift. That means you can expose those same libraries, in a cloud-native manner, to build your applications.
B: So what that means is that I can use consistent libraries throughout the entire stack, have a certified and tested application infrastructure, and it allows me great transparency when it comes to things like compliance and authority to operate. For getting compliance reviews done, we have programmatic methods by which you can see those things: you can see the NIST 800-53 or the 800-171 controls being implemented inside of OpenShift. Next slide, please.
B: So what that gives you is the ability to properly document, the ability to extend your environment, and the ability to continue to iterate, so that cloud-like experience can be had anywhere. Next slide.
B: In thinking about OpenShift, and particularly the idea of managing Kubernetes, those day-two activities, there's a concept that we have that's known as the operator. An operator essentially takes the smarts, the operational conditions such as fail, retry, restore, and codifies that into the container platform inside of OpenShift.
B: What that allows us to do is build applications that can do some level of self-healing, so that there's not the 3 a.m. call or page: we can anticipate some of the things that might happen inside a production environment and allow the system to mitigate them. That allows us to deploy very quickly, to debug, to make alternative arrangements, so really that entire lifecycle has started to be automated, or more automated. Next slide.
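The operator idea described here can be sketched with a custom resource. Everything below is illustrative (the `DatabaseCluster` kind, the API group, and the fields are made up, not a specific Red Hat operator): a team declares the desired state, and the operator continually reconciles the cluster toward it, codifying the fail/retry/restore procedures Trace mentions.

```yaml
# Hypothetical custom resource watched by an operator.
apiVersion: example.com/v1alpha1
kind: DatabaseCluster
metadata:
  name: orders-db
spec:
  replicas: 3               # operator keeps three members running, replacing failed ones
  version: "12.4"           # operator knows the safe upgrade path between versions
  backup:
    schedule: "0 2 * * *"   # operator codifies the backup/restore procedure
```

The operator watches resources of this kind through the Kubernetes API and, on every change or drift, runs its reconcile logic; that reconcile loop is where the "3 a.m. page" knowledge lives.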
B: It's about which of these things you're going to use and extend, and in the case of OpenShift there's a lot of flexibility. So, for example, I might be a shop that uses Bitbucket or GitLab or GitHub; any of those can work and play in an OpenShift environment. Because they're all based on open standards and have consistency in their published APIs, there are ways to interchange a lot of those different solutions.
B: The other big thing here is that this provides, over a five-year time period, a tremendous amount of return on the initial investment. Reducing the cost while increasing your capability is extremely important: it leads to more iteration, better applications, and moving more into the DevOps pipeline. With that, the payback is quite significant and quick, within eight months, and you can produce more applications, which is really the thing organizations want to do today. So with that, next slide.
B: There are a lot of different ways to consume OpenShift and Kubernetes. You may run it in your own environment; that's what we call the OpenShift Container Platform. But you may want to run it in a managed environment, and there are a number of those; we have a pretty broad reach here across all the major hyperscalers. So on AWS there's Red Hat OpenShift Dedicated, which is managed by Red Hat.
B: We have a partnership with Microsoft for Azure Red Hat OpenShift, which we call ARO; it's jointly managed. And then there's the same thing with Google. So the major cloud providers all have a version of a managed OpenShift environment that you can take part of, and since you know it's based on an open platform, you can move workloads between the cloud providers of your choice.
B: Along those lines, one way we help customers is the Open Innovation Labs: the idea of being able to adopt cloud-native practices and take those first journeys. That's true for the cloud migrator, someone who's moved into the cloud because there's a mandate or a strategic direction; for the organizations that start to innovate, those cloud-forward organizations; and finally for the ones that want to go cloud native. The Red Hat Open Innovation Labs, and our container practice generally, help get customers familiar with those efforts.
B: So one of the things to think about here is: why is Red Hat participating? Why are we using our technology in this way?
B: It's part of the ethos of who the company is. The company has, from the very beginning, been championing the idea of enterprise-grade open source. We believe strongly that code should be open, iterated upon, and shared with a wide community, and we've been doing that with Kubernetes since Kubernetes' inception. We've been doing that with Linux since Linux's inception as well.
B: We really do have strong relationships where we are building not only on the solid infrastructure that the cloud providers have provided, but also building out the serverless and event-driven environments of the future. And finally, there is a comprehensive set of services that relate all the way from the physical device through the application services. That's part of our heritage, and all of those things are open.
B: So I'll leave you with this: the ability to build cloud-native applications in an open and transparent way is a hallmark of Red Hat and OpenShift, and we can further the conversation with StackRox as they guide you through some of their security solutions built on top of our enterprise Kubernetes.
A: Excellent, thanks Trace. I really missed Red Hat Summit this year in San Francisco, but it's okay; Boston was really fun last year. Let's get to our second polling question. Okay, so let me launch this for you guys: is container security an active priority for your company and, if so, what is your timeline for implementation?
A: Results, all right, cool, let's see what you guys came back with. All right, so you guys are mainly between three to six months out, or looking to implement. Okay, cool. Let's move on to Brandon here.
C: Awesome. Thank you, Tim, and thank you, Trace, that was awesome; we really appreciate you working with us and being such a good partner to StackRox. So let's get started on StackRox. I am going to breeze through these slides; I think the product speaks for itself and I want to spend most of the time in the demo, so we'll run through these things real quick. But from a high level, let's talk quickly about what StackRox does and what use cases we're trying to solve.
C: First and foremost, I like to talk about visibility. I met with John Stroyffer back in the infancy of the CDM program, when he was at DHS putting this whole thing together, and one thing he told me is: if you don't know what's on your network, how can you ever expect to manage it or secure it? I always think about that, and it's really the first and foremost goal of StackRox as well.
C: We want to show you exactly what's inside the ecosystem of Kubernetes, inside of OpenShift, and what's happening out there. We're going to check the boxes for all your common tenets of security, which are going to be things like vulnerability management and network segmentation; we're going to do some compliance; and we have all the data in there for proper incident response.
C: So really, the security stuff that we did when we were sitting on-prem, inside data centers, we're just carrying that along through the evolution into containers and OpenShift and Kubernetes environments. During that process, we know you already have a lot of tools out there assisting with your DevOps program, so we like to start with integration points, beginning at the very start, during the build phase.
C: We ship with our own scanner, the StackRox scanner, but there are a lot of scanners out there. You may have a tool that you already know and love, that you already have a subscription for; whatever it may be, we will integrate with those solutions as well, and we'll consume them just like they were provided by us. That may mean leveraging some old Tenable licenses, or you may want to go completely open source and utilize something like Clair. That's fine; we'll consume that data as well.
C: It doesn't matter what container runtime you're working with: Docker has a lead there, but CRI-O is picking up steam really quickly and we'll support that as well. Then, as we move along into the orchestrator itself: we love our partner over at OpenShift and we hope you guys leverage them, but if you have another flavor of Kubernetes you're working with, we'll be happy to support you in that process too. And at the end of the day, you don't want to be stuck inside the StackRox console to get violations and alerts and notifications.
C: So we have integrations with a lot of third-party plugins that will deliver those notifications into the tools you utilize every day, whether that's your SIEM tool like Splunk, or a notification inside of Microsoft Teams, or an email, wherever it may be. We have integrations for you there.
C: So there are really two ways to do container security. There's the old way, which is really starting at the container level: looking at what the vulnerabilities are, what the image registries are, where the image came from, and showing you some really good information about the containers. What's missing is being able to provide visibility into the entire ecosystem, starting with Kubernetes or starting with OpenShift. So the approach that we take, instead of looking at the containers and trying to look up at what's going on in the environment, is a top-down approach: we start at Kubernetes itself and we go all the way down to the container level. The reason we do that is really for three main benefits.
C: First and foremost is context. The way I like to describe this is: if I were a container-only solution trying to integrate with the orchestrator, I would provide you things like "here's container A with 10 vulnerabilities, here's container B with two vulnerabilities." You look at that information and go, "okay, well, I guess I'm going to take care of container A first, because 10 is greater than 2." We take the approach of: yeah, that is all true.
C: There is container A with 10, there's container B with two. But let me give you additional context from OpenShift, from Kubernetes: that container B, the one with only two vulnerabilities, is actually sitting on the internet. It's exposed, it's running in your production cluster, and the blast radius is huge, meaning there's direct communication from container B to container C to container D. Therefore, with all that information, all that context, you may make a different decision based on overall risk, not just the vulnerability information you got from the individual container.
C: The other reason we take this approach is native enforcement. Kubernetes and OpenShift have enforcement capabilities built into the product, whether that is the use of an admission controller or utilizing the network policies that are built in. That stuff's already there for you; let's just make it easier to utilize, easier to consume and operationalize, with StackRox. That's what we do. Last but not least, we want to take the information we learn from runtime, how things are actually running in production, security violations, anomalies, feed that back into the development lifecycle, and create a continuous hardening loop. We've got three real small components; we're highly scalable and we ride right alongside your Kubernetes environment, just like the rest of your applications.
C: If we start down at the bottom, I like to call Collector the little guy down there. I don't know about you guys, but I've got this neighbor behind me. I love her, she's a really nice lady, but she's the nosy neighbor: she's the one that tells my wife, "you know, your husband was outside with friends watching football until midnight," and goes and gossips about it.
C: That's exactly what StackRox is: we're the nosy neighbor of the node. Wherever the nodes are in the cluster, we're collecting all this information: traffic coming in, traffic going out, processes being executed, the order in which processes are being executed. We take all that information, we feed it up to you, and we gossip about it. We do the same thing at Sensor. Sensor sits on each individual cluster.
C: It's doing the same thing: it's talking to Kubernetes, talking to OpenShift. What are the role-based access control configurations inside of Kubernetes? What network policies have been assigned? What are the admission controllers? All this information goes up to Central, and Central is one per customer: assuming there's network connectivity between your clusters, you only need one of those. To use an extremely overused term, it's your single pane of glass for Kubernetes security.
C: We've got a lot of customers, both commercial and public sector: federal, DoD, IC, really across the board. We've got a lot of traction; we're an In-Q-Tel portfolio company, so we've got a lot of traction inside of the IC community and DoD, inside the Army, with all of our fine federal systems integrators as well, and then a lot of commercial presence.
A: Yeah, all right, let's get the third one going here. What are your top three use cases for security for your Kubernetes environment? This is multiple choice; just select your top three use cases there.
A: All right, let's share the results here. Cool, it looks like a lot of you guys are looking at vulnerability management and compliance (we get compliance a lot, for sure) and configuration management. Good stuff. Brandon, you ready with the demo?
C: So let's start there. People think of vulnerability management and they think: here are all my images and here are all the CVEs associated with them. That's great, and they utilize that, from a vulnerability management perspective, to start prioritizing what they need to fix first. So you would expect to see things like all of your images and the number of components that have been layered on top of them (components are going to be things like OpenSSL, Java or Apache, whatever it may be), and then we're going to show the vulnerability data associated with each one of those components there on top. That's all well and good; that is one way to do it. What we like to do is take a risk-based approach to everything we do inside of our product. Just because a vulnerability has a high CVSS score, it may not be the lowest-hanging fruit; it may not be where we actually want to start prioritizing.
C: So again, we take a look at risk. In this case, I'm going to look at the top riskiest images in my environment, and you're going to see that risk is based not only on the total CVSS score, but also on environmental impact, meaning: how far and widespread is this vulnerability, and if we take care of this thing first, is it going to give us the biggest return on our investment?
C: So again, everything we do is associated with an overall risk assessment across the entire product. From a vulnerability manager's perspective, this is where you kind of live; you get started here, looking at vulnerabilities, looking at the CVSS scores. We're going to show you the most common ones across your entire implementation, and we're going to show you the recently detected ones.
C: So if you logged in last Monday, we're going to show you the things that have popped up since the last time you logged in; really good starting points here. The other thing I like to highlight from a vulnerability management perspective is that there are a lot of tools that will do image scanning and can provide vulnerability information based on images.
C: That is all well and good; vulnerability management is certainly a place a lot of people like to start, and we've got a lot of good data there. But I like to take it up a level and say: security is not just about vulnerabilities, it's all about risk and what we can do to reduce our attack surface. So I'm going to look at overall risk, and I'm going to look at the visa-processing application here, which StackRox has determined something about.
C: This is my number one risk. People ask the question: well, how does StackRox determine that, how does it score risk? We have all these risk indicators. So in this case we can say: yes, we are going to take into consideration, for instance, this vulnerability, which happens to be the Equifax vulnerability. But we also start looking at things like environment variables: did someone stuff a password or a certificate inside an environment variable? If so, that's probably going to increase your risk a little bit. Is the container itself running privileged? If so, let's have a conversation about that: why is it running privileged, and what can we do to prevent that and maybe utilize lower privileges?
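The lower-privilege alternative Brandon alludes to is expressed in the standard Kubernetes pod spec. This is a minimal sketch (the pod name and image are invented for illustration); the `securityContext` fields shown are the usual knobs for refusing privileged mode:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: visa-processor                  # hypothetical workload from the demo
spec:
  containers:
    - name: app
      image: example/visa-processor:1.0 # made-up image reference
      securityContext:
        privileged: false               # no full access to the host
        runAsNonRoot: true              # refuse to start as UID 0
        allowPrivilegeEscalation: false # block setuid-style escalation
        capabilities:
          drop: ["ALL"]                 # shed Linux capabilities the app doesn't need
```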
C: We also start looking at things like components useful for attackers. A lot of our product management team actually came out of red team, blue team, penetration testing backgrounds, and they wanted that reflected inside the product. So we're going to go in and start looking: are there tools there that attackers can utilize? For instance, if I were to compromise an environment, the first thing I'm going to do is see if I can launch a shell.
C: If I can launch a shell, then I'm probably going to run curl or wget or apt or apt-get to see what other utilities I can get to further exploit the environment. So we highlight this, so we can bridge communication from security teams to DevOps teams and say: hey guys, why do we have curl running inside of this image?
C: The last thing we look at for risk indicators, which I think is really neat, is suspicious process executions. For those familiar with the DoD DevSecOps program, Nicolas Chaillan has called this his behavioral analysis. What this does is, well, I told you we're the nosy neighbor. So when an application comes online, we start baselining that application: this process runs, then this process runs, and that process kicks off this process, and we establish this baseline of what's actually necessary for your application to work and to run. If we start seeing anomalies, meaning things we haven't seen before, we bring them to your attention. You don't need to go through Splunk logs and do some big incident response to figure it out.
C: So when I bring it to your attention, something weird is going on and you might want to go check it out; we bring that to your attention right away. Based on this information, and based on this being my number one risk, I want to take a look and see: well, if an attacker is in fact inside of this application, the visa-processor, where else can he go in my environment?
C: We can display that for you; we can give you a visual perspective on exactly what the network flows are inside of your individual clusters. So I'm going to take a look at my production cluster, and I'm going to look at allowed traffic and say: okay guys, your visa-processor may be compromised; I need to figure out where else it can go. In this case it's red, and red means it's wide open.
C: So that means an attacker could theoretically go from visa to gateway, or visa to mastercard. But I don't know if there should be traffic, or allowed communication, between those things. So then you usually put a data call together: you find the application owner, you find the developer, you determine exactly what the communication path should be, and after you do all that they say, "yeah, you know, we guess there should be communication from visa to gateway, but not visa to mastercard."
C: I could have saved you a lot of trouble and just said: StackRox, show me what's necessary. Now StackRox is going to show you, based on what we've been watching in your application, that visa only talks out to gateway but does not talk to mastercard. So it's a really good use case to put a network policy in place. Inside of Kubernetes, inside of OpenShift, you have the ability to leverage network policies out of the box. So you go back to the team and you say: team,
C: I want you to put together a network policy that would prevent traffic from visa to mastercard, because I want to isolate the overall attack surface, or the blast radius, for this application. So they get out their favorite YAML editing tool (gotta love YAML, it's great to work with), you get your ruler out, and you put together a network policy, similar to this, that would allow communication from visa to gateway but restrict communication between visa and mastercard.
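A policy of the shape Brandon describes might look roughly like the following. This is a sketch in standard Kubernetes NetworkPolicy YAML; the labels and namespace are made-up stand-ins for the demo's visa/gateway/mastercard deployments, not the manifest shown in the talk.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-visa-egress
  namespace: payments              # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: visa-processor          # policy applies to the visa pods
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: gateway         # visa may still talk to gateway
  # Because egress is now allowlist-only for the selected pods,
  # visa can no longer reach mastercard or anything else not
  # matched by a rule above.
```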
C: Or again, just go ask StackRox, kind of like back in the '90s when you asked Jeeves; let's ask StackRox. What we've done here is we just asked StackRox to generate a new set of network policies that can be leveraged inside of Kubernetes, one that's going to allow all my traffic, all my applications, to continue to work, but drop anything unnecessary. And that's what we've done here: here's a nice brand new YAML that would work directly in this environment, and this looks great.
C: People look at this like, "Brandon, smoke and mirrors, no way this works." So what we've done here is we've switched over to a simulation view. Remember all those things that used to be red? Well, now they're blue, which means there are policies that could be applied to them. We haven't actually applied anything yet; we're just visualizing what this would look like if we applied it. And now, if I hover, I can see there's no longer communication between visa and mastercard. So it did exactly what we expected it to do; we're all good here.
We were like: all right, this is great, StackRox is awesome, it figured out everything we need to do, let's do it. So we can hit "apply network policies," and it's going to hit the Kubernetes API, the OpenShift API, and apply the network policies right from there. I have two problems with that.
C: One, most of the customers that you guys are representing have some sort of change control process in place, and they're not going to allow you to apply network policies directly in production. Two, it kind of violates this whole notion of DevOps, or DevSecOps. What we recommend doing is hitting "share YAML."
C: Let me show you what this looks like inside of Slack. Slack is probably not the best place to put it, but it's easy to demo. What most of our customers are doing is integrating this with their issue tracking systems, maybe something like Jira or ServiceNow or whatever it may be, where we're going to dynamically create a new incident, or new issue, inside of that system and say: hey, security wants you to go take a look at these network policies.
C: If you agree with it, if you like what we're doing here, go ahead and take it and put it in your code. So now it's back in at build time, it's in at deployment time, so that by the time it gets into the production environment it's already hardened as code. Everyone talks about security as code and shifting left; we're enabling you to do so with this integration here.
C: Let's talk about compliance. We've looked at being able to reduce vulnerabilities with our vulnerability management solution, we've looked at overall risk, and we've looked at the ability to utilize network policies to reduce the blast radius. At the end of the day, when we need to get an authority to operate, to bring an application from on-prem into a cloud environment, into a Kubernetes environment, moving from monolithic applications to microservices:
C
We have to get some sort of approval to do so, and typically that starts with our compliance framework. We have the ability to assess your compliance utilizing the CIS benchmark for Docker and CIS for Kubernetes. We have industry-specific standards like PCI and HIPAA, and then we have government-specific standards like NIST 800-190 and NIST 800-53.
C
You have the ability to assess compliance against those standards across individual nodes, individual namespaces, individual clusters, or all clusters together. We've got cool little spinny wheels that spin around, change color, and show you different things, but at the end of the day, to pass the assessment, to get the approval for an ATO, to satisfy the auditor, we need to show data. So we're going to take a look at data from my NIST 800-190 compliance.
C
This is all auditor-speak, and if there are any folks out there from NIST or whomever involved in writing this thing: I don't speak this language, it doesn't make sense to me, so I apologize in advance. So we give you a little clip from the standard: yeah, this is what you probably should do to come into compliance with this control. We give you all this information.
C
This is what a lot of compliance tools do, and it's great: here's your overall compliance, here's your individual compliance, this is what you should do, go do that, come back and check again, and we'll try this whole process again. We don't want to just be a tool that tells you about your problems; we want to be a tool that helps you fix them. To give an example: inside of this control here, NIST says we need to put quality gates in place at the build and deploy phases.
C
It's utilizing a CVSS scoring system. StackRox has this notion of system policies. We ship with 64 policies out of the box, and you have the ability to write your own; I usually tell my customers to take one that we already have, clone it, and tailor it to meet your needs. So I'm just going to quickly look in here and see if we have one for CVSS, and we do. This is a policy, really easy to read and write: you give it a name and a severity level.
C
The severity level gets utilized in the overall risk algorithm we looked at before. Lifecycle stage is really important: we have the ability to assess this at build time, through an integration with something like Jenkins or GitLab or CircleCI, whatever you're utilizing during the build process. We will detect immediately when there is, in this case, a violation of a CVSS score equal to or greater than seven. We'll do the same thing at deploy time.
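The gate itself is simple logic. Here's a minimal sketch of the idea in Python; the scan-result shape below is invented for illustration and is not StackRox's actual API format:

```python
# Fail a CI build when any discovered vulnerability has CVSS >= 7.
# The input format is hypothetical; a real integration would consume
# the scanner's or the platform's actual output.

def cvss_gate(vulnerabilities, threshold=7.0):
    """Return the vulnerabilities that violate the gate (CVSS >= threshold)."""
    return [v for v in vulnerabilities if v["cvss"] >= threshold]

scan_results = [
    {"cve": "CVE-2019-0001", "cvss": 9.8},
    {"cve": "CVE-2019-0002", "cvss": 4.3},
]

violations = cvss_gate(scan_results)
if violations:
    # In a pipeline, exiting non-zero here would fail the build step.
    print(f"gate failed: {len(violations)} finding(s) at CVSS >= 7")
```

The same check can run at deploy time; only the hook point changes, not the decision logic.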
C
Deploy time goes through the admission controller stage with OpenShift or Kubernetes, and then runtime is what's actually happening in production: there we're looking for process execution, things like that. You have the ability to use a combination of all three of those. This one said build and deploy, so we'll leave that there. Then there's a description and a rationale. The rationale, I think, is pretty neat, having dealt with a lot of security policies and products in my past.
C
This is what you should do in order to go fix those things, so we're establishing communication again between security and DevOps. And then finally, there's what we're actually looking for. This is great: we're going to be notified right away anytime a violation occurs. But NIST asked us to put quality gates in place, so let's take it a step further and turn on some enforcement behavior.
C
The next thing we're going to do is turn on deploy-time enforcement. This is going to leverage native capabilities inside of OpenShift and Kubernetes called admission controllers. What this means is that when someone tries to actually deploy the application into production, Kubernetes will come to us and say: hey, StackRox, is this deployment allowed?
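For context, a deploy-time hook like this is registered with the API server as an admission webhook. A trimmed, illustrative sketch of what such a registration looks like in Kubernetes (the names and service wiring below are assumptions, not StackRox's actual manifest):

```yaml
# Illustrative ValidatingWebhookConfiguration: the API server calls the
# named service on every Deployment create/update and can reject the
# request before anything is scheduled.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-enforcement            # hypothetical name
webhooks:
  - name: validate.example.com        # hypothetical
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore             # fail open so the webhook can't block the cluster
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    clientConfig:
      service:
        namespace: stackrox
        name: sensor-webhook          # hypothetical service
        path: /validate
```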
C
We come back to my compliance page and I'm going to do a quick reassessment. This happens on an hourly basis by default; you can change that duration, and you can also kick it off manually. It gets there, and now we can see my compliance level has gone up. So not only did I tell you about your problems, I also gave you a tool to be able to fix them.
C
So now you're really excited, and you bring the auditor over and say: hey, look, 100 percent, StackRox says so, right there you see it, 100 percent. And they go: what is StackRox? I've never heard of that before; I don't believe you. How do they know? What they're really looking for is evidence. How do we know? What did we check? Did we just run a scan? What's the level of false positives? Where can we manually verify this information?
C
So we have the ability to provide evidence as well. The evidence can be exported to CSV, which is possibly loading here somewhere; so, export to CSV. This is going to show: I know, Mr. Auditor, you don't know StackRox, but this is what they did. They assessed this standard, on this cluster, on this object name, for this control number, with this description and pass/fail state. Evidence is what they're looking for, and we're literally going down to individual files,
C
looking for the specific configuration line items inside of that file that determine overall compliance. So we're not just running a scan, and we're not just repurposing old CIS results; we're literally going down and checking these individual line items for overall compliance. And this is really the data they're looking for in order to provide you that ATO or let you move forward in the process. Now again, this can be done via CSV, or we've got this pretty cool API.
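Because the export is plain CSV, auditors or scripts can post-process it directly. A small sketch using Python's standard csv module; the column names here ("Control", "State") are placeholders standing in for the real export's headers:

```python
import csv
import io

def summarize(csv_text):
    """Count pass/fail states per compliance CSV export.

    Column names are hypothetical; match them to your actual export.
    """
    counts = {"Pass": 0, "Fail": 0}
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts[row["State"]] = counts.get(row["State"], 0) + 1
    return counts

export = """Control,State,Evidence
4.1.1,Pass,file /etc/docker/daemon.json sets the expected flag
4.2.2,Fail,flag --anonymous-auth not set
"""

print(summarize(export))  # {'Pass': 1, 'Fail': 1}
```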
C
So the API is available to you. We've got a lot of customers that are just pulling information directly out of the API. So if you're on the DoD side and you're utilizing eMASS for your RMF framework, or whatever it may be, that same information that was in the CSV we can dump into whatever your system is via the API as well. And while we're talking about integrations, let's take a look at some of the ones we have out of the box.
C
There are some names you know and love. Like I said, we're going to integrate with whatever scanner you already have if you don't want to utilize ours, and with whatever registries you're utilizing. To get information out, we're going to send it to Splunk, we're going to send it to Jira, we're going to send it to email. If you've got newer technology that supports webhook integration, we've got that for you out of the box as well.
C
A lot of customers are integrating with something like Mattermost, so there's lots of ability to integrate with a lot of different solutions. And last but not least: dark mode. Everyone loves dark mode, and we have that for you as well, so save your eyesight a little bit there. But that is StackRox in a quick 20-minute demo, at a real high level. I'm going to go ahead and pull up our last few slides here.
A
We have our last polling question, which I'll launch right now: do you want a complimentary 30-day trial? I'll just leave this window open; hit yes or no.
A
Our complimentary 30-day trials are actually very hands-on, so they're definitely good value. Okay, so Q&A: there are a few questions, Brandon, let me just go through them real quick. On vulnerability management: do we have to use your scanner, or can we use other scanners?
C
Yeah, it's a good question. Like I said, the ability to leverage other scanners is one of our big selling points. A lot of people are accustomed to open source solutions like Clair and Anchore, and those are great alternatives to our scanning solution. I know a few folks in the DoD space are converting old ACAS licenses from Tenable to Tenable.io; that's not a problem either. We'll pretty much integrate with whatever scanner you have out there.
C
Yeah, so there are a couple of different options there. If you're utilizing Jenkins, we actually have a Jenkins plugin that's available in the Jenkins plugin store. For others, we have a CLI, our command-line interface, that gets added to the build phase. It gets called during the build phase, and then we'll do the policy assessment as one of the last steps of the build.
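For the CLI route, the build step boils down to a single command whose exit code gates the build. A rough sketch only; the endpoint, image name, and exact flags below are assumptions, so consult the CLI help for your version:

```shell
# Check the freshly built image against build-time policies.
# A policy violation returns a non-zero exit code, failing the CI stage.
roxctl image check \
  --endpoint central.example.com:443 \
  --image registry.example.com/payments/visa-processor:1.2.3
```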
A
Cool, okay. So we have a couple of questions on threat detection: we're using Splunk; can we send notifications to Splunk?
C
Good question, and yeah, absolutely: we have the ability to send notifications to Splunk via that integration I showed there, and that's probably going to be your best place to do the integration.
A
And then: are you able to kill processes running in a container instead of killing the entire pod?
C
Good question. We actually do kill the pod. Because we're leveraging Kubernetes or OpenShift environments, you're going to have replica sets in place that ensure your application is always alive and there are multiple copies of it. So by killing the pod, we won't be interfering with your productivity or with the application being available and online, but we're not only removing the malicious execution of a process, we're also eliminating the attacker itself.
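The reason killing the pod is safe is ordinary Kubernetes behavior: a Deployment's replica set immediately replaces any pod that dies. For example, with a spec like the following (names and image are illustrative), the orchestrator keeps three copies running, so enforcement removes the compromised pod without taking the app offline:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: visa-processor              # demo app name
spec:
  replicas: 3                       # Kubernetes replaces any killed pod to keep 3 running
  selector:
    matchLabels:
      app: visa-processor
  template:
    metadata:
      labels:
        app: visa-processor
    spec:
      containers:
        - name: visa-processor
          image: registry.example.com/payments/visa-processor:1.2.3  # hypothetical
```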
A
Yeah, we have a question on network segmentation: how is this different from how your competitors are doing it?
C
Good question. Competitors will typically tell you that you need to install some sort of proxy, or some sort of service mesh. They say: we want you to do that because that's how we're going to get visibility into your network traffic. We want this proxy, we're going to analyze all the traffic that goes through it, and if it violates one of the policies you put in place, we're going to drop the traffic for you.
C
The way I look at that, though, is that it's like saying: okay, you want me to stand up a new part of my infrastructure and run all my network traffic through it. What if this new part of the infrastructure that you're requiring me to utilize, which is not native to Kubernetes, dies or fails? Typically they'll say: that's no problem, we're going to fail wide open. Okay, well, great, my network's still flowing, but now you've just opened me up to attack again; there's no security in that. So I said:
C
Well, that's not a problem, we're going to fail closed then. Okay, well, now you've just broken my network, because you told me to run all my traffic through your proxy. What we do is leverage the native capabilities inside of Kubernetes, utilizing network policies. There's no additional infrastructure; it's all there, at the lowest possible level of the orchestrator, so we know it's always going to be available to us.
A
Cool. And then, let's see here: how do you guys work with Istio?
C
I like to say we're Istio-aware. What that means is that, like when we showed the network segmentation page, where we showed you how we're creating new network policies dynamically for your environment based on what we know, we're going to continue to do that in an Istio environment, again at the network policy level, while still allowing Istio to do the more layer-7 type work. So Istio is still going to give you the ability to send 80 percent of the load over to this environment and 20 percent over to that environment.
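For reference, the layer-7 work left to Istio, such as the 80/20 split mentioned here, is expressed roughly like this in an Istio VirtualService (hostnames and subsets below are illustrative):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: visa-processor-split       # hypothetical
spec:
  hosts:
    - visa-processor
  http:
    - route:
        - destination:
            host: visa-processor
            subset: v1             # subsets defined in a DestinationRule (not shown)
          weight: 80
        - destination:
            host: visa-processor
            subset: v2
          weight: 20
```

Network policies and this kind of weighted routing operate at different layers, which is why the two can coexist.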
A
Cool. And then I probably have time for one more question, a configuration management question here for you: can I create policies on RBAC in my environment?
C
It's a good question. I didn't have time to show you here because of the amount of time we had, but yeah, that is probably the biggest differentiator for us in our configuration management capability: we're actually showing you live role-based access control directly from Kubernetes or directly from OpenShift. Starting at the level of roles: if we've got a bunch of roles out there that are not being utilized, with no users, groups, or service accounts associated with them, we'll show you that.
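Finding unused roles is essentially a set difference between the roles that exist and the roles that bindings reference. A toy sketch of the idea in Python; the data shapes here are simplified stand-ins for what the Kubernetes API returns for roles and role bindings:

```python
# Flag RBAC roles that no RoleBinding references: candidates for removal.
# Input shapes are simplified stand-ins for the Kubernetes API objects.

def unused_roles(roles, bindings):
    """Return the sorted list of roles with no binding referencing them."""
    bound = {b["roleRef"] for b in bindings}
    return sorted(set(roles) - bound)

roles = ["admin", "viewer", "legacy-deployer"]
bindings = [
    {"roleRef": "admin", "subject": "alice"},
    {"roleRef": "viewer", "subject": "ci-bot"},
]

print(unused_roles(roles, bindings))  # ['legacy-deployer']
```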
A
Cool, all right. Well, I think that's all the questions we have time to answer. If we didn't get to your question, we'll definitely email and follow up. Please look out for next steps: we'd love to have a deep-dive meeting and a one-on-one with you guys just to follow up, so look out for that. And if you're curious, we're donating $625 to the American Red Cross today, so awesome job there.
A
Thank you again, Trace, really appreciate your partnership with us. And thanks, Brandon, that was an awesome demo.