From YouTube: OpenShift Commons Briefing: Container Deployment & Security Best Practices, John Morello (Twistlock)
Description
The briefing reviews and explains the security concerns associated with container technologies and makes practical recommendations, as outlined in NIST's Special Publication 800-190, the Application Container Security Guide, for addressing those concerns when planning for, implementing, and maintaining containers.
Speakers: John Morello, CTO, Twistlock, and Dirk Herrmann, Product Manager, Red Hat
Moderator: All right, good morning, good afternoon, good evening, everyone. Thank you for joining our session today. This is both Red Hat and Twistlock, and we're going to be talking today about container deployment and security best practices. We know that a number of our clients and customers are deploying more containerized applications, leveraging microservices together with Kubernetes and so forth.
I'd like to introduce today our two speakers, who are going to take you through the best practices around container adoption, around leveraging those technologies, and how the platforms in front of you reference and meet the security guidance for best practices. With us today is John Morello, chief technology officer of Twistlock, and with us also is Dirk Herrmann, who is a product manager at Red Hat for OpenShift. Next slide, please.
Moderator: For anyone in the container market who has security concerns, NIST SP 800-190 is the document containing all the relevant information on how you can deploy containers in a very secure manner. Both Red Hat and Twistlock will talk about the content that we have, the resources we have, and the platforms that we have which meet those recommendations, and we'll leave time, as a very important piece, for questions and answers. We'd like to make this open to you throughout the conversation, so if you do have questions, please be sure to enter those in the chat box, which we'll be monitoring.
John Morello: My name is John Morello; as John mentioned, I'm the CTO of Twistlock. I'm also one of the lead authors of the NIST Special Publication 800-190. We worked on it for quite a while: myself, some folks from Cisco and Intel, and of course Murugiah Souppaya, a computer scientist at NIST who led a lot of the effort. But this was a very large effort that included multiple rounds of public comment and incorporation of feedback from lots of different organizations, including customers like Motorola and the Department of Homeland Security.
A special publication is not created for a minor enhancement or even a new product area, but really for generational changes in computing. As an example, there were NIST special publications that came out when virtualization became very popular, 10 or 12 years ago; there was another series of special publications released when cloud first became prominent; and now comes this one, with the wave of cloud native and containers.
The purpose of this new SP 800-190 is to define the framework for securing and protecting the applications and infrastructure that you run to support these containerized applications and microservices. The special publication is explicitly designed to be a vendor-agnostic roadmap and overview of the space, not to tell you explicitly how to configure settings in a given product or implementation. When you look at the special publication, it's intentionally not giving you instructions for how to configure OpenShift or how to configure Docker.
There may be some examples in the special publication where we utilize vendor products such as OpenShift or CoreOS to illustrate a concept or to serve as an example of a solution, but the special pubs are not designed to give explicit configuration guidance for those vendor products or for specific implementations of the concepts.
And finally, we want to note that the special publication is not designed exclusively for government use, nor explicitly for customers in the United States. While it was created by a US federal government agency, the purpose of NIST is not just to serve the federal government; the purpose is to create guidance and awareness that can be utilized by all organizations and even individuals, not just in the United States but internationally. The National Institute of Standards and Technology, NIST, is part of the Department of Commerce.
So, as you can imagine for a part of the Department of Commerce, one of the fundamental goals of these guides is to allow organizations to utilize these technologies in a safe and secure way for the promulgation and conduct of commerce. So don't think of this as government-specific information, although there will be government guidance created based on it. Again, this is designed to be vendor agnostic and high level, such that it's not specific to one industry or vertical.
Let's talk now a little bit more about what the guide actually gets into, what kinds of questions go into it, and how we address in the guide the way to think about this space. If you go to the next slide: we call out in the special publication a few things that are new challenges when you think about securing containers.
This is not to say that containers are an insecure technology, or that containers are higher risk than traditional patterns, or anything that implies any kind of fundamental problem with containers. But because containers are a new technology, like any new technology there are different risks and mitigations associated with them, and so at the beginning of the special publication we call out what those are, so that you can understand how this differs from traditional security patterns for systems like virtual machines or standalone server operating systems.
One of those is that containers scale up and down far more frequently than traditional systems typically do. In the past, you might have had a model where you would deploy a new version of your application quarterly, or maybe just annually, or maybe not even that frequently; it was pretty rare for people to be updating the application.
Additionally, the scale of the environment is much larger than you typically had in previous patterns, measured by the number of entities. If you think about a traditional n-tiered application, say a simple web-based app running in virtual machines, perhaps you might have six VMs that compose it: maybe a couple of front-end nodes, a couple of application-tier nodes, and then a couple of back-end database nodes, and the entire application would be encapsulated within those six virtual machines.
One of the fundamental concepts of containerization is to decompose the application, to tear the functionality apart into many more microservices that can be iterated on and updated much more independently of each other, and scaled and managed independently of each other. That's a fundamentally good thing from an application agility standpoint and for your ability to scale your application more efficiently. However, from a security standpoint, it also greatly increases the number of entities that you have to manage and deal with.
So you need tooling that can learn what's normal and fine versus what's abnormal, and that doesn't rely on human interaction to do the configuration of the security policies or of the tooling that enforces those policies. If you skip to the next slide: we tried to organize the special pub around this notion of threats and countermeasures. If you look at the document, the beginning goes through an architectural overview: what are the different components that are involved?
How do those components roughly relate to traditional patterns using virtual machines, and what are registries and orchestrators and so forth? Then the balance of the document is a risk analysis and a set of recommended countermeasures for those risks, and we organize those risks around five areas, which are the big building blocks of any container runtime environment: obviously you have the images; you have your registry, where those images are stored; you have your orchestrator, like OpenShift; and then the containers themselves and the host OS they run on.
If you move on to the next slide: we made a number of recommendations about the risks to consider and the countermeasures to apply to them, but I'm going to go through each one of them with some brevity here, so we can get through all the content. I definitely would encourage you to look at the special publication for more investigation of the actual underlying risks and the kind of threat profiling and modeling that we did to come to that.
To talk about some of the countermeasures from an image standpoint: one of the recommendations that we think is most important is that you have tooling that is aware of containers and images and is able to look at those containers and images throughout their lifecycle. The traditional mechanisms for doing vulnerability management were often a reactive approach: you would deploy your application or your virtual machine and then scan it for vulnerabilities after it had been deployed. What we're saying is that the scanning should happen much earlier, throughout the lifecycle.
The next part that we talk about is making sure that you're using images that are trustworthy. Much as in the traditional world you wouldn't want to be running software from unknown sources, similarly with containers you don't want to be running upstream images that you don't really have validation of and that you don't trust. That may seem like an obvious thing, but one of the things that makes containers so attractive to people is that it's very easy to reuse software that's already out there.
The whole ecosystem makes it very simple for you to just say `docker run whatever` and have that image automatically pulled and executed within your environment. And while that can be very handy from a development standpoint and save you some time, those approaches can introduce software that you don't know, and that you don't have any control or provenance over, into your environment.
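One hedged way to keep control over exactly what runs, a minimal sketch rather than anything the talk prescribes, is to reference vetted images by their immutable digest instead of a mutable tag in your pod specs. The registry, repository, and digest below are placeholders, not real values:

```yaml
# Illustrative only: image name and digest are hypothetical placeholders.
# Referencing an image by sha256 digest ensures the exact content you
# vetted is what runs, rather than whatever a mutable tag like :latest
# happens to point to at deploy time.
apiVersion: v1
kind: Pod
metadata:
  name: trusted-app
spec:
  containers:
    - name: app
      # Replace <digest> with the actual digest of the image you validated.
      image: registry.example.com/myteam/app@sha256:<digest>
      imagePullPolicy: IfNotPresent
```

Pinning by digest does not by itself establish trust; it simply guarantees that what you validated is what gets deployed.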
D
And
so
you
really
want
to
make
sure
that
the
images
that
you're
using
are
trustworthy
images
that
you've
vetted
not
just
for
the
compliance
and
security
and
invulnerability
aspects
of
it,
but
to
make
sure
that
those
images
are
behaving
in
a
manner
that
you
expect
them
to.
The
next
section
we
have
is
on
registries.
D
And,
finally,
the
that
registries,
like
any
other
storage
system,
can
can
accumulate
a
lot
of
I
guess
cruft
for
lack
of
a
lack
of
a
better
term
in
them
over
time.
You
know
you
might
have
images
that
you
haven't
used
in
months
or
years
that
that
are
still
there
and,
if
you're,
using
a
good
tagging
and
naming
process
for
your
images,
you
can
avoid
accidentally
using
older
images,
but
there's
also
not
a
lot
of
value,
necessarily
in
keeping
those
things
around
and
to
reduce
the
risk.
D
The
next
pane
we
have
is
talking
about
Orchestrator
countermeasures,
though
by
Orchestrator
we're
talking
about
the
technologies
like
open
shift
or
docker
swarm
or
kubernetes
itself
that
are
designed
to
provide
you.
The
ability
to
treat
a
large
number
of
individual
compute
nodes,
kind
of
a
general
pool
of
capacity
and
the
orchestrator
is
like
open
shifts
or
what's
responsible
for
load,
balancing
and
deployment
and
monitoring,
uptime
and
scale
and
so
forth.
Because
of
this
they're,
basically
a
privileged
operator
there.
D
They
are
the
management
clean
for
your
environment,
and
so
it
becomes
very
important
for
you
to
be
able
to
ensure
security
of
that,
because
if
you
have
an
insecure
Orchestrator
configuration,
you
can
literally
make
the
entire
environment
insecure
because
it
has
such
privileged
access
to
each
one
of
the
nodes.
That's
part
of
that
environment.
So
one
of
the
things
that
we
recommend
here
that
that
again
seems
like
an
obvious
notion.
It
is
something
that's
that's
been
in
many
previous
news.
D
Special
publications
is
to
make
sure
that
you've
got
a
least
privileged
access
model
to
managing
that
Orchestrator.
You
know,
someone
simply
needs
to
be
able
to
push
some
images
to
the
registry.
They
shouldn't
have
an
administrative
level
of
access
within
the
orchestrator,
where
they
can
choose
to
run
arbitrary
code
or
arbitrary
images
throughout
the
environment.
You
should
make
sure
that
the
access
to
the
orchestrator
reflects
the
actual
business
needs
of
a
given
user
within
that
environment.
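In Kubernetes or OpenShift terms, that least-privilege model can be expressed with RBAC. A minimal sketch, with the namespace, role, and user names being illustrative rather than anything named in the talk:

```yaml
# Illustrative least-privilege RBAC: this role only lets its holder
# view pods in one namespace. It grants no ability to create or run
# workloads, and nothing cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a        # hypothetical namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods-alice
subjects:
  - kind: User
    name: alice            # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The design choice here matches the speaker's point: a user who only needs to observe or push images gets a narrowly scoped role, never cluster-admin.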
D
One
of
the
things
that's
that's
often
talked
about
or
that
we
I
often
talk
to
customers
that
have
questions
about
at
least
is
those
kind
of
those
set.
Those
second
two
bullets
they're
about
separating
network
traffic
and
and
really
just
generally
workload
isolation.
You
know
one
of
the
things
that
that
people
sometimes
are
concerned
about
is
the
notion
of
the
the
compromised
neighbor.
D
You
know,
for
example,
like
if
I
have
a
scenario
in
which
I'm
running
some
application
workload
that
gets
compromised
I
want
to
make
sure
that
that
application
that's
been
compromised
is
not
able
to
go
and
impact
other
aspects
of
my
environment
that
are
not
maybe
exposed
to
the
same
risk,
but
are
still
running
on
the
same
infrastructure.
The
one
extreme
mechanism
you
could
use
to
do
that,
of
course,
would
be
to
establish
like
separate
invar
for
each
one
of
your
applications
that
wouldn't
really
be
very
conducive
or
efficient.
D
Though,
because
you
didn't
up
with
such
a
large
amount
of
overhead,
it
would
be
impractical
to
really
make
use
of
containers.
At
the
same
time,
you
don't
want
to
have
a
model
in
which
you've
got
your
front.
End
containers
running
on
the
same
computer
that
are
exposed
directly
to
the
Internet
as
the
containers
that
are
maybe
running
your
most
sensitive
applications
that
deal
with
like
health,
information
or
financial
details.
D
So
what
we
recommended
a
special
publication
is
more
of
a
sensitivity,
level,
zoning
of
resources,
so
you
may
have
an
environment
where
you
have
a
single
cluster
and
you
use
node
affinities
to
ensure
that
workloads
that
you
consider
to
be
highly
sensitive,
all
run
within
the
same
set
of
compute
nodes
and
workloads.
You
consider
to
be
less
sensitive,
run
in
a
different
load
set
of
compute
nodes,
or
you
might
take
more
of
a
federated
environment
where
you
may
actually
have
individual
clusters.
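As a sketch of that single-cluster zoning (the label key and values are illustrative, not from the talk): you label the nodes in each sensitivity zone, then require matching pods to schedule onto them via node affinity.

```yaml
# Illustrative sensitivity zoning with node affinity.
# Assumes the zone's nodes have been labeled beforehand, e.g.:
#   kubectl label node worker-1 sensitivity=high
apiVersion: v1
kind: Pod
metadata:
  name: payments-api       # hypothetical sensitive workload
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: sensitivity
                operator: In
                values: ["high"]
  containers:
    - name: app
      image: registry.example.com/payments/api:1.0
```

Note that affinity only steers sensitive pods toward the zone; it does not by itself keep low-sensitivity pods out of it.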
D
You
know
one
cluster,
that's
for
high
sensitivity,
a
different
one
for
lower
sensitivity,
a
third
one
for
medium
and
so
forth,
and
to
federate
those
or
you
know,
with
a
single
management,
pane
of
glass
and
a
common
common
way
of
operating
and
configuring.
Those
environments,
there's
not
a
particular
pattern
that
they
must
do
for
everyone
that
must
do,
and
the
thing
that
you're
really
trying
to
ensure
that
you
have
is
some
ability
to
isolate.
D
Isolation
is
to
run
those
workloads
on
different
computes.
Now
that
doesn't
mean
they
have
to
be
physically
different,
but
the
degree
of
isolation
that
a
hypervisor
provides
is
much
greater
than
the
degree
of
isolation
that
the
container
runtime
provides.
And
so
we
recommend
using
hyper
hypervisor
for
virtualization
as
a
as
a
completely
appropriate
way
to
separate
those
those
levels
of
of
workload
by
sensitivity.
D
So
it's
more
about
making
sure
that
the
host
OS
environments
are
different
based
on
sensitivity,
level
and
that
can
be
done
again
per
cluster
per
know
within
the
cluster
by
having
some
sort
of
a
fence,
deer
or
taint
for
nodes.
So
there's
a
variety
of
ways
to
do
it,
but
isolating
those
workloads
by
sensitivity
level
is
one
of
the
most
important
recommendations
that
we
make
in
a
special
publication.
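The node taints mentioned above can be sketched like this (the taint key, value, and workload name are illustrative). A tainted node repels every pod that does not explicitly tolerate the taint, so it acts as the fence the speaker describes:

```yaml
# Illustrative: first taint the high-sensitivity nodes so ordinary
# pods cannot be scheduled there, e.g.:
#   kubectl taint nodes worker-1 sensitivity=high:NoSchedule
# A sensitive workload then opts in with a matching toleration.
apiVersion: v1
kind: Pod
metadata:
  name: records-service    # hypothetical sensitive workload
spec:
  tolerations:
    - key: "sensitivity"
      operator: "Equal"
      value: "high"
      effect: "NoSchedule"
  containers:
    - name: app
      image: registry.example.com/health/records:2.3
```

Taints and affinities are complementary: the taint keeps untrusted pods out of the zone, while the affinity keeps sensitive pods inside it.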
Next, if we go to container countermeasures: some of the things that we recommend there are, again, kind of obvious stuff that you've seen before, such as being able to automatically find and remediate vulnerabilities in the container runtime itself, and by that I mean the runtime proper, either Docker or runc. Those runtimes are software themselves, even though they're very minimal and designed with security as a foremost goal.
You also want automation around the containers themselves. You can't do that if you require all this manual work and manually configured policies; you have to rely on software to be able to see what's normal within a container and to automatically prevent things that are abnormal. The reason this really becomes practical for the first time with containers is that the underlying platform has some key differences from previous generations of computing. Containers, relative to virtual machines, are more minimalistic.
D
It's
a
lot
less
stuff
that
you're
dealing
with
they're,
designed
to
be
much
more
predictable
in
nature
over
time,
you're
not
going
to
go
and
update
the
software
in
the
container
you're
going
to
destroy
the
container
reprovision
a
new
image
if
you
teeners,
are
also
more
declarative
in
nature,
you're
building
it
from
it
from
a
docker
file.
So
you've
got
a
more
declarative
and
discoverable
way
to
learn
some
fundamental
aspects
about
what's
normal
and
those
three
attributes.
D
Those
three
differences
from
traditional
systems
really
allow
you
to
have
a
sec,
t'v
and
practical
way
to
do
this,
behavioral
learning
and
then
to
identify
anomalies
based
on
what
those
models
predict
and
the
final
slide
we're
talking
about
is
the
host
countermeasures.
You
know
as
part
of
the
stack
you
can't
ignore
what
the
host
OS
is
doing,
because
at
the
end
of
it,
a
kind
of
the
foundation,
if
you
will,
the
entire
thing
is
the
operating
system
that
you
run.
D
We
also
recommend
a
few
other
things
on
this,
but
probably
the
most
important
one
that
I
would
mentioned
here
is
that
you
know
you
really
have
to
maintain
a
minimalistic
set
of
file
system
permissions
on
your
containers.
One
of
the
things
that's
that's
useful
for
running
containers
or
released
from
from
a
development
standpoint
is
containers
can
very
easily
interact
with
the
host
operating
system
in
the
hosts
file
system,
and
while
they
can
be
friendly
and
useful
from
a
development
standpoint,
doing
that
in
production
can
lead
to
all
kinds
of
unforeseen
consequences
and
security
risks.
You really want to make sure that the containers you're running have a minimal set of permissions within the host operating system, so you're not exposing the underlying host to threats from a compromised container that it may be running.
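One way to express that minimal footprint in a pod spec, a sketch rather than settings the talk prescribes, is a restrictive security context:

```yaml
# Illustrative restrictive pod: no root user, no privilege escalation,
# a read-only root filesystem, and all Linux capabilities dropped,
# limiting what a compromised container can do to its host.
apiVersion: v1
kind: Pod
metadata:
  name: minimal-perms
spec:
  containers:
    - name: app
      image: registry.example.com/myteam/app:1.0   # hypothetical image
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```

If the application needs scratch space, an `emptyDir` volume mounted at a writable path keeps the rest of the filesystem read-only.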
Dirk, I think you're going to pick up the next section on how we, and Red Hat, help achieve those recommendations in the special publication.
Dirk Herrmann: Yes, on what Red Hat, together with Twistlock, can offer to address all the areas and all the items mentioned in the publication. Let me just start with the Red Hat side, to make it a little bit easier. One of the things NIST defines in the special publication is the five tiers of a container technology architecture. It starts with developer systems, which effectively create or generate those images and then hand them over to the testing and accreditation systems, which validate and verify the content inside, sign the images, and finally send them to a registry.
The third part is the registry itself, and then come the orchestrator and the host. As we just mentioned, the risk areas are somewhat aligned to this technology architecture. If we look at the product portfolio Red Hat offers, we have of course many more products, but to highlight the two most relevant in this area: of course we have OpenShift. OpenShift is a comprehensive, enterprise-grade application platform built for container workloads.
Basically, it's the company's enterprise Kubernetes distribution, and it features a lot of the things called out in the publication; effectively, it spans all five areas outlined in the NIST architecture. We support developer systems with self-service provisioning and consistent environments with automated build and deploy. We ship, as part of OpenShift, testing and accreditation parts such as the CI/CD pipelines. Of course, it's an orchestrator as well, since it's a Kubernetes distribution, and it has a couple of the recommendations already enabled by default.
It has a very comprehensive RBAC and auditing model, and Quay offers a lot of additional features. Effectively, both products together provide a huge amount of the capabilities outlined there, but not all of them, and that's the main reason why we have this presentation today: what I at least realized while reading the document is that the remaining gaps are where a partner like Twistlock comes in.
So let's quickly talk about one of the key elements in the special publication, which is the container host OS. As John already mentioned, we have Red Hat CoreOS. The naming might be a little misleading for those who are familiar with the CoreOS acquisition: we acquired CoreOS earlier this year, and we made a decision very early in the year to merge the two existing smaller container-specific host OSes, Atomic Host on the Red Hat side and CoreOS Container Linux, and come up with a new name.
The new name is Red Hat CoreOS. Red Hat CoreOS is effectively the new container-specific host OS, which we will start shipping with OpenShift quite soon, and it is effectively what NIST describes in the document: a very small, lightweight host OS really made to run containers only. It's an immutable host.
It has an immutable, largely read-only file system design, and this is also where Twistlock fits in: Twistlock helps us to secure the host, the platform, and the workloads that developers run on top of it. So there are multiple things where Twistlock can provide additional value on top of what we at Red Hat ship by default.
Another interesting thing in the context of the container host operating system, and NIST explicitly calls this out, which is why I quote it here: you need to keep in mind that even if you have a very lightweight container host OS with a very small attack surface, there still is an attack surface, and even those packages, and especially those packages, are typically affected by security vulnerabilities.
So it's not only about having this smaller, lightweight host operating system; it's also about taking care of update management, especially in the security context, meaning quickly pushing out security updates for the host operating system. This is one critical element of the mission, and that's what we address with the over-the-air updates, originally developed by CoreOS as part of Container Linux and now part of Red Hat CoreOS, which apply all the updates, bug fixes and, of course, CVE remediations via the over-the-air update mechanism.
This is pretty powerful, as you probably can imagine. We are working on getting this integrated in a way that we can ship all the different types of content, the platform components, the workloads on top of them, and the host updates, within Red Hat Quay. This makes it pretty powerful. John showed one slide around the image-specific countermeasures, and I just want to pick a few of them.
We can't cover all of them, so here is one: one of the recommendations, or key takeaways, was to use container-specific tools and technologies for vulnerability, compliance, and secrets management. Red Hat Quay features a built-in scanner called Clair, which continuously scans all the images, which means you don't need to pull again or rescan an image if a new vulnerability comes out: the vulnerability metadata is changed or updated, Clair detects that automatically, and it adds the corresponding warning to the affected images.
We have a couple of different metadata sources which are used for the scanning itself, and this includes, of course, the Red Hat content and its security data, but it also includes other metadata sources for non-Red Hat content, such as Alpine, Debian, and Ubuntu. But still, it's less than ten different metadata sources, and we will see on the next slide what Twistlock can add.
There is no doubt about the scalability of Quay, especially on the scanning side. We have seen a couple of other offerings out there which of course can scan this content as well, but the question is what scalability those tools can achieve when it comes to a large customer environment with literally hundreds of thousands of images in it, and then Quay is probably a very good choice, given the way we implemented it.
The Clair scanning within Quay can scale up to the second-largest registry running in the world, which is Quay.io itself. In addition to the plain scaling, Quay already today features a very powerful notification mechanism, and one of the notifications could be, for example, a webhook, which would then allow you to integrate this into your existing toolchain, for example to create an incident in your incident management system or things like that.
We are working on a couple of enhancements on the reporting and dashboard side on both Quay and OpenShift. Basically, the idea is to bring together the information currently stored in Quay about the vulnerabilities with the information stored in OpenShift about the runtime itself: is this image used by any of my pods, belonging to which application and to which particular environments, is it dev only, or is it in production as well?
We are bringing together those different pieces of information to make enhancements on both sides, on the reporting, dashboard, and notification fronts. And there is another piece we are working on, and again it's mostly aligned to what has been described in the special publication: doing policy enforcement based on the information captured over the lifecycle. This includes vulnerability information, this includes signature information, this includes any other type of attestations or whatever relevant information could be captured in earlier stages of the lifecycle.
If it's in Quay, we can leverage it to do policy management and enforcement on the OpenShift side. So this is a feature we are heavily working on, and hopefully we will be able to release it pretty soon. But let's talk about the additional value Twistlock can bring to it; I'm handing back to John.
John Morello: Thanks, Dirk. Some of the things that we do go beyond what comes in the base components from the Red Hat standpoint, because we try to give people and customers a more contextual view of the information and the vulnerabilities that exist within their environment. So rather than just giving you a list of the CVEs that are present, we give you a view of that information in a way that's unique to your specific environment.
That lets you go and update the images that are of greatest criticality to you. We also have a lot more upstream data sources than many other providers have, in that we're able to utilize over 30 different upstream providers, such that we have more precise information in our results.
Say you're running an image on your CoreOS hosts protected with Twistlock and OpenShift, and it's a Java image built on a Red Hat base layer, with additional functionality and some Maven packages that you've added to it. Within Twistlock, because we have these diverse upstream sources, we'll be able to use a much more precise correlation of data: directly from Red Hat to look for vulnerabilities in the base layer, and directly from Oracle, for example, to look for vulnerabilities in the JDK.
We have custom, and oftentimes boutique, security research companies that provide us data specifically on vulnerabilities in Maven packages, and what that means for you as a customer is a much lower likelihood of false positives; the information that you get is more actionable, because it's really designed to show you exactly what the vendor who knows that component best thinks about a given vulnerability. And that detection capability spans the entirety of your application's lifecycle, so we can detect those problems and actively prevent those problems.
Literally, by failing builds from the very beginning of your CI process: if you're running a tool like Jenkins, or really any other CI platform, we have plugins that allow you to look for those vulnerabilities and compliance defects and enforce on them, from the very first build, to whatever registry you have, all the way into your production environment. You can have a very simple rule.
For example, a rule that says: in my OpenShift cluster, prevent running any image that has a high-severity CVE. Every time you do a deployment, Twistlock will validate whether or not that image is impacted by the rule and, if so, actually prevent the vulnerable image from running there. Dirk, if you want to progress to the next one.
Dirk Herrmann: Yep, thanks. Something similar to what we just discussed for the vulnerability scanning applies to nearly every other area, so this is just one other example, on the registry countermeasures. Some of the recommendations inside the special publication are kind of obvious: that hosts can only connect to the registry over encrypted channels is kind of obvious, and that access to the registry requires authentication is also kind of obvious.
The same with logging. Those are features which have existed for a long time in Red Hat Quay and in other Red Hat offerings as well. But some of the features, or some of the requirements, are not obvious, and luckily, as John said, the publication doesn't explicitly describe how to solve each problem; it basically gives us a lot of recommendations on what we need to address and how we should address it, but it still leaves us a degree of freedom in how to really tackle the problem.
One of the examples we discussed back and forth is automatically pruning, or deleting, the outdated or stale images within a registry. We currently envision approaching it in a slightly different way: we don't delete them, we just prevent users from accessing or running them, which effectively achieves the same thing; it's just a different implementation, and a much more efficient one in our opinion. And the same applies to other items, such as the authorization controls.
But again, the key takeaway for the registry: we are aware of the importance of the registry itself in the security context. The registry sits in the middle, between all the content and its metadata on the one hand, and the orchestration platform and the real running workloads on the other end. So basically the registry is a very important and critical piece of this whole setup, and this also includes how to get information into the registry.
That means storing information from different metadata providers, and typically metadata providers are scanners, be that the built-in scanner Clair or external scanners such as Twistlock. So we started to investigate how we can ensure that we have all the data we need, or want to use, on the orchestration side, and store and deliver it directly in the registry, or in a metadata pack used by the registry, to have a realistic view of all the data we want or need to use on the orchestrator side.
C
Just another example, quickly thrown in, on the image countermeasures. There is a lot of guidance inside this publication on untrusted images, and we are aware of the problem. So the idea of maintaining whitelists to control which images are allowed to be used was born in the registry and is finally being applied there; this is a key item we are working on.
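A minimal sketch of such a whitelist check (the repository names and the reference handling are simplified assumptions; digest references and implicit tags are ignored):

```python
# Hypothetical set of repositories approved for deployment.
WHITELIST = {
    "registry.example.com/base/rhel7",
    "registry.example.com/app/frontend",
}

def is_allowed(image_ref: str) -> bool:
    """Allow an image only if its repository (the part before the tag)
    is on the whitelist."""
    repo = image_ref.rsplit(":", 1)[0]
    return repo in WHITELIST
```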
The next one is image identification by signatures, or the whole content-trust story; that is a big item, and we are actively collaborating with other vendors and working on this feature as well. And, of course, there is the ongoing monitoring and maintenance of the repositories, as we just discussed: ensuring that the different images are updated and that, when something becomes applicable to them, it is made visible — visible also in case, you know, something affects an already running container.
C
So what we try to solve is to build on all the settings from the past and make visible what the current state of the images is — as you'll see at the end of today, for an image or a repository, what is behind it in this case. And then we added a couple of recommendations for the different build stages, the different life-cycle stages.
C
So let's start with the build. As I already mentioned, the container catalog and the health index are a good indicator for a first evaluation: you search and evaluate which images you want to use. What are the criteria for those images? What are their characteristics? What's the current security status of those images — long before you even pull them, so long before those images are in your environment and stored somewhere? We have a lot of guidance on how to use the catalog that way.
C
Let's ensure that all content which is supposed to be deployed on OpenShift and Kubernetes is stored centrally, and then you can apply all the governance models and policies directly in this centralized place. That's the feature I already mentioned earlier: an image is automatically scanned immediately after it's pushed into the registry.
C
This is part of the Quay and Clair integration, and the build pipeline can leverage this information. Both sides have APIs, and we are working on making it easier for customers to integrate those two different things, so that a pipeline can double-check: okay, the image is pushed into the registry — watch the scan result. Okay? Continue. Not okay? Let's stop here.
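That gate could look roughly like this in a pipeline step — the result shape is a loose assumption modeled on Clair-style severity labels, not an actual API response:

```python
SEVERITY_ORDER = ["Unknown", "Negligible", "Low", "Medium", "High", "Critical"]

def may_continue(scan_result: dict, fail_at: str = "High") -> bool:
    """Return True if the pipeline may proceed past the scan gate.

    The build stops as soon as any finding reaches the fail_at
    severity (assumed field names; adapt to your scanner's output).
    """
    worst = max(
        (SEVERITY_ORDER.index(v["Severity"])
         for v in scan_result.get("vulnerabilities", [])),
        default=0,
    )
    return worst < SEVERITY_ORDER.index(fail_at)
```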
C
We don't need to continue there. And of course we have a lot of tooling — capabilities and features — which helps you automate this whole thing and integrate it better into the existing landscape, and this includes a couple of webhooks, triggers, notifications and so on. So this is just on the Red Hat side, and of course we have much more — or even more — on the Twistlock side. So, John, do you want to explain it?
D
Yeah, absolutely. One of the things that we provide is called compliance checks. Our compliance checks take the entirety of the CIS benchmarks — for Docker, for Kubernetes, for the Linux host OS itself — add several dozen checks that our own security research team created as well, and give you the ability to do more than just look for vulnerabilities: to actually look for insecure configurations.
D
We have a template that's built in specifically for the special pub, SP 800-190, that allows you to go and pre-select the section of those checks, across all those different sources, that's aligned with the guidance you see in 800-190. So you can go through and use that template to pre-select the specific things that are aligned with all the recommendations it makes about how to secure your hosts, your images, your registries and so forth. And those checks are not just simply reactive and monitoring.
D
They're really trying to give you a proactive way to actually prevent insecure configurations. As I mentioned earlier, you can check for them in the build process, and you can also check for them at deployment time. So you can say, for example: if an image includes private keys or unencrypted secrets, or is configured to run as root, prevent all of those things from being able to run inside your production environment.
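The spirit of such deploy-time checks can be sketched as follows — the field names here are invented for illustration and are not Twistlock's actual schema:

```python
def deploy_violations(image_config: dict) -> list:
    """Collect the kinds of conditions mentioned above before an image
    is admitted to production (hypothetical config shape)."""
    violations = []
    if image_config.get("user", "root") in ("", "root", "0"):
        violations.append("runs as root")
    if any(path.endswith((".pem", ".key")) for path in image_config.get("files", [])):
        violations.append("contains private key material")
    if image_config.get("plaintext_secrets"):
        violations.append("unencrypted secrets present")
    return violations
```

An admission step would simply refuse any image for which this list is non-empty.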
D
One of the other capabilities that customers often use with Twistlock is the ability to mark certain registries as trustworthy. One of the guidance points we talked about earlier was ensuring you're really running trusted images, and with Twistlock it's very easy to go in and say: only trust images that originate from these registries or these repositories. You can be very specific about which trusted sources you want to allow images to run from throughout your cluster.
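Conceptually, that trust decision reduces to a prefix match on the image reference. A tiny sketch — the registry names are placeholders, and real matching must also handle ports, digests, and implicit defaults:

```python
# Hypothetical trusted prefixes: a whole registry, or a registry/repository path.
TRUSTED_PREFIXES = ("registry.redhat.io/", "quay.example.com/prod/")

def from_trusted_source(image_ref: str) -> bool:
    """Admit only images whose reference starts with a trusted prefix."""
    return image_ref.startswith(TRUSTED_PREFIXES)
```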
C
Cool, thank you. Just picking another life-cycle stage: if I'm finally moving the application to production, after the earlier stages of the life cycle, then there might be additional constraints or regulations applicable to it, so it might be more critical than other environments. Basically, we have a huge number of different technologies, features and tools for running different environments and clusters and separating them.
C
Whatever else there is, OpenShift has a lot of those features already built into the platform, and you can configure them — we provide a lot of guidance and documentation on it. And one of the items we are actively working on is what John has mentioned: having the policies, or policy-enforcement components, to ensure that production really has the highest level of protection compared with other environments. So you might allow something in dev that we'd probably want to prevent in production.
C
You can enforce that in production, but this still requires that you have all the information across all those environments. So it's not about bypassing the pipeline or bypassing the scanning — that's obviously not a goal — it's really about doing this environment-specific policy management. Basically this includes policy enforcement based on many different types of attestations — you mentioned them: signatures, vulnerabilities, any kind of plain-text labels or whatever else. And of course the security scanning itself is important too.
C
This is not a one-time event. It's not a point-in-time scan where the image is scanned once during the CI/CD pipeline; this is continuous vulnerability scanning. Each time a new vulnerability comes out it's automatically picked up by Clair, a notification can be sent out, and any notification can trigger events for rebuild automation. And rebuild automation is an important aspect, especially for the upcoming OpenShift 4, the OpenShift Container Platform.
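The rebuild-trigger logic amounts to intersecting the newly affected packages with each image's package inventory. A sketch under assumed data shapes (a real Clair notification carries much more structure):

```python
# Hypothetical inventory: image reference -> set of installed packages.
INVENTORY = {
    "app/frontend:v1": {"openssl-1.0.2k", "glibc-2.17"},
    "app/backend:v3": {"glibc-2.17"},
}

def images_to_rebuild(affected_packages: set) -> list:
    """When the scanner learns of a new CVE, return every image whose
    package set intersects the affected packages, so a rebuild or a
    notification can be triggered for each."""
    return sorted(img for img, pkgs in INVENTORY.items()
                  if pkgs & affected_packages)
```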
C
The automated operations across the whole stack make it easier — much, much easier — for customers to apply this to their existing running platform and the workloads running on top of it. Because if you can very efficiently manage updates across the entire stack — starting with the host, then the platform, and the workloads on top of it, based on the Operator technology — then you are in much better shape than you've been before, where you had to do all of this manually.
C
Even if you have the information coming out of a scan or something else, that alone is not sufficient. So one of the key takeaways — and we had this quote slide at the very beginning of this presentation — is that this only scales if you can automate it. And in addition to the mentioned tools we provide, Twistlock offers again a couple of valuable add-ons on top of it. John?
D
You know, we really provide that modeling capability that we talked about earlier: to be able to understand and create this four-dimensional model automatically — all the process activity, network activity, file system and system calls — and then to automatically look for and prevent anomalies relative to what those models predict. We actually take that model and combine models from multiple entities to also create this connectivity mesh, so that with Twistlock you actually get what looks very much like a Google map of your environment. You can zoom out and see all the different namespaces you've deployed.
D
The slide here really gives you a view of what that looks like — you know, the ability to automatically detect anomalies in the environment and prevent them. We have this view we call Incident Explorer: when we do find those anomalies, we surface all that information to you in a very usable format, so that when an incident occurs you can see all the information about it — what actually ran, what the checksum on the binary was.
D
And what other things that entity talks to. You can notice there in the middle the button to view forensics data. One of the cool things that we're doing is always having, basically, a flight data recorder that's recording all the process and network activity within the containers, but storing it locally on the node and only forwarding it to the Twistlock console when there's an actual event — so it's a very distributed way to proactively collect a lot of detailed forensics data.
D
Yeah, okay, just to summarize some of the key takeaways from the special pub that Twistlock and Red Hat helped jointly deliver. One of the most important ones is, again, that organizational people-and-process adaptation, so that you're taking advantage of the automation and are able to deal with the changes to the scale and the frequency of deployments. Then, utilize
D
a container host operating system with a smaller attack surface, like Red Hat CoreOS; separate workloads by sensitivity levels, which is again something you can do as part of your OpenShift design; and use tools and processes that take advantage of containers to give you that automated and scalable way of doing security — that's really where Twistlock comes into play — with tooling that gives you visibility into the entire stack. So you don't want to have something that just helps
D
you understand the security risk at the container layer, but something that helps you understand the security risks, the vulnerability posture, and the configuration of the host OS and the orchestration tools as well. And again, we try to build on top of the great work that Red Hat has done to create that foundation, and give you that vulnerability management, runtime defense, and compliance not just for the containers, but also for the hosts that they run on and the orchestration layer that you're using to manage it all.
C
And that's also one point I would like to highlight again. So basically, one of the key takeaways for me personally is that, after reading this document, I realized: okay, out of the box, we are doing a great job on the Red Hat side of providing most of the capabilities mentioned in there. We've done a lot of the stuff which is mentioned in there, which is great — but not everything.
C
So if, for whatever reason, you need or want to be fully aligned, and you want all the guidance implemented which is mentioned in the document, then it's not an either-or issue — it's not about Red Hat or Twistlock. It's really about the additional value coming out of all of our partnerships. So we've used this as an example to demonstrate: okay, we cover a couple of pieces out of the book, and if you want the full implementation, then we have partners to close several gaps.
D
From the Twistlock standpoint, Twistlock is just a containerized app, so we give it to you as a set of images and you deploy them onto OpenShift just like any other sort of OpenShift application. You run the console as a replication controller and deploy the Twistlock Defender to your nodes as a DaemonSet. So your data is completely under your control at all times; it's not a SaaS service, so it never sees your logs or images or anything else. You run it wherever you run OpenShift — it could be any cloud, it could be a hybrid cloud.
C
The same applies on our side, and there's one additional thing on the OpenShift side: we ship by default a couple of different storage options — a couple of different underlying storage options for you — which might help you address some additional data-privacy concerns as well. It's entirely up to you, so it's your responsibility; nothing is stored outside or shared with somebody.
C
So basically, what we are doing together is running workshops with different customers. Typically we have joint customers today who haven't been able to bring those two different pieces together yet, but they have started to look at it and basically asked us: okay, could you help us implement a more efficient and scalable approach to security? And then we start to review what the current state is, where they would like to go, and what the requirements and so on are.
C
And then we talk about all the options. Typically I'm using this NIST SP 800-190 on a regular basis to explain: okay, this is a huge set of options, and it's up to you to pick all of them or some of them. Then we talk about, okay, what are the options within all those different areas, and then typically we work closely together with Twistlock and other partners to bring them into the customer conversations and finally come up with an implementation roadmap.