A: Well, hello, everybody, and welcome again to another OpenShift Commons briefing: this time an incredibly timely talk on security, open-source security, and containers from our friends at Black Duck. Tim Mackey is with us, and we're going to let him do his presentation and talk all about the goodness of what Black Duck is doing there. With all the Apache Struts allegations coming out of Equifax and the problems of the world, Black Duck is going to help us figure out how to solve those, prevent that stuff from happening, and give us their insights into this space. So, Tim, without any further ado, I'm going to let you go ahead and introduce yourself and your topic. You can ask questions in the chat; I'll try to answer them, or there are a couple of other folks from Black Duck on the call who may answer them, but we'll have live Q&A at the end. So thanks. Cool.
B: Thank you, Diane, and welcome, everyone. My name is Tim Mackey; I'm a senior technologist with Black Duck Software, and I'm going to talk today a little bit about managing your risk. This talk is going to be at a little bit higher level than just OpenShift-specific for probably about three-quarters of the time, and then we're going to dive down into how this all benefits an OpenShift environment and the container infrastructure: what we all want to be using, deploying, and being successful with in an OpenShift environment.
B: A security-driven development and deployment model has developers empowered with security information; security-driven release policies; trusted components as part of our CI loop; security testing baked in; binary artifacts that only get created if those policies are met; signing of the things we're supposed to sign; images stored in trusted container registries; and deployment only from those trusted registries. That's my assertion.
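The "binary artifacts only get created if policies are met" assertion can be sketched as a CI gate. This is an illustrative sketch only: the policy threshold and the shape of the scan findings are hypothetical, not Black Duck's actual API.

```python
# Hypothetical CI policy gate: refuse to produce a build artifact
# unless every security finding is within the allowed severity.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def policy_allows_build(scan_findings, max_allowed="medium"):
    """Return True only if no finding exceeds the allowed severity."""
    limit = SEVERITY_ORDER[max_allowed]
    return all(SEVERITY_ORDER[f["severity"]] <= limit for f in scan_findings)

findings = [
    {"component": "struts2-core", "cve": "CVE-2017-5638", "severity": "critical"},
    {"component": "commons-lang", "cve": None, "severity": "low"},
]

# A CI step would fail the job (and skip artifact creation) on False.
print(policy_allows_build(findings))      # critical finding blocks the build
print(policy_allows_build([], "medium"))  # a clean scan passes
```

In a pipeline, a False here would fail the build step, so no binary artifact is produced or signed.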
B: That's my starting point. And if those nine things are actually met, the question becomes: well, what can go wrong? The sad thing is that quite a bit can actually go wrong. CSO magazine earlier this year had an article out by Maria Korolov that said the easiest way to get fired in 2017 was to have a security breach.
B: I don't know exactly how many people at Equifax are now potentially looking for jobs, but this is the reality that IT lives in today, and part of it is borne of regulations that are global in nature. For example, in the EU there's GDPR, and in Canada there's legislation called PIPEDA.
B: They're basically setting regulatory requirements for organizations that hold personally identifiable information on their customer base, around what they disclose, how quickly, and the details under which they disclose, with a set of penalties. In the case of, say, GDPR, there's actually a percentage of revenue associated with those penalties. In other scenarios we've seen execs, like the CEO of Target, actually get fired as a result of the breach they had a few years ago. Now let's focus down a little bit.
B: IBM and the Ponemon Institute annually put out a Cost of a Data Breach study, and there were three items in there that really caught my eye: the average cost of a data breach was a little over seven million dollars, the lost business from it was a little over four million, and the shocking number, the length of time it took to identify and contain a breach, was a little over six months.
B: Well, Equifax's was under six months, which means that, as horrendous as it is to make this statement, Equifax actually did a better job than the average organization. This is the type of world we're actually living in. Now, a lot of people go and say: well, you know what, from an application perspective it doesn't really matter; our infrastructure guys have thought of everything.
B: So if I look at a build-out: I've got some users, some shiny, happy people, on the left; I've got some perimeter defenses; and I've got my data center. If I build out what's inside a node of that data center, I may have a hypervisor in place. It has its own set of services, which includes an SDN network for ensuring that only the traffic that's supposed to be there is there, and I have some form of security service in place that's looking for malicious activity within the virtual machines themselves.
B: Obviously I have a virtual machine that is going to be a container host. That allows me to have multi-tenant segmentation; I'm going to have a minimal OS like Red Hat Atomic, and I'm going to have some number of containers in here. Because this is a virtual environment, I'm going to replicate it to however many container hosts are necessary. If one of those containers happens to have a component that is vulnerable, things get interesting quickly. So let's change our shiny, happy people into a malicious actor.
B: Now let's assert that the malicious actor was able to compromise that vulnerable container. Well, they're now on exactly the other side of all those perimeter defenses, so they're in a position where they could potentially mount an attack from one compromised container to another, despite all of the infrastructure rules that are in place, and notwithstanding the fact that a lot of the vulnerabilities we see in web-based frameworks require a reconfiguration of perimeter defenses in order to even detect the patterns of attack.
B: So the goal, when trying to secure the base of a large-scale infrastructure, is truly to question everything and continually re-evaluate the trust of what's out there. We should be looking at things like: where does your base image actually come from? If you're building it locally, when was it brought down? What is the health of that base image? Within the Red Hat Container Catalog we now have a health index that shows how up to date it is.
B: Does it have any known CVEs in it? What's the patch interval? What, truly, is the health of that image? If you're running through a set of build servers, who trusts them? Do you trust them yourself? Is there a way that a foreign container can start in your environment? Are you allowing, say, an OpenShift template to come in? Are you allowing users to go and access containers that are referenced from Docker Hub directly?
B: Do you have the provenance there? If you're building base images from outside, what happens if that registry goes away? What happens if the tag that you're beholden to goes away or changes (because, after all, tags are immutable, right?)? What's the process to determine the impact if there's a new security disclosure? All of these things are part and parcel of the trust model that needs to be put in place as we grow and as we mature. And if at this point you're saying, wow, my brain hurts, and this is an awful lot of work: right.
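One of the trust-model questions above, "can a foreign container start in your environment?", can be sketched as a registry allow-list check. This is a minimal sketch, assuming a made-up allow-list and made-up image references; real image-reference parsing (as Docker and OpenShift do it) handles more cases.

```python
# Minimal sketch: is this image reference from a registry we trust?
# The allow-list and example refs are hypothetical.

ALLOWED_REGISTRIES = {"registry.access.redhat.com", "registry.internal.example.com"}

def registry_of(image_ref):
    """Extract the registry host from an image reference.
    An unqualified ref (e.g. 'nginx:latest') implicitly means Docker Hub."""
    if "/" in image_ref:
        first = image_ref.split("/", 1)[0]
        # A registry prefix looks like a hostname (dot or port) or localhost.
        if "." in first or ":" in first or first == "localhost":
            return first
    return "docker.io"

def is_trusted(image_ref):
    return registry_of(image_ref) in ALLOWED_REGISTRIES

print(is_trusted("registry.access.redhat.com/rhel7/rhel-atomic:7.4"))  # True
print(is_trusted("nginx:latest"))  # False: an unqualified Docker Hub pull
```

A deployment policy built on this would refuse pods whose images resolve to an untrusted registry.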
B: Yes, and yes. Most people would be right in saying this is going to hurt their brain. So let's take a little bit of a deeper dive into how we can better manage some of this, because at the end of the day we don't want anybody fired as a result of a data breach. These things are manageable if you understand the information flow. One of the things that challenges enterprises (and I'd be willing to bet it's something that challenged Equifax) is that open source doesn't play by the traditional commercial, proprietary software rules.
B: If you take a look at a pure upstream project compared to commercial code: when a project decides that a given version is end-of-life, there's no opportunity to pay a boatload of money, there's no dedicated support team with SLAs, there's no staff of security researchers, and there's no transactional relationship between a procurement team and a quote-unquote vendor. Now, obviously, as you move away from pure upstream, you can get organizations like Red Hat, who will very nicely provide support services and curation for all of this.
B: But when we look at the infinity of open source, it is truly a community-based activity. It's truly a scenario where, if you've forked, if you've done anything that deviates from that distributed component, ultimately you're the one who's responsible, and you need to establish that relationship. To put a bit of a point on it, look at this MediaWiki maintenance-release announcement for versions 1.26, 1.25, 1.24, and 1.23. There are two key pieces in here. There's a security disclosure that says various special pages resulted in fatal errors.
B: That's in the first yellow block that I highlighted; that's the nature of the security update. So if you are a MediaWiki admin looking to determine whether or not this is appropriate to deploy at this particular point in time, that's the information you're working with. The other thing is that there's also a note about end-of-life, and it says: please note that 1.24.6 marks the end of support for the 1.24 series of releases.
B: So here we have a potential attacker, and this attacker has a job to do. The job is to determine whether or not there is a set of vulnerabilities against a specific set of configurations or platforms. They create their attack, they test it against platforms, and chances are the old iterations don't go so well: they error out, they iterate, and eventually they're going to find something successful. Now, that success might not necessarily be something that supported the original thesis, but a success is a success is a success.
B: They're going to claim victory and move on. And in order to move on, they have to create a deployment vehicle, or utilize one of the multitude of deployment vehicles that are out there, to take this attack that they've now created and package it up for utilization. Now they have a trust issue themselves: they need to be able to demonstrate that this in fact works.
B: So in all likelihood they go and create a video on YouTube showing exactly how their attack was able to compromise the system: gee whiz, isn't this a wonderful thing, you should be using it too. And if this looks a little bit like an SDLC, that's because it is. This person has a job to do; they're working for someone who has an end goal of being able to build something out.
B: But this is something that also happens a little bit off in the shadows, and occasionally things like Equifax happen, where, as I refer to it, the PR department gets involved, and the vulnerability that's out there ends up getting a ton of publicity. That increases their credibility and increases the value of what they've accomplished. This is their business, and this is what we're collectively fighting against. Now, we've talked a little bit about theory.
B: That was theoretical; I'm going to actually make this incredibly real and decompose a specific vulnerability, one that was made public through the process known as responsible disclosure. Under that process, a security researcher uncovers an issue, goes to whoever the project owner is, and says: gee whiz, if I do this, bad things happen, and I assert that that's a security issue. Collectively they work together to determine exactly what the scope of it is and create patches.
B: Those patches are then brought downstream into distributions, and the idea is that until those patches are actually released, nobody outside of those core team members actually knows that the issue is happening or could be out there. In decomposing this vulnerability, I've decided to choose a vulnerability from last fall that impacted a lot of the systems we're dealing with on a daily basis.
B: Specifically, the Linux kernel. This was an embargoed vulnerability, which is the term that's used when you're working through responsible disclosure, and it was given the name CVE-2016-5195. CVE stands for Common Vulnerabilities and Exposures; 2016 just happens to be the year in which it was allocated, and 5195 just happens to be the sequence number associated with the block of numbers this was run through. Nothing particularly fancy about it.
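The naming scheme just decomposed (CVE-&lt;year&gt;-&lt;sequence&gt;) is simple enough to pull apart mechanically, as a quick sketch:

```python
# Split a CVE identifier into its two fields: allocation year and
# sequence number. Sequence numbers are four or more digits.
import re

CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(cve_id):
    m = CVE_PATTERN.match(cve_id)
    if not m:
        raise ValueError(f"not a CVE identifier: {cve_id}")
    return {"year": int(m.group(1)), "sequence": int(m.group(2))}

print(parse_cve("CVE-2016-5195"))  # {'year': 2016, 'sequence': 5195}
```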
B: Now, the upstream patch was created on the 18th of October by Linus, and this is his commit message; I highlighted a couple of important pieces in it. The first: this is an ancient bug. A fix was actually attempted once, 11 years ago, but it was then undone. So what we've effectively established is a set of commit IDs (that I didn't highlight) that go back 11 years and represent the timeline for this particular issue, and there's a whole series of forks that happened over that time period.
B: So there are going to be multiple branches of the kernel that are impacted by the patches. The next piece I want to call out is the last highlighted section, which says: also, the VM has become more scalable, and what was a purely theoretical race condition back then has become much easier to trigger. What that's really saying is that if we look back at the types of servers we were working with ten or eleven years ago, they were single-core machines, maybe with some hyper-threading.
B: We might have had two or four sockets in there, so there wasn't a whole lot of concurrency. Race conditions love concurrency. Today you can get 12-, 18-, or 24-core sockets, so there's a ton of concurrency in there; throw in a second socket and you've just doubled it. And when you're dealing with something that's a copy-on-write issue, as this was, race conditions can be particularly problematic.
B: So that was Linus's commit message on the 18th of October. On the 21st of October the embargo expires: there's tons of media coverage, the silly Dirty COW branding, and dirtycow.ninja is created, where you can buy silly things in their shop, including t-shirts and coffee mugs. Patches are available from all major distributions, the embargo has expired, various people start to make assertions, and the timeline moves forward.
B: Now, in the US and Canada we have a little fall festival called Halloween, where people love to dress up in silly costumes: kids dress up and go door to door looking for candy, grown-ups party in their silly costumes, and we have a lot of fun with it at Black Duck as well. This is Madeleine; Madeleine is one of our inside sales people, and she decided that she was going to dress up as Dirty COW as part of her team, and they actually won a contest. That's the 31st of October.
B: Now, if you've got media coverage for this vulnerability, the logical place where you would expect to find security information would be what's known as the National Vulnerability Database, the NVD (sometimes people refer to it as MITRE). It actually had no meaningful information on this vulnerability until the 10th of November.
B: So that's roughly three weeks of timeline from when the embargo expired: patches were available, people dressed in silly costumes, and there was still nothing disclosed from a security perspective. That's a pretty big time window for someone who wants to mount a malicious attack. And there's a whole series of point-in-time decisions and information pieces that play in. When the embargo expired, various media outlets were asserting that this was not remotely executable.
B: It turned out that it was. That information came out about six hours later, when the researcher said: well, I figured this out by looking at my web logs, so yes, this is remotely executable. There were initial assertions that virtualization meant this was not exploitable; it took the better part of a day for that to become "it depends on how the hypervisor is architected, so some are and some aren't." Then about three days went by where people were asserting that if you were in a container, the namespaces effectively prevented this from occurring.
B: If you look about halfway down the page of PoCs, you'll see one that says "0xdeadbeef"; that's actually a container breakout, and it took a little over three days for that to come out. It uses a very interesting way of manipulating the system in order to bring that forward. At this point in time there are well over 80 such proofs of concept out there. So if you made your decision about how to go about patching this when the embargo expired, you exposed yourself to a different level of risk than if you were continuously re-evaluating.
B: Some organizations look at security analysis as: you know what, I'm going to go and do pattern-based static analysis (so, fundamentally, a Coverity scan, for example). Others are going to do some injection testing; others are going to do some fuzzing or some pen-testing on the system. In reality, all of these techniques are focusing on the code that the individual or the organization is creating; they're not focused on upstream, and they're not focused on the dependencies.
B: That's where tools like vulnerability analysis, which is what Black Duck does, come into play. In a full, end-to-end security model you want to do static analysis, you want to do injection testing, and you want to do dynamic analysis, but you probably aren't going to get buy-in from your leadership team to run static analysis on the Linux kernel, or on the Docker engine, or on OpenShift components, or on your SDN controllers.
B: The vector was actually through an OpenSSH vulnerability from 2004, with the flag AllowTcpForwarding set to true. If you look at that particular disclosure's description, one of the things you'll notice is that it describes nothing like what we have today for a compute environment. It does not describe IoT devices; it looks like one of those ones where, yeah, this is legacy, it doesn't really impact me. If you dig just a little bit deeper, you'll find that the AllowTcpForwarding flag is set to true in the man page.
B: The disclosure says that, well, this is not a security issue. And I assert that any time someone says "this is not a security issue," it probably is, and your spidey sense should be going: yeah, we want to work on that. It's "not a security issue" because it needs a well-known password and a public connection in order to be exploited. It just so happened that a lot of these IoT devices had passwords like admin/admin or admin/password.
B: So, all of a sudden, something that shouldn't be an issue becomes a big issue. Next, the Apache Struts vulnerability. This actually impacted the Canada Revenue Agency, the CRA, which is the Canadian equivalent of the IRS, right in the middle of the e-file tax season. It had a little bit of extra press around it because they were proactive and reached out to the media to say: look, we are turning this e-file system off because we are vulnerable, and we need to get this thing fixed. As it turns out.
B: The same vulnerability from March is exactly what impacted Equifax, or at least became disclosed with Equifax, last week. Vulnerability response times matter, and awareness matters. And there's an incredible long tail: we may want to point our fingers at Equifax and ask why it took so long, but we can equally point our fingers at the roughly 200,000 websites that are still vulnerable to Heartbleed, a three-year-old vulnerability in OpenSSL.
B: One of the things this actually plays into is what we look at as an open-source development risk maturity model, and it's going to feel really familiar to most people. At level one, we're worrying about features and functions; we really don't care what we're bringing in or what our dependencies are. It's a state of blissful ignorance.
B: We want to make something; we want to hit that MVP so we can actually get something out there and find out whether it solves the problem, because that's what we actually care about. As we move forward, we find a few people working on it, a few people who've actually downloaded it and might actually be using it. We get a little bit of an awakening, an understanding that the security implications of what we're working with should be attended to, but we're still very much focused on features at this point in time.
B: Then we get to a level of understanding: we start throwing in some manual review processes and some fairly basic tooling. We might have some spreadsheets to keep track of stuff; we might try some free or low-cost tools and do security scans, maybe prior to each release as opposed to on an ongoing basis. This is where a lot of projects are today.
B: What we really want to get to is a state of enlightenment, where we have automatic identification of all the risks as they happen and as they're disclosed, and we've baked all of this into our CI/CD environments. From an OpenShift perspective, that would be: hey, can I bake this into my build pipelines? What kind of awareness do I have around my builders? This is the model that we want to take forward when we automate, and there's a set of criteria that we put in place.
B: So what makes for successful automation, and what highlights and extracts the best information flow? The first thing we want to look at is the factors that impact risk. Item number one is a vulnerable open-source component: where are they? Where are all the dependencies coming from? Are the dependencies on the application side, or on the user-space side inside the container image? Is a component that we depend upon a fork or a true dependency? How is it being linked in?
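The dependency questions above boil down to mapping an image's component inventory against what's known to be vulnerable. A minimal sketch, with an in-memory stand-in for the vulnerability knowledge base (real tools resolve version ranges and transitive dependencies far more carefully):

```python
# Hypothetical data: (component, version) pairs known to carry CVEs.
vulnerable = {
    ("struts2-core", "2.3.31"): ["CVE-2017-5638"],
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],  # Heartbleed
}

# Hypothetical inventory of one container image, mixing an
# application-side dependency with a user-space one that came
# along "for the ride".
image_components = [
    ("openssl", "1.0.1f"),
    ("tomcat", "8.5.20"),
]

# Flag every component in the image that the feed knows to be vulnerable.
findings = {c: vulnerable[c] for c in image_components if c in vulnerable}
print(findings)  # {('openssl', '1.0.1f'): ['CVE-2014-0160']}
```

The point of the exercise: the vulnerable component here is not application code at all, which is exactly why scanning only the code you write misses it.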
B: Those are the factors that go into the vulnerable open-source side of things. Point-in-time decisions are another problem. We all want to use stable components wherever possible, but that stable version might be end-of-life, and end-of-life equates to dead: we might have a lot of responsibility and potential technical debt to attend to. Are there change sets coming down the pipe that could make it a lot more difficult to update to a newer version if the need dictates, such as API versioning issues? What is the security response process for the project?
B: What is the commit velocity? Who are the contributors, and are they changing or stable? Some of this information is actually coming out of a new initiative that the Linux Foundation announced on Monday, called the CHAOSS project, to understand exactly what the true health of upstream projects really is.
B: We might do some A/B testing; we might put a canary out there. But fundamentally we're not in the patching process when we're working with containers, and we do need to question that patch process. At each point in time (Struts, for example, became more vulnerable due to the nature of whatever was disclosed against it), if you happened to be at a version prior to that and upgraded to a version that was worse, did you actually move from the frying pan into the fire? Are you the fish, or are you the person in control?
B: We want to build a risk profile for every single container in the system, even builders. So if I'm doing a Source-to-Image build and my Git environment changes, the S2I build is going to toss the result into the internal OpenShift registry, and I've got my image-change and deployment triggers in place. If a vulnerability comes in through that workflow, it makes sense that you might have some vulnerabilities in your container; but what happens if the builder container is similarly impacted?
B: The last piece of the puzzle is around ongoing changes in risk. If we assume that we've got every single test done and we've got a shiny, happy object, and then a new disclosure happened, say, 40 minutes ago: we don't want to be in the business of continuously scanning our running containers, because that's going to impact the performance and scalability of our system.
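The alternative to continuous rescanning, which the rest of the talk describes, is to keep the bill of materials from the one scan you did and map new disclosures against it. A sketch of that idea, with hypothetical image digests and BOM entries:

```python
# Stored bills of materials, keyed by immutable image digest:
# scan once, keep the inventory.
boms = {
    "registry.example.com/shop/cart@sha256:ab12": {("struts2-core", "2.3.31")},
    "registry.example.com/shop/auth@sha256:cd34": {("openssl", "1.0.2k")},
}

def images_affected_by(disclosure, boms):
    """A new disclosure names one vulnerable (component, version) pair;
    finding the impacted images is a lookup, not a rescan."""
    return [image for image, components in boms.items()
            if disclosure in components]

# A new CVE against struts2-core 2.3.31 arrives 40 minutes ago:
print(images_affected_by(("struts2-core", "2.3.31"), boms))
```

No running container is touched; the cost of a new disclosure is a query over stored inventories.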
B: So, as you'd now expect, we have a solution. Historically it has been geared toward the developer experience and the release-engineering experience: being able to, for example, provide security information within the Eclipse IDE or the Visual Studio IDE, work with various package managers, integrate within CI toolchains from pretty much every CI out there, integrate with the static and dynamic analysis security tools (so if you're, for example, going through Micro Focus Fortify, you've got the pieces in place), and scan wherever the artifact storage is.
B: We have a knowledge base that we host, partly because of its size: it's about five hundred terabytes and about to become a petabyte, and most people don't really want to be in the business of hosting that. Every other aspect of the solution is actually customer-hosted. Our core application is called the Hub, and we have essentially a hub-and-spoke scenario where various integration elements hang off of it. If we look at the OpenShift environment, this can be an enterprise deployment or an Origin deployment.
B: They all work exactly the same way. I have the potential for an integrated registry, with image-stream events hanging off of it. Obviously I also have the potential for an external registry: that could be the Red Hat container catalog, that could be Docker Hub, or that could be your own internal Artifactory or Nexus repository. What we do is put in place an integration element designed to listen for activities happening in the system that relate to immutable container objects.
B: So when a new image stream is created that's associated with an image within the registry's purview, we'll see that create event; we'll see updates and deletes as well. We'll also see pod-creation events, so if a container image is brought in from outside the integrated environment, I will be able to see that. When we see it, we'll go and perform an assessment to determine whether or not it needs to be scanned; if it does, we perform that scan, and the results go up to the Hub.
B: The Hub then takes a look at that bill of materials and maps it against our knowledge base to say: here's what the risk is. Those risks are assessed by our policy engine, which then communicates everything back to the scan controller and ultimately annotates the images. Those annotations are actually pretty interesting: with Origin 1.6 and OpenShift Enterprise 3.6, we actually have a spec in place where those annotations can be used within an admission-control workflow to decide that, you know what, this image is okay to deploy.
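The admission-control workflow described here can be sketched as a decision over the image's annotations. The annotation key and JSON payload below are simplified stand-ins in the spirit of the quality.images.openshift.io annotations, not the exact specification.

```python
# Hypothetical admission decision driven by a security annotation.
import json

ANNOTATION_KEY = "quality.images.openshift.io/policy.blackduck"  # simplified stand-in

def admit(image_annotations):
    """Deny deployment if the annotation marks the image non-compliant,
    or if the image has never been assessed at all."""
    raw = image_annotations.get(ANNOTATION_KEY)
    if raw is None:
        return False, "no scan result: image has not been assessed"
    verdict = json.loads(raw)
    if verdict.get("compliant"):
        return True, "policy compliant"
    return False, "policy violation: deployment denied"

annotations = {ANNOTATION_KEY: json.dumps({"compliant": False})}
print(admit(annotations))  # (False, 'policy violation: deployment denied')
print(admit({}))           # (False, 'no scan result: image has not been assessed')
```

Because the policy engine rewrites the annotation as new disclosures arrive, the same admission check automatically flips its answer when an image that was fine yesterday goes non-compliant today.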
B: Or: this image was okay to deploy, but now it's not so great. Of course, the outside world is continually changing, so the policy engine updates as the outside world changes, and we'll get new notifications coming in, which will also update those annotations, ensuring that the state of the system is no more than an hour out of date. Effectively, what this boils down to is that had Equifax, theoretically, been using OpenShift for the application that was the attack vector we've all been talking about for the better part of the last week...
B: We would have been in a position to let them know exactly which images were impacted within an hour of that disclosure back in March, and to continuously monitor for any changes, so that even if a developer happened to revert to an older version (because that's what they needed to do), we could have flagged it as it happened. That's our piece of the puzzle, but we want to make certain that we are truly layering container security, and the success criteria for a truly trusted environment start with the platform.
B: But that still leaves the infinity of open source, and that's where we take over. We literally scan any and all container images in an OpenShift deployment, including our own, providing visibility into the open-source components regardless of their source, annotating those images with the vulnerability information, and automatically updating them with new disclosure information as it occurs, without any need for a rescan and without any human involvement. It's completely automated and integrated within the system.
B: Now I'm going to show exactly how easy it is to install. Actually, first let me make sure I'm logged in to the right place. So here's my OpenShift console; it's probably going to make me log in again. I actually have a project called Black Duck scan, which I'm going to delete right now; the Black Duck scan project is our integration element. Once it's completely deleted itself, along with all the container infrastructure underneath it, we'll go and we'll refresh.
B: There's my username and password. I select the version of my Hub server; I happen to know this is version 3.7.1. I'm going to go with two concurrent scans to make this go just a little bit faster; for the most part, that's the value people use. Occasionally, if the nodes are smaller, we recommend going with one concurrent scan, and in very large clusters we've seen three be beneficial.
B: It goes and creates all the components, and if I go back to my OpenShift console I'll see the Black Duck scan project in place, with a total of five containers in there now. My infrastructure itself has four nodes, and the way this is architected, we have a daemon set on each of the worker nodes, and that daemon set is listening for any node-level activity related to images being created and deployed.
B: When it uncovers an image that might need to be scanned, it will go and ask for permission from the arbiter. So I'm going to go and take a look at our pods. I'll take a look at the arbiter, and we see that right now it's assigning a variety of jobs to ensure that the scans of all these fully qualified containers are being performed.
B: If I look at a controller, the controller actually consists of two containers: one is a sidecar, which is the scan engine when it isn't being used, and the other performs the actual scans. From a usability perspective, I could kill any one of these things off and they would restart themselves, because that's what cloud-native computing is all about. At the end of the day, the scans are being performed and the information is coming up into our Hub, which I'm going to switch to.
B: What we see is a set of projects that are created, and in each of them there's going to be some amount of registry information. This 172.31.103.10 happens to be the console associated with this OpenShift environment, or we might have multiple OpenShift environments coming in. If an image is coming from Docker Hub, it won't be fully qualified.
B: Sometimes we get ones from docker.io, and we'll also be scanning things coming out of the Red Hat container catalog as they're used. So I'm going to go and take a look at this image here, hub-documentation. The version is the first 10 characters of the image's SHA digest, so it's completely immutable, and I see all the components that are actually in here.
B
So there are 221 components that are part of this particular application: there are some Hibernate pieces in here, and some things that are coming along entirely for the ride as part of the user space. In this case, 19 components have a high-severity vulnerability associated with them. Let's see, which one do I want? I'll pick on Tomcat; let's see what we have from a Tomcat perspective. I have some new vulnerabilities in place, and I can see exactly what the record is and get a deeper view of it.
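Slicing a bill of materials by severity, the way the Hub view above does (221 components, 19 with a high-severity finding), amounts to a simple filter over the inventory. A minimal sketch with made-up data:

```python
# Each entry: component name and its worst vulnerability severity (made-up data).
components = [
    {"name": "tomcat", "severity": "HIGH"},
    {"name": "hibernate-core", "severity": "MEDIUM"},
    {"name": "commons-lang", "severity": "NONE"},
    {"name": "struts2-core", "severity": "HIGH"},
]


def by_severity(components, severity):
    """Return the names of components whose worst finding matches `severity`."""
    return [c["name"] for c in components if c["severity"] == severity]


high_risk = by_severity(components, "HIGH")  # components to triage first
```

Triage then starts from `high_risk` rather than from the full component list.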
B
What's in here: descriptions, how exploitable it is, some references. A lot of the time we're able to get to things like discussions around the particular fix, and occasionally even find the actual exploit code so that you can test against it. These are all the normal things we have with a specific vulnerability. But importantly, if I go back, this one was an Apache Tomcat issue.
B
This is some of the annotation information that ends up being put in place. I can see the server this was running on, the version that was there, what the endpoint is, and this quality.images.openshift.io/policy.blackduck annotation. This is the specification I was referring to earlier, where I'm now able to flag that this image is not compliant with policy. And so, if I had admission control in place that prevented policy-non-compliant images from being executed as containers, this would be managed automatically by that admission controller.
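The admission-control idea, letting the platform refuse to run pods whose images carry a failing policy annotation, can be sketched as follows. The annotation key is modeled on the quality.images.openshift.io policy convention mentioned in the demo, but the exact key and the values checked here are illustrative assumptions, not a real webhook implementation.

```python
# Hypothetical annotation key, modeled on the demo's
# quality.images.openshift.io/policy.blackduck convention.
POLICY_KEY = "quality.images.openshift.io/policy.blackduck"


def admit(pod_annotations):
    """Toy admission decision: allow unless the image-quality annotation
    explicitly reports a policy violation."""
    verdict = pod_annotations.get(POLICY_KEY, "unknown")
    if verdict == "violation":
        return (False, "image fails Black Duck policy")
    return (True, "ok")


allowed, reason = admit({POLICY_KEY: "violation"})
```

A real admission controller would receive the full pod spec from the API server and respond with an allow/deny decision; the annotation lookup above is the core of that check.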
B
So we bake all this information in, so that interacting with the Black Duck Hub user interface is not necessarily a requirement for an operations user. They can read out these annotations, bring them into a Sysdig or a Datadog, and do the right things based on the information present there. And that's the second key element we have: we're not forcing people into our UI.
B
Now, it was designed with an understanding of how forking affords parallel streams of development that might actually be merged back in as feature elements; that's normal open source, and that's what you want to see. We enhance all the security information we're collecting from the world with a security research team that today numbers a little over 50 people, updating as the security issues occur.
B
So even when the Struts updates for S2-052 and S2-053 came out last week, we had those in and fully mapped through within an hour, and were able to map them to public exploits. That's really crucial. At this point in time we're at half a petabyte of storage, and we're pulling in open source information from about 10,000 different data sources, where all of GitHub counts as one of them. Oddly enough, I have to keep updating these stats.
B
The point of all this is to have full end-to-end visibility: to inventory the components that are in place, map them to known security issues, identify those risks, manage them against your governance policies, and alert when the world changes around you, and to do all of it in a hundred-percent-automated way that requires no human interaction. For practical purposes, if a human tried to get in the way, we would be able to detect that someone is messing with the system and trying to prevent scans from happening.
B
B
Black Duck has its user conference; it's called FLIGHT. FLIGHT 2017 is being held in Boston at the Seaport Hotel and World Trade Center. If anyone watching this recording went to Red Hat Summit earlier this year, it's the same area. It's being held the 7th through the 9th this year, and we've got some really good content packed in around security, research, and innovation. We'll have some of our researchers over who will be able to explain some of the techniques they're using to simplify security management.
A
Sounds like a really good event, too. This is one of those things where you could go down the wormhole and ask tons of questions about each and every one of them, but I think these kinds of sessions, and this event, will probably be a really good way for anyone who's in sysadmin or security, or even developers working on applications, to get a better understanding of where the risk factors are and where they're coming from.
B
Completely. And the registration code they put down there, TIM99, is the special code we put together for the open source events we do. That gets you in for 99 dollars, so it's a huge savings for anyone who has to justify to their boss that flying into Boston is a little bit pricey.
A
A
B
B
Do I have to, for example, scan my entire data center, or is the problem confined to one area? Those are the types of analysis that need to go into it. If you have a tool that can say "these are the applications using this thing that has now become problematic, even though it wasn't yesterday; focus here," that should hopefully help with the velocity of fixes. And even if you can't necessarily get down to an instantaneous fix, at least you can get to the runbook for how to resolve it.
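Answering "which applications are using the thing that just became problematic?" is essentially an inverted-index lookup over an already-built inventory, rather than a rescan of the data center. A minimal sketch with hypothetical application and component names:

```python
# Inventory: application -> open source components it ships (made-up data).
inventory = {
    "billing-svc": ["struts2-core", "log4j"],
    "web-frontend": ["react", "lodash"],
    "reports": ["struts2-core", "poi"],
}


def affected_apps(inventory, component):
    """Invert the inventory: which applications contain `component`?"""
    return sorted(app for app, comps in inventory.items() if component in comps)


# A vulnerability lands in struts2-core: don't rescan anything, just look it up.
hit_list = affected_apps(inventory, "struts2-core")
```

This is why keeping the inventory current matters: the day a disclosure drops, the fix effort starts from `hit_list`, not from "scan everything."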
A
One thing: we all think automation is going to save the world and everything, but there's also some of the stuff that we've built in with OpenShift, where the annotations come back to give us the ability to block an image from being deployed. I think that's really useful too, or at least it throws up some checks before something gets redeployed or spun up. So I think this is.
A
A
So, thank you very much, Tim, for taking the time today to talk to us about this. This will get posted on the OpenShift blog shortly; we'll put up the links you mentioned and shoot it out over the Internet and the social channels. Hopefully we'll get you back again, and we'll see some of those lead times coming down from around 200 days, and it won't be around a Dirty COW or a Heartbleed.