From YouTube: Red Hat Enterprise Linux Presents (E05): Security
Description
A show that features the people and technology that make Red Hat Enterprise Linux the world's leading enterprise Linux platform.
In this episode, we'll be talking with Mark Thacker, the RHEL Product Manager for product security.
A: Good morning, good afternoon, good evening, and welcome to episode five, I believe, of Red Hat Enterprise Linux Presents. I am Chris Short, host of the show. I'm joined by two other wonderful Red Hatters, who have kind of popped off to the right because, well, I changed scenes and that does something to Zoom. So I'll have to fix that here in a second, but Scott McBrien is here, the co-host of the show, essentially, and I'll let him introduce himself while I click a button real quick.
B: Hey everybody, Scott McBrien here. I work with the Red Hat Enterprise Linux team, and today we are joined by Mr. Mark Thacker, the Principal Product Manager for the Red Hat Enterprise Linux security experience. Welcome, Mark.
B: So Mark, do you want to tell us a little bit about yourself, or what you do, or how much you love Red Hat? I mean, any of those is acceptable.
C: Any of those, in any order, right? Sure, yeah, absolutely. First, I'm glad to be here, really awesome. I hope we have some great questions today. So again, I'm Mark Thacker, and I'm based in Myrtle Beach, South Carolina.

C: It's similar to that. No, that's... that's a really good point. I remember being in academia. I did a lot of college focused on computer science, and then the first part of my career was in academia as, essentially, an IT administrator. I set up the very first sets of Gopher servers, for anyone that's old enough to remember.
C: TurboGopher... no, it was the PC command-line Gopher option, so it lacked mainframe support for TN3270 emulators, which was required if you were going to talk to most online card catalog systems of the day; they were actually hosted on TN3270 systems.
C: Gopher... what is it? It's not anything having to do with the Go language! That's your first lesson, right: nothing to do with the Go language. Anyway, so, no, actually, in that context, I was really annoyed at the IT departments at these universities. The university, you know... the idea was: no firewalls, everyone can set up a web server on their laptop, or their... well, they didn't have laptops, but on their desk-side systems, and you just go for it, it's all free, everything. Well, IT didn't like that, and that really annoyed me. So I started studying and trying to understand.
C: Why is it that there are all these firewall restrictions? And then, for some reason, that turned into "so you like security," and I ended up going to work... I did a very long stint at Sun Microsystems and just about a year at Oracle, doing security technologies: firewalls, multi-level security. If anyone's ever heard of Trusted Solaris, or Solaris with Trusted Extensions, that was mine. I turned that into a product management role, and I've kind of been in product management ever since then, in a variety of different jobs, focused here at Red Hat just on security.
A: Well, Mark, in your honor, I am wearing my "setenforce 1" shirt today. I know it's all about Dan Walsh, but...
B: Here, since you've spent so much time in security land: why do you think security is important, and what specifically about it are you most keen on?
C: So, good question. I have a saying that I've borrowed from many other bright people, and that is: security is a process. It is not a product; it's not a technology.
C: Is that an answer to your question? Well, actually, it is, because it's a process. Security is important because it's not a thing that's a checkbox. It is a process; it is a lifestyle.
C: In fact, if you go back and take a look at Red Hat Summit, you'll see that for the last three years I've been fortunate enough to literally have a standing presentation called "Security is a Lifestyle," and it focuses on just that. The key is: security is important because it's not just one bit, on or off. It's also not an absolute. Security is hard to measure.
C: Well, maybe I know him... no. I think security is important because it is not an absolute state. It's a constantly changing paradigm. There is no such thing as a fully secured system. Now, I will be completely transparent with you: I've seen the maturity of IT's understanding of security change quite a bit over time.
C: It used to be that most people didn't know anything about security, and they assumed that security was something absolute. Is my system secured? Check, got it.
C: "They'll do that." That's right, they'll check it out, it'll be fine. And IT administrators would be like: I've got something that works, and now I need to apply the security checkbox to it, which is difficult, I've seen. Then, I think, the industry became a little more aware that security is a little more complex, and suddenly you get into a whole series of checks and balances.
C: That's why we have security baselines now: things like the DISA STIG, if you're in the DoD world, the Department of Defense world, and things like the CIS Benchmark. All of these subscribe to saying: yeah, here are some checks and balances; this is what you need to do to make your system secure.
C: Well, that's better than "assume everything is secure as long as I at least have a password on it," right? So it's gotten more mature, no doubt about it. I will tell you, though, security is important because I'm seeing people... how do I express this? I see people in industries sort of delegating security back to being a set of checkbox items.
C: Does patching equal security? The answer is no, of course not. Security is risk management. Managing security is managing risk, and you are always in a risk trade-off discussion. You get in an automobile and you're driving yourself: you're in a risk trade-off discussion with yourself constantly. Do I trust that I'm not going to run into something, and that someone won't run into me? It's the same risk trade-off discussion with security.
C: It's the same thing. If I assume that running the latest versions of packages from upstream means I get that nice, wonderful green check mark on my CVE scanner, does that mean I've reduced my risk? Oh hell no! All that means is you've gotten a green check mark. You still have other risk: security risk, because now you've introduced new versions of code whose security vulnerabilities you just don't know about yet, and also stability risk, right?
C: I mean, this is a wonderful conversation; I didn't think we were going to go down this path. But when you go down the model of, say... when you go back to the model of using security as a check mark, saying: right, well, do I have all my CVEs covered? Then you've gone back to the model of saying that simply having a checkbox on CVEs...
C: ...is the overriding factor, and you've now ignored anything like API stability, ABI stability, the compatibility of this new version of a package with all of your existing infrastructure. You've ignored the risk part of risk management. You've gone back to checking your box. So: security is important because it's a process.
B: So you have to have multiple layers, right? And CVE mitigation is definitely one of those layers. But, you know, if you're not protecting your services through something like a firewall, or even OS-provided network connection rules, then you're just relying on the fact that all those services, while maybe they've all been updated to close any known CVE, are cool. And in reality, there could be an unknown CVE lurking around out there that somebody knows about and you don't.
C: Right. And not to toot the Red Hat horn too much, but to go back to Chris's shirt: this is why you don't disable SELinux. Because what's going to protect you on day zero? It's got to be something else. This is defense in depth, right? That's where you're talking about the parfait-slash-onion example. Everybody likes a good parfait. You have to have defense in depth. Yeah, it's layers, layers.
C: Any given upstream open source project can be built in many different ways: compiler flag choices, the kernel of the operating system, what choices were made about the security flags for the kernel. All of these impact your installation's security stance. A good example, and we'll probably talk about this later, I don't know, is our Product Security team at Red Hat.
C: They will look at an inbound CVE, a known vulnerability, and evaluate what the impact of that CVE is on a Red Hat environment based on how we build it. How do we build that upstream source code? Because RHEL, Oracle, Ubuntu... each one of us chooses a totally different set of compile-time options, which can increase or decrease the security exposure that you, as a customer, could have for that same given source code. What ends up in your system? How was it built? How is it used? What are the default choices?
B: And that brings up another point that I know you get harangued about a fair amount, which is: why is it that the Red Hat Product Security score for a CVE is not always the same as, you know, the MITRE score on the CVE? And so if... well, so the MITRE score is just about the affected software.
C: Oh, it's worse than that. No, yeah. So, just for those not necessarily in the know: the MITRE score, or what's often known as the National Vulnerability Database score, the NVD score... everybody uses a standard scoring technique for CVEs. It's called CVSS, the Common Vulnerability Scoring System, typically version three.
C: In the case of open source software, it's typically the upstream maintainer reporting the problem, to the best of their understanding of the potential exposure. Cool, standardized scoring methodology, got it. But their interpretation of the impact is completely dependent upon how the software is actually built and used.
C: So at Red Hat, when we get a CVE to evaluate, we look at, again: how do we build it? What defaults did we choose? Because we don't necessarily ship the same defaults as upstream. We like to maintain the same defaults as upstream, because everybody likes that, but sometimes we don't.
C: Our Product Security team is actively engaged with the National Vulnerability Database, and so if there is a score differential between a Red Hat score and an NVD score, we actually work with NVD and explain: okay, here's why we think the score should be X. And to NVD's credit, they will often say: wow, you guys really thought this through; we're going to change the score to match what Red Hat said. By the way, sometimes there is an assumption that Red Hat always lowers scores.
C: Okay, so here's the deal. You remember how I talked about only the vendor really understanding the impact of, say, a security vulnerability?
C: Well, if you're someone like Red Hat and you spend your life trying to make a stable platform, a stable enterprise-class operating platform, we will use this technique called backporting. And what we mean by that... sorry, let me take one step back. So, a lot of scanners, of which there are... I was going to use a Carl Sagan reference: there are billions of scanners. And every time I get a query from someone about a new scanner, I'm not surprised. Oh look, there's a new scanner on the market.
C
Imagine
that
most
of
these
scanners
work
in
a
very
simple
model.
They
will
look
at
the
national
vulnerability
database
for
a
list
of
known
vulnerabilities
cves.
They
then
say:
okay,
what
version
of
telnet
has
that
vulnerability?
I
like
picking
on
telnet,
because
for
anyone,
if
you
are
using
telnet,
please
stop.
A: Although we still have an "rsh" command in Kubernetes land, and it's literally just meant as "remote shell," right? Like, they're taking the command name and repurposing it.
C
The
thing
yes
yeah
exactly
anyway
see
there.
You
go
telnet
pulled
me
off
again:
scanner.
Vendors
will
often
look
and
say
well
a
given
application.
Let's
say
it's
telnet
version.
1.2.3
has
a
known
vulnerability.
Therefore,
if
on
your
operating
system
or
platform
or
device,
you
have
telnet
and
it's
anything
other
than
1.2.3
or
later
you
must
be
susceptible
to
that
problem.
C
Here's
the
issue,
if
you're
running
something
like
rel,
we
back
port
security,
fixes
from
say
the
next
release
of
telnet
into
the
version
of
telnet
that
we
actually
shipped
in
the
operating
system,
but
we
don't
change
the
version
number,
except
for
at
the
very
end,
we
have
an
incrementing
number
like
a
four
digit
number
that
gets
added
to
the
end.
So
if
you
look
and
say,
oh,
my
god,
rel
you're
running
telnet
1.2.2
known
vulnerable
problem,
because
it's
not
the
new,
the
newest
version
of
it.
That's
simply
not
necessarily
true
at
all.
C
Instead,
you
have
to
examine
our
database
feed
of
cves
and
what
what
platforms
and
what
products
are
affected
by
those
cves,
because
we
will
backport
the
fix
into
an
existing
version
of
our
technology.
Good
example.
The
kernel
anyone
know
what
version
of
kernel
we
have
in
raw
eight
scott.
C
That's
right
that
244
1
1.
What
does
that
mean?
That
means
that
means
it's
4.18
with
a
bunch
of
stuff
back
ported.
So
your
answer
about
its
five
is
both
right
and
wrong.
Simultaneously,
it's
the
schrodinger's
cat
model
of
cve
knowledge,
because,
basically,
we
have
patches
from
the
five
series
of
kernel
and
security
mitigations
already
brought
back
in
4.18,
whereas
if
you
take
any
other
vendors
4.18,
it's
the
vanilla
4.18,
maybe
with
some
minor
enhancements,
so
you
could
argue
well,
they
move
faster.
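The version string itself shows how this scheme works. As a sketch, splitting a kernel NVR (name-version-release) in the RHEL 8 style, using the 4.18.0-240.1.1 build discussed above as the example:

```shell
# Split a kernel NVR (name-version-release) into its parts.
# "4.18.0" is the upstream base version; "240.1.1.el8_3" counts the
# rebuilds where fixes were backported without a version bump.
nvr="kernel-4.18.0-240.1.1.el8_3"
version=$(echo "$nvr" | cut -d- -f2)    # upstream base  -> 4.18.0
release=$(echo "$nvr" | cut -d- -f3-)   # backport build -> 240.1.1.el8_3
echo "upstream base: $version"          # prints: upstream base: 4.18.0
echo "backport build: $release"         # prints: backport build: 240.1.1.el8_3
```

So a scanner that compares only the version field against upstream will misjudge the package; the release field is where the backported fixes live.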
C: Fortunately, we actually publish a freely available data feed, called an OVAL data feed, that provides you with the full list of all the CVEs that we know about, and the patches available to address those CVEs. We even have a beta version of the data feed that lets you know: here are some CVEs that we know about, and either we've not addressed them yet, or they're in that state where we're evaluating them, right, we're triaging them. But we know about them, and they're public, right, just to be clear.
C
If,
if
another
big
takeaway
is
no
one
gets
pre-knowledge
of
embargoed
cves
right,
no
one
does
it's.
We
release
that
information.
At
the
same
time
to
everybody,
everybody,
I
don't
care
who
you
are.
Everyone
gets
the
information
at
the
same
time
if
it's
embargoed
cve.
So
if
we
know
about
a
cve
and
it's
public,
we
can
tell
you
about
it.
Even
if
we
don't
currently
have
a
mitigation
for
it.
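On a RHEL host, that OVAL feed can be consumed with `oscap` from the openscap-scanner package. A sketch, gated behind an opt-in variable because it downloads the full RHEL 8 feed; the URL is Red Hat's published OVAL v2 location:

```shell
# Sketch: evaluate the Red Hat OVAL v2 feed for RHEL 8 against this host.
# Opt in with RUN_OVAL_SCAN=1 on a RHEL system with openscap-scanner and
# curl installed; anywhere else the block just prints a notice.
feed_url="https://www.redhat.com/security/data/oval/v2/RHEL8/rhel-8.oval.xml.bz2"
if [ "${RUN_OVAL_SCAN:-0}" = "1" ] && command -v oscap >/dev/null 2>&1; then
  curl -sO "$feed_url"
  bunzip2 -f rhel-8.oval.xml.bz2
  # Reports pass/fail per CVE definition for the packages installed here.
  oscap oval eval --report oval-report.html rhel-8.oval.xml
else
  echo "set RUN_OVAL_SCAN=1 on a RHEL host with openscap-scanner to run the scan"
fi
```

Unlike a naive version-string scanner, this evaluation knows about backported fixes, because the feed is keyed to Red Hat's own advisories.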
B: Yeah, and I actually wrote an article for Enable Sysadmin a while ago; I put it in our chat here. It basically covers exactly this, because I've had this conversation on my own, when I worked as a system administrator, you know, in the field. It's like...
B: Right. And so a lot of times you end up doing things like looking at the RPM changelog to suss out the CVEs that have been fixed in the version you're running, and providing that list back to the auditor: okay, you told me you were concerned about these CVE numbers. Well, look, when I look at the version that I'm running, even though it's, you know, older than the open source project's release...
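The changelog digging Scott describes is easy to script. A minimal sketch: on a RHEL host the input would come from `rpm -q --changelog <package>`; here a made-up changelog excerpt (the CVE IDs and packager name are invented for illustration) feeds the same extraction pipeline:

```shell
# Extract CVE identifiers from RPM changelog text. On a real host:
#   rpm -q --changelog openssl | grep -Eo 'CVE-[0-9]{4}-[0-9]+' | sort -u
# The string below is a hypothetical changelog excerpt, not a real advisory.
changelog='* Tue Jan 05 2021 Jane Builder <jbuilder@example.com> - 1.1.1g-12
- fix buffer handling (CVE-2099-0001)
- harden parser (CVE-2099-0002, CVE-2099-0001)'
echo "$changelog" | grep -Eo 'CVE-[0-9]{4}-[0-9]+' | sort -u
# prints:
#   CVE-2099-0001
#   CVE-2099-0002
```

The `sort -u` matters: the same CVE often appears in several changelog entries, and an auditor only needs the unique list.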
A: Years... yes, it's 10 years of support out of the box, kind of deal, yeah.
B
Yep,
although
at
at
year
five,
when
rail
8
moves
from
full
support
to
maintenance
support,
then
there's
also
a
change
in
how
we
address
or
what
we
produce
for
that
box.
So
it
moves
to
getting
critical
and
important
mitigations
and
then
anything
else
on
top
of
that
would
be
gravy.
But
we
we
commit
to
doing
critical
and
important.
C: Yep, which has actually expanded, because previously, when products were in maintenance mode, we had only really committed to doing Critical security updates. We've expanded it to Critical and Important. Which is a good segue to say that the way Red Hat classifies vulnerabilities is on a four-point scale: Critical, Important, Moderate, and Low. We address Criticals and Importants; we've committed to doing that across all active life cycles of a product.
C: We do Moderates and Lows too; generally we pick those up in our minor updates. So in 8.1, 8.2, 8.3, that's typically when we'll pick up the other Moderates and Lows, because we either rebase or backport in those dot releases. Real quick, to Scott's point about looking in RPM changelogs: we actually have a CVE database that anyone can query. It doesn't even require an account, and I put a link into our chat for your purposes, Chris. You can literally type in a CVE and it'll...
C: ...go look it up and tell you: oh, for RHEL 7? Yes, there's a patch available for RHEL 7 EUS, right? So remember, there are different life cycle offerings for RHEL as well. And to the question: of course, RHEL 8 is a 10-year life cycle, and for the first time ever we can actually have a conversation right now about the last-ever release of RHEL 8.
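That queryable CVE database also has a machine-readable side, the Red Hat Security Data API under access.redhat.com/hydra/rest/securitydata. A sketch of parsing one CVE record: the JSON below is a canned, abbreviated stand-in for the response body (on a networked host you would fetch the endpoint shown in the comment instead):

```shell
# On a host with network access you would fetch the real record:
#   curl -s https://access.redhat.com/hydra/rest/securitydata/cve/CVE-2021-3156.json
# The canned JSON below is an abbreviated, illustrative response body.
response='{"name":"CVE-2021-3156","threat_severity":"Important","cvss3":{"cvss3_base_score":"7.8"}}'
echo "$response" | python3 -c '
import json, sys
cve = json.load(sys.stdin)
# Red Hat reports its own four-point severity alongside the CVSS score.
print(cve["name"], cve["threat_severity"], cve["cvss3"]["cvss3_base_score"])
'
# prints: CVE-2021-3156 Important 7.8
```

Note that `threat_severity` is Red Hat's own Critical/Important/Moderate/Low rating, which, as discussed above, may differ from the NVD score for the same CVE.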
C: Yeah, absolutely. Every six months there's a RHEL 8 minor release, and that means we can chart that out: five years of Full Support, five years of Maintenance Support. So 8.10, which comes out in mid-2024 according to what I'm looking at on this chart, will be the maintenance release, and it just has a really long tail of update errata. Nice.
C: Okay, well, we joke about it, but I'm just going to hark back. Okay, there are two things. One: don't turn off SELinux, ever, okay? And two: don't replace everything that we give you with something that you built yourself, unless you have a really good reason to do so. And that may sound really harsh, because, come on, it's Linux: "we can do that." Yeah, you're absolutely welcome to do that; however, understand what you may be giving up, right? Good example: Apache. The Apache that's in RHEL today.
C: You can put that SSL cert into a hardware security module, a tamper-proof device that makes it impossible for even root on the operating system to obtain that certificate. That's built into the Apache that we ship, right? Yes, we commit the code and make it available upstream, but it's right there. Also, Apache recognizes things like our system-wide cryptographic policy in RHEL. One command, and, as I like to joke, easy enough that even a product manager can use it: update-crypto-policies applies a consistent policy for cryptographic back-end operations across all of RHEL.
C: You get all of this integrated with the Apache that's in RHEL. If you build it yourself, you're probably going to be pulling in your own crypto libraries. Oops. Well, who verified that those were FIPS-validated, or that they interoperate with anything? You're on your own. Which is the beauty of Linux, don't get me wrong; however, now you've decided to take on all the security fixes yourself, right? So: don't turn off SELinux, and stop replacing stuff because you think you have to. Check out what we've got already in the box.
B: Are you going to do that every two weeks, every three weeks, right? Are you going to watch the Apache website to know when they've produced updates or have patches to apply? Like, in addition to all the Red Hat stuff, you would have to take on that extra work for this new software that you downloaded and deployed.
C: Right, right. And don't forget the kernel, yeah. So, I mean, that's my big takeaway: you know, don't turn off SELinux. Oh, by the way... I joke about that, it's kind of a joke, but really: don't.
C: If you're at all interested in containers... you may or may not be, but OpenShift, which is Red Hat's Kubernetes-based orchestration environment for running containers, requires SELinux to be enabled. It is the only way you get separation of container workloads from each other, right? So when customers say: "well, I don't really want to use SELinux, but I want to go to containers," I just quietly go: you're going to use SELinux even if you don't know it. And they end up doing that, and they don't know it, right?
C: It's totally transparent. We've had SELinux enabled by default since RHEL 6. It is there; it is solid. Can you run into problems with SELinux? Of course you can, but you can run into problems with anything, right? If there's something that doesn't work the way you would expect it to, and you believe it should, you can file a request and we can take a look at it. Occasionally we do find bugs in the policy, but we update them. It is there by default. I cannot...
C: One of the most popular relational databases in the world out there, very popular... they have this policy in their installation instructions that says: well, turn off SELinux.
B: Well, but I mean, also, back in the day, if you were following things like the Linux Filesystem Hierarchy Standard, and you did, as a third-party vendor, what you're supposed to do, which is put all your stuff in /opt, and you had, like, /opt/etc, or /opt/yourthing/etc, it didn't get a context assigned, right? And so you'd try to spin it up and it would fail, because it didn't have the right context to read its files. So, like, that was a legitimate thing 15 years ago.
C: I think we are, yeah. I mean, broadly, we don't have issues with customers installing software, because we have this unconfined policy now. That's pretty generous, but even if a lot of your applications are running unconfined, the core system services are still being protected by SELinux, right? Every container breakout exploit has been prevented by SELinux.
A: Yeah, and there's a question here in chat; let's touch on it since it's, you know, pertinent: where do we draw the line between what SELinux supports globally versus, you know, a container? And yeah, like, I want to kind of make sure that folks understand what SELinux is doing: it's on the host, often, yes... or do you put it in your container?
C: Yeah. That said, do we have a demo for you? Yes, we do, yeah. So that's a really good question, Chris. So, right: SELinux defines a policy at the kernel level that affects all objects and subjects, to use an SELinux term, on the system. Which is basically: how does a process interact with the processes, files, sockets, namespaces, and objects on the system, right?
C: It's a mandatory access control policy, but it's implemented at the kernel level. So in the OpenShift world, every container is assigned an SELinux label, and that SELinux label means the containers are essentially peer-separated. One container workload, which is resource-constrained and label-constrained, cannot see processes, files, or anything from the namespace of another container. But that's enforced outside of the container context; it's enforced in the kernel. So you don't enable SELinux in a container; there's one policy that applies to all of them. It's the same policy.
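The per-container labeling described above is visible from the host: on a RHEL box you would run `ps -eZ` and look for `container_t`. The excerpt below is a canned, hypothetical `ps -eZ` fragment (invented PIDs and MCS categories) so the filtering step runs anywhere:

```shell
# On a real host:  ps -eZ | grep container_t
# Each container process carries the container_t type plus a unique MCS
# category pair (e.g. c140,c799) that keeps peer containers separated.
ps_excerpt='system_u:system_r:container_t:s0:c140,c799  4821 pts/0 00:00:00 bash
system_u:system_r:container_t:s0:c255,c802  4907 pts/1 00:00:00 nginx
system_u:system_r:unconfined_t:s0          1022 tty1  00:00:00 ps'
# Print PID and full SELinux context for container processes only.
echo "$ps_excerpt" | awk '/container_t/ {print $2, $1}'
```

The two `container_t` lines share a type but differ in their MCS categories, which is exactly the peer separation Mark describes: the kernel refuses cross-category access even though both are "containers."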
C: No, you don't. But Scott, I think this does lead us into a really solid demo on what we can do with SELinux, and how to create a semi-customized SELinux policy for a container. Because the follow-on question, Chris, is: well, I have an application that needs to do more than what the SELinux policy would let that container do, right? What do I do then?
C: Right. And normally, as you know, the answer is: okay, you have to run a privileged container, which means it has all privileges, yeah, and you don't want to do that either, if you can at all avoid it. So we'll let Mr. McBrien take over here, because it looks like you're already sharing the appropriate demo.
B: Yeah, I'm working on it; it's provisioning, right. So we have a tool that's part of the container-tools module called udica, pronounced something like "oo-DEE-tsa," though the Americanized pronunciation would probably be "utica." What it allows you to do is create a set of SELinux policy rules that affect a container.
B: About three quarters of the way down the page, there's a tile for creating a customized container policy, and in square brackets it says "udica" next to it.

A: Yeah, I got it. I've got the link, all right.
B: So right there is our UBI image we just downloaded, and then here we're just doing a podman run, setting up a pass-through for /home to be read-only, /var/spool to be read-write, and port 80 passed through.
B: So now we're running our container, and then Dan, when he wrote this, returned the container's ID into a shell variable called CONTAINER.
B: So what Dan did was look at all the processes running on the system, including their SELinux contexts, and then just pull out the ones that had the container_t type. So that's the one; that's our running container right here. It is running a bash shell inside of it, and that is the container_t type. Nice, all right. And just to show that we are SELinux-enabled, right here... here we go.
B: So we're looking for things that are container_t type that are given home-directory access. You'll recall that when we did our invocation, we said to pass through /home to the host's /home, but when we actually tried it: permission denied. And the reason for that permission denied is that there's currently nothing in the policy that says stuff running in a container that's accessing /home can be passed through to the host's /home, right?
B
Linux
is
stopping
it,
which
is
why
we're
getting
that
permission
denied
inside
of
our
container
and
recall
that
when
sdlinux
refuses
things
essentially,
the
kernel
just
returns
a
an
error
to
the
application
requesting
the
activity
to
occur
and
that
application
is
what
presents
the
error
to
the
user.
So
ls
was
told
by
the
kernel.
No,
you
can't
have
that
listing
and
then
ls
said,
oh
permission
denied,
because
in
ls's
world
that's
the
only
reason
you
would
ever
be
refused
from
actually
looking
at
something
all
right
same
thing
for
var
spool
right.
B
We
try
to
look
at
var,
spool
and
in
our
invocation
of
our
container,
we
said
pass
var
spool
from
the
host
to
var
spool
and
the
container
same
deal
right.
Selinux
is
stopping
it
from
occurring
because
we're
not
allowed
to
to
pass
that
through
because
we
don't
have
any
s.
Linux
rules
that
allow
container
t
type
activities
to
happen
all
right.
B: So here we're just looking to see if /var/spool is allowed, and we know it's not. And then here Dan is looking at the policy around socket connections. So when we invoked our container, we said pass through /home, pass through /var/spool, and pass through port 80 on the host. Well, it turns out the port 80 stuff would work. And why is that? Well, that's because there are rules affecting container_t types that allow TCP connections to be passed between the host and the container. Nice.
C: Yeah, it is nice. But if I can point out, it also means there's actually more than port 80 that would work here, right? That's the flip side. The other two examples were: okay, well, I don't have access to something from inside this container. With the port 80 example, you could actually run a lot of different things in the container to access a variety of different ports. We really don't want that. We really want to run Apache, and it doesn't need anything more than port 80 and port 443, right?
B: Not at all. I mean, it's often the case that you'll forward, say, port 8081 to the container's port 80, right? Because in the container you're running vanilla Apache, but on the host you don't want it to conflict with your host services, and this SELinux rule set allows that to occur as well. Right, specifically this guy: if you are accessing an unconfined-type port, right, one that's not set up in the SELinux-managed list of ports that are allowed to different services.
B: All right, so we're doing a podman inspect of our running container, we're going to shove that into a JSON file, and just to show you what that looks like...
B: All right, so this is what's in there, right? This is what we're running in our container currently, all right. And then we're going to use the udica application to create a new set of rules that we're calling my_container. Right, it says right here: my policy for my_container created. And what we did when we ran this was say: look at the container JSON that we captured earlier, and use the information in there to create this my_container policy for SELinux.
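The udica round trip the demo walks through fits in a few commands. A minimal sketch, assuming a RHEL host with podman, udica, and the SELinux tools installed; the container name `my_container` and image `my_image` are hypothetical placeholders, and the guard lets the block print a notice on other systems:

```shell
# Sketch of the udica workflow from the demo (hypothetical names).
if command -v udica >/dev/null 2>&1 && command -v podman >/dev/null 2>&1; then
  # 1. Capture the container's mounts and ports as JSON.
  podman inspect my_container > container.json
  # 2. Generate a custom SELinux policy module from that JSON.
  udica my_container < container.json    # writes my_container.cil
  # 3. Load it along with udica's base template.
  semodule -i my_container.cil /usr/share/udica/templates/base_container.cil
  # 4. Re-run the workload under the new type instead of plain container_t.
  podman run --security-opt label=type:my_container.process -d my_image
else
  echo "udica/podman not available: run this sketch on a RHEL host"
fi
```

The generated `.cil` file is plain text, so it can be reviewed (or trimmed) before `semodule` loads it.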
C: All right. And since that is now a policy, you could apply it not just to this container, but to any other very similar containers, right, that are essentially trying to do the exact same thing: they need read-only access to /home... actually, I think it was read-write to /home and read-only to /var/tmp, or /var/spool.
B: And so here we see that when we search the SELinux policy again, there's an allow rule giving the my_container process access to the home directory. And if we look at /var/spool, same thing: we're allowing access to /var/spool, but with a little bit more permission. So up here, with /home, we're given read and search; down here, we're given read and write. Because it looked at that container JSON from the container we started, where we passed through those resources, and went: oh, if these are passed through to the container...
B: So the only way they would conflict is if you named your custom container process... policy, thank you... the same as somebody else's existing policy. So don't do that.
C: Each container can only have one policy applied to it, and that policy will override... what we did was say: okay, containers get a default policy called container_t, but for this container we're going to override container_t. And it's a building-block model, because what you didn't see, and this is the beauty of udica, is that it takes the existing container_t policy and says: okay, this is your base, but to that I'm either going to subtract permissions or add permissions.
B: Just looking through the rest of this, it's more verifying that everything is working correctly and testing it: we can actually write files, we can get access to files. So I'll leave that as an exercise for the watcher, if they're interested, right? So this is the main homepage for lab.redhat.com, and here's the one that we were just working through.
B
Here
awesome,
so
we've
got
about
eight
minutes
left
mark
you
want
to
try.
I
know
you
want
to
try
crypto
policy
as
our
last
one.
C: I'm in, it's fun, it's easy, yeah. Let's do crypto policies, sure. So yeah, you're already sharing, so you get to drive, but let's talk about this. You remember my joke that it's so easy even a product manager can do it. A little background: on any Linux system you have, traditionally, multiple cryptographic policies, or providers, in the back end: OpenSSL, NSS, the Java crypto libraries, libssh. All of these are different back-end providers of cryptography.
C: You may have a policy that, for example, says you don't want to allow RSA keys, or that you have to have encryption keys bigger than 1024 bits. To date, on RHEL 7 and below, you got the lucky job, as an administrator, of manually editing all of the configuration files to enforce that policy. There was no tool that unified it together, really, on any operating system other than perhaps Windows, but that's because it has one cryptographic provider on it. So we created this thing called the unified, or system-wide, crypto policies.
C: So let's take a look. The first thing we're going to do... it's built into RHEL 8, so let's go ahead and run update-crypto-policies. There are four policies (this is going to get redundant): by default there are four policies. There's DEFAULT, LEGACY, FIPS (140), and FUTURE, right? And DEFAULT is what we're doing by default in RHEL 8.
C
So, let's see. We're actually going to go ahead and run the httpd service in the background. It fires it up, cool, it's definitely listening, that's a good thing. And we're going to take a look at the certificates that are associated with the httpd service, right. So it's a 2048-bit-long RSA key, which is permitted by the DEFAULT policy set. That's great, but maybe that's weaker than what you would like.
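(Inspecting the key length baked into a certificate is plain openssl work. A minimal sketch, using a throwaway self-signed cert as a stand-in for the demo's Apache cert; the filenames and CN here are made up:)

```shell
# Create a disposable 2048-bit self-signed cert, like the one the
# DEFAULT policy happily accepts in the demo
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo.example.com" \
    -keyout demo-key.pem -out demo-cert.pem

# Read back the public key size recorded in the certificate
openssl x509 -in demo-cert.pem -noout -text | grep "Public-Key"
```

The grep should turn up a line containing "(2048 bit)", which is the number the demo reads off the real httpd cert.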
C
So let's go on with this demo; we're going to take a look and see what else happens. All right, so you might have a need to go to a really strong set of cryptographic policies. Well, there's one called FUTURE on this system, and we can turn on FUTURE. FUTURE requires much longer and stronger key lengths by default, and what this policy will do is allow us to set FUTURE as the new standard, the baseline. So let's go ahead and invoke update-crypto-policies --set FUTURE.
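(In command form, the switch the demo makes is a one-liner; this is a sketch rather than the verbatim session, and it only works on RHEL 8+ as root:)

```shell
# Switch every policy-aware backend (OpenSSL, NSS, GnuTLS, libssh, ...)
# to the stricter FUTURE profile in one step
update-crypto-policies --set FUTURE

# Confirm the system-wide state afterwards
update-crypto-policies --show
```

Already-running daemons keep their old settings until restarted, which is exactly what the demo exploits next with Apache.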
C
Yep, that's right. It says it applies on startup, but in reality, for things like we're going to do, you don't actually have to restart your system. Long-running processes you would need to restart, though. In fact, we're going to shut down and restart Apache in just a moment, just to see it fail, because the key for its cert is not going to be long enough according to the new FUTURE policy.
C
If you want details on what exactly is in any given policy, you can just do a man page on update-crypto-policies, or a man page on crypto-policies, and you'll find out. All right. It's very easy to make sure the system's running in FUTURE mode, because you can do update-crypto-policies --show, which tells you the current state. I now know that across my entire system it's running with the FUTURE policy. All right, let's move on. So now, let's go ahead and restart the Apache server.
C
B
C
3072, thank you. Well, it's not strong enough; it's not 3072. If you look at the tail of the log, yeah, it should show you there. Yep, yeah. So basically this is doing exactly what you would want it to do: you've moved to a stronger crypto policy, and it will not allow applications to use certificates or algorithms that are not part of that FUTURE set, if you will. Okay, cool. Now we go through the rest of this, and I don't think we have... we have like four minutes left, but here you can actually go through safely.
C
It's fine, okay. So these next couple of commands, all they really are doing is moving some private keys around and generating a new certificate that is of the appropriate length. In fact, Scott, if you just want to get through those... yep, 3072-bit, yay.
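(Re-keying boils down to generating a longer key and a matching cert. A hedged sketch of those couple of commands, using local filenames; in the demo the files presumably land wherever mod_ssl's config points:)

```shell
# Generate a fresh 3072-bit RSA key plus self-signed cert, long enough
# to satisfy FUTURE's minimum key length
openssl req -x509 -newkey rsa:3072 -nodes -days 365 \
    -subj "/CN=demo.example.com" \
    -keyout localhost.key -out localhost.crt

# Double-check the key length before restarting httpd
openssl rsa -in localhost.key -noout -text | head -n 1
```

The first line of the key dump should report a 3072-bit private key, which is what lets the restart succeed under FUTURE.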
C
So now we've got that in place. If we go ahead and restart the httpd service: no error messages, glory, and now it is running with a 3072-bit certificate. Now, you can look at that and say, well, what was the big deal here? The big deal is that Apache did not have its configuration modified. Apache just asked the underlying OpenSSL environment to generate a default-length certificate.
C
Well, that worked fine under our DEFAULT policy; it doesn't work under FUTURE mode. All we did was move some things around, generate a new cert, and boom, everything worked great with FUTURE mode. So this is one way in which you, as an administrator, have a centralized way of enforcing a cryptographic policy. It applies to all your applications on RHEL, with the exception of the following. Remember I said, way back at the beginning: don't replace stuff!
C
B
C
B
No, super, super old. No. So this specific service is used by associates at Red Hat for doing things like contracts and reviewing contractual data, and the employees that often use it are not technical.
B
The other thing that I saw people do to resolve this weirdness, because keep in mind, of course, the default response was not "oh, let's fix this service that's way out of date", no, it's crazy town, the other thing that people did was install Chrome. And the Chrome web browser was able to access that service and make those older encrypted connections, because...
B
That's right, it brings all of its own cipher, or crypto, libraries with it, which are not linked against the crypto policy. So, yeah, some interesting alternate methods rather than actually fixing the problem.
A
Yeah, well, there you go: actually fixing the problem, yeah. So there's a question in chat: could you use these, like, the policy, to enforce the upstream CAs for, you know, certificate issuing? Right, like if a corporate policy was "must only use crypto-resistant algorithms" and, you know, "must be coming from a CA that's using those", can that be done with this?
C
So, from a RHEL-as-a-client perspective, of course it will not accept, and/or consume and use, or exchange information with something that's been signed with a weak certificate. I'll give you an example: in RHEL 8.0 beta testing, we actually had a stronger default on a particular setting.
A
C
Oh, so if you participated in the 8.0 beta, you couldn't download content. Oops. So, I mean, it literally happened. Now, what you cannot do is, you can't make the... you cannot make the provider change what they did, but you would not be able to consume it if it didn't meet the criteria, right. Now, on the flip side of that, if that CA was running on a RHEL system, then of course the crypto rules would be in effect there, and they would not be able to generate a cert.
C
It wasn't acceptable on that side, either. So, yeah, it's more like you wouldn't be able to consume incorrect or out-of-policy content, right, and that would generate lots of error messages and upset customers, which then might cause you to contact somebody and say: hey, Red Hat IT, can we update this? And they did, to be fair.
B
So there you go. So there is also another lab on lab.redhat.com where they talk about making a customization of the crypto policy. So, in Red Hat Enterprise Linux 8.2 and later, we support not just changing which of the four policies we ship with is applied to your system; you can also go into the policies and either make your own, or make changes to the ones that are shipped, to choose what ciphers are permitted in a policy.
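(As a sketch of that customization mechanism on RHEL 8.2+: policy "modules" are small files layered on top of a base policy. The module name MYCA and the relaxed RSA floor below are hypothetical; crypto-policies(7) documents the real directive set:)

```shell
# Drop a policy module next to the shipped policies
cat > /etc/crypto-policies/policies/modules/MYCA.pmod <<'EOF'
# Allow the 2048-bit RSA certs our in-house CA still issues
min_rsa_size = 2048
EOF

# Apply FUTURE with the module layered on top of it
update-crypto-policies --set FUTURE:MYCA
```

The base:module syntax means you keep everything else FUTURE enforces while carving out one exception.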
C
B
So you could loosen it.
B
Right. And so, instead of saying, you know, "I've received this thing from my CA in my company, what do I do with it?", you could actually make a custom crypto policy that would permit those certificates to work, even though other things are prohibited on the system because of the FUTURE policy, or whatever your copy of FUTURE would be.
B
The alternative is, you would get just what we saw with our demo that we went through quickly, which is: you pull down the certificate, you put it in the right places, you start up your services, and they all die off, because they're not allowed to access the certificate. And that's how you know that it's not strong enough. But there you go. Nice, awesome. All right, well, Mark, I don't want to make you late for your next meeting; I'm sure we already have.
B
Thank you, thank you for joining us today.
C
B
C
A
Yeah, thank you to the audience for all the wonderful questions and conversation; we got a lot of good info out. So if you are, you know, training people in RHEL security, hand them this video. It will do them a great bit of good. It is available on YouTube right now, yeah. So thank you so much for joining today. Scott, you want to send this off in style, since it's the last one of the year?
B
It is the last one of the year, and I think we're not going to be back until the third week in January, if I remember correctly. So it's going to be a little bit of a hiatus, which is probably good, because I just got told that my school is going to be doing virtual school for the first couple weeks of January. So everybody has to listen to my elephants running upstairs, which is good.
A
Yeah, an adjustment period would be nice there, yeah.