From YouTube: State of Container Security
A
That was an awesome last talk. I was really happy to see OpenShift 4 again today. We've been working really hard these past — I don't know how many — months, ever since CoreOS came on board. The synergy between CoreOS and OpenShift has been amazing, and we're all so proud to finally have OpenShift 4 out soon — very soon. No, not officially announced — oh, hold on.
A
So we're here to talk about container security. I'm Sally; I'm on the OpenShift auth team now, but when I started at Red Hat a few years ago I was on the containers team. We were submitting pull requests to a little-known project upstream — maybe you guys have heard of it — Docker. But a lot has changed in the past few years, and that's what we're here to talk about today. Hello, everyone.
B
Yeah, it's on, okay — hello, everyone. My name is Urvashi Mohnani and I'm a software engineer at Red Hat on the OpenShift runtimes team. I work on the lower-level container tools that Kubernetes and OpenShift use, and today we're going to talk about container security: how you can make your workloads more secure, and how these tools play a huge role in that.
A
Yes — so, like I mentioned, a few years ago we were submitting pull requests to Docker. There were also other companies getting involved in containers: CoreOS was developing rkt, and there were runtimes being used other than runc, like Kata Containers and gVisor. We all realized very quickly that we were going to need a set of open industry standards to propel things forward.
A
Otherwise we were going to end up with, you know, rkt images versus Red Hat images versus Docker images, and nobody wanted that. So Google, Docker, Red Hat, Microsoft — all the big players — got together and came up with the Open Container Initiative. It put in place standards around what a container image format is and what a container runtime is. So now any OCI image can run with any OCI runtime, and we were free to develop new tools that suit our specific needs.
A
So before we dive in, just real quickly: what are containers? They're just Linux processes — I'm sure you've heard this. Like any process in Linux, they're secured by things like SELinux, AppArmor, seccomp for syscall filtering, and Linux capabilities. They're also constrained in the amount of resources they can take up on your system, like CPU and memory — that's done through Linux cgroups. And finally, they're isolated through the use of Linux namespaces: if you're in a container, you're in a PID namespace, and from inside that container you only see the processes in that namespace.
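As a quick illustration of that isolation: on Linux, every process's namespaces are visible under /proc/self/ns. This is a small Python sketch (not from the talk) that lists them; inside a container the identifiers differ from the host's.

```python
import os

def list_namespaces(pid="self"):
    """Return {namespace_name: identifier} for a process,
    e.g. {'pid': 'pid:[4026531836]', ...}."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

print(list_namespaces())
```

Comparing this output between a host shell and a container shell shows exactly which namespaces the container was given.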
A
So what's a container image? Again, nothing special: it's a tarred-up set of layers with a JSON file describing them. There's a base layer — usually something like an operating-system user-space layer — and then the additional layers are packages, binaries, dependencies, anything you need on top of that operating system. That's it: tar it all up, and that's a container image. If it's an OCI image, it follows a certain spec defined by the OCI. What do container engines do?
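In that "tarball of layers plus JSON" spirit, here is a minimal sketch of the idea (the file names and manifest fields are illustrative, not the full OCI spec): each layer is just a tar archive, content-addressed by its digest, and the JSON lists the layers in order.

```python
import hashlib
import io
import json
import tarfile

def make_layer(files):
    """Tar up {path: bytes} in memory; return (tar_bytes, sha256 digest)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for path, data in files.items():
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    blob = buf.getvalue()
    return blob, "sha256:" + hashlib.sha256(blob).hexdigest()

# A "base layer" and an extra layer on top of it.
base, base_digest = make_layer({"etc/os-release": b"NAME=demo\n"})
app, app_digest = make_layer({"usr/bin/app": b"#!/bin/sh\necho hi\n"})

# The JSON description: which layers, in which order, make up the image.
manifest = {"schemaVersion": 2, "layers": [base_digest, app_digest]}
print(json.dumps(manifest, indent=2))
```

Content-addressing by digest is what lets engines detect and share identical layers.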
A
They are programs that know how to take that container image and extract it — explode it onto your local disk — hopefully using a copy-on-write file system, because otherwise, if you had 10 Fedora images, you'd need 10 copies of that base layer; with copy-on-write, the layers are shared. A container engine then also creates a runtime config — that's another JSON file. It creates that from user input, like any flags you pass to your container run command, such as --privileged.
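That runtime-config step can be pictured like this — a toy sketch (the field names are simplified stand-ins, not the exact OCI runtime spec) of turning user flags into the JSON handed to the runtime:

```python
import json

DEFAULT_CAPS = ["CAP_CHOWN", "CAP_NET_BIND_SERVICE", "CAP_SETUID", "CAP_SETGID"]
EXTRA_PRIVILEGED_CAPS = ["CAP_SYS_ADMIN", "CAP_DAC_OVERRIDE"]

def runtime_config(image_rootfs, privileged=False, read_only=False):
    """Build a runtime-config-style dict from user input, as an engine would."""
    caps = DEFAULT_CAPS + (EXTRA_PRIVILEGED_CAPS if privileged else [])
    return {
        "root": {"path": image_rootfs, "readonly": read_only},
        "process": {"capabilities": caps},
    }

cfg = runtime_config("/var/lib/containers/rootfs", read_only=True)
print(json.dumps(cfg, indent=2))
```

The point is only the shape of the pipeline: flags in, a JSON document out, which an OCI runtime then executes.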
B
Alright, so now we know what containers are. The container piece can actually be broken down into four different sets of actions. First, building your container images. Second, running and testing those container images locally. Then you want to be able to share those images — probably move them from local storage to a remote registry. And finally, you want to run those containers in a production cluster such as Kubernetes or OpenShift.
B
Now, what would happen if we had all this functionality in just one monolithic tool? Think about it in terms of security: we would obviously end up with the least common denominator. For example, you don't need as many privileges to run a container in production as you need to build container images — so why bunch them all up in one?
B
So we decided to follow the UNIX philosophy, which says that you should design programs to do one thing, to do it well, and to work well with other programs — and, as you can see here, the UNIX founders are pretty happy that we're trying to follow their ideology. We decided to make four tools to target each of the sets of actions I mentioned. So we have Buildah, obviously standing for building container images.
A
So we have a script to show off some security features of these four projects that Red Hat has been working on over the past few years. The first one is Buildah, the tool for building container images. When you're building a secure image, one thing you should think about first is making your image as minimal as possible: only put in your container exactly what you need to run. Buildah makes this really easy to do if you run the command `buildah from scratch`.
A
And you can see we're running these commands as sudo — but the really cool thing with Buildah is that you don't need sudo to run it. When you're building from a Dockerfile especially, you don't need to run it as sudo; but when you're setting up a mount point like this, you do, because mount requires root. So once I have set up the mount point to my working container, I can now use my host system's package manager, DNF, to install whatever package I need — and nothing else — into the mount point. Here I pre-downloaded the RPM for BusyBox, because I didn't want to chance the network, and DNF installed it; I gave it that little-known flag to DNF, --installroot. Once we're done with that, we can simply unmount the container and commit it to an image. Now I have a container image that I can run with any run command.
A
So we'll use `podman run` — and what's interesting is what's not in the image. There's no ping — you can see that it errored out. Usually, with any FROM line in a Dockerfile, you'll end up with ping in your container. Same with Python — there's no Python. The only thing in here is BusyBox; there's the BusyBox help menu. The more you put in an image, the more that can go wrong — so shrink your attack surface.
B
So inside your container, you can give your build process all the elevated privileges you want, and if the process happens to break out of the container and tries to attack the host, it won't be able to, because it won't have the same elevated privileges on the host. This is the Dockerfile I used to create the image — it already has Buildah installed inside it.
B
This is another Dockerfile that I want to build inside my container — a pretty simple Dockerfile, not doing much — and using Podman I'm going to run that image and try to build that Dockerfile inside it. The command looks a bit long because I'm volume-mounting paths in so I can access the image that's built, and then transferring my Dockerfile from my host to the container. This should just take a few seconds. Yeah.
A
And in OpenShift 4 we actually use Buildah now to build all of our images, so our image builds are truly containerized. There's no daemon with Buildah, so there's no leaking of information from inside the container to a daemon running on the host. And it should finish up — come on, internet... Oh, funny story: Buildah was named "buildah" because our team lead, Dan Walsh, has a really prominent Boston accent.
B
The story was, he was like, "you might as well call it build-er, and I don't care" — and he pronounced it "build-ah". So we named it Buildah. Okay, so now it's going to build the image, and then we'll try to see the image — and, as you can see, the image on the bottom there, called my-image: I was able to build it. From here I can move it to a registry, run it, do whatever I want with it. And that's what we have for Buildah. Oh, yes.
B
So once I build my container images, I like to test them locally, just to ensure I have everything I want in there — and for that we have the tool called Podman. With Podman you can do everything: it's like an all-in-one CLI tool, and you can do everything from building container images to running containers and even pods — Podman stands for "pod manager". And to tie back to our UNIX philosophy, Podman actually uses Buildah under the hood to do its build processes.
B
So, as you can see here, the podman command doesn't have a sudo in front of it. I'm going to list the images I have pulled without root, and then, to show you that it's actually rootless, I'm going to run it with sudo — and you can see it's a different list. When you're running Podman without root, the storage is created under your user, so it's tied to your specific user.
B
This is great because you can have multiple users on the same machine, and they wouldn't even know of the existence of the other users' containers or images — so there's isolation there. And I'm just going to run this to show you what the UID is inside the container. So even though I'm invoking Podman without root, in the container I am root, while on the host I am 1000, which is the user I logged in as. So there you have it: rootless Podman.
B
You could do that if you had root on the system — but if on the host you can't access certain files or processes, because your user doesn't have permission, you wouldn't be able to access them in the container either, if you mount them in. The rules still apply. A way to explain this a bit further: every modern Linux system has this file called /etc/subuid. It shows a mapping of the UID ranges that your user has been assigned.
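Lines in /etc/subuid have the form `user:start:count`. Here is a small parser (over sample data, not read from a real system) just to show what the file encodes:

```python
def parse_subuid(text):
    """Parse /etc/subuid-style 'user:start:count' lines into {user: (start, count)}."""
    ranges = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        user, start, count = line.split(":")
        ranges[user] = (int(start), int(count))
    return ranges

sample = "somalley:100000:65536\n"
print(parse_subuid(sample))  # {'somalley': (100000, 65536)}
```

So this user owns the 65536 host UIDs starting at 100000 for use inside user namespaces.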
A
`buildah unshare` — let me just point out, it drops you into a user namespace, so you can play around and see what's going on with Buildah when you're in a user namespace. And don't forget, when you're running `buildah unshare`, that you are in one — because I do that all the time, and things start to look wonky, and I'm like, "why can't I run sudo here?", and then I remember: oh, I'm in a user namespace. Yeah. So to get out, you hit exit. Yeah.
B
So now, as you can see, all the files that were owned by somalley are now owned by root in this user namespace, and the ones that were owned by root are owned by "nobody". Well, yeah — that's because on the host my user doesn't have access to those root-owned files, and, as you saw from the mapping, I can only access the UIDs from one hundred thousand up to one hundred sixty-five thousand and change.
B
There's this file called /proc/self/uid_map, and it shows what your UID mapping looks like. As you can see here, the user 1000 is mapped to 0 in this user namespace — that's why all the files that were owned by somalley on the host are now owned by root in this user namespace — and then everything from 1 onwards is mapped into the 100000 range. So this is the way the mapping works and how you end up getting the permissions in the container. So yeah, that's it.
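Each /proc/self/uid_map line reads `container_start host_start length`. The translation the kernel applies can be sketched like this, using a typical rootless mapping as sample data:

```python
def to_host_uid(container_uid, uid_map):
    """Translate a container UID to a host UID, given
    (container_start, host_start, length) entries from uid_map."""
    for c_start, h_start, length in uid_map:
        if c_start <= container_uid < c_start + length:
            return h_start + (container_uid - c_start)
    raise ValueError(f"UID {container_uid} is not mapped")

# Typical rootless mapping: container root -> user 1000,
# container UIDs 1+ -> the user's /etc/subuid range.
rootless_map = [(0, 1000, 1), (1, 100000, 65536)]
print(to_host_uid(0, rootless_map))  # container root is really UID 1000
print(to_host_uid(1, rootless_map))  # container UID 1 is host UID 100000
```

Unmapped UIDs are exactly the ones that show up as "nobody" inside the namespace.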
B
So for Podman, rootless actually works. Alright — this user-namespace stuff is pretty cool: it can add a layer of isolation between your host and your containers. With Podman, you can easily do this by setting the --uidmap flag and defining the range of UIDs you want to map. I'm going to run this pretty quickly, and, as I showed you before, in the container it's root, but on the host...
B
...it's one hundred thousand, because that's the mapping I did: I mapped 100000 on the host to 0 in the container. The `podman inspect` command is a cool command that lets you look at various features of your container, like the security options, the user PID, the host PID, and so on. The --latest flag is also a very user-friendly feature: it picks up the most recent container you've created, so you don't have to go back and look up what your container ID was, etc.
B
So now, when we look at ps on the host, you can see that the container is running as 100,000 on the host, even though inside the container it's root. With Podman, you can actually define different UID ranges for different containers — this way we're adding another layer of isolation between your containers, not only between your containers and your host. Same thing as before: root in the container is 200,000 on the host. Now think about it: let's say a process from container A breaks out and tries to attack a process from container B.
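Why the separate ranges help: if each container's UID range is disjoint, a process that escapes container A holds UIDs that own nothing in container B. A quick overlap check (illustrative only):

```python
def ranges_overlap(a, b):
    """True if two UID ranges, given as (start, length), share any UID."""
    a_start, a_len = a
    b_start, b_len = b
    return a_start < b_start + b_len and b_start < a_start + a_len

container_a = (100000, 65536)
container_b = (200000, 65536)
print(ranges_overlap(container_a, container_b))  # False: A's UIDs own nothing in B
```

Keeping per-container ranges disjoint is exactly what the --uidmap demo above sets up by hand.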
A
What's next? Oh, Podman. As we mentioned, there's no daemon with Podman — it runs with a true fork/exec model. That means the login UID is inherited from the parent process by the child process. I can show you this in a few different ways. On every system there's a file, /proc/self/loginuid, that will tell you who is currently logged into the system; on this machine...
A
...it's me — it's one thousand. Whether or not I gain root access, that login UID stays the same, and, as I mentioned, it's inherited by every child process. So if I run a Podman container and, inside it, cat /proc/self/loginuid, you can see it's one thousand, as you'd expect. Docker is a client-server model, and so if I run a docker command, you can see that the login UID inside the container is this unsigned 32-bit int — it means it's a user who has never logged into the system.
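That "unsigned 32-bit int" is the kernel audit subsystem's sentinel for "no login UID recorded". A defensive sketch of reading it (the file only exists on Linux systems with audit support, so this falls back to the sentinel):

```python
UNSET_LOGINUID = 2**32 - 1  # 4294967295: audit's "never logged in" value

def read_loginuid(pid="self"):
    """Return the audit login UID for a process, or the unset sentinel
    if the file is missing or unreadable."""
    path = f"/proc/{pid}/loginuid"
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (FileNotFoundError, PermissionError, ValueError):
        return UNSET_LOGINUID

uid = read_loginuid()
print("never logged in" if uid == UNSET_LOGINUID else f"logged in as UID {uid}")
```

Under fork/exec this value survives into the container; under a client-server daemon the container's processes descend from the daemon, not your login session, which is why Docker shows the sentinel.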
A
The system doesn't know who that is. Now let me show you another way. I set up an audit rule to watch some sensitive data — the /etc/shadow file. If I run Podman — you know, as a user given privilege to run sudo-privileged containers — and I mount in the root filesystem and make a change to /etc/shadow, just to be mischievous...
A
...you can see clearly in the audit logs that somalley has been messing around with /etc/shadow, and, you know, a sysadmin will come to my cubicle and ask me some questions. But if you run the same thing with the docker command — the same user, given access to the Docker daemon — you can see that the audit log shows the "unset" user has been changing /etc/shadow. So that's another way to show that the fork/exec model has benefits when auditing who's doing...
A
...what on a system. And Urvashi mentioned these podman top commands — they give you some useful information about what's currently configured in your running containers. So if we just start a container in the background — and again, that --latest flag is super convenient; you don't have to remember the container ID and cut and paste it — you can see the PID inside the container versus the host PID, and you can see what SELinux labels are currently set up.
A
Make note of that SELinux label, container_t, because we are going to talk about it at the end of our demo. It can show that seccomp syscall filtering is currently turned on, and it gives a nice, pretty list of all of your Linux capabilities that are currently enabled in that running container. It's a nice feature, yeah.
A
We're about to talk about capabilities a little later, yep — so, okay, cool. We've talked about Buildah, we've talked about Podman — I guess Skopeo is next. We want to manage our images once we have them how we want them: we want to, you know, push them to registries — public and private registries — and we want to inspect images on remote registries. The tool to do that is Skopeo. Again, Skopeo does not require root. Here's the help menu for Skopeo.
A
It started out as a command to — not pull down, but inspect — the JSON description on a remote registry without having to pull the image down to your system. It's crazy, but before this tool, Skopeo, you had to actually pull the image down to your system to see that information. You can get information about the tags, who owns it, when it was built — so, in the spirit of "don't run random crap on your system", you can know exactly what you're going to download before you download it.
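What an inspect like that surfaces is essentially the registry's JSON. Here's a toy parse of a manifest-shaped document (sample data invented for illustration, not a real registry response) pulling out the "what am I about to download" facts:

```python
import json

sample_manifest = json.dumps({
    "schemaVersion": 2,
    "config": {"digest": "sha256:" + "ab" * 32, "size": 1469},
    "layers": [
        {"digest": "sha256:" + "cd" * 32, "size": 2811478},
    ],
})

def summarize(manifest_json):
    """Pull the quick pre-download facts out of a manifest document."""
    m = json.loads(manifest_json)
    return {
        "config_digest": m["config"]["digest"],
        "layer_count": len(m["layers"]),
        "download_bytes": sum(layer["size"] for layer in m["layers"]),
    }

print(summarize(sample_manifest))
```

Digests let you verify after downloading that you got exactly what the manifest promised.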
B
And just so you know, "skopeo" comes from the Greek for "remote viewing". Alright, so another feature of Skopeo is that you can move your images from one environment to another. A pretty cool thing about this: let's say you have something in your private registry and you want to move it to your public registry, but you don't actually have it locally on your machine. With `skopeo copy`, you don't have to download the image locally — you can literally say copy from this registry to that registry, and it will copy it over. You can do something similar locally.
B
So now we have built a container image, we have tested it locally and we're happy with it, and we have pushed it up to a public registry — and now the next thing to do is actually run it in a production cluster. That's where CRI-O comes in. CRI-O is a lightweight container engine that's used to run your container deployments in a Kubernetes or OpenShift cluster. CRI-O is OCI-compatible — it supports all OCI-compatible images and all OCI runtimes, such as runc, Kata, and gVisor — and CRI-O actually has a daemon.
B
It's a lightweight daemon, just because it needs to be able to talk to the CRI API that Kubernetes provides. So when we run containers in production, we firmly believe that you should run them in read-only mode. What this means is that all the processes running inside your container should not be able to write to any paths in the container; they should only be able to write to volumes you've mounted in, or to the few tmpfs paths we have made writable this way.
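The policy just described — deny writes everywhere except mounted volumes and a few tmpfs paths — can be sketched as a simple path check. This is only an illustration of the rule; the real enforcement is the kernel's read-only mount, and the exact writable paths are CRI-O's choice:

```python
# tmpfs-style paths a read-only container typically leaves writable (assumed set)
WRITABLE_TMPFS = ("/tmp", "/run", "/var/tmp")

def write_allowed(path, volumes=()):
    """True if a read-only container should permit a write to `path`."""
    allowed = tuple(volumes) + WRITABLE_TMPFS
    return any(path == p or path.startswith(p.rstrip("/") + "/") for p in allowed)

print(write_allowed("/var/log/dnf.log"))                  # False: package installs fail
print(write_allowed("/data/out.txt", volumes=["/data"]))  # True: a mounted volume
print(write_allowed("/tmp/scratch"))                      # True: tmpfs
```

A package manager writing to /var/log is denied, which is exactly the failure shown in the demo below.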
B
If your container does get attacked, the first thing a bad actor would probably want to do is place a backdoor in your container, so the next time it restarts they'll have access to it. But if you're in read-only mode, they won't be able to do that. You can easily set read-only mode as a system-wide default: there's the crio.conf file, and in there there's an option called read_only, and I've set it to true right now.
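For reference, the setting lives in CRI-O's configuration file; a fragment along these lines (the exact section layout can differ across CRI-O versions, so treat this as a sketch):

```toml
# /etc/crio/crio.conf
[crio.runtime]
# Run all containers with a read-only root filesystem.
read_only = true
```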
B
I'm going to restart the CRI-O daemon — and, since CRI-O was created for Kubernetes and OpenShift, the way to run pods and containers outside of a cluster is a bit more complicated than what Podman and Buildah do: you have to have a whole JSON config and everything, and for that we have a different tool called crictl. It's just a way of running things locally.
B
I'm going to create my pod, create my container here, and start the container. Now I'm going to install Buildah in this container, just because I'm going to use it to build images — and guess what, that fails, because I'm running in read-only mode. When you try to install a package, it expects to write to paths like /var/log, and because read-only mode restricts that, it fails to do so. And thus: a more secure workload.
B
An added advantage is that, let's say one of your containers is actually storing some data in its filesystem — when your container disappears, that data is gone forever, too. Read-only mode stops you from doing that, since you'll instead be writing to a volume you mounted in, which stays on the host even after your container is gone.
A
Yeah — with CRI-O it's also very easy to modify which Linux capabilities are enabled system-wide, throughout your cluster, in every container and every pod. Linux capabilities are just parcels of root power — root privileges divided up by function. The idea is that if you disable all of your Linux capabilities and you run as root, you have no increase in privilege. So here we'll just remove DAC_OVERRIDE — and oh, in CRI-O...
A
...you can see the list of default capabilities that are enabled is much shorter than if you were to look at, say, Podman's or Docker's default capabilities — and that's because we believe you should run with as few capabilities as you absolutely need in production. So if we remove DAC_OVERRIDE, then restart CRI-O and again start a pod, you can see — it's not as pretty as podman top, but that's how you list the capabilities — that DAC_OVERRIDE is missing.
A
The cool thing is that it gets carried through to every container in the pod as well, and you can see that none of them have DAC_OVERRIDE now. So again: run with as few capabilities as you need, and be conscious of what your production containers are doing, so that you know exactly what shouldn't be in there.
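Those capability lists are decoded from a bitmask the kernel exposes (for example, CapEff in /proc/&lt;pid&gt;/status). A sketch of the decoding, with a partial name table from linux/capability.h; 0x00000000a80425fb is the familiar default-ish set many container engines grant:

```python
# Capability bit numbers from linux/capability.h (partial table for illustration)
CAP_NAMES = {
    0: "CAP_CHOWN", 1: "CAP_DAC_OVERRIDE", 3: "CAP_FOWNER", 4: "CAP_FSETID",
    5: "CAP_KILL", 6: "CAP_SETGID", 7: "CAP_SETUID", 8: "CAP_SETPCAP",
    10: "CAP_NET_BIND_SERVICE", 13: "CAP_NET_RAW", 18: "CAP_SYS_CHROOT",
    21: "CAP_SYS_ADMIN", 27: "CAP_MKNOD", 29: "CAP_AUDIT_WRITE", 31: "CAP_SETFCAP",
}

def decode_caps(mask):
    """Expand a CapEff-style bitmask into capability names (unknown bits as cap_N)."""
    return [CAP_NAMES.get(bit, f"cap_{bit}")
            for bit in range(64) if mask >> bit & 1]

caps = decode_caps(0x00000000A80425FB)
print(caps)  # note: no CAP_SYS_ADMIN in this set

# Dropping a capability is clearing its bit:
without_dac = 0x00000000A80425FB & ~(1 << 1)
print("CAP_DAC_OVERRIDE" in decode_caps(without_dac))  # False
```

Removing DAC_OVERRIDE in crio.conf is, at this level, just clearing bit 1 before the container process starts.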
B
For Podman, we spoke about running without root. We also spoke about how you can have isolation with user namespaces, not only between your container and the host but also between multiple containers. We spoke about how the fork/exec model lets us keep track of who's doing what on your system, so you can easily find out who's trying to be shady. And, as Dan Walsh would say: hashtag no big fat daemons — these tools don't have daemons.
B
And don't download random crap off the internet. For CRI-O, we mentioned running your containers in read-only mode when running in production, as much as possible. We also showed you how we have fewer capabilities enabled, and you can reduce the list even further depending on what your containers need. CRI-O has the same user-namespace support as we saw in Podman — it's just a work in progress in Kubernetes right now, so we're waiting on Kubernetes to catch up, and then we'll take full advantage of it.
A
And my favorite: FIPS mode support. If you have to run your system in FIPS mode — you work for the government or something, I don't know — CRI-O is your only option for a container engine. CRI-O is the only engine that can recognize that you're running in FIPS mode, and, if you're running a FIPS-compliant image, it can enforce FIPS mode in your container. So if you try to use a weak crypto algorithm, it will error out like it would on your host.
A
I have one more thing to tell you. A few months ago there was a CVE that was announced out of nowhere (CVE-2019-5736) — it was like 90% of all containers running in production were affected by this exploit, where you could take over the runc binary, rewrite it, and, you know, have full access to the host. And we were like, "oh my gosh, what are we gonna do?" People were walking around Red Hat with their heads hung low, like we might as well just pack up and quit, because we're done. But it's not so bad.
B
If you were following some of the security practices we spoke about today, you were less likely to be affected. One of the most important: don't run random images off the internet. Second: if you were already running without root, it would have been much more difficult for that exploit to actually overwrite your runc binary. But the main thing that was stopping this was having SELinux enabled. So how does SELinux stop stuff like this?
B
It has labels for each file and each process, and based on what label you have, it gives you access or not. The runc binary has the container_runtime_exec_t label, while all container processes usually have container_t and can only access files that are labeled container_file_t. And, as you can see, container_file_t is not the same as container_runtime_exec_t — so if you had SELinux enabled, SELinux would have said no.
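The type-enforcement logic the speakers describe can be pictured as a tiny deny-by-default allow table: processes labeled container_t may only write container_file_t, so overwriting the container_runtime_exec_t-labeled runc binary is denied. This is a toy model — the type names come from the talk, but the allow table is invented for illustration; real SELinux policy is far richer:

```python
# Toy type-enforcement table: process label -> file labels it may write
ALLOW_WRITE = {
    "container_t": {"container_file_t"},
    "container_runtime_t": {"container_file_t", "container_runtime_exec_t"},
}

def may_write(process_label, file_label):
    """Deny-by-default check, in the spirit of SELinux type enforcement."""
    return file_label in ALLOW_WRITE.get(process_label, set())

# The CVE scenario: a container process tries to overwrite the runc binary.
print(may_write("container_t", "container_runtime_exec_t"))  # False -> blocked
print(may_write("container_t", "container_file_t"))          # True
```

That single denied write is what turned a cluster-wide breakout into a log entry on SELinux-enforcing hosts.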