Description
A deeper dive on the work we are doing to secure containers in OpenShift. We dive into features like container signing and scanning, talk about plans for the container registry, and cover how this ties into new features in RHEL and OpenShift, with Marc Curry, Ben Breard, and Dan Walsh from the OpenShift team at the OpenShift Commons Gathering Boston.
Learn more: https://www.redhat.com/en/summit/2017/agenda/sessions
A: So, welcome everybody. Let me be the first to welcome you here (just kidding). My name is Ben Breard; I'm a product manager here at Red Hat. I work on both the RHEL team and the OpenShift team, and I spend most of my time focused on our container technologies, including Atomic Host, the container engine and runtimes, and container content, plus a few other fun things like systemd. That's me. I'm going to kick this over to my cohort, my partner in crime, Marc.
A: Excellent. Okay, so this time we're going to focus on the core container runtime and host platforms, mostly from the security perspective. But we want to keep this conversational, so feel free to speak up and ask any questions; we're happy to have the dialogue. Okay, so this year at Summit you're going to be hearing this theme a lot: containers are Linux. Of course, if you're like me, you read that and think, yeah, but wait a minute.

A: There are these other things out there. Just ignore those, because this is Red Hat. No, seriously, I would say that there are other, non-Linux implementations and so forth in this space, but all of the exciting energy, the production stuff, everything is happening in Linux, right? That's the reality where we find ourselves, and of course, being Red Hat, we know a thing or two about Linux.

A: We lean on that and benefit from it. And not only that, our heritage and everything we do, the value we put into RHEL and our platforms, all carries forward into this space, not only from a content perspective but also in how we continue to evolve RHEL as an operating system. Do you want to talk a little bit about that? Even when we look at RHEL from a content perspective, you all know that today it's full.

A: You know, five or six thousand different RPM packages. As we go forward, we're looking at breaking those services down into more role-specific, containerized content that is meant to be deployed on platforms like OpenShift. So that's foundational to the operating system. One of the areas we've put a lot of effort into recently is container content, and obviously, going forward, breaking down the operating system is a big part of that. But even just at the base image layer: how many of you are running our base images?
A: Okay, only about ten people do, so there's your answer. But no, we did release a very small image; we call it the Atomic base image, and we basically pulled out all of the non-essential components. There is no Python, there is no systemd, and we got it down to roughly 28 megabytes compressed, about 75 on disk.

A: So again, we're not really going after an Alpine type of thing, but we are going after "just give me a good, solid glibc" that I can plug the rest of the RPM ecosystem into. It's a supportable base that can run any kind of purpose-built application. So that's new. And then the other cool thing, which we haven't released yet but is in flight and will land soon:

A: We will have what we're calling init base images, for both RHEL 6 and RHEL 7. These are basically set up to run either systemd or Upstart, depending on the version. So if you've got multi-service containers, which I know is sacrilegious in the microservices world, but there are legitimate use cases for it, we will have these base images that make it that much easier and faster to get multi-service containers up and going. So we basically have a spectrum of base images for you to choose from.
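As a rough illustration of that spectrum, here is a minimal sketch of building a multi-service container on top of an init-style base image. The image name, packages, and run flags are illustrative assumptions, not an exact recipe from the talk.

```bash
# Hypothetical sketch: a multi-service (systemd-based) container built
# on a RHEL init base image. Names and packages are illustrative only.
cat > Dockerfile <<'EOF'
FROM registry.access.redhat.com/rhel7-init
RUN yum -y install httpd mariadb-server && yum clean all
RUN systemctl enable httpd mariadb
EXPOSE 80
# The init image's default command runs systemd, which starts the enabled services.
EOF

docker build -t example/multi-service .
# systemd-based containers typically need a few extra run options:
docker run -d --tmpfs /run --tmpfs /tmp \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro example/multi-service
```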
A: All right, I'm going to talk a little bit about improvements we're making at the container host layer, and again, any questions or comments, just raise your hand and we'll get you a mic really quickly. At the container host level, these are things we've been working on on the Atomic Host side, which you're probably familiar with.

A: OpenShift runs on Atomic Host or RHEL today. One of the things we've been doing in Atomic Host is something called LiveFS, which is really, really cool. It's available upstream today, but the idea is this: normally, when you update Atomic Host, you pull the update down and then you reboot into the new, updated environment. With LiveFS... well, last week we had a critical NSS CVE come out, an RHSA, however you want to think about it.

A: With LiveFS we can actually pull that fix down and drop it onto a running Atomic Host system without that reboot into the new update. So it's a faster way to take user-space patches, and that's pretty cool.
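A minimal sketch of that flow on Atomic Host, assuming the rpm-ostree LiveFS support described here; the feature was experimental at the time, so the exact commands are an assumption and may differ by release.

```bash
# Hypothetical sketch of the LiveFS flow on Atomic Host.
# Normally an update is staged and only takes effect after a reboot:
atomic host upgrade          # stage the new ostree deployment

# With the (then experimental) livefs support, staged user-space changes
# can be applied to the running system without rebooting:
rpm-ostree ex livefs

rpm-ostree status            # compare the booted and staged deployments
```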
A: A lot of work has also gone on around user namespaces; that keeps coming up, and we're targeting kernel support in 7.4. Dan, do you want to talk about the kernel stuff that's coming in the future?

C: Yeah. So the user namespace has always been the holy grail of namespaces, and people believe it's going to be this huge security feature. It's been that way for about three years now; everybody's been talking about user namespaces, and one of my favorite expressions is that it's always six months away. So we'll continue with "six months away." This past year, Docker, as of 1.12, got some user namespace support in it, and what that gives you is separation between the containers and the host.

C: The problem with user namespaces has always been that they work everywhere except on the file system. That means if I mount a file system and, say, I pick UID 5000 as root inside of my container, and then I hand in a real file system that has inodes owned by UID 0 in it, then UID 5000, which is running as root inside that user namespace, can't use those UID 0 inodes. They show up as owned by minus one, the unknown entity.

C: So what you need is a shifting file system. What you want to be able to say is that this image, when it's mounted inside the user namespace, will actually shift UID 0 to 5000. Then, if you're inside the user namespace and you look at a file owned by UID 5000, you see it as owned by 0. So it's basically shifting both ways.
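To make the mapping concrete, here is a small sketch of the Docker 1.12-era user-namespace remapping and the file-ownership problem just described; the image name and UID ranges are illustrative assumptions.

```bash
# Hypothetical sketch: run the daemon with user-namespace remapping
# (Docker 1.12+). Container root (UID 0) maps to an unprivileged host
# UID range taken from /etc/subuid and /etc/subgid.
dockerd --userns-remap=default &

# Inside the container, UID 0 is really an unprivileged host UID:
docker run --rm rhel7 cat /proc/self/uid_map    # e.g. "0  100000  65536"

# Host files owned by the real UID 0 fall outside the container's mapping,
# so they appear as the overflow UID (65534, "nobody") and cannot be owned:
docker run --rm -v /etc:/host-etc:ro rhel7 ls -ln /host-etc | head
```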
C: IBM actually worked on a shifting file system to support this about a year ago, and I think they use it in Cloud Foundry right now, but then they paused on it. So we put some engineering effort in to push it further along, and we made some really good progress on it, until about a month ago, when there was a conference called Vault, which is where all the kernel filesystem people get together.

C: They examined shiftfs and decided that it belongs in the virtual file system layer and not as a real file system. So there seems to be growing consensus that this is the right way to go, except that they don't want it as a file system, they want it in the VFS, which means, of course, it's about six months away. But this is progress, people are actively working on it, which is good, and we're trying to drive that as much as possible.

A: We also like to call that a roadmap item. Yeah, there are magic unicorns coming; thanks for that. Another area where we've put a lot of investment is something we call system containers, and the basic idea, you can see these three red boxes here, works the same on a RHEL host as on an Atomic Host: we're getting to the point where we're actually shipping and distributing components like Docker or cloud agents, things of that nature, this way.

A: Today we also do this with things like open-vm-tools, etcd, and flannel, a lot of early-boot services. It works great for those; they basically run as read-only containers on the host, and it's a good way to separate the different components in the stack and isolate them from regular OS updates. So you can keep updating and patching your systems without worrying about, say, Docker incrementing outside of the version you're on.

C: I just want to plug that I'm giving a talk on Thursday that covers all the new features we're adding to containers, and it will go heavily into system containers. But one of the other goals of system containers: if you look at Atomic Host, it currently comes with a bunch of services running on it. It comes with etcd, with flanneld, with a container runtime (also known as, formerly known as, Docker), and it comes with Kubernetes.

C: Our goal is to put all four of those into system containers, so they can be swapped in and out of your Atomic Hosts, and you can just install different containers running different services as system containers. The other really neat thing about system containers is that they have to be able to run before your container runtime is running. If you look at Kubernetes, Kubernetes talks to a container runtime, the container runtime has to have the network set up, and the network setup usually involves a database.
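For context on the mechanics, system containers are installed through the atomic CLI and run with runc under systemd, outside the Docker daemon, which is what lets them start before the runtime does. A minimal sketch, with image names as illustrative assumptions:

```bash
# Hypothetical sketch: install early-boot services as system containers.
# They are pulled into ostree storage, run with runc, and managed by
# systemd, so they do not depend on the Docker daemon being up.
atomic install --system --name=etcd registry.example.com/rhel7/etcd
atomic install --system --name=flanneld registry.example.com/rhel7/flannel

systemctl start etcd flanneld    # each install creates a systemd unit
atomic containers list           # show the installed system containers
```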
A: Okay, so again, no hands have gone up yet; nobody has any questions about anything? Okay, all right, we will keep going. One of the things we've been putting resources into is the other OCI technologies here. For example, runc: we ship a supported runc on RHEL today that's used within Docker, but we also use it for the system container implementation, and there's been other work with CRI and CRI-O, right?

C: Right. So, a couple of years ago, about a year ago, Kubernetes decided to standardize on what's called the Container Runtime Interface, CRI. The main reason they did this is that CoreOS came to Kubernetes and said they wanted rkt to be a supported container runtime under Kubernetes. If you looked at the Kubernetes code, it had the Docker Engine API built into it, and rather than also building in a rkt API, they decided to standardize the way they would talk to container runtimes.

C: So they built the CRI, the Kubernetes Container Runtime Interface, defining the protocol a daemon uses to talk to the kubelet. At that time we started an experiment: we wanted to build a pure CRI implementation of a container runtime that uses standards for running OCI images. It uses runc, or actually, we're also working with Intel now, so it can use runv underneath. And as of last week:

C: It's called CRI-O, and we were the first CRI implementation to actually pass all of the checks that Kubernetes currently runs, so we're about to move into testing OpenShift on top of it. It's a pure implementation of a container runtime just for Kubernetes, and the goal is to prove out the Kubernetes workflow. It eliminates the need for running a Docker engine underneath the covers, but it fully implements everything Kubernetes needs. It went to version 0.3 as of yesterday, and again, I'll talk about that more on Thursday.
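As a rough sketch of what that replacement looks like in practice, the kubelet is pointed at the CRI-O socket instead of the built-in Docker integration; the flag values and socket path are assumptions from that era and vary by version.

```bash
# Hypothetical sketch: start the CRI-O daemon and point the kubelet at
# its CRI gRPC socket rather than at the Docker engine.
systemctl start crio

kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///var/run/crio.sock
# (all other kubelet flags stay the same)
```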
C: I would say the number one problem I see with containers is the myth of containers. The great myth that a lot of people are buying into right now is that I can run anybody's OS with anybody's kernel and things are just going to work, and people are falling into this trap. I hear about companies now picking random operating systems to run their software on and then expecting everybody to support those random operating systems underneath it. I'd just ask everybody in this room who thinks about installing software in their company:

C: Would you install ten different versions of Linux in your environment? Would you have an Alpine Linux, an Ubuntu Linux, a Fedora Linux, a RHEL, a CentOS, all these different physical machines running different software depending on which software company you bought an application from, because each one comes in and says, hey, I'm coming with this operating system? I can almost guarantee no one in this room would do that.
C: I think you would push back as security issues started to proliferate. You know, people want to buy into microservices, where the software is going to be constantly updated and all that, but one of the number one things we hear from customers is what we call lift and shift: I want to take a RHEL 6 workload, or a RHEL 7, or even a RHEL 5 workload, and move it onto new hardware.

C: They just want to take the operating system and move it onto a new piece of physical hardware. The same thing is going to happen with containers: eventually those images get real old, people aren't going to want to have to update the software, and at that point, if you used some random operating system for the base layer of your application, who's going to support it? Just some things to think about as this software starts to proliferate; otherwise it just gets weird.
B: Dan, a common question that I get from customers coming from traditional VM environments to containers, when they want to port their traditional VMs to microservice containers, is the concern about security. How would you address that?
C: Years ago, when I first started talking about container security, I used to say containers don't contain, and the reason I said that was that I was comparing them to virtual machines. Usually, when I talk about container security, I start with separate physical machines: the safest way to run two applications is on two separate physical machines, which is obviously too expensive. The next level is to run them on separate VMs, and with VMs there's a very small attack surface on the kernel.

C: Once I break out of a virtual machine, then I'm attacking the host kernel; if I'm in a container, I'm already talking to the host kernel. So a virtual machine is always going to be more secure than a container environment. That being said, if you're running two services on the same machine at the same time, a container is always going to be more secure. Say you have a database server and a web front end that sit on the same machine, and you stick each of them into a container.

C: You are far more secure; containers are more secure than running services bare on the same machine. But really, this whole question of whether containers are more secure than virtual machines is not a zero-sum game. You can run your containers inside virtual machines, and really what you want to do is start to look at the security value of the containers and place them in virtual machines depending on how strongly you want to isolate them.
C: So if you have that database back end and you have your web front ends, you might put a whole bunch of the web services inside one set of virtual machines, and then take a whole bunch of database containers and put them in a whole bunch of different virtual machines running the back end. If you're going to do that, the tool you need is an orchestration tool.

C: You need an orchestration tool that can basically judge where to place your containers and keep them isolated for security purposes, something like OpenShift, perhaps. But yeah, that's really the goal. Since you asked me a question about the system level: one thing I often get asked when I give talks on security, moving up to the network stack, is, okay, I'm running multiple workloads, mixed workloads, on top of these systems; how do I keep them apart?
B: True. And actually, do you mind, I don't know where we are in the slide deck, but can you go to... So one way of addressing this, for those of you who were in an earlier session and maybe heard a little bit about it, is via a new tech preview feature in OpenShift 3.5, and that is network policy, Kubernetes network policy. The idea...

B: ...is that, well, by default, if you don't do anything and just install OpenShift, you're getting ovs-subnet as the default SDN plugin. But you have the option of switching that out for ovs-multitenant. Who in here uses the ovs-multitenant plugin? So, a significant number. What's the advantage of ovs-multitenant over ovs-subnet? The idea is an isolation layer between projects. With ovs-multitenant, as soon as you plug it in, each project gets labeled and you immediately have isolation between every project in the deployment.
B: Say I want two of those projects to be able to talk to each other; you can enable that, it's called joining the networks. The problem with that is that if I join those two together with the multitenant plugin, then every pod in project A can talk to every pod in project B, in both directions. That may or may not be what you were hoping for. If that's fine, then you're good with ovs-multitenant.

B: The other thing you can do is say, well, one of my projects I want to be globally accessible by all the others, and there is a command you can run that says make this one project, or these specific projects, globally accessible; all the others stay isolated. That's good if it meets your use case, but once again, that might be a little more access than you would like to provide. So, enter network policy. What can network policy do that multitenant couldn't?
B: You define who you would like to be able to talk to whom, and then you establish your policy that way. You can still say, I'd like this entire namespace to be able to talk to me, but you can't say, I want to be able to talk to that namespace. It's a one-way path, and it works on ingress only, currently. So if you did want traffic in both directions, you'd have to go set a network policy on the other namespace as well.

B: So the policy boundaries are namespaces; you define the policies on each one. Another difference between it and multitenant is that right now, in the tech preview, there is no initial deny-all rule. When you enable multitenant, everybody is immediately isolated; with network policy, there's an annotation you have to set to establish that sort of deny-all, and you have to do it in every project that you would like to participate, and then you establish the whitelists through YAML definitions.
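As a sketch of what that looked like in the tech preview being described (the beta-era annotation, which later changed as the API matured), the per-project deny-all default was set roughly like this; treat the exact key and value as assumptions tied to that release.

```bash
# Hypothetical sketch (OpenShift 3.5, Kubernetes beta network policy):
# flip one project to "deny all ingress by default" via its namespace annotation.
oc annotate namespace my-project \
  'net.beta.kubernetes.io/network-policy={"ingress":{"isolation":"DefaultDeny"}}'
```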
B: The other thing, as you might imagine, is that with multitenant everything is cluster-admin controlled. It kind of has to be, right, because if you join two projects together, not only are you saying, yeah, you can talk to me, I've also said I can talk to all of your pods as well. So that's defined at the cluster-admin level, whereas with network policy you're only defining who can talk to you, so it's something that can be defined by a project admin. So it's a way of delegating that.

B: As the example on the right-hand side implies, it also works with ports. So you can say, not only can my red pods only accept connections coming from blue pods (the analogy being that maybe my red pods are our web servers and my blue pods are databases that need to feed the web server somehow), but also only on particular ports, or whatever other combination of pods you might have.
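A minimal sketch of that kind of whitelist, using the beta-era NetworkPolicy object shape from that release; the labels, namespace, and port are illustrative assumptions.

```bash
# Hypothetical sketch: allow only "blue" pods to reach "red" pods, and only on 8080.
oc create -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-blue-to-red
  namespace: my-project
spec:
  podSelector:
    matchLabels:
      tier: red          # the policy protects the red (web server) pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: blue     # only blue pods may connect
    ports:
    - protocol: TCP
      port: 8080
EOF
```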
D: Yeah, it's more of a big-picture question. In companies that are large and have, you know, traditional hierarchies, monolithic apps, big hardware, and they're transitioning to containers, what sort of trends do you see in behaviors at work, where folks have to educate each other about who's responsible for what in the new world, and sort of get an understanding of what shifts in that transition, if that makes sense, from a security perspective?
B: That's a really good question, and in this particular example, the network policy example, there's a trend where, instead of having a security team or a networking team make these decisions, it can be pushed down to the developer, right, if they're in charge of the project. But for larger groups that have people specifically assigned to define the security policies that apply between all of these different pods, that responsibility can stay with them.
F: We saw the exact same thing in the early 2000s around virtualization with VMware, where IT managers I spoke to swore that it would never run in production, that their apps needed dedicated servers because of performance characteristics and security characteristics. They were glad that the developers liked virtualization, but it would never proceed beyond dev and test. Fast forward to today, and it almost seems kind of quaint that that was ever even a thing.

F: We think the same thing is happening with containers, and a lot of it is just education and understanding that this technology is Linux technology, as Ben was saying. And, you know, there are different approaches to making it more secure for different environments and so forth. So we think we'll see that acceleration happen even more quickly, but it is an education, and a lot of the time it just starts small with certain pilots and then builds that confidence. Yeah.
G: Hi, with regard to the OVS network policy, if this is implied, great, let me know, but if not: does it apply to routes and to services as well? In particular, on the route side I want to limit inbound access to something from either a particular IP range or specific IPs. We have multiple lines of business that we may want to separate, and some ethical barriers that we apply between multiple legal entities within our firm.
B: It does not apply to those directly right now. We have ingress objects that we're moving towards from routes, and those would do that. Your other question is, does it work with services? It does; it's designed to work with services such that, for example, if you didn't want direct pod-to-pod communication, but instead you wanted a pod to speak to a service which was then backed by several other pods, it's designed to do that. So yes.
C: So there has really been very little agreement in the open-source world around image signing at this point. There have been different efforts. We tried to work with Docker on some of this stuff, but we found that what they built was far too complex and was also tied to registries, and we've noticed that there's a proliferation of registries out there; just about every company seems to have its own container registry.

C: You have Amazon, you have Red Hat, docker.io, you have CoreOS, Google has one, there's Artifactory from JFrog. We see lots of customers, and nobody wants to have five or six different container registries running in their environment. So tying signing to a particular registry seemed very poor to us. The other thing we didn't like about some of the signing approaches is that we wanted to allow real flexibility, total flexibility, in signing.
C
So
we
introduced
its
will
being
introduced
into
rel
right
now,
a
thing
we
call
simple
signing
and
basically
it's
GPG
signing
and
it's
signing
and
out
of
fact
that
we
associated
with
the
container
image
so
that
container
image
it
has
a
checksum
associated
with
a
container
image.
You
take
that
checksum,
you
create
you
sign
it
with
what
the
GPG
key
simple,
gbt
key,
and
then
you
take
that
GPG
scientific
signed
out
of
fact
and
put
it
any
way
you
want.
C
So
it's
basically
a
web-based
protocol
we've
built
into
all
of
our
tools
now
to
support
trust
relationship
with
that
signed
artifact.
So
you
can
say
that
you
know
if
you
work
for
acne
ink,
you
can
say
only
run
containers
images
that
are
signed
by
acne
ink
or,
you
can
say,
trust
one
so
signed
by
Red
Hat.
We
have
the
same
image
signed
by
red,
daddy
and
acne
ink.
You
could
have
it
signed
by
Dan
Walsh.
If
you
want
to
trust
all
container
images
rather
signed
by
me,
you
could
do
that.
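That trust policy lives in a policy file read by the container tools. A minimal sketch of what such a policy can look like under simple signing, with the registry name and key path as illustrative assumptions:

```bash
# Hypothetical sketch of /etc/containers/policy.json: require images from
# one registry to carry a signature made with Acme's GPG key, and reject
# everything else by default.
cat > /etc/containers/policy.json <<'EOF'
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "registry.acme.example.com": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/containers/acme-pubkey.gpg"
        }
      ]
    }
  }
}
EOF
```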
C: So the idea is to allow that flexibility. And not only that, we also have controls across registries, so you can say which registries you trust, or which images on which registries you trust. If you want to pull images that are available on docker.io, you can set up your container runtime so it can pull those images. This can all be built into the Kubernetes workflow, built into Docker, and you can store your images on Artifactory or, you know, anybody's, basically: Amazon, Google, anything. You just add your signing, and signing is just a basic trust setup; we can pull from your registries, and you can start to control things, so a random developer in your environment can't just go and say, hey, pull this random Hadoop image from docker.io; you don't want that pulled into your environment.
C: That's the goal with simple signing. We're building it into OpenShift, so the OpenShift registry (because we want you to use the OpenShift registry) will have first-class signing built into it. But you can sign any image and just store the signatures on a website; you can point your tools at a web server that holds the signed data.
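The "signatures on a website" piece is configured per registry in the container tools' registries.d configuration; a small sketch, with the URLs as illustrative assumptions:

```bash
# Hypothetical sketch: tell the container tools where signatures for a
# given registry are written to and read from (the "sigstore").
cat > /etc/containers/registries.d/acme.yaml <<'EOF'
docker:
  registry.acme.example.com:
    sigstore: https://signatures.acme.example.com/sigstore
    sigstore-staging: file:///var/lib/containers/sigstore
EOF
```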
B: Image scanning: we have a couple of tools that are sort of de facto standards for doing scanning. The first one is CloudForms, which uses OpenSCAP; we also have Red Hat Insights, which uses that as well. Those are sort of the two tools that we know work and are going to continue to work. We also have the ability to work with third-party scanning tools, so Black Duck, JFrog Xray, Twistlock, and so forth.
B: With the third parties, what happens is that they have their own policies that they evaluate when they do their scanning, and then, based on the evaluation of those policies against the image, they can assign an annotation to an OpenShift image stream object. OpenShift then examines that image stream object's annotation and decides whether or not to run a container.
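As a rough illustration of that flow, a scanner (or an admin acting on its results) marks the image and an admission-time image policy check reads the mark before new pods are scheduled; the exact annotation key here is an assumption about the mechanism of that era, not something stated in the talk.

```bash
# Hypothetical sketch: flag a scanned image so admission control can refuse
# to start new pods from it (pods that are already running are unaffected).
oc annotate image sha256:<digest> images.openshift.io/deny-execution=true
# With the image policy admission plugin enabled, new pods that reference
# this image are rejected at creation time.
```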
B
So
this
doesn't
apply
against
containers
that
have
already
running,
but
that's
kind
of
the
general
flow
of
how
things
work
with
the
new
Red
Hat
container
catalog.
It
has
a
really
nice
thing
in
next
to
every
single
container.
In
their
container
image
in
there
and
the
idea
is
that
it
gives
it
quote-unquote
a
freshness
grade
that
freshness
grade
ranges
from
A
to
F
and
colorized
to
make
it
visually
apparent,
whether
something
is
fresh
or
not,
but
that
freshness
grade
is
based
upon
multiple
things
like,
for
example,
the
type
of
CDE
the
severity
of
the
CDE.
B
If
the
CP
has
been
applied
or
not
or
fixed
or
not,
and
several
other
things,
and
so
that's
going
to
determine
this
freshness
grade,
so
you
can
before
in
pulling
it
into
your
registry,
decide
that
that's
the
image
you
want
to
pull
it
or
not.
That's
all
part
of
the
process.
The
the
goal
is
to
get
it
right
so
that,
before
things
get
into
your
registry,
you
decide.
If
you
want
the
name
there.
Yeah.
F: It's called the Container Health Index; that's where this freshness grade comes from, and we use it to grade our own software that we're packaging and giving to you. So you can get the grades on all the images that we produce, but it's also targeted at our ISV partners. A lot more of the ISVs that you work with, third-party ISVs, will start delivering software to you as container images, and maybe you're already dealing with some of these things.

F: So we created this Container Health Index not just for ourselves, to grade our own container images, but also to work with our partners, on not only the quality of what they're shipping but also the base images that they're shipping on, and, if there's an issue, to be able to quickly patch and update those images together with our ISV partners. So, just something to think about.
F
We,
we
talk
to
a
lot
of
people
that
don't
actually
realize
that
until
you
get
into
like
what
is
actually
this
container
thing?
Well,
it's
just
the
separation
of
the
user
space
from
the
kernel
and
every
image
has
a
user
space.
So
so
now
everybody's
in
the
Linux
space-
and
you
know
it
was
it-
took
us
a
while
to
get
there.
F
That's
a
good-news
bad-news.
We
wanted
to
announce
this
six
months
ago,
but
just
imagine
if
we
have
an
entire
Linux
division,
an
entire
middleware
division.
We've
been
shipping
and
updating
packaged
software
for
over
15
years
as
RPM's
now
as
containers.
If
it
took
us
a
little
while
to
get
this
straight
now
multiply
that
across
thousands
of
is
vs
right
and
how
you
know
how
much
work
is
it
going
to
be
for
them
right?
C: I'm afraid to say it, but I'm very interested in federated clusters. It hasn't even been well defined yet; at KubeCon 2016 they talked about it, Ubernetes Lite was spoken about, and I mean, it wasn't fully defined, but service discovery and synchronizing objects across clusters was a very valid use case for it, and it seems to get passed over so easily.
F: Yeah, actually, we covered federation in the Q&A session right before lunch; you may have missed it, but we can follow up with you. Just to recap for folks: the Kubernetes federation project is about federating multiple clusters, and there are security implications around dealing with that too. But it's actually what you said, it's not one thing, it's several things. It's federated ingress, so that you can take traffic into your applications and spread it across application pods in several clusters. It's federated deployments, so when you deploy something, your scheduler can be multi-cluster aware. And when you deploy stuff, you might want secrets and config maps and everything else to come along, so it's federated secrets and federated config maps and keeping all of that in sync.

F: The status of that is, I think, it's in its second alpha or beta upstream. It's probably going to take through the end of this year for it to be stable in the Kubernetes upstream, and probably at that time and into next year you'll start seeing it as sort of a tech preview, but you'll be able to play with it. You can play with it now in the upstream. That's where it's at; we can follow up afterwards in more detail.
A: It's a good question. When you say ingestion issues, you mean staging the container content on-prem and then... What I see most commonly for that is people using pull-through from a DMZ that does allow outbound connections. I don't know if that works specifically for your environment, but that's common.
C: You can take the container registry software separately from the actual content that sits in the container registry. So a system container running the container registry would just be running the daemon that's listening on the container registry port; it wouldn't have the content. So there are really two parts to that, right: you potentially would want to run your container registry software as a system container, but you still need the data, and the database, the amount of data these things use, is always going to be kept separate from the actual container process.
F
Of
the
comments
was
on
Mark's
slide,
but
the
see
a
presenter
cover
it
to
plate
covered
it
in
his
session,
which
is
ultimately
scanning
containers
or
even
signing
containers.
Isn't
our
end
goal
right?
It's
basically.
What
do
you
do
with
that
information
to
control
execution
around
and
execution
policies
within
OPA
chef?
F
So
probably
that
I
think
to
me
that
the
biggest
gap
we
need
to
address
is
to
make
that
whole
flow,
seamless
from
scanning
and
signing
all
the
way
through
actively
setting
and
forcing
policies
on
execution
and
that's
going
to
be
a
big
focus
of
ours.
So
it
was
towards
the
end
of
the
bullets
there,
but
I
just
want
to
make
sure
everybody
knows
that.
That's
something
I
think
about
a
lot
yeah.
A
There
are,
there
are
some
initial
plans.
We
are
we're,
making
big
investments
and
alternative
architectures.
Today,
it's
gonna
you're,
going
to
see
it
as
a
phased
approach
from
Red
Hat.
So
first
will
be
the
base
rel
enablement,
which
you'll
see
that
is
planned
to
on
both
of
those
happen
and
later
this
year,
and
so
from
there
once
we
have
the
OS
x
and
then
you
know,
challenges
they're
obviously
involve
like
hardware.