Description
Dan Walsh gives the keynote at the OpenShift Commons Gathering held in London, UK on January 31, 2018.
So, my name is Dan Walsh. I lead the container team at Red Hat. Basically, she talked about Kubernetes under the covers, or under the hood; well, I do what's under the hood of Kubernetes. We do everything to do with running containers at the host level. I have under my team probably about 20 or 30 engineers, spread all cross-functional. I've actually always worked on RHEL; I don't know if you know, but I've been at Red Hat for almost 17 years. Mainly in that time I've done security products, SELinux, which I'm fairly famous for, but I've been doing container technologies actually all the way back to the RHEL 5, 2005 timeframe.
So when the container revolution started a few years ago, I got picked to start looking at it at the low level, at the operating system, and I work for the OpenShift organization. But in my group there are people that work on RHEL, work on storage, work all over, so we really cut across everything. And I'm an engineer, not a manager. So anyways, on to this talk. Hopefully this will work.
Okay, hopefully I'll be running back and forth. So, as Diane said, last night at about five o'clock I got a note from management saying that we should attend this meeting after the close of the stock market at four o'clock yesterday, and I noticed that the email went to me and most of the people on my team, even though we're cross-functional. So I said, there's something going on here that I don't know anything about. But basically, Red Hat and CoreOS have just decided to join forces, and there's very little
I can answer in the way of questions; I'm not sure what this all means. I have some ideas, but you know, we haven't even really talked to the CoreOS guys yet, other than tweeting to them and saying welcome to Red Hat. But one of the things: when I've done this presentation in the past I've often talked about the contributions of CoreOS to the container environment, and now I can put their logo on the slide. So I'll show you where they've contributed greatly. Working together, awesome.
Thank you very much, man. Okay. So when I've given this presentation (I gave a version of it back at the Red Hat Summit), one of the things I like to talk about at the beginning is three letters: what do those three letters mean to you in this room? When you see something that ends in .pdf, what does it mean to you? I believe it means that you know you can look at it, right? It's a document. You see it, you know it's a document, you can view it.
What can you view it in? All the web browsers. You can view it in different tools; you can use it just about anywhere. How can you create PDFs? There are lots of tools to create PDFs, right? You can create one from your web browser, from your email, or there are tools that allow you to create special PDFs. But when you see that PDF, do you instantaneously say Adobe?
Do you think that the only way you could look at one of these is with Adobe Reader? Do you think the only way you could ever print it is with Adobe products? No, right? It's sort of a generic thing, and it's great, and it actually made Adobe stronger, because it became everywhere. It became a standard. Now, Linux.
When you see the keyword Linux, right, do you have to think of Red Hat? No, right? Linux is everywhere. It's on your cell phones, it's in your cars, it's on your routers, it's in your IoT devices. Linux is everywhere. But if there was only one company that ever provided Linux, even if it was Red Hat, I don't believe it would be as successful as it is; the point is that no one company controls it. It's a standard. People see the word Linux and they know it's an operating system that can run everywhere.
Now we get to containers. All right, we have to make containers generic. We have to allow different ways of creating them. How many people in this room know what a container is? Okay, that's good. But let me give you my definition of what a container is. A container is simply a process on a Linux system that lives with some resource constraints; in Linux we call those cgroups.
Secondly, it has some security constraints. It has things like seccomp rules; it has file ownership; it has capabilities associated with it; and if you're running on an SELinux system, it has SELinux labels on it. And thirdly, it has this concept of namespaces. Namespaces are things like the PID namespace, where you sort of get that feeling of virtualization, where it starts to feel virtual, or a network namespace, where I have my own network device.
So if I boot up a RHEL system right now, or a Fedora or an Ubuntu system, and I look at the first process that comes up on the system: if I cat out /proc/1/cgroup, guess what, PID 1 is in cgroups. If I went to /proc/1/ns, I would see that PID 1 is in a group of namespaces. If I looked at it and asked what its SELinux label is, it has an SELinux label. If I looked at its capabilities, it has capabilities.
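All of this can be checked from any shell on a Linux box. A minimal sketch, using /proc/self rather than PID 1 so it needs no special privileges:

```shell
# Every ordinary Linux process already has the three ingredients of a container:
cat /proc/self/cgroup          # the cgroups this process lives in
ls /proc/self/ns               # its namespaces (pid, net, mnt, ipc, uts, ...)
grep '^Cap' /proc/self/status  # its capability sets
```

Swap in /proc/1 (as root) to see the same thing for PID 1, exactly as described above.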
So I would argue that every process on a Linux system is in a container, and then containers just become us modifying and manipulating those fields. Right? I can modify the cgroup that it's in, I can change its resource constraints, I can change its SELinux label, I can change the namespaces. But every process on a Linux system is in a container. Now, I saw someone earlier...
There's a shirt that Red Hat puts out all the time; on the front it says "Linux is containers" and on the back it says "containers are Linux", and that's what it means. Actually, it's right here: you have the shirt on, okay. So when you look at a Linux system, everything is a container. When people come up to me and say, can I do this in a container, I say: if you can do it on Linux, you can do it in a container. So let's continue. Containers are just Linux.
Okay, this might be a US thing, but you guys know what a swear jar is? In the US, when you're raising kids, any time they swear they have to throw in some kind of coin, in the US it would be a quarter, into the swear jar. So, I've been asked by the D company not to use the D word anymore. So if I use the D word, I will throw in a quarter.
So when OpenShift or Kubernetes comes along and says, oh, you want to run a container... well, forget about OpenShift and Kubernetes for a moment. When you want to run a container, what do you want to do? What does it mean, "I want to run a container on the box"? Well, the first thing I need is a definition of what a container is.
Well, what is a container image? Because usually, as I said, all processes are containers, but really what I want to say is: I want to run the nginx container, I want to run the Fedora container, I want to run an Apache container or an application container. So what does that mean when I say that? Well, we have to have a standard that defines what those containers are. And I give them credit, we really couldn't have done it without them: Docker developed a standard for that.
They defined a standard that was basically an image format, and the image format is (this is real technical) a tarball and a JSON file. Okay? You create a rootfs, which basically looks like the / of an operating system; you tar it up; and then you get some JSON data that you associate with that tarball. The JSON data defines things like: this is the entry point to my container, these are the environment variables I want set when you run the container.
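The "tarball plus JSON" idea can be sketched with nothing but coreutils. This toy layout is purely illustrative and is not a spec-valid OCI image:

```shell
set -e
work=/tmp/toy-image; rm -rf "$work"; mkdir -p "$work/rootfs/usr/bin"
echo 'hello' > "$work/rootfs/usr/bin/run"      # the rootfs: looks like / of an OS
tar -C "$work" -cf "$work/layer.tar" rootfs    # tar it up
cat > "$work/config.json" <<'EOF'
{ "entrypoint": ["/usr/bin/run"], "env": ["PATH=/usr/bin"] }
EOF
ls "$work"                                     # a tarball and a JSON file
```

The real OCI image format adds manifests, digests and layer metadata on top, but the two pieces are exactly these.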
People have packaged software for Linux in different formats, and we wanted a single format. And because CoreOS wanted to develop a standard for it, it forced the D company to come together with the rest of us, and we formed a standards body that included companies like CoreOS, Red Hat, Microsoft, Google, IBM and about four or five others. We formed an organization called the Open Container Initiative, and the first thing it did was standardize the OCI image format. That got standardized last year, about this time.
So now I have a standard way of putting content into what I call a container image, of packaging my software in a container image. I can now take this container image and actually store it at a website. Okay, here's my application; I put it at a website. Those websites are often called container registries, but really a container registry is just a website that has a whole bunch of these OCI image format files.
So the next thing I need to figure out is: how do I get that image off of the registry and copied to my host? How do you install an application? How do you pull an application? Someone in the room, tell me: how do you get containers onto your system? Anybody? ... You owe a quarter.
Okay, and that's the only way. That's the only way to copy a tarball off of a website to the host. We're four years into this, four years into it, and that's the only way. Ain't that sad? It gets sadder. Okay, so, on pulling the image, a little history, since some of this talk goes back a few years: we decided to create a tool called skopeo, and I'm going to talk about that.
Those tarballs can get awfully big. I've heard rumors that some JBoss container images are like 1.5 gigabytes. All right, so you're pulling this thing over. Well, what happens if you just want to look at it? If you just want to look at that JSON file, you still have to copy 1.5 gigabytes to your box. And if you've got a tiny box, that ain't what I wanted;
that might blow it away, right? So we wanted to add something to "d inspect" to allow us to just pull down the JSON file and not pull down the entire image. We went to the D company and asked them to add a remote inspect, and they said: you don't need us to do it, it's just a website, go build your own. So we built a tool called skopeo, and skopeo implemented the protocol.
"Skopeo" comes from Greek and means remote viewing, and what we did is we were able to remotely view the image and figure out whether you wanted to pull it down, or whether there was an update: maybe compare it against an image you have locally, see what's on the host, and figure out whether you want to pull it. Eventually the engineer that did that for me said: well, if I'm going to do that, I might as well just implement the entire protocol for pulling the image to the host, and eventually he implemented
the push from the host as well. So we had skopeo. Skopeo is probably the most used of the open-source projects and tools I'm going to be talking about here; lots and lots of companies are using it now to move images around. We'll talk about it more at the end. But we were working with CoreOS at the time, and they were interested in basically using it to pull images into rkt and use them in rkt; they said, though, that they didn't want to use a command-line tool to do it.
They wanted a library, a Go library, to be able to do that, and so we created this thing called containers/image. So github.com/containers/image now has the entire protocol for moving images back and forth between container registries and local storage, and we've actually added a whole bunch of additional functionality. You can move images from one container registry to another container registry using containers/image. You can move container images to your host as a directory structure. You can actually move them into the Docker daemon directly. You can move them into the thing I'm about to talk about. Basically, containers/image becomes this protocol for moving images around, moving these tarballs around, and actually converting them: you can convert a v1 image into an OCI image and back and forth. It's really, really cool that we have this library now.
So the next thing: we talked about the container image as being this layered thing, right? It has two, three, four layers, and the way that you create and uncreate these things is based on a layering, or copy-on-write, file system. A copy-on-write file system is a file system where you can create a directory and write to it, and then you create another layer: you basically put some kind of storage over the original layer, untar the second tarball onto it, and then you put another layer on top. A number of layering file systems have been developed over the last few years.
There's device mapper; there's btrfs; there was AUFS, which never made it upstream; and overlay is now the most popular one. Red Hat actually developed three of those: we developed overlay, btrfs and device mapper.
Okay, and again, it's a JSON file and an exploded rootfs. So I have to have the rootfs on my disk, basically a directory that has something that looks like a root filesystem, and I have a JSON file associated with it. That JSON file defines the things that the user adds on top of the original image, and says, you know, I want to run this executable inside it.
So when you run a container, the runtime goes to the image, reads what's in its JSON, takes the user input, combines those together and creates the runc spec; well, it's not really the runc spec, it's the OCI runtime spec. So we have the OCI runtime spec, and we also have the default implementation of it, which is called runc. runc is a Go program for running containers.
Pretty much every project in the whole world right now that runs OCI containers is actually using runc. If you download the D project and run it, what's executing the container is runc. If you download rkt, it's moving towards using runc. If you download any of the products I'm going to be talking about from here on, we're all using runc. So we're using the same image format at the container registries, and we're using the same runtime on the host.
And what the runtime does is configure the kernel: it configures those three things in the kernel, the security, the resource constraints and the namespaces, to run a container. That's what runc does. We'll talk later on about other container runtimes that have been developed; because it's a standard, runc is just the default implementation, and other people have been implementing it too. Okay. Have I said anything yet about container daemons?
Everything I just talked about is about things that you can do in an individual process, right? Pulling the image, writing it to disk, putting it in storage and running it. And yet in the market, everybody's putting out daemons, and they're getting fatter and fatter and fatter. So I have a big push, I'm trying to get it trending: no big fat container daemons. I want to stop the proliferation of daemons.
If you run Kubernetes right now, or OpenShift, and I say I want to run a container, the first thing that happens is it talks to the Kubernetes daemon, the kubelet. The kubelet calls out to the Docker daemon; that's two daemons. The Docker daemon then calls out to containerd; that's three daemons. containerd then goes out and talks to runc to run the container. So there are basically four different processes in between, and when you run something you're going through all of them; anything can go wrong at any one of those steps.
Okay. So now we've talked about those four components; now let's look at Kubernetes and OpenShift. What happens when Kubernetes or OpenShift wants to run a container? Well, first, let's take a step back. Kubernetes was originally built totally around Docker.
All the code was totally embedded into the program. Along came CoreOS; they wanted rkt support. So what did they do? They wrote the biggest patch in the universe to Kubernetes, which basically did the equivalent of an if-then-else statement. They said: if I'm running rkt, do this step; otherwise, do the original code. And the Kubernetes developers said at that point: time out.
If we do this for rkt, someone else is going to come along and ask us to do the same thing. So Kubernetes turned around and said: instead of us taking other container runtimes into the tree to run underneath Kubernetes, we're going to define a protocol called CRI, the Container Runtime Interface. They said: when Kubernetes wants to run a container, it will call out to a daemon and basically say, run this for me; exec into this thing for me;
give me the stats on this. So they defined the protocol that they would speak. What happened then is that CoreOS went back and created rktlet, which was a CRI-based front end for rkt, and the D guys basically created a shim program, the dockershim, that would talk to their daemon and front-end it. All this became possible about a year, year and a half ago. So Kubernetes tells the CRI implementation that it wants to run
a container. The CRI implementation needs to know what it means to be a container, so it uses the OCI standards for running containers. Right: it needs to be able to pull that image down onto the copy-on-write filesystem, so it needs to pull an image, and then it needs to execute it. That's what happens when I run a Kubernetes container; it looks very similar to the previous slides.
So my engineers, after we had done all this work, came to me about a year and a half ago and said: why don't we build a very lightweight daemon just for running containers? And that was called CRI-O. So CRI-O is the name of a daemon that we created: a very lightweight daemon, not a big fat daemon, and that's the whole pitch. It is scoped to the Kubernetes CRI. It only supports the uses Kubernetes needs, and it uses the standard components as building blocks. Okay? Nothing more, nothing less.
Does everybody know what version of the D word Kubernetes currently supports? It supports 1.12; that's what we're shipping right now in RHEL. The problem was that Docker was updating so fast, and constantly breaking backwards compatibility, that Kubernetes finally said: that's it, we're only supporting this version. Kubernetes has just moved to 1.13, which came out about nine months ago. So we're kind of in a sticky place right now, because we can't update to the latest things the D command has, because they keep on breaking backwards
compatibility. There have been a lot of stability problems underneath Kubernetes because of this. As a matter of fact, even the Docker folks admit to it, and they're creating new products to be able to run Kubernetes, a thing called cri-containerd. So we wanted to basically say: we're going to build a lightweight container daemon that is totally dedicated to Kubernetes. CRI-O loves Kubernetes. All right, it's totally devoted; Kubernetes is everything to us. Mesosphere, she's a cute chick, all right, we kind of liked her, but we're a one-woman man.
So we don't do Mesosphere. Swarm? Not my favorite looking gal either. We're sticking with Kubernetes. The new chick on the block? Not for us; we're sticking with the old gal. All right, CRI-O is all about Kubernetes, that's it. If Kubernetes says, we need an interface that does this, we implement it in CRI-O. We implement nothing else.
So let's look at CRI-O; let's look a little deeper at CRI-O. CRI-O not only takes advantage of containers/storage and containers/image, and of the OCI image bundle and the OCI runtime, but it also has to create that JSON file on disk. So there's a part of the Open Containers Initiative, some libraries and tooling, that was built to create that JSON, to create the OCI runtime spec, and we use that OCI runtime-tools library to generate the OCI config.
The next thing we use is CNI. CNI was, again, developed by CoreOS. CoreOS developed a standard way, which everybody in the industry has sort of glommed onto, of setting up networks in a Kubernetes environment; it is the default networking standard for that, and CRI-O is using it. So when you set up your networks, we will use CNI to do it. It's been tested with different back ends, so flannel, OpenShift SDN, and all the new
networking tools that are coming out are all implementing CNI back ends. Finally, when you run containers on the box, they're just processes living on the box, so you usually need something that monitors them, that keeps track of them. A lot of times that was the D package: in the olden days, when you stopped the Docker daemon and restarted it, all the containers would go away, because there was nothing
monitoring them; it used to be the daemon that did it. So we built a very lightweight process that sits there and just watches the container. It monitors logging; it handles the TTY, so you can connect back into the TTY and out; and it basically detects things like the container dying: if PID 1 in the container dies, it finishes off the container. That's called conmon. It's written in C, very lightweight, with an incredibly small memory footprint. So this is what a pod looks like inside of CRI-O.
Does everybody know what a pod is? Okay, about half the room. So Kubernetes doesn't run containers, it runs pods. Pods are one or more containers running in the same environment: they share the same network, they share the same IPC, and they basically run together. So you see up here that they're sharing the IPC and network namespaces (PID namespaces are actually optional), and they also run in the same cgroups. It's kind of a cool idea.
Most people, you know, still think in terms of containers, but pods enable things like sidecar containers or monitoring containers. You might have your primary workload running inside one container, and then a secondary container that watches it. Some security companies are doing that now. Another idea I have is basically that a lot of the time
containers come along that need really high privileges to modify the kernel. Say an Nvidia card shows up and a container wants to load a kernel module, and then different container applications are actually going to use that special device. You might want a sidecar container that's able to load kernel modules, while the secondary container is locked down. So you have some interesting ideas with pods. And a pod in a CRI-O environment basically looks like this.
In a Kubernetes environment there's always this infra container, or pause container, that really just holds open the network namespace, and then you have container A and, optionally, container B, and then you have conmon. So every time CRI-O creates a container, it creates a pod that looks like this. Then we get up to a higher level, and this is what the whole architecture of CRI-O looks like. We have the kubelet, which is part of Kubernetes, and that talks... I'm trying to find my pointer...
gRPC, here. The gRPC in this case carries the CRI, so this is the protocol: it's gRPC, a Go-language RPC, but the protocol over it is actually the CRI, and that talks to CRI-O. Inside CRI-O we have the containers/image library that we talked about at the beginning; we also have containers/storage; and it also has CNI for setting up the network.
It has that OCI generate tooling, so we can generate the runtime spec before launching runc or some other kind of runtime, and then it launches the runtime service. It also has the image service, which is basically how we manage container storage: what images are currently on the box, whether we've pulled things, and so on. And then you have the two pods; in this case we're showing two pods running, one with two containers, so we have the pod's infra container plus the two others.
That's the previous picture. The other one is probably the most common way people run pods: they have one container running in it, along with the pod's infra container. That is the entire infrastructure; that's the entire thing that is CRI-O. So CRI-O is actually very thin, very lean. So let's talk a little bit about CRI-O status at this point. We came out with CRI-O a few months back, but first, one of the things about CRI-O:
if you want to contribute to CRI-O, that's great, but in order to get anything merged into CRI-O, it has to pass our test suite. Our test suite currently runs over five hundred Kubernetes tests against it. Our goal is that if you cannot pass the entire Kubernetes test suite, plus our OpenShift test suite, you are not going to get your patch merged; no PRs merge without that. That means that every time we get a patch in, it takes hours, like one to three hours, to actually pass the tests.
If you fail, you're out. So we shipped CRI-O 1.0 back in, I think, the November timeframe. The guys on my team wanted to have a 1.0; I wanted no part of a 1.0, okay, because it becomes a hassle to describe. But so we have 1.0. CRI-O 1.0 supports Kubernetes 1.7, and it's currently in tech preview.
So if you're running OpenShift on RHEL right now, you can actually set up CRI-O underneath your Kubernetes environment; it's in tech preview, not supported, but you can play around with it. Later we came out with 1.8; 1.8.4 is the current version. Notice we jumped from 1.0 to 1.8.4: from now on, Kubernetes and CRI-O are going to have the same release number. So if you want to run Kubernetes 1.9, you will run CRI-O 1.9. If you want to run 1.8, you will run CRI-O 1.8.
So when we get Kubernetes 1.10, can anybody in the room tell me what version of CRI-O you'll use with it? Anybody? Yeah, that's a slow crowd. So the idea is basically that we don't want there to be any confusion about it. Kubernetes 1.8 is not something that OpenShift is going to ship; OpenShift is actually skipping 1.8, except for Online. So as of right now, Kubernetes 1.8 is being shipped on OpenShift Online, and Origin right now is at 3.8: OpenShift version 3.8 supports Kubernetes 1.8, but you can't use it.
The goal for OpenShift 3.10 is to flip it: make CRI-O the default and the D word the alternative. Okay, that is scheduled, I think, for sometime in the summer. As for maintainers and contributors to the CRI-O project: Red Hat and Intel have been working very heavily on it; lately we've been getting a lot of contributions from Lyft; SUSE has been involved. Now I could probably put CoreOS up there too, since they will be involved. So those are the heavy maintainers of it.
So one of the companies out there contacted us; we had heard rumors that they were using it, and they have not given us liberty to say who they are yet. But we asked them: why haven't you told anybody you're using CRI-O in production? And this was their quote: CRI-O just works for them, so there's no reason to complain. And I think that is the perfect reason. That is the reason we built CRI-O, right? We want containers in production to get to be boring.
Okay, it just works, and that is my goal with CRI-O: to simplify, to make it as simple as possible to run containers under Kubernetes. Kubernetes and OpenShift are complex enough; we don't need to make an adventure out of running containers on the hosts. So with everything we do, you know, at Red Hat, the reason I get paid, the reason my team gets paid, is to make OpenShift successful. One of the reasons we did CRI-O was that we wanted to make OpenShift more stable running in the environment. But OpenShift actually has other features beyond just running Kubernetes.
So what else does OpenShift need? It needs the ability to build container images; it needs the ability to push container images to container registries, right? Anybody who has played with OpenShift has used Source-to-Image, or, you know, basically you want to be able to build containers as well. So this guy, Nalin Dahyabhai: one year ago this week, he and I were at DevConf, a big developer conference out in Brno, Czech Republic, and we were talking about containers/storage and containers/image.
At that point I turned to him and said: you know what I really need is the coreutils package for building containers, right? If I can build a tarball and a JSON file, I want tools to put them together. And he said: well, what should we call it? I said: well, just call it "builder", okay? So he came back and said "buildah"; you know, I happen to have a slight Boston accent. So he created buildah. Now I'm going to ruin everybody's picture of our icon right here...
The newer icon is actually more of a hard hat, but I keep that one just for the joke. Okay. So the goal with buildah was, again, looking at container technology: how do you build containers? Someone shout it out in the room: how are container images built today? Okay. Can someone name another way to build container images? S2I; and what does S2I use under the covers? It uses D build. Okay.
Here we are, four years into the container revolution, and the only way to create a tarball and a JSON file is with the D word. Don't we suck? Isn't that horrible? Right, I could tar things up; I could do that with a shell script. So what I wanted was a series of tools to be able to do that: coreutils for building container images, with a simple interface. So the buildah command actually has "buildah from", because you have to somehow specify
that I want to get a container image off a container registry and pull it. So if I wanted to build from a container image, I could do "buildah from fedora", and it creates the container; then I can mount the container onto my host. All right, and from that point on I can just interact with that mount point on the host. Segue: has anybody ever used this command?
Okay, cp; I put it in the coreutils package, so you're able to use this copy command to actually copy content into and out of containers. And how do you do it? You just copy your source directory into the container. Pretty cool. And I didn't stop there; no, wait till you see this. I created this tool called DNF, or yum for you guys; it used to be called yum, I changed the name to DNF, and now we're going to change it back to yum, because I'm schizophrenic.
All right, so I can use DNF, and, say, I added a flag to it called --installroot, and I can actually point it at a directory and install content into that directory, into the rootfs directory. But I didn't stop there. I created a tool called make. Yeah, it's a make tool; it's actually, you know, fairly popular in the C community. And I can do a "make install" with a DESTDIR pointed at a directory, and install directly into the directory.
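The make side of that is runnable with nothing special installed; a tiny sketch (the dnf line is shown commented out because --installroot needs root):

```shell
set -e
d=/tmp/destdir-demo; rm -rf "$d"; mkdir -p "$d/src"
printf 'install:\n\tmkdir -p $(DESTDIR)/usr/bin\n\ttouch $(DESTDIR)/usr/bin/hello\n' \
    > "$d/src/Makefile"
make -C "$d/src" install DESTDIR="$d/rootfs"      # install straight into a rootfs dir
# dnf -y --installroot="$d/rootfs" install httpd  # same idea, needs root
ls "$d/rootfs/usr/bin"
```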
Okay, so I have lots and lots of tools that I've built over the years to be able to, you know, move data into a directory. One of the cool things here is that when I create this container, I need to add some stuff to it; remember, I talked about that OCI JSON file. So we have "buildah config", which can set the entry point, the environment variables, all the flags to put inside the container, and then "buildah commit" to create a container image.
So it's just moving content around, really simple. But here's the really cool thing about this. When you run container images that are built with D build, what's the problem with them? They come with not just Apache, or not just nginx. There's a problem, and it's a benefit and a cost: they come with DNF inside of them. If you wanted to run a make inside of one, you have to have make in there;
You have to have GCC. When you run the container images out in the world, they're coming with all the build artifacts required to build them. So I often work with security people who say, "I need to get all that stuff out of there. I don't want a hacker getting in and having access to all these tools when he logs into a machine." So they want small images. Everybody's after small images, and yet the only way we build images right now is to stick every build tool in the universe
in there. Python gets stuck in every single container. You're running Apache; you have to have Python in there. Why? Because DNF uses Python. Do you need DNF in there? No. The way you're supposed to update container images is not going into the container and doing a DNF update or a yum update. What you're supposed to do is replace the image. So this tooling actually allows you to build a minimum image; it allows you to build container images with very minimal content. And so you say to me, "Dan, wait..."
He looked at me very strangely; give me a big handful of change. So what about D-files? Buildah supports the D-file format, and we call it `buildah build-using-dockerfile`. With `-f` it basically follows the same syntax as, you know, the D-word's build. But we're lazy, engineers are lazy, so we actually have `buildah bud`. Okay, and Anheuser-Busch has not approved the name, but we're gonna go with it for now. So you can actually build containers using the traditional method for building containers.
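A quick sketch of the command forms he names (file path and tag are illustrative):

```shell
# Long form: build from an existing Dockerfile-style file.
buildah build-using-dockerfile -f Dockerfile -t myimage .

# Short form, same thing.
buildah bud -f Dockerfile -t myimage .
```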
What about other formats? I wrote a brand-new tool called bash. Okay, I built shell scripting. The way you build containers is you can use a D-file, or you can use bash; either one of the tools. Okay, we're not going to build a Buildah file, right? There's not going to be some special language for doing this. The goal is basically to use the standard tools that you have available on a Linux system to build tarballs with JSON files.
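Putting the whole idea together, "bash as the build language" might look like this end to end (a hedged sketch, not an official example; image, package, and paths are illustrative):

```shell
#!/bin/bash
set -euo pipefail

ctr=$(buildah from fedora)
mnt=$(buildah mount "$ctr")

# Any host tool works here: dnf, make, cp, tar...
dnf install -y --installroot="$mnt" nginx
cp -r ./site "$mnt/usr/share/nginx/html"

buildah umount "$ctr"
buildah config --port 80 \
               --entrypoint '["/usr/sbin/nginx","-g","daemon off;"]' \
               "$ctr"
buildah commit "$ctr" my-nginx
```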
So if you want to specify in an Ansible playbook what you want the contents of your container to be: we are going to work with Ansible Container to replace the D-word they're currently using underneath it with Buildah. There are a lot of people looking at Buildah, and what they really want to do is actually run builds inside of Kubernetes, so they want to run distributed build systems, things like that. Buildah has a lot of features, a lot of simplification.
Currently, when people do this, they're always linking the docker socket into the container, which gives you full root access on the host as soon as you do. So if you want to run a build system in, say, a large Kubernetes environment, Buildah might be a simpler tool for running it. Buildah has some shortcomings and some positives on speed of building, but basically, if you're building containers in a production environment, this workstream is actually gonna work faster than D-word builds. So that's Buildah.
So what else does OpenShift need? Well, you need a way to debug this thing. Okay, currently, in an OpenShift environment with the d-word running, if something goes wrong on the host, what do you do? You SSH onto the box and you start running D commands. Okay, so I start doing things like: let me look to see what images are installed; let me look at what containers are running. Okay, well, in the CRI-O world there are two tools for this. One of them is called `crictl`.
I don't cover it that closely in this talk, although we're about to start shipping it. It was originally a test tool for testing CRIs, so it implements the Kubernetes protocol, and it can talk to the daemon and actually tell it stuff like, you know, "show me the pods that are running." So it covers a lot of the stuff you might want to do when diagnosing, but from outside of the container runtime.
Well, what happens if the container runtime is hung up and you want to look behind it? Well, remember I talked about the storage: all this stuff is happening on disk, and it's not tied to CRI-O. Buildah is using the same database, the same storage, that CRI-O does. So everybody is able to use it together, because I invented another thing called filesystem storage, okay, and I created a thing called file locks. You can put locks on file systems now, thanks to me.
So what we're doing is basically allowing tools to work together without requiring a big fat container daemon that controls everything; you know, everybody playing "Mother, may I? Mother, may I?" So we needed a tool that actually works underneath the covers on the backing storage, and we created a project called libpod.
Anybody ever hear of kpod? So podman actually used to be called kpod, but we had to wait forever on legal and marketing and stuff, so we came out with podman: pod manager. One of the things we wanted to do with podman was actually implement the entire d-word CLI without a big fat container daemon. So we copied the exact CLI. If you want to list the containers that are running on the system, it's `podman ps`.
If you want to run a container on the system, it's `podman run`; there's `podman exec` if you want to exec into the container, and `podman images` if you want to list the images on the system. Podman is just about to release in Fedora, and we're looking to get this out in RHEL, probably around the 3.9 timeframe, so lining up with that. But basically you can do everything you want with a container, and have that entry-level experience using podman that you traditionally get with the D command.
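The commands he names, as a sketch (the image and container names are illustrative):

```shell
podman ps                          # list running containers
podman images                      # list local images
podman run -d --name web nginx     # run a container, daemonlessly
podman exec -ti web /bin/sh        # exec into the running container
```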
So that's the goal with podman: to implement the entire stack. We're not implementing Swarm, and we're not intending to implement Compose. What we're implementing is sort of the basic tools. I don't have the list here, but we probably have about 95 percent of everything you'd ever want to do with the D command now implemented in podman.
So, we talked in the beginning about skopeo; I'm just gonna follow up. Skopeo is being used heavily with OpenShift, underneath the covers, managing container images and moving them around the environment. You can do all these cool things with it. You can inspect; remember, I talked about how its original goal was to inspect. You can copy: in this case we're copying off of a container registry and moving the image to an Atomic Registry. And, you know, you can copy directly from docker.io into a directory. You can create OCI images.
You can delete images off of container registries. Basically, this tool allows you to work with container registries, and it can also work directly with the other back ends. So if you want to copy off of a container registry and push the image directly into the docker daemon, that's supported. If you want to copy it directly into CRI-O's database, that's supported. It works with Buildah; it works with everything. Again, it's using containers/image under the covers, the same library that's being used by Buildah, podman, CRI-O, and skopeo itself, so they all can share the database.
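The skopeo operations mentioned above look roughly like this (registry names are illustrative):

```shell
# Inspect an image's metadata without pulling it.
skopeo inspect docker://docker.io/library/nginx:latest

# Copy registry-to-registry, no local daemon involved.
skopeo copy docker://docker.io/library/nginx:latest \
            docker://registry.example.com/nginx:latest

# Copy into an OCI layout directory on disk.
skopeo copy docker://docker.io/library/nginx:latest oci:/tmp/nginx:latest

# Delete an image from a registry.
skopeo delete docker://registry.example.com/nginx:old
```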
They all can share the content on the system. So skopeo, as I said, is being used all over the place. It's being used by Pivotal, a major contributor, to run their PaaS environments. We're getting contributions from strange companies, you know, companies that you don't normally think of as running containers; lots and lots of big industrial companies are now running containers in their environments, and they need to be able to manage these container images and move them around. Skopeo attempts to be the tool to do that.
So everything I talked about in this talk is listed here. Everything's open source, fully open source, and they're all up on GitHub: there's CRI-O, Buildah, skopeo. Libpod is a little different; that's where you go if you want to play with podman. We also sit on two different freenode channels, #cri-o and #podman, and we have the sites. Any questions? Everybody's taking a picture; I'm sure they want to get me in this picture. Yes, what's this about image signing?
So Red Hat and partners, inside of containers/image, developed what we call simple signing. There's a real problem in the world right now, in that nobody does a real good job of signing images. Okay, so the people in the room might have heard of Notary. Notary was the effort by docker to basically create a capability for signing images, so that people could have something like an RPM trust signature. We found that Notary was way, way too complex, okay, and we found that almost no one was using it.
So what we wanted to do was go off and create our own signing capability. We built what we call simple signing. It's basically GPG signatures. We allow you to sign images that exist on any container registry. We don't make you put up some kind of big, specific container registry or run some specific daemon. You can create image signature artifacts, and you can actually store those signatures on any web server you want, or in local files. We actually built it into the OpenShift registry.
So if you pull images off of OpenShift, we can actually do signatures on them, and it actually works pretty well. The problem with signatures right now, though, is that Kubernetes doesn't know about them. So we built it into our tools, all of our tools: podman, Buildah, CRI-O all support signatures. So you can actually configure a system that basically says, "I only trust images that come from this registry," or "I only trust images that are signed by Dan Walsh."
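A hedged sketch of what that policy can look like in containers/image's `/etc/containers/policy.json` (the registry name and key path here are made up for illustration): the default rejects everything, and only images from one registry, signed by a listed GPG key, are trusted.

```json
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker": {
      "registry.example.com": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/containers/trusted-key.gpg"
        }
      ]
    }
  }
}
```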
If you wanted to do that: not a good idea, but you know, you might want to do that, and you could set all this up. And here's what happens: Kubernetes comes down and says, "run a container." Say you come in and want to run an nginx container; the container runtime comes back and says, "wait a minute, this isn't signed by Dan Walsh," and so it's not allowed. But the protocol doesn't go back up to Kubernetes, in both the Notary case and the simple-signing case. So what does Kubernetes do?
The container runtime says, "I'm not running it," and Kubernetes says, "no, you're gonna run it," and then it says, "no, I'm not." You end up, like, fighting with a five-year-old, and there's no protocol built for it. So lately Kubernetes has started an effort called Grafeas, and Grafeas is looking at moving signatures into the Kubernetes protocol, and we're looking to get our simple signing in as the default implementation.
So we're trying to work with Google to basically say: we just need GPG-signed keys. We don't need CAs; we don't need huge infrastructures for this. We just need the same stuff that we've been using to sign RPMs forever. And hopefully we'll be able to work with Kubernetes to get simple signing up a layer into Kubernetes, so Kubernetes will know, "this node is not allowed to run images that are not signed by Dan Walsh," and therefore it won't push unsigned images out to that node. So that's that.
[Audience question.] So we have four people that need that, but in the Kubernetes world that doesn't make any sense, so those interfaces aren't necessary.
Okay, one of the big advantages, and one of the big problems, with the D-word is that it forces a client-server operation. Okay, so you have to have this big friggin' daemon sitting out there for every application, say, when you just want to run a container at boot in a systemd unit.
The systemd unit file comes up and says, "I want to just run Apache at boot time." Okay, if you put the D CLI into the systemd unit file, the container actually ends up not being a child of systemd. It ends up being a grandchild of systemd with a totally different parent, okay: it ends up being a child of the docker daemon.
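Because podman is daemonless, a systemd unit can exec the container directly, and the container process stays under systemd's own supervision. A minimal sketch (the image name and port mapping are illustrative):

```ini
[Unit]
Description=Apache httpd in a container
After=network-online.target

[Service]
ExecStart=/usr/bin/podman run --rm --name httpd -p 8080:80 fedora/httpd
ExecStop=/usr/bin/podman stop -t 10 httpd
Restart=on-failure

[Install]
WantedBy=multi-user.target
```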
So it's kind of a weird situation that we've built. Now, because they have a daemon, they can actually do remote access to the daemon over the network, and that's something we can't do with podman, so there is a different experience there. But as a security guy, I'm not really into allowing this big fat daemon that gives you full root access to my machine with no authorization; I'm a little skeptical of that. So my opinion is, as soon as you go across machines,
you really want to use something like OpenShift and Kubernetes, because they built in authorization and authentication and all that stuff that, you know, docker never built into its tools. So while we talk about differences, I'm really talking about different use cases. And docker is actually moving away from that also.
As of their conference this past spring, they actually announced they'd taken a look at CRI-O and said, you know, there are some good ideas going on there, so they took that. Remember I talked about this containerd thing? So originally you would talk to the docker daemon, and the docker daemon would do the pulling of images, would put them into its storage, and then would talk to the containerd daemon to actually launch the containers. The reason they did that is they wanted Swarm to have better performance.
If you're going through the docker daemon, your performance tends to be bad, because you're going through this layer, and it's a really complex daemon. So they wanted to basically have Swarm talk directly to containerd, so they could get rid of that bottleneck. But containerd originally didn't do anything about pulling images or storing images; that still happened in the docker daemon. After they saw us doing all this work, they actually moved that code down into the containerd daemon. One of the problems, though, is, you know, docker,
the company, is being a little schizophrenic right now, because they still want Swarm, and they want Mesosphere, and they want Kubernetes, and so they're constantly chasing after what CRI-O is doing, but they're doing it with one very big container daemon that, you know, I'm not interested in supporting. If those guys want to build daemons to support Mesosphere, there should be a separate Mesosphere container daemon that does, you know, whatever Mesosphere needs, not merging them all together into one big daemon.