Description
OpenShift Commons Gathering December 5th 2017 Austin, Texas
Dan Walsh and Mrunal Patel (Red Hat)
A
Perfect. Well, welcome back, everybody, and everybody with us on Facebook. We're going into the second part of today's OpenShift Commons Gathering, and we're going to kick it off with a state of the container ecosystem, with two of my fellow Red Hatters. Many of you have heard of them: Dan Walsh and Mrunal Patel. We're really pleased to have you all back, and we're going to try and stay on time today, so I'm going to let them get started. Thank you all.
B
Everybody hear me okay? My name is Dan Walsh. I run the container team at Red Hat. I now work in the OpenShift division; I used to work in the RHEL division up to about a month ago. One of the things at the low level of containers that I've been fairly depressed with is how little advancement we've made over the last few years in containers, mainly because people think that there's only one way to run containers.
B
You know, just one way of doing containers. So what we started working on about a year and a half ago was trying to break apart what it means to run a container: how you want to run a container, and really, what do you need to do when you want to run a container on your system? The first thing is, you have to have a definition of what a container is, of what the content of a container is, and this is actually the biggest contribution that Docker made to the ecosystem.
B
It's basically that they got everybody to standardize on this: there's one way of bundling up an image, a group of software that you're going to install in your environment. And luckily, over the last couple of years, there has been a standardization effort on that bundle, to make sure that everybody agreed to continue to use it.
B
One of the things I've always feared since I started working on containers is that we'd have a bifurcation, and it actually started to happen about three years ago: CoreOS decided they wanted to standardize on what they called the appc spec, and they wanted to standardize on an image bundle that was different from what Docker was doing. So I began to see sort of the RPM-versus-Debian thing happening all over again, where people would have to package software in different ways. Luckily, all the major players in the container world got together and worked on a standard, and that actually went 1.0 last December: the OCI image spec. So, when I want to run a container, I first have to be able to identify the container.
B
The container sits on a container registry. The funny thing is, in the competitive business of containers, the container registry is really where everybody competes, right? There are hundreds of different people doing container registries: each one of the big cloud vendors does a container registry; at Red Hat we have the OpenShift registry; there's obviously Docker.io; there's Quay from CoreOS out there. So there are lots and lots of competitors in container registries, but basically they all store the same thing: these image bundles.
B
So the next thing I need to do to run a container is to be able to pull the container image from the registry to my host. Okay, can anybody tell me how they do that? Everybody in this room is going to say docker pull. There's only one way to do that. Back up: in the whole world there's only one way to do it, docker pull. That sucks, right?
B
What is a container registry? It's a web front end, it's a web service, right? I should be able to do it with curl. I mean, I should be able to use any tool to pull these images off of a registry. So we started working a few years ago on a tool called skopeo — we're going to talk about that at the end — but skopeo ended up evolving into this thing called containers/image. We actually built a Go library called containers/image.
B
If you go to GitHub, containers/image, you'll find it; lots and lots of people contribute to it. And now we have ways of moving images from container registries to other container registries, from a container registry into local storage, different types of things. So we needed a standard way of implementing the pulling, and eventually the pushing, of images, and that's what containers/image is.
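As a rough sketch of what that enables (these exact commands aren't spoken in the talk, and the second registry name is a made-up example), skopeo, the CLI built on containers/image, can move images around without any daemon:

    # Copy an image from one registry to another; no daemon involved
    skopeo copy docker://docker.io/library/fedora:latest \
        docker://registry.example.com/mirror/fedora:latest

    # Copy from a registry into a local directory as an OCI image layout
    skopeo copy docker://docker.io/library/fedora:latest oci:/tmp/fedora:latest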
B
The next thing you need to do to run a container is actually take that bundle you pulled down using containers/image, and explode it on disk. But you have to put it on a sort of different type of disk; it's called a copy-on-write filesystem. Okay, if you think about these image bundles, they're layers, right? We have a first layer, a second layer, a third layer.
B
So these are layered filesystems: you put layer on top of layer, and eventually you get to a writable layer. It's copy-on-write, which means I can write, and I feel like I'm writing to the lower layer, but I'm actually writing to a different place. Copy-on-write filesystems are things like overlay and device mapper, and Btrfs has a version.
B
So there are lots and lots of copy-on-write filesystems, but the only copy-on-write filesystem implementation for containers was inside of Docker, so the only place to store them was inside of Docker's storage. So we decided to create a library called containers/storage. We basically took the code that was in Docker — most of it was originally written by Red Hat — pulled it out, and made it into a library, so people could start storing copy-on-write filesystems on disk themselves. So, lastly, you need a standard mechanism for running.
B
How do you define what it means to run a container? Okay, that really has to be standardized, just like the bundles: you need a standard for what it means to run a container and what it means to store the container. Luckily, there's the OCI runtime spec, okay, also 1.0. What that does is define a JSON file that basically says what's going to happen in this container: what the environment variables are, what kind of security constraints are on it, what the entry point is, what the current working directory is.
B
All those things are written inside of a JSON file, and then you have the exploded filesystem next to it, which is the rootfs. Okay, runc is the default implementation of the OCI runtime spec. Clear Containers is another implementation of the OCI runtime spec; Clear Containers happens to use KVM for isolation, while runc uses namespaces and cgroups on the local filesystem. Other people are also building OCI runtimes.
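A minimal sketch of what that JSON file looks like (abridged and illustrative — a real config.json carries many more fields, and the application path here is made up):

    # Write a minimal, illustrative OCI runtime config.json
    cat > config.json <<'EOF'
    {
      "ociVersion": "1.0.0",
      "process": {
        "terminal": false,
        "user": { "uid": 0, "gid": 0 },
        "args": [ "/usr/bin/my-app" ],
        "env": [ "PATH=/usr/bin:/bin" ],
        "cwd": "/"
      },
      "root": { "path": "rootfs" },
      "linux": {
        "namespaces": [
          { "type": "pid" },
          { "type": "mount" },
          { "type": "network" }
        ]
      }
    }
    EOF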
B
Okay, so if I have these four components, I should be able to do these things from the command line, right? I should be able to do each one of these steps from the command line without having a big daemon in the way. One thing that drives me crazy in the container world is that people keep on putting up daemons everywhere: you need a daemon for everything, everything's a client-server operation. In a lot of ways I'd just like to have it a lot simpler.
B
Basically, these are just processes on a Linux system; I should be able to just run them. So, you're at the OpenShift conference, and KubeCon is this week, so we want to talk about Kubernetes. What does Kubernetes need to do to run a container? Well, first of all, if you look back at Kubernetes, it was originally developed on top of Docker, okay: they built the entire Docker API into Kubernetes to know how to talk out to containers. And here again CoreOS actually caused some problems.
B
Good problems. CoreOS came along and said, we're going to write a huge amount of patches to Kubernetes to make Kubernetes work with rkt. And the Kubernetes maintainers basically looked at the huge amount of patches they were going to have — essentially saying, if running with Docker, do it this way; if running with rkt, do it this way — and they said, well, we can't support that kind of code.
B
So what the Kubernetes guys said at that point is: we're going to define our own runtime interface and then allow anybody to build a container runtime for that interface, and that's called the CRI, the Container Runtime Interface. So they went back to rkt and said: you guys take rkt, build a rkt daemon that implements the CRI, implements our protocol, and we will gladly talk to you at that point.
B
Docker, meanwhile, started internally to create what's called the Docker shim; the Docker shim basically puts all of the Docker calls behind a CRI. So now Kubernetes talks to this one protocol, the CRI protocol, and you can support multiple different container runtimes. So a year and a half ago, back in September, Mrunal up here, one of my best engineers, came along, and we said: let's do a skunkworks project. Let's see if we could build something using those four components that we built.
B
Let's see if we could build a little tiny daemon — not a big fat daemon, hopefully a thin daemon — that would implement those four things: the ability to store an image somewhere, pull the image, store it on disk, and then create a runc configuration, very similar to what Docker did. And that's what we call... well, actually, originally we called this something different, OCID; the press got ahold of that a bit, and it also then became "Red Hat is forking Docker." That's not what we did.
C
What is CRI-O? From the name you can see it's an OCI-based implementation of the Kubernetes CRI. So we took all the components that Dan talked about and created CRI-O. What is the scope of CRI-O? Exactly what the Kubernetes CRI needs; we don't add any more code than what the CRI needs. Nothing more, nothing less: just implement the CRI. And with each version of Kube, whenever there are changes to the CRI, we pick up those changes and implement them in CRI-O. The only supported user is Kubernetes.
C
We don't support any other daemons or any other orchestration tools, and we try to use standard components wherever possible to implement CRI-O. So, in addition to the components that we already went over, these are some other components that we use for CRI-O. The first of these is the OCI runtime tools. Dan talked about runc and how it needs a config.json; the OCI runtime tools project has a library for generating those configurations, and it's a project under Open Containers.
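The runtime-tools project also ships a small CLI built on that same library; as a rough illustration (assuming the oci-runtime-tool binary from that project is installed):

    # Emit a default OCI runtime config.json that runc can consume
    oci-runtime-tool generate --output config.json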
C
So anyone who wants to use that library is free to do so, and it keeps in sync with runc. We use it to generate the config.json for running the containers in CRI-O. And then for networking, we ended up using CNI. It has kind of become the default networking solution everywhere: all companies that provide container-based networking solutions have a CNI plugin. We have tested it with a bunch of popular plugins like flannel, weave, openshift-sdn and Calico, and all of them just work.
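A minimal sketch of a CNI network configuration, the kind of file CRI-O reads from /etc/cni/net.d (the file name, network name, and subnet here are illustrative, not from the talk):

    # Write a minimal CNI bridge configuration
    cat > /etc/cni/net.d/99-example-bridge.conf <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "example-bridge",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
    EOF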
C
And finally, last but not least, is conmon. conmon is a monitoring process: it monitors each container. It's a small, tiny binary that we wrote in C to be efficient, so it doesn't use a lot of memory or CPU. It monitors the container for exit codes, and it handles the logging: the CRI defines a format that each container runtime is expected to write the logs out in, and conmon is the component in CRI-O that does that for us. And then it also handles TTY.
C
So whenever you want interactive terminals, conmon is responsible for reading the master PTY device for the container and copying data back and forth; it serves your attach clients. And finally, it detects and reports OOM: so when you check the container status and your container went out of memory, you'll be able to see that.
C
So, let's take a look at what a pod looks like with runc. You have a pod, which is the holder of the cgroup, IPC, net and, optionally, in newer versions of Kubernetes, the PID namespaces. And then within a pod you have the infra container and then the actual application containers that are specified in the pod specification, and for each one of these containers we run conmon. conmon is small and efficient, and it uses C and shared libraries, so it doesn't have a lot of memory overhead.
C
So this is the overall architecture when using the kubelet with CRI-O. On the left you see the kubelet, and it's talking over gRPC: the CRI is basically a gRPC API, and it has two different services, the image service and the runtime service. The image service is responsible for listing the images available locally and also for pulling images. So whenever you specify some image in a pod spec, the kubelet uses the image service to make sure that the image is present locally.
C
If it's not, it makes a call to the pull API, and CRI-O implements pull using the containers/image library that we mentioned earlier. And then for the runtime service, we use the OCI generate library for generating the config.json, we use CNI for hooking up networking for the container, and finally we use the storage library for creating the root filesystem for the container.
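As a hedged illustration of those two services (crictl comes from the separate Kubernetes cri-tools project and isn't covered in the talk, and the socket path is an assumption — check your crio configuration), you can poke at both directly:

    # Point crictl at CRI-O's socket, then exercise the image service...
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock pull fedora
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock images

    # ...and the runtime service
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps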
C
So you have CRI-O running on the right, and it's launching pods depending on what the kubelet requested it to do over the CRI API. And CRI-O is Kubernetes-tested. What do we mean by that? Each pull request that goes into CRI-O passes all the Kubernetes tests; we don't ever merge a pull request if it breaks any Kubernetes test. We run more than 300 tests for each pull request before it gets merged into CRI-O.
C
So again, in the theme of "the only supported user is Kubernetes": never break Kubernetes. So what are the versions of CRI-O that are out there? The first version was 1.0, and the latest release there was 1.0.7; 1.0 corresponds to Kube 1.7. But in keeping with the theme that we want to be tied to Kubernetes, after that we jumped our versioning to match the Kube versioning.
C
So each version of CRI-O after that is easy to match up with Kube: CRI-O 1.8.x supports Kube 1.8.x. CRI-O 1.9 beta was released last week, and once Kube 1.9 is out, CRI-O 1.9 will be out as well. And what about OpenShift? So CRI-O was shipped as a tech preview in OpenShift 3.7 on RHEL. OpenShift Container Platform will be moving to 3.9 afterwards, but Origin still has a 3.8.
C
If you want to use that, it will have CRI-O on it, and we will be targeting that for OpenShift Online, to deploy CRI-O to OpenShift Online as a first step. And again, OpenShift 3.9 will have full support for CRI-O as a runtime, and our goal is for 3.10 to fully support CRI-O as the default option.
C
So that there's no confusion, we just pick the matching version, and it should work with Kube. And we have maintainers and contributors from a bunch of companies — Red Hat, Intel, SUSE — and many other contributors. I'm going to demo this tomorrow: come to my talk at KubeCon and we'll go over CRI-O in action.
B
Okay, so that's CRI-O. Pretty quick, pretty cool. Basically, traditionally Kubernetes has had a problem with Docker changing out from underneath it, so every version of Docker has broken Kubernetes. So what we wanted to do when we built CRI-O was basically say: whatever we do, we can't break Kubernetes. Kubernetes is the thing that's important here, okay? So when Docker 1.8, Docker 1.9, Docker 1.10 came out, Kubernetes always trailed behind; in fact, Kubernetes right now only supports Docker 1.12, and they're about to move up to Docker 1.13.
B
At that point, Kubernetes is basically sort of saying that might be the last version of Docker that they'll support going forward, and even Docker is moving away from Docker: they're moving to a thing called containerd, okay, and using the CRI stuff for that. So we talked about OpenShift using Kubernetes, but OpenShift does more than just use Kubernetes for running containers: it builds containers. So the second part of this is that OpenShift needs the ability to build a container image.
B
It needs the ability to push container images around the environment. So can anybody in this room tell me a way of building a container image? docker build. Can anybody tell me a second way? S2I — and what's that built on top of? docker build. Anybody else, tell me a different way. Ain't that depressing? You know what a Docker image or OCI image is: it's a tarball and a JSON file. Okay, I could build a shell script to build a tarball and a JSON file.
B
Last year, when we were talking about this stuff at DevConf, a fellow worker of mine decided that he would build a tool to demonstrate it: he started in the morning, and by the end of the day he had the tool built. He decided to call it something that makes fun of my accent, because when I told him, why don't you build me something that builds a container, why don't we just call it "builder" — it came out "buildah." And he said okay, and that's why a Boston Terrier is the dog. Okay.
B
Okay, we've actually changed the icon, but I don't have the new one. Okay, so what is buildah? buildah is a command-line tool — no big fat daemons — that builds containers. Okay, you can do a buildah from to pull down an image from a container registry; guess what it uses under the covers: containers/image. What does it build on top of? containers/storage, to unpack the image. And you just say buildah from fedora, and it gives you back the ID of a working container.
B
Since it runs on Linux, you can use anything to move content into that container: you can dnf install, you can use the copy command, you could do make install — you can do anything you want to put stuff inside this image. When you're done, you can do a buildah config to set those special environment variables, entry points, things like that, that are associated with the image.
B
Okay, and then you can do a buildah push to push it anywhere. So you can build with standard bash scripts, instead of having Dockerfile — which is a very bad version of bash, right? — be the only way to ever build a container. You can do this; you can decide when to commit, when to patch. If you want to use keys from your host, you can use them. Okay, so buildah is a pretty cool little tool, and we also support building using a Dockerfile.
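A minimal sketch of that flow (the base image, package, and registry name here are just examples, not from the talk):

    # Start a working container from a base image (pulled via containers/image)
    ctr=$(buildah from fedora)

    # Put content into it however you like
    buildah run "$ctr" -- dnf install -y httpd

    # Set image metadata such as the entry point
    buildah config --entrypoint /usr/sbin/httpd "$ctr"

    # Commit the working container to an image, then push it to a registry
    buildah commit "$ctr" my-httpd
    buildah push my-httpd docker://registry.example.com/my-httpd:latest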
B
Okay, so you can actually use a Dockerfile: we support building from a Dockerfile. But we don't like typing "build-using-dockerfile" in full, so the command is buildah bud — and I'm not responsible for that name.
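For example (the image tag here is illustrative):

    # Build from an existing Dockerfile; -t tags the resulting image
    buildah bud -t my-app .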
So you can build using a Dockerfile, and basically it will build a container image, and you can push it and do anything you want, all on top of containers/storage. And as soon as you're done building the container image, CRI-O can use it — because guess what?
B
Unlike standard Docker, where you have the big fat container daemon controlling the storage and the images, we can share storage and images between multiple different processes. So buildah can go and build a container image and have it instantaneously available to CRI-O. Okay, and we're going to talk about a couple of other tools that also use this, so we can actually share storage between multiple things. This filesystem — can you imagine, a filesystem shared between processes?
B
What a novel concept we've come up with. Okay, we're working on OpenShift, so the next version of source-to-image, hopefully by this summer, will actually use buildah under the covers instead of calling out to Docker. Right now in OpenShift Online we're actually using Docker for builds, and we're using CRI-O underneath the covers; as we move forward, we want to basically get an alternative to Docker by using buildah. And the goal with buildah is actually to make it require fewer privileges.
B
Right now it still requires the same amount of privileges to build a container, but hopefully in the future we'll be able to trim down the amount of privileges required by buildah. So what else does OpenShift need? Well, one problem with CRI-O is that it doesn't have everything. Everybody that goes into a Kubernetes environment right now, if something goes wrong, they get onto the box and they execute the Docker commands: they do docker ps to see what's running in the pods.
B
I'll refer to it as "Cape Cod" today, because that name has been rejected — but that's what we all call it, and legal has not come back with the term that we can call it. We have it as part of the libpod effort, and the naming is working its way through legal. But kpod is a tool for managing pods and containers based on the Docker CLI. We know that you guys all understand that way: to list images, I type docker images; docker ps lists containers.
B
If I want to list all containers, I do docker ps -a. All this knowledge has been built up around the Docker CLI, so we decided to build this thing called kpod and use our own special CLI. So we have kpod ps, we have kpod run, we have kpod exec, we have kpod images — so we really were very creative in what we called these things. But basically kpod is an entire Docker-CLI-type environment that executes pretty much the same commands.
B
But guess what: no big fat daemons. So when you execute kpod run, the process that's running the container is a child of the client; it's not connecting to a big fat daemon somewhere to run it in a different environment. So you can actually start to build smarter environments. And guess what: kpod shares containers/storage with CRI-O. So if you're running a CRI-O environment, you can run kpod ps and it will show you all the containers that are running in it.
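A rough sketch (the subcommands are the ones named in the talk; the flags are assumptions based on the Docker CLI they mirror):

    # List containers, including ones CRI-O created, since storage is shared
    kpod ps -a

    # List images in the shared containers/storage
    kpod images

    # Run a container directly; it runs as a child of this process, no daemon
    kpod run -it fedora bash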
B
But it's a separate process: you can launch kpod, you can launch buildah to use these things, and basically they can all share the same storage. What we want to do with kpod in the long run is actually allow it to have the full concept of pods in it, so it advances past just the Docker CLI, to the point where we can actually join containers to pods and start getting creative about what it means to be a pod.
B
So here's the grandfather of them all, skopeo. I have to mention it, especially since we have Antonio, the creator of it, here. So skopeo — this is the last tool, and I think I have seven minutes left, so I'm racing through these. skopeo might be the most popular one of our tools out there, but nobody talks about using it; a lot of people just are. skopeo is actually the original CLI that containers/image was based off of. "Skopeo" in Greek means remote viewing. So, a little history on skopeo.
B
A few years ago, we wanted to be able to go out to a container registry and actually look at the JSON associated with an image. The only way to look at the JSON associated with an image in the Docker world was to actually pull the image to your host, and then you're allowed to look at the JSON. We had a problem with that, because some of our images, frankly, are huge.
B
So pulling down half a gigabyte or a gigabyte of disk to your system, just to look at that JSON and say, wow, that's really not what I needed, and then remove it — that seemed like a waste of bandwidth. Instead, we wanted to basically go out and get just the JSON and pull that down. So we actually built a patch for Docker that basically added a "docker inspect remote" next to docker inspect, and Docker rejected it; they said, you should go off and implement that on your own.
B
They said, it's just simple web stuff, so just implement it on your own; don't be adding new patches to the Docker CLI. So Antonio here said, okay, I'll do that. The problem is, he didn't stop at that point. He said, well, if I'm going to pull down the JSON, I might as well pull down the image inside of my tool. And then: well, if I pull the image, I might as well push the image.
B
So he continued to develop this thing, and he actually built the skopeo tool, which does a really nice job of pulling and pushing images with containers/image. It can actually pull an image from one registry and push it to another registry. We now have Windows ports of this tool, and people are actually moving registries around with it. (That's my phone telling me I have a meeting, okay.) So skopeo was able to move images around, but it didn't stop there: containers/image actually got more creative.
B
containers/image supports containers/storage, so I can pull an image out of a registry and push it directly into CRI-O's storage, or buildah's storage, or kpod's storage. I can pull an image from docker.io and stick it into Docker's database. Okay, I can push to a directory; I can push OCI images and Docker images — all with the skopeo tool.
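A quick sketch of both of those uses (the image names are illustrative):

    # Look at an image's JSON on the registry without pulling the image
    skopeo inspect docker://docker.io/library/fedora:latest

    # Pull straight into local containers/storage, where CRI-O, buildah,
    # and kpod can all see it
    skopeo copy docker://docker.io/library/fedora:latest containers-storage:fedora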
A
B
Will a regular user be able to do this — anybody that talks to a container registry as a regular user? You're talking about something like exposing the Docker socket to allow it. Well, let me tell you about exposing the Docker socket to a user: just give them sudo with no password and turn off logging, okay, because you have basically given them full root on the system. So any time you let somebody talk to a container runtime like that, you're giving them full root on the system. So, yeah.
B
If you believe that you should allow your users to have full root on your system, then give it to them. There's no additional security built into CRI-O over what's built into Docker; again, the only thing CRI-O does is implement what Kubernetes wants. Whether or not we'd allow you to use kpod in the future with, say, user namespaces to do it — that would be something we might investigate in the future.
B
There is a tool called bubblewrap that actually implements some of that, and if you follow the Flatpak project, that allows you to do some stuff. But right now we're not doing anything special in CRI-O or any of this stuff that's going to not require root, so I would prefer you use sudo to set those up. Does someone else have a question?
B
On Windows — Windows engineers, or somebody, have come in and given us patches to make skopeo work on top of Windows and on top of Macs. So it's all open source. We've actually talked to Windows — Microsoft, I guess I should say — about potentially using some of this technology, but we're not doing the work, so, you know, I would love it.
C
conmon isn't like an init container; it is a small process which is the parent process of the container. And this is because of the way the OCI runtimes have implemented the separation of create and run: we need something to actually monitor the process, and that's the role that conmon plays. So you can still have your own init inside a container, but to monitor the container itself from outside, you need conmon.
B
When I run a runc container, runc actually starts PID 1 and then goes away. What happens is conmon launches runc and stays around running, basically listening to stdin and stdout from what ends up being PID 1 of the container, and it sits out there. If anybody wants to connect to it or attach to it, then it can give back control of the terminal. Therefore we can do something Docker used to not be able to do, which is restart without killing the containers.