From YouTube: OpenShift Commons Briefing #91: CRI-O and Kubernetes Deep Dive with Mrunal Patel and Dan Walsh
Description
CRI-O provides an integration path between OCI conformant runtimes and the kubelet. Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes. In this Briefing, Red Hat’s Dan Walsh and Mrunal Patel give a deep dive into CRI-O and discuss implications of this initiative and what to expect in future releases.
A
Hello everybody, and welcome again to another OpenShift Commons briefing. This time we have Mrunal Patel and Dan Walsh from Red Hat, and they're going to talk to us about (I'm probably pronouncing the acronym wrong, I say "creo") CRI-O, an OCI-based Kubernetes runtime. That has been a really hot and very topical thing around containers and that world, and rather than blather on myself, I'm going to let Mrunal introduce himself and Dan and just kick this off. The way the session works is: you can ask questions in the chat, and we'll try to answer them.
C
My name is Dan Walsh. I run the container team at Red Hat; that's basically the team that does everything container-related underneath the Kubernetes level, so we're basically at the operating-system level. My team manages and fixes things like Docker and the underlying storage, pretty much everything that happens at the host that something like Kubernetes or a regular container runtime needs to do. So, starting off: we pronounce it "cry-oh."
It really stands for Container Runtime Interface using OCI runtimes. Rather than support a whole bunch of different runtimes (if I'm using Docker, go down this tool chain; if I'm using something else, go down that one), what Kubernetes decided to do was specify a runtime interface, and that's what the CRI stands for: Container Runtime Interface. Kubernetes defined the interfaces it needs when it goes to run a container, and it will call any container runtime that implements those interfaces.

So after that happened, we at Red Hat kicked off a little side project to see if we could implement a really simplified container runtime whose main goal was totally tied to Kubernetes. We looked at what Docker was doing and what rkt was doing, and they always had sort of conflicting goals: this one supports a CLI, that one supports something else, but they weren't totally dedicated to Kubernetes. What we really wanted to say is: we're going to build a runtime whose only job in the world is to satisfy Kubernetes requests.
C
So if Kubernetes changes the CRI, we implement it on top of our tool. If other orchestration tools come in and want to use the CRI-O daemon, they have to talk to us via the CRI, and we won't add any interfaces to our daemon that aren't specifically specified by Kubernetes. Lastly, every pull request that comes in to CRI-O will not get merged unless we can fully pass the entire Kubernetes test suite.
C
So we're going to talk a little bit about the components that make up CRI-O. I actually wrote a long blog on opensource.com about the evolution of containers; you can go look it up. In it we talked about breaking things apart. If you look at the tradition of what Docker has done, they basically set up one big daemon and built all the technology into that one daemon.
C
So if you go out and you want to run a container, you talk to the daemon and ask it to run the container for you. That daemon goes out, pulls the container image from somewhere, stores it on disk, does some management of the storage, and then launches a process to run your container underneath it. But everything has to go through that daemon, and the daemon becomes a central point of control.
C
What we wanted to do is break that daemon apart into core components, and so we broke out a few different ones. The original one that most people have heard of is the OCI specification, which allowed us to get down to a core, low-level container runtime. A couple of examples are runc and Clear Containers, and really what they are is tools for just running containers: you give me a root filesystem and a JSON config, and I'll run a container on top of it. That's what the OCI specification did.

But we also needed two other things in order to run containers for Kubernetes. We needed storage: a way to take an image and actually put it onto disk. We originally started with what we had done for Docker, building up Docker's storage, and pulled that out into a separate package stored at github.com/containers/storage. We started developing technologies under there, and what we wanted to do was move all the locking and all the controls out of the centralized daemon and down onto disk; that's what github.com/containers/storage is. The second part of running containers is: how do I pull an image?
C
How do I pull it back and forth between container registries and my host? We broke that out into another separate Go library called github.com/containers/image. We have engineers and people contributing different ways of moving images around, so images can move back and forth between container registries and Docker daemons or other types of storage; you can move into containers/storage, or even into plain directories.
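As a concrete sketch of what those containers/image transports make possible, the skopeo tool (a CLI built on that same library) can copy an image between a registry and a plain directory with no daemon involved. The image name here is only an example:

```shell
# Copy an image from a registry straight into a directory on disk,
# with no daemon involved (image name is only an example).
mkdir -p /tmp/redis
skopeo copy docker://docker.io/library/redis:latest dir:/tmp/redis

# Inspect image metadata directly against the registry.
skopeo inspect docker://docker.io/library/redis:latest
```

The same transport syntax covers the other backends Dan mentions, such as a Docker daemon or containers/storage.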
C
So we built these two different libraries, and along with the OCI specification we had all the building blocks we needed; we basically wrapped them with a daemon. CRI-O takes full advantage of containers/storage for storing its data, containers/image for pulling images from different registries and different locations, and then we use runc (we support using runc) to run the containers underneath us, which is exactly the same thing that the Docker daemon does.
C
If you looked at the Docker daemon now, you'd see that all containers are actually being launched under runc. But within our CRI-O effort we've also worked with Clear Containers, which is an Intel-sponsored effort. Clear Containers runs containers inside of virtual machines, lightweight VMs, and we've been working hand-in-hand with Intel to make sure that CRI-O works well with multiple different container runtimes. So right now we do runc and Clear Containers, and others have looked into potentially using it. So, next slide. Yep.
B
The remaining components are the OCI runtime tools. Dan mentioned that we need a spec file, a config.json file, for running runc, and there's a generate library in the OCI runtime-tools repo that we use to generate this config. The advantage is that it's used by all the other projects and it has all the latest bug fixes, so whenever there are any changes to the spec, we get the fixes in the generate library. And then there's networking.
B
We went with CNI, which has kind of become the container networking standard, and we use CNI to hook up networking for the pods in CRI-O. It's been tested with pretty much all the different plugins: flannel, Weave, OpenShift SDN, and a few others. And the final, and one of the most important, bits there is conmon, given the way the OCI runtime is set up.
B
So this is what a pod looks like when using CRI-O. You have an infra container; the infra container is optional, and it's what holds the IPC, net, and PID namespaces that are shared by the other containers in the pod. We've made the infra container pluggable, so you can select whatever you want: you can take the default Kubernetes pause container, or potentially replace it with a systemd container, since systemd can take care of reaping zombies.
B
Inside
of
your
other
containers
in
the
pod
and
running
on
top
of
each
container
is
Kanban.
The
Kanban
I
mean
even
though
we
launched
many
instances
of
Kanban
Kanban
is
written
and
see
for
like
efficiency,
and
it's
it's
very
efficient
in
terms
of
CPU
and
memory
usage.
It
has
just
the
minimum
bits
required
to
satisfy
requirements
for
logging
in
monitoring.
B
So
this
is
what
it
looks
like
with
clear
containers.
They
have
an
additional
stim
process
and
an
agent
so
clear
containers
are
actually
based
on
VMs.
So,
instead
of
launching
pure
Linux
containers,
they
are
using
via
VM
as
the
pod
container
and
inside
that
they
then
spawn
other
containers
and
for
the
latest
version,
3
door
that
they
are
working
on.
They
are
also
switching
to
using
lip
container,
which
is
the
same
library
as
grants
II
or
for
starting
the
containers
inside
the
VM.
B
So
this
is
what
the
overall
architecture
looks
like
the
on
the
left.
You
have
the
cubelet
and
cubelet
is
talking
to
the
cryo
daemon
using
GRP,
see
now
the
cubelets
Eri
defines
two
interfaces:
one
is
the
emit
service
and
the
runtime
service,
so
anyone
that
wants
to
implement
the
CRI
needs
to
implement
both
of
these
services
and
cryo
implements
them
both
and
for
the
image
service.
B
We
use
our
containers,
image
library
that
dan
mentioned
earlier
and
for
the
runtime
service
we
use
the
other
components
like
the
OCI,
generate
library,
TNI
for
networking
and
storage
for
setting
up
the
root
of
s.
So
what
storage
really
is
doing
is
when
you
say,
hey
I
want
a
Redis
based
container
it.
B
So
what
is
the
status
of
the
project?
So
all
the
kubernetes
node
conformance
tests
are
passing
and
they
are
run
on
each
pull
request
to
Claro.
We
merge
the
pr
only
if
the
test
pass
of
the
Nano
regressions
we
will.
Additionally,
we
also
pass
all
the
end-to-end
tests
and
kubernetes.
All
the
CRI
API
is
have
been
implemented.
If
you
wanna,
try
out
cryo
like
I
would
encourage
you
to
go
to
q1.
It
is
by
example.com
website
and
try
out
all
the
examples
and
they
should
work.
B
We
released
the
beta
version
couple
of
weeks
back
the
ones
out
of
beta
and
we
are
working
on
some
bug
fixes
and
after
that,
we'll
be
ready
to
release
the
final
one
dot,
o
version
we
have
maintained,
errs
and
contributors
from
red
heart,
Intel,
Susie
and
many
other
companies
like
I.
Think
at
this
point
we
are
at
over
50
contributors
on
github,
then
for
for
easy
setup
of
cryo,
we
have
integration
with
qadian,
and
so
we
have
a
few
repos
under
github
cry.
Orc,
where
you
can
check
out
check
out.
B
One
of
the
reports
and
I
will
help
you
set
up
QA,
DM
and
cry
mini
cube.
Integration
is
in
progress
and
we
also
support
mixed
workloads.
No
mixed
workloads
means
that,
under
the
same
kubernetes
deployment,
you
can
have
some
pods
that
are
running
under
run
C
and
some
others
that
are
running
under
clear
containers,
and
we
do
that
using
annotations
today.
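A sketch of what that per-pod selection can look like. The annotation key below comes from CRI-O's trusted/untrusted-workload mechanism of this era; treat both the key and the manifest as illustrative rather than a stable API:

```shell
# Hypothetical pod manifest: the annotation asks CRI-O to treat this pod
# as untrusted, i.e. run it under the VM-based runtime (Clear Containers)
# instead of runc. Key and behavior are illustrative.
cat > untrusted-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
  annotations:
    io.kubernetes.cri-o.TrustedSandbox: "false"
spec:
  containers:
  - name: app
    image: redis
EOF
kubectl create -f untrusted-pod.yaml
```

Pods without the annotation would keep running under the default runc runtime on the same node.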
C
I would say, from an OpenShift Origin point of view: one of the reasons Intel is working very hard is that they really want Clear Containers to run underneath OpenShift. So from an OpenShift Origin point of view, I definitely see this happening once we move to CRI-O as a back end. Whether or not Red Hat will end up supporting it on top of RHEL is always a question; that's something we're talking about a lot internally right now.
B
And
there
are
some
additional
challenges
and,
in
the
cube
world
on
how
to
manage
two
different
kinds
of
time,
so
that
those
need
to
be
worked
out,
whether
they
should
be
allowed
on
the
same
node
or
not
how?
How
should
they
be
scheduled?
How
are
the
resources
calculated,
so
this
is
like
a
proving
ground.
So
if
people
want
to
try
out
those
ideas,
they
can
use
cryo
and
hopefully
will
make
its
way
to
Cuba
at
some
point,
one.
C
One thing to think about when you're looking at Clear Containers is that you really have to run them on physical hardware, because they use virtualization, and most of the cloud vendors block virtualization inside of virtualization. Clear Containers is a great solution (I think the best solution) for running containers inside of VMs, and from a security point of view it's awesome for giving you isolation. But the big hindrance, as I said, is that they have to run on physical machines.
B
And you can notice that both of these are running under the same slice, so they're getting charged to the pod slice here, which is the kubepods best-effort pod something-something slice. We can also take a look at the process tree of conmon to see how the processes are set up: conmon is the parent of the httpd inside the container, and it's monitoring and supporting logging and attach and all those features.
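These observations can be reproduced on a CRI-O node with ordinary systemd and procps tools; the grep pattern and flags are only illustrative:

```shell
# Show the cgroup slice hierarchy the pod landed in
# (the "kubepods" pattern is illustrative).
systemd-cgls --no-pager | grep -A 5 kubepods

# Show conmon as the parent of the containerized process
# (-o picks the oldest conmon if several are running).
pstree -p "$(pgrep -o conmon)"
```
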
B
So that covers the core features in CRI-O right now. Let's take a look at whether this works with OpenShift as well. OpenShift switched to the CRI in version 3.6, so we can actually replace the runtime under OpenShift to make it talk to CRI-O. What I have set up here is an OpenShift local cluster with CRI-O as the runtime, so let's try some OpenShift features. Let's do "oc get pods."
C
Okay, so one of the problems with this: everybody that's using Kubernetes right now on top of the Docker daemon, a lot of people like to go in and poke around to debug and figure out what's going on. Unless you have the knowledge that Mrunal has, going in and executing runc commands and trying to match what Kubernetes is doing against what's actually going on in the system is hard. Or the container runtime, for some reason, gets hung up.
C
So we decided that we wanted to build a simple interface on the back end, basically behind the CRI-O daemon. Since CRI-O is using containers/storage and containers/image, and is actually storing all its data on disk and putting its locking and state on disk, we could build a command-line interface to interact with the storage and the images and be able to look at everything.
C
So we started building a tool called kpod. It's a management tool for containers and images, and it's daemonless, so you don't even have to run the CRI-O daemon to be able to use it. If the CRI-O daemon is hung or shut down or whatever, you can still go in and look at storage and see what containers and pods are running in the environment.
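A sketch of what that daemonless inspection looks like; the exact command set tracks the project README, so treat these invocations as approximate:

```shell
# kpod reads containers/storage directly, so these work even when
# the CRI-O daemon is stopped (commands are approximate).
kpod images                   # list the images in local storage
kpod ps -a                    # list containers, running or not
kpod inspect <container-id>   # dump a container's metadata
```
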
C
So
the
other
thought
thing
we
wanted
to
do
is
we
wanted
to
make
sure
that
it
was
easy
for
users
to
transfer
the
knowledge.
So,
if
you're
going,
if
you
go
from
a
docker
back
to
kubernetes
and
you
go
to
cryo
based
one,
we
wanted
to
make
it
simple
theater
to
transfer
the
knowledge,
so
we're
really
basing
the
Kay
pots
at
least
the
initial
version.
The
K
pod
is
matching
the
dhaka
CLI.
So
if
we
go
to,
the
next
slide
will
show
you
will
where
we
are
instead
development
that
the
K
pod
CLI.
C
So
these
are
the
some
of
the
commands
that
we've
implemented
so
far.
This
is
actually
up
one.
If
you
go
to
the
cryo
kubernetes
incubator
cryo
project
you'll,
find
on
the
readme
that
we
list
out
all
of
the
kapok
commands
that
are
currently
implemented
in
this
picture
was
taken
as
of
yesterday.
So
you
see
that
we
have
sort
of
a
lot
of
the
image
stuff
based
stuff,
that's
available
in
darker.
C
So
if
you
know
how
to
do
at
docker
images,
you'll
be
able
to
just
you
know,
switch
help,
the
dock,
a
command,
4k
plot
images
and
pretty
much
all
the
options,
and
things
like
that
remain
the
same.
So
we've
implemented
probably
about
half
to
three-quarters
of
all
the
interfaces
and
we're
4k
pod
at
this
point
to
match
the
entire
darker
CLI
suite,
obviously
we're
not
implementing
things
like
swarm
and
there
other
parts
of
it,
but
pretty
much
everything
everybody's
familiar
with
we're
implementing
in
k-pot
and
again
it's
it's,
not
demon
based.
C
So
if
you
look
down
about
half
way,
you'll
see
a
k
pod
mount
and
what
k
pod
mount
can
actually
do
is
mount
up
the
container.
So
you
can
actually
get
the
mount
point
of
where
a
container
is,
and
so
you
can
actually
start
to
fool
around
with
the
container.
You
can
actually
just
go
to
the
directory
and
look
at
what's
inside
of
the
container,
you
can
actually
copy
stuff
into
the
container
you
can
copy
stuff
out
of
the
container.
C
You
can
use
any
tool
you
want
to
interact
with
the
container,
so
you
can
use
external
tools
on
the
machine
to
manipulate
data,
so
we're
looking
to
enhance
sort
of
the
sort
of
the
docker
CLI
experience
by
taking
advantage
of
some
of
the
other
tools.
Eventually
right
now,
k
pod
is
really
about
managing
containers,
but
the
next
phase
after
we've
completed
the
dr.
CLI.
We
want
to
actually
get
into
management
of
pods,
so
you
know
how
do
I
launch
a
pod?
How
do
I
add
a
container
to
a
pod?
B
So the next step is releasing 1.0: just making sure that everything works fine and there are no bugs; we're doing some testing. I believe the 1.0 will be out in two to three weeks, and then we want to graduate out of the Kubernetes incubator; whatever the requirements are for graduation, we'll meet them. Post-1.0, we'll be changing the technique around how we version CRI-O: we'll be versioning it to match the Kubernetes version.
B
So
it's
very
simple
for
someone
to
understand
what
version
of
creo
works
with
what
version
of
Cuban.
It
is
double
okra,
1,
7
walking
with
cube
1,
7
crowd,
1,
8,
working
with
cube,
1,
8
and
so
on,
and
also
we
are
working
on
integrating
cryo
and
targeting
it
for
openshift
3.7
and
getting
it
on
to
OpenShift
online.
B
So
we
have
a
blog
on
medium.com,
slash
cryo,
where
we
have
been
blogging
quite
frequently
about
the
new
features
that
we
add
to
cryo,
and
you
can
follow
us
on
get
a
talk
to
us
on
IRC
and
take
a
look
at
the
website.
Contributions
are
welcome.
Feedback
is
welcome.
Any
help
with
testing
is
super.
Super
welcome.
C
Basically,
what's
happening
is
at
any
time,
creo
stats
a
starts
a
container,
that's
that
it
basically
puts
down
the
information
onto
disk,
so
they
so
we're
using
so
the
central
store
for
information
about
what
containers
are
running
or
being
stored
on
disk
as
well
as
you
can
also
query
front
see,
so
we
can
figure
out,
you
know,
run
see
also
can
keeps
track
of
which
containers
are
running
on
the
system.
So
yeah.
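That runc-side view can be queried directly; "runc list" is a real subcommand, though the state-root path below is an assumption about where CRI-O keeps runc state on a given host:

```shell
# Ask runc which containers it knows about. The --root path is an
# assumption; adjust it to wherever your CRI-O install keeps runc state.
sudo runc --root /run/runc list
```
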
C
If
you
look
at
one
of
the
issues
I
always
have
with
the
darker
daemon,
is
that
all
this
information
is
being
stored
and
hidden
away
inside
the
darker
daemon
and
that
therefore,
if
the
darker
demon
crashes
or
whatever
there's
a
chance
that
you
could
lose
information,
but
we've
moved
all
locking
and
the
status
information
is
actually
being
stored
on
disk,
so
we
can
interact
with
the
with
the
date
of
the
same
way
that
cryo
dead,
we're
actually
making
a
new
library
called
we're.
Calling
a
live
pod.
C
We're
working
on
that
now
by
the
wake
a
pod
in
lie.
Pod
are
not
blocking
cryo,
so
cryo
is
probably
gonna
get
released
to
1.0
before
we've
completed
all
of
Kay
pods,
but
basically
we're
looking
at
building
off
of
us,
pure
library,
so
that
others
can
interact
with
the
data
stores
and
stuff.
That
cry
was
using.
B
You modify the config and pass these arguments to the kubelet: the container runtime, the runtime endpoint socket, the CRI settings; there are like four or five things we need to pass. So that's the standard set you need to pass, and then OpenShift starts talking to CRI-O. It's really that simple.
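Those four or five kubelet arguments look roughly like this for the Kubernetes 1.7/1.8 era; the socket path is CRI-O's conventional default at the time, so treat the exact flag set as approximate:

```shell
# Point the kubelet at CRI-O instead of the Docker daemon
# (flag set and socket path are approximate for this era).
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/crio.sock \
  --image-service-endpoint=unix:///var/run/crio.sock \
  --runtime-request-timeout=10m
```
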
A
Wow. Teasing out the question around documentation for all of this stuff: documentation by blogging is wonderful to get things started, but we need real documentation on stuff like this, so I've been pushing people to write stuff for the OpenShift doc set. We'll see if we can get that in there somewhere so it's searchable. Brad is asking one more question: what are some of the types of workloads when you say mixed workloads? Okay.
C
So,
basically,
if
you
think
about
what
what
is
the
riskiest
thing
to
run
in
your
environment,
you
potentially,
if
you've
seen
any
of
my
talks
in
the
past
I've
always
talked
about
your
virtual
machine.
Separation
is
better
than
container
separation
and
there's
there's
several
reasons
for
that.
But
the
main
one
is
that
that
the
kernel
is
a
single
point
of
failure
between.
If there's an issue in the kernel, there's the potential for a breakout. So if you're going to run really dangerous workloads, workloads that require heavily privileged operations to happen inside your containers, you're probably better off putting those workloads inside of a virtual machine, so using something like Clear Containers for that would be helpful. One thing we've thought about is: how would we be able to do something like OpenShift builds?
C
So
if
you
think
about
their
people,
pulling
down
random
internet
packages
and
building
and
running
random
code,
that
might
be
something
that
we
want
to
put
inside
of
you
add
the
security
of
a
virtualization
wrapper
around
it.
But
I
mean
you,
you
know
it
goes
back
and
forth.
One
of
the
problems
with
some
like
layer
containers
is
that
you
start
to
lose.
You
start
to
get.
A
I think that was the last question, unless, Michael, that one was directed at you, around getting the docs. It's a pet peeve of mine: we tend to document by blogging, and getting it into the doc set is one thing I'm trying to move us forward on. And Mike said yes; well, he'll do both. Awesome.
C
Right now with CRI-O we tend to have a lot of people sniffing around the edges. All the big players are taking a look at CRI-O and investigating it, but we have nothing we can talk about; nothing is actually solid as far as potential adoption by large cloud providers. But again, our main goal is to satisfy Kubernetes.
A
It's
one
of
those
things
where
we
can
perhaps
have
the
t-shirt
that
says
kubernetes
not
just
for
containers
anymore.
We
all
thought
the
I
even
bought
vm's
would
go
away
and
they're
back.
So
this
is
wonderful.
Wonderful
news
for
lots
of
health
security
minded,
so
I
really
appreciate
all
the
work
you
guys
are
doing
and
all
the
collaboration
that's
going
on
across
all
over
the
different
communities
to
make
this
happen.
C
One place we're actually looking to integrate: we have another part of this project called Buildah, which is a replacement for the build side. Obviously CRI-O doesn't do anything about building containers; it's just for running, and Kubernetes doesn't build container images. So we've broken that apart and used similar tools to build a thing called Buildah. That's "buildah," as I would say it.
A
Let
me
know-
and
maybe
we
can
get
you
there
too-
and
to
share
your
use
cases.
So
that
would
be
great,
so
that's
gonna
be
December
5th
in
Austin
Texas,
along
with
coupon.
It
should
be
the
following
day.
So
spend
me
lots
of
fun
and
we
can
listen
to
Dan's.
Well,
Boston
accent
talking
some
more
about
build
up.
So
thanks
everybody
for
joining
us
today.