From YouTube: TGI Kubernetes 038: Kata Containers
Description
Lots more details on this episode at https://github.com/heptio/tgik/tree/master/episodes/038
Come hang out with Kris Nova as she does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Kris talking about the things she knows. Some of this will be Kris exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
So the first thing we want to do, before we get into everybody's favorite part of the episode — which is hellos — is make sure we're good on a sanity check for all systems in place, and I want to talk about the chat here and the screen and how it's gonna be a little bit different this time. So hi everybody, looks like we got some folks in chat. Let's see what we got here. Oh, I'm scrolling all the way up. Okay, we got hi Vennett; hi George, good to see you; hey Sean; hey Bob, Nadine, Marko, always good to see you; Sebastian, hi; and hey, how's it going, Suresh; Lou Maddy, another one of our regulars; Leonardo, Joe, Eric. And it looks like we had people joining from North Carolina and from Havana — although I think last time somebody said they were from Cuba and they actually weren't from Cuba. We got somebody coming in from London.
So thanks for joining, if this is your weekend. Hey Paul, hey Jim, hi Mitch from Sydney, and hi Stephen, how's it going. There's a huge turnout today — looks like folks are excited to learn about Kata Containers. Lou Maddy wants to know when I'm hiking Everest. The answer is never, and if you want to know why, I'll tell you why offline — strong opinions there, though. Hi from India, good to see you; hi there from Tanzania, Scotland, Hamburg; hi Chris from New Zealand. Okay, definitely will be climbing in New Zealand some time.
So you should ping me offline as well. Hi from Bangalore. Okay, I gotta move forward because there's just a lot of people saying hi right now. So we're running on my Arch Linux computer today. I looked at Kata Containers, and one of the first things I noticed was that, you know, it only runs on Linux. I tried to run it on Darwin and got into some trouble along the way, and I think right now they support a couple of the more popular flavors of Linux, like Ubuntu and CentOS and some others. So of course, naturally, I'm going to try to run this thing on Arch. So I dragged my huge Linux tower into the office, into the TGIK studio, and set it up.
A
I
hooked
everything
up,
recompiled
a
bunch
of
stuff
today
to
try
to
get
everything
working
and
one
of
the
things
that
we
actually
couldn't
get
working
was
the
the
YouTube
chat,
which
is
pretty
good.
If
you
think
about
it.
Considering
all
that's
happening
right
now
on
Linux
everything
from
audio
drivers
and
video
drivers
and
doing
screen
matching
and
dealing
with
X
Server
and
doing
all
this
on
arch
and
kind
of
getting
this
all
up
and
running
I
think
we're
in
a
pretty
good
shape
for
today.
This is going to be a pretty in-depth episode. Hopefully, by the end of the episode, we're actually going to be compiling a Linux kernel and running Kata Containers with Docker on top of it — but we'll see how it goes. To start off, we're gonna do one of my favorite parts of the episode, which is to talk about all the exciting things that happened in Kubernetes this week. So we've got our handy-dandy TGIK repo that we created a while ago; we're almost up to 100 stars, so thank you, everyone, for your support.
We love it. Yeah, totally rocking 16 cores — and they're hyperthreaded cores as well, so I can actually run 32 concurrent processes at one time on the Intel i7, which I figured would be appropriate, since we're looking at what used to be called Intel Clear Containers and is now called Kata Containers. So a lot of Intel goodness happening today.
So anyway, we've got our episode 38. We put the image in the repository every week, so if folks want to use the image for anything, feel free to. We also have this readme file here, which has links for the episode. And if anybody wants to open up any pull requests throughout the episode — to share other links, or information, or just notes that would be handy for folks watching the episode on YouTube afterwards — please feel free to open up PRs. They usually get merged pretty quickly.
Let's just take a look at it. So the first section we have here is a miscellaneous section that George just added. Paul has a question here that folks, including myself, might be interested in: Kata Containers versus gVisor — pros and cons, and what's the best scenario for each? Paul, I think that's a really great question. Maybe we can do a gVisor TGIK, and then, after we do the Kata Containers one, we can try to look at the differences, form an opinion, and talk about it.
Excuse me — this happened up in Vancouver just the other day. It's a YouTube video; come check this thing out. It's going to give a more in-depth, more professionally put-together explanation of Kata Containers and why they exist in the first place, and I'll try to do my best to give you folks a rundown today when we really jump into Kata. The summit was great — I went up and gave a talk on kubicorn up in Vancouver. I'd never been to Vancouver before; it's a gorgeous city.

So next in line we have a Kata Containers session, which is another session from Vancouver — come check it out if you're interested in more. It's really cool that they had all of these up and running on YouTube already; I feel like the conference was just the other day, so a pretty quick turnaround there from the OpenStack Summit. And then we have the Kata Containers project update, which is, you know, kind of what's going on with Kata.
If you get a copy of Clear Linux up and running, it ships with Kata Containers out of the box, and then you can easily run a Kata container — or the Kata runtime, rather — with Docker. You can actually run a Docker image using Kata for the container runtime, and we're going to look at what that means later. And then hopefully I get a kernel compiled, so that we can run Kata here on Arch Linux, which will be exciting. So let's look at some —
Let's look at some of the links from the week outside of the Kata ecosystem. So the first one I wanted to bring up here is operator metering. I think it was — not last week, but the week before — that I did an episode on the Operator Framework, and then I even talked about it again in Iceland and live-coded a puffin operator on stage for folks to see kind of how Kubernetes worked. We made it puffin-themed because we were in Iceland. So anyway, operator metering is pretty cool. It's like an extension to the Operator Framework, and it helps you gain knowledge about the usage and cost of running Kubernetes-native applications, or operators. So it gives you some meta-information about these operators that you're running with the Operator Framework — pretty exciting stuff.
They're continuing to build out more things in the Operator Framework ecosystem from CoreOS — now part of Red Hat — and I'm excited to see where that's gonna go. Also, I think here in issues we had somebody bring up another one — where is it — the Operator Lifecycle Manager. So this is sort of an extension to the Operator Framework we talked about last time; that first episode really only covered the SDK. Let's cover the additional components, such as the lifecycle manager. This is a huge part of operators that we didn't even really touch on last time: how do you version, and how do you upgrade, your operators over time? This lifecycle manager aims to help you solve those problems and give you something to lean on as you grow and mature your operators and scale things like the API and the code base itself. So maybe we can do another TGIK on that in the future as well.
So thanks for the issue there. Let's go back. So next up, for those of you who haven't heard: Microsoft totally bought GitHub. I'm not going to go into any definition of what that means or what my opinions are, but it's just exciting to see that the open source space is changing so much — it's like every day you get online and you never know what you're gonna see. This totally came out of left field. I think I was in Iceland, and it was like the middle of the night, and somebody was like, "oh, by the way, I don't know if you've noticed this or not, but Microsoft just bought GitHub." I was like, you've got to be kidding me — and then I went to github.com, and sure enough, Microsoft bought GitHub.
A
So
this
one
was
really
super
exciting,
especially
for
me
as
one
of
the
organizers
of
kubernetes
sig
AWS,
which
was
Amazon
finally
announced
eks,
and
just
to
kind
of
give
folks
like
a
quick,
teaser
here.
Well,
I
guess
I
shouldn't
say
they
announced
it.
They
finally
released
it.
So
it's
GA
now
they
announced
it
at
reinvent,
which
was
I.
Think
last
November.
A
A
So
that's
really
excited
and
there's
a
couple
of
good
blogs
here
that
we
asked
folks
at
Amazon,
so
I
think
we
have.
We
have
this
one
which
is
sort
of
like
talking
about
the
community
and
the
work
they
plan
on
doing
and
upstream
moving
forward,
and
then
we
also
have
just
like
the
general
announcement.
This
is
kind
of
what
we
just
talked
about.
That
explains
it
a
little
bit
more
detail
kind
of
how
kubernetes
works.
— versus Kata Containers; perhaps that might give you some insights. Sorry I can't add the link, Suresh — if you want to open up an issue or a pull request to the readme, feel free to, and we'll have that there for folks watching on YouTube for the foreseeable future. Greetings from Norway — hey, good to see you. One viewer says they're not a fan of the current iteration of EKS: way too much involved in bringing up a single cluster. I know that they're planning on iterating on it pretty quickly, so I'm excited to see how EKS grows over the coming months, and I think that's good feedback — we can bring it up in open source and share those user experiences with the folks at Amazon who are upstream and contributing in SIG AWS, and see if we can get those experiences shared. So next we have Leonardo: EKS versus kops.
That would be great, I agree — especially because there are like 90 different ways to run kops, with public and private topologies and all the different subnets. I don't even know this right now, but I'm curious to see what EKS does: how is the network configured? How is the VPC configured? I'm kind of curious there, so I think that would make a really good episode. So yeah, looks like folks are agreeing: TGIK + EKS, yes. And Noah says he should have worn his Kata hoodie today for solidarity. I didn't know they had Kata hoodies — can I get a Kata hoodie? Can I just, like, come over this weekend and grab one? Do you just have a supply of those? How do we get free hoodies? Hoodies are the best. So anyway, back over here we have the Kubernetes blog post that Joe wrote, and this talks about the four years of Kubernetes.
So you can actually go back and Google the Kubernetes announcement in 2014, and now here we are in 2018, and Joe put together this sweet blog post that sort of explains the almighty first commit to Kubernetes and where it came from — and, surprisingly, all the folks that were involved in getting that one commit up and running. Joe is just the person who did the copy-and-paste and actually got the commit into GitHub, but it was actually a ton of work that a lot of other folks from Google had contributed to. It looks like we have a list of people here: we had Ville, Tim, Brian Grant, Dawn, Daniel — I'm sure there were more, and there were a lot of ideas that went into it. This is a pretty cool little introduction to the first four years of Kubernetes and what it's been like, so that's a good read if you want to go check it out. So now I'm flying blind here — George added these right before I came on, so we're just going to kind of click and see what happens.
A
I
know
that
seems
scary,
but
I
trust
George.
So
this
first
one
we
have
Google
groups
here
excited
to
announce
full
release
of
2.0
today,
fleet
and
Helms
arts,
okay,
so
I,
don't
know
what
fleet
is
major
features
include
game
server
fleets
as
well
as
installation
via
home
charts.
What
is
what
our
fleets
game
server?
Fleets?
I,
don't
know.
A
Maybe
this
is
just
something
that
I
don't
know
about,
but
anyway,
congratulations
to
the
the
folks
working
on
fleets
and
happy
to
see
that
your
your
starting
to
use
helm
charts
to
encapsulate
your
application
there
and
really
excited
like
this-
is
another
big
one.
Helm
was
recently
pushed
into
the
scene
CF
as
a
like
a
core
CN
CF
project.
That's
no
longer
under
kubernetes
I
know,
there's
a
blog
on
that.
I
can
add
it
add
it
afterwards
to
the
the
readme.
A
If
folks
want
to
see
it,
we
can
add
a
link
and
we'll
put
it
in
the
YouTube
as
well,
but
so
now
it's
moving
up
to
one
of
the
CN
CF
level
projects.
So
if
we
pull
up
CN
CF
tayo,
you
can
actually
see
I
think
it
used
to
be
right
here
on
the
home
page
yeah.
Oh
look
at
Helm's
right
here.
This
is
totally
great,
so
so
yeah
you
can
see,
kubernetes
is
one
of
them.
Prometheus
is
another
pretty
popular
one.
A
We
all
know
G
RPC
is
our
favorite
way
to
transmit
data
across
the
network
and
then
now
helm
recently
added
as
well.
So
congratulations
to
the
folks
at
home,
I
see
going
back,
looks
like
at
this
next
one.
Oh,
this
is
one
that
I
had
it.
So
this
is
installing
cotta
containers
on
Ubuntu,
and
this
is
where
we're
going
to
start
today.
So
I
figured
we'll
start
by
just
running
a
basic
example
of
a
container
with
cauda
we'll
do
that
on
an
Ubuntu
server
up
in
Amazon
kind
of
go
through
the
process.
A
Explore
the
runtime,
see
what
the
installation
feels
like
look
at
how
we
could
run
that
with
docker.
Look
at
how
we
could
run
that
with
kubernetes
come
back
and
then
we're
really
going
to
do
a
deep
dive
into
what
actually
going
on
in
Calcutta
is
basically
a
lightweight
virtualization
tool
that
just
plugs
in
cleanly
to
the
rest
of
the
container
ecosystem
and
I
think
this
is
going
to
be
the
best
way
to
kind
of
explore
all
this.
A
So
let's
check
up
do
a
quick
check
up
on
chat
and
then
we'll
jump
into
an
Ubuntu
server
and
get
caught
up
and
running
there.
So
here
I
can
actually
pull
this
up.
So
we
can
see
what
I'm
seeing
here
will
not
be
able
to
authenticate
Kubek
Leticia
uks
without
using
hefty
Oh
AWS
Authenticator.
That's
a
great
point.
A
Why
we're
on
the
topic
of
what's
been
going
on
in
Kate's
this
week
there
was
a
ton
of
action
on
reddit
about
the
hefty
o
Authenticator
and
how
folks
are
using
it,
especially
now
that
ek
is
eks
was
launched.
So
if
anybody
has
any
experience
there
or
has
anything
they
would
like
to
contribute
or
any
feedback
at
all
feel
free
to
share
it
with
us
here
I'd
have
to
go.
Helm v3 — let's see, a master branch. These look like markdown files, and — oh, this is ancient, okay, so this is kubernetes/helm. This is from back in the day. I don't know if there's a Helm 3 repo yet. I do remember I had asked this question of one of the folks at Microsoft, and they said there is work going on somewhere, so I'm sure it's open source; I'm sure it's just a matter of tracking it down. If anybody knows, feel free to share a link, and if not, we can look it up afterwards, or I'll shoot an email off to the folks in sig-apps and see what they have to say about Helm 3. So Maddy says: "hey Kris, did you hear when Helm 3.x will be out? I was listening to Software Engineering Daily and 3.x sounds pretty sweet." No idea. I think it's probably worthwhile — Maddy, I'm gonna appoint you; you're one of our regulars here, so I'm gonna pick on you. If you want to kick an email off and just say that we brought it up on TGIK — you can send it to sig-apps — I bet we can figure out if and where there's a repository, release timelines, when we can expect to use it, when it'll be in alpha, and get all that information, and we can put that back in the TGIK repo as well.
Let's check out Helm 3 once it's out. I love how we can use GitHub for all this stuff now — it makes it a lot easier to track ideas here in TGIK. Okay, so let's go back and rewind all the way back to our readme, and let's jump into our EC2 console and switch over to my dev user. We should have an instance — I spun it up beforehand. I also have a Kubernetes cluster up and running using kubicorn, in case we need a cluster, but we should be able to grab this IP address here. And this is cool, because I actually have some really sweet transitions for today. We're not actually going to be moving things onto and off of my screen — I'm able to move around on all my screen real estate here, wave my hand magically, and poof, we're in my terminal with a cool little fade effect. I can jump between my IDE, go back to my terminal, or go back into Chrome — it's all pretty cool. And then here on the screen I have all of my windows separated, so I can look at stuff differently and map that to what you folks at home are seeing along the way. So real quick, let's get chat up here in case anybody has any questions, and let's SSH in.
A
And
then,
as
screen
size,
good
that
looks
kind
of
small
for
me,
I
can
bump
it
up
a
little
bit,
but
let's
see
what
what
you
folks
say
so
I'll
give
you
about.
You
know
20
seconds
or
so
to
say.
If
you
would
like
to
see.
I'd
bet
your
screen
size.
Okay,
so
we
can
SSH
in
awesome.
A
So
we're
we're
bun
2
at
our
private
IP
address
here
and
we're
gonna
go
back
to
Chrome,
which
I
think
is
for
and
we're
gonna
pull
up
the
documentation
on
getting
this
up
in
front,
even
ubuntu,
so
joe
says,
screen
size
plus
plus
so
real
quick
before
we
jump
back
in.
Let's
get
the
screen
size
fixed,
so
I
can
come
here.
Bam,
Bam
and
I
can
zoom
in
a
little
bit.
A
There
and
let's
pull
it
up
a
little
bit,
damn
that
looks
good
to
me.
Let's
see
what
you
guys
have
to
say
about
that
one,
it's
a
little
bit
bigger
now
and
we
can
resize
it
again
if
we
need
to
because
we're
running,
Linux
and
OBS
allows
you
to
pull
everything
apart
into
their
own
individual
frames,
which
is
kind
of
nice,
ok,
cool.
A
So
let's
go
back
to
Chrome
and
let's
check
out
installing
Cotta
containers
on
Ubuntu
and
then
along
the
way
will
be
natural
teaching
elements
where
I
can
kind
of
say,
like
I
met
with
Ann
twice
this
week
we
talked
about
kata
I
was
up
at
the
OpenStack
summit,
I
learned
about
it,
some
up
there
and
we'll
kind
of
go
through
and
we'll
explain.
You
know
why
it
exists
and
how
it's
different
than
like
a
traditional
container
runtime.
A
But
first,
let's
just
look
at
getting
this
up
and
running
and
buntz
you
and
I've
never
done
this
before
so
we're
just
gonna
totally
go
through
this
together.
This
step
is
only
required
in
case
docker
is
not
installed
on
the
system.
So,
let's
see
what's
going
on
here,
where
they
wanted
to
apt-get,
install
app
transport,
HTTPS,
CA
certificates,
w
get
some
others
and
then
so.
I
think
this
is
just
getting
darker
up
and
running
and
stuff
in
our
terminal.
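Pieced together from what's on screen, that prerequisite step looks roughly like the following. This is a sketch, not the authoritative guide: the exact package list is approximate, and the GPG key URL is the standard Docker one of the time.

```shell
# Packages needed to talk to an HTTPS apt repository
# (approximate list, as read from the install guide):
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl wget

# Add Docker's GPG key so apt will trust packages from its repository:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```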
What's next? We want to curl — over TLS — to download Docker's GPG key, and we want to sudo apt-key add it. So this is just going to add the GPG key to the apt package manager on Ubuntu so that we can install from the rest of the repositories and move forward. So let's copy that — I'm getting pretty good at the whole switching-back-and-forth thing. Oh, I see what's going on: whenever I do my transition hotkeys, the keystrokes are going into my terminal here — okay, that's where the 41 keeps coming from — so we can just do that manually. Now let's go back to Chrome, and we're gonna add our repositories. Let's see what we have here: deb [arch=amd64] https://download.docker.com/linux/ubuntu — we're gonna get the name of our release, and we're gonna pull the stable version. So we can copy that, come back to our terminal here without using our hotkeys, and paste that in.
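As a small sketch of what that sources line is built from: the release codename comes from `lsb_release -cs` and the rest is fixed. The `arch=amd64` token is the one from the guide (and, as we're about to find out, the part that trips us up on this box).

```shell
# build_docker_repo_line CODENAME
# Emits the apt sources.list entry for Docker's stable channel, in the
# shape the install guide constructs with $(lsb_release -cs).
build_docker_repo_line() {
    printf 'deb [arch=amd64] https://download.docker.com/linux/ubuntu %s stable\n' "$1"
}

# On the machine itself you would use the real codename:
#   build_docker_repo_line "$(lsb_release -cs)" | sudo tee -a /etc/apt/sources.list
build_docker_repo_line xenial
# → deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
```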
Sorry — I actually cut over to the title screen there, but we're back. All I did was try to run an apt-get update, and it looks like we have a malformed /etc/apt/sources.list. So we're gonna jump into our text editor of choice that we have already installed on the system — nano, because I don't know how to exit vi — and see what's going on with that /etc/apt/sources.list. So: deb, arch=..., download.docker.com, stable. How is this wrong? arch equals architecture, I think; we can fix that there — that should do it. Permission denied — let's open that again and scroll down here. I'm actually a whiz in nano — don't tell anybody this, but I can actually fly through this program pretty quick. So I think we fix the arch token for Ubuntu and see how this looks, and if not, I might just cheat and do a plain apt-get install of Docker. We'll see how that works.
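For reference, the line we end up wanting in /etc/apt/sources.list — after dropping the stray arch token, and assuming a 16.04 "xenial" box, which is a guess on my part — looks like this:

```
deb https://download.docker.com/linux/ubuntu xenial stable
```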
That works the first time, yep. Oh — George says to just remove the arch= part, and I had just added that to it. Now let's see if this will work: "docker-ce has no installation candidate." So let's go with George's recommendation, which is to remove the arch= token here on line 57, save that, and now we'll do an apt-get update, and then we can probably do an apt-get install docker-ce. So real quick, while that's installing — there we go — I'm gonna check up on our chat and see what's going on here. Yeah, George says "remove the arch" — joke taken. And Eric points out I have a typo again: "ubuntu." Okay, anyway, we're good.
A
We
got
docker
installed,
let's
see
what's
going
on
here,
yeah,
so
there's
our
docker
command
here,
I'll
run
it
again
so
to
keep
on
the
screen
a
little
bit
longer
so
yeah.
Now
that
we
have
docker
installed,
let's
go
back
to
Chrome
here
and
let's
see,
what's
next
for
more
information
on
stalling
docker,
please
go
to
the
docker
guide
and
now
install
the
kata
containers
components.
So
let's
do
we're
gonna
sell
something
out
here.
A
So
you
know
shell,
echo,
Deb,
okay,
so
we're
just
adding
our
kata
to
the
ubuntu
repositories
list
here,
let's
copy
that
go
back
to
our
terminal
and
give
that
a
run
back
to
Chrome
and
we're
gonna
curl
something
openSUSE,
org
repositories,
home
kata
containers
and
then
we're
gonna.
Add
the
key
okay.
So
now
we're
adding
the
key
for
kata.
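The Kata repository line follows the same pattern as the Docker one. A sketch — the openSUSE Build Service path here is as shown in the guide of the era, keyed by the Ubuntu release number from `lsb_release -rs`, so treat the exact path as an assumption:

```shell
# build_kata_repo_line RELEASE
# Emits the apt entry for the Kata Containers OBS repository, keyed by
# the Ubuntu release number (e.g. "16.04" from lsb_release -rs).
build_kata_repo_line() {
    printf 'deb http://download.opensuse.org/repositories/home:/katacontainers:/release/xUbuntu_%s/ /\n' "$1"
}

build_kata_repo_line 16.04
```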
There we go — that went in. Back to Chrome — sorry, folks, we just want to get all of this stuff up and running, and then we can run a container in Kata. So we're gonna do an update again, and then we want to install — okay, so here is where we should break away. We have the kata-runtime, the kata-proxy, and the kata-shim, and I think if we go to github.com/kata-containers — we'll go to the documentation first, but we can actually come in and look around. Let's see — Eric says a sudo kata-runtime kata-check will tell us the end result on hardware support. Yeah, we're gonna do a kata-check here after we get the runtime up and running. So real quick, let's go back to our documentation, grab this install command, copy that, go back to our terminal, and do our apt-get update.
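The install-and-verify step we're walking through comes down to something like this. Component package names are as listed in the guide; `kata-check`'s output depends entirely on the host's virtualization support, so this is a sketch rather than a guaranteed recipe:

```shell
sudo apt-get update
sudo apt-get install -y kata-runtime kata-proxy kata-shim

# Verify the host can actually run Kata (KVM availability,
# CPU virtualization flags, and so on):
sudo kata-runtime kata-check
```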
So I got this up and running on my Arch Linux computer already, and I really didn't know what I was getting into when I was like, okay, Kata Containers — how are these things the same, and how are they different, from Docker containers? I wanted to run some CLI tool; I just wanted to see what this thing looks like and start to get a feel for it. And the second I ran kata-runtime I was like, oh, I get it now: it's literally just a replacement for the Docker runtime that the container will run on top of as you start to run a container. What's cool about Kata, though, is that it actually uses its own kernel. So, in order for us to build this on my Arch Linux system, we're gonna have to build a Linux kernel, and then we're gonna use the small, lightweight virtualization that the Kata runtime will run the container inside of, and we're actually gonna try to plug that into Docker as a runtime.
"Does the Kata shim process serve the same purpose as the Docker shim?" I do not know — I bet Eric could probably answer that for us, and if not, we'll jump into the repo and see what it looks like; we can actually check out the code too. I wanted to look at the shim code as well — we all know I love poking around random Go repositories and seeing what's going on. So anyway, without further ado: kata-runtime. Normally a container makes syscalls against the kernel's APIs, and you can build whatever operating system userland you want and run that container on any kernel — well, any Linux kernel, rather; we're still working on the FreeBSD kernel — whereas with Kata Containers you actually get a custom kernel that you build on your system, and Kata is going to load that kernel and actually get it up and running.
So we should be able to run a Docker image using Kata as our runtime, using the Docker CLI tool. That's why we needed both Docker and the Kata runtime installed on the system — so that we can use these tools together — and then, after we demonstrate that we have this running, we can look at and talk about what it's going to take to run it in Kubernetes. So folks — Eric says: yeah, the shim is just there to represent the process running inside the VM, so that when we use Docker it can forward stdio and send signals, etc.; Docker, or a CRI, would just see this process. Okay, thanks for the clarification there, Eric. So let's jump back into our documentation here. Step 3 says: configure Docker to use Kata Containers by default with the following commands. This would be cool, because — like I said, I haven't done this before.
A
If
docker
is
going
to
be
using
kata
containers
by
default,
I
wonder
what
the
cubelet
would
do
and
how
the
cubelet
actually
interacts
with
Coleen
docker
I,
wonder
if
it
execs
it
out
or
if
it
actually
like,
has
its
own
copy
of
the
docker
components
compiled
into
it.
If
it
just
execs
it
out.
This
might
be
a
really
clever
way
where
we
could
hack
kubernetes
by
changing
our
nodes
to
actually
run
the
kata
runtime.
A
The
cubelet
would
be
blind
to
that
and
then
kubernetes
would
still
work
and
we
would
have
an
extra
layer
of
virtualization
for
all
of
our
containers
running
in
kubernetes.
If
we
could
just
come
in
and
do
this
on,
one
of
our
kubernetes
knows
so
curious
there.
Maybe
we
can
check
out
and
see
how
the
cube
load
actually
interacts
with
docker
downstream.
A
So
I
think
we
can
probably
skip
this
for
now.
I,
don't
think
we
really
need
to
make
this
our
default
runtime,
which,
if
you
look
here
just
says,
we're
gonna,
do
dr.
Damon,
add
runtime
caught
a
runtime,
equals
user
been
caught
a
runtime
and
then
we
set
our
default
equals
to
utter
utter
runtime,
I'm
gonna
explicitly
call
out
the
the
runtime
flag
every
time
you
use
it
to
sort
of
reiterate
the
fact
that
we
actually
are
using
a
different
runtime,
but
still
using
familiar
docker
commands
along
the
way.
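As an aside: instead of editing the dockerd flags, the same registration can be done declaratively in /etc/docker/daemon.json. A minimal sketch — assuming kata-runtime landed in /usr/bin, and deliberately not setting a default runtime, so plain docker run keeps using runc:

```json
{
  "runtimes": {
    "kata-runtime": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
```

With that in place, `docker run --runtime kata-runtime …` selects Kata per-container.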
A
So
I
remember
seeing
maybe
here
in
the
install
guide,
let's
open
up
a
new
tab,
there
was
a
busy
box
command.
We
can
totally
build
it
from
scratch
if
we
want
and
but
it'd
be
nice.
If
we
could
find
the
one
that
they
have
in
there
commentation,
so
we
had
sort
of
developer
guide
yeah.
This
is
the
one
we
want,
so
this
is
what
we're
going
to
be
looking
at
later,
once
we
try
to
compile
it
on
Arch
Linux
scrolling
all
the
way
down.
So
this
is
the
whole
building
your
own
kernel
component.
Yes — Eric, who I believe is one of the Kata experts here on the session with us, says it doesn't end up working that way: we end up having to use a CRI shim, like the Docker shim, to use Kata; i.e., CRI-O and containerd make use of an OCI-compliant runtime like Kata or runc. So real quick, that brings up a really great point, which is: what is OCI, and why is it relevant to this entire conversation? So we can pull up the Open Container Initiative. The OCI defines two things — a runtime spec and an image spec — and this is sort of a definition of what it takes to be a conformant container runtime. It's put together by a lot of folks in the community. Kata is OCI-compliant, and the spec just defines what behavior a program needs to satisfy in order to be considered an OCI-compliant container runtime.
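To make that concrete: an OCI runtime bundle is just a root filesystem plus a config.json describing the process to run, and any conformant runtime (runc, kata-runtime) can consume it. A heavily trimmed sketch of such a config — the real spec has many more required fields:

```json
{
  "ociVersion": "1.0.0",
  "process": {
    "cwd": "/",
    "args": ["sh"]
  },
  "root": {
    "path": "rootfs"
  }
}
```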
A
And
if
this
is
satisfied,
we
should
be
able
to
kind
of
pull
those
those
runtimes
out
for
various
reasons,
in
this
case
to
add
an
extra
layer
of
nested,
virtualization
and
use
those
arbitrarily,
and
then
you
know.
Of
course,
we
have
the
the
more
familiar
docker
runtime
that
docker
comes
up
with
by
default.
That
is
also
CI
conform
it.
So
this
is
just
the
spec
that
defines
that,
so
that
we
can,
as
software
engineers,
build
to
the
spec
and
Trust
other
pieces
of
our
system
to
fall
into
place
accordingly,
so
really
cool.
A
If
you
didn't
know
about
the
the
OCI
runtime
spec,
it's
a
really
powerful
tool
and
folks
like
cryo
and
kata
and
docker
really
taking
advantage
of
it.
Also
in
the
same
sense,
we
have
the
open
container
initiative,
image
spec
and
it
does
the
same
thing
except
for
instead
of
defining
what
a
runtime
is.
It
just
defines
what
a
container
images
and
how
we
would
interact
with
that,
how
we
create
one,
how
we
would
read
one
and
what
we
expect
to
be
there
get
one
up
and
running.
It looks like we have folks here — Eric says: yes, each pod in the Kubernetes case, or each container in Docker's case. So what Eric is saying is that every time we run a new container, we actually have a new instance of our unique kernel that Kata is going to use to run our container image. We're getting deep there — okay, cool. So without further ado, let's run a container with Kata. Come back here, get rid of our control commands, and let's grab and paste in this command. Let's look at this command before we run it: sudo, we're gonna run docker, we're gonna do run -ti — which means we're gonna start a new tty and run interactively — we're gonna pass in the --runtime flag and use the kata-runtime binary, which of course we just learned is OCI-compliant, and we're gonna run a busybox container and run sh. Busybox is just a really great sample container; we use it for a lot of things, especially in engineering land.
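Annotated, the command we're about to run breaks down like this (the only non-standard part is the --runtime value, which must match the name Docker knows the Kata runtime by):

```shell
# -ti                    : allocate a tty and keep stdin open (interactive)
# --runtime kata-runtime : use the Kata runtime instead of the default runc
# busybox                : the image to run
# sh                     : the command to run inside the container
sudo docker run -ti --runtime kata-runtime busybox sh
```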
It'll basically just run — whether in Kubernetes or using Docker — and it doesn't really do anything, so it's almost like a sleep command; we can define however long we want it to run for, and it's good for stuff like this. So let's see: "unknown runtime specified: kata-runtime." Is it not in our path? Do we need to define it? Let's try step three and see if telling Docker to use it by default will solve our problem of Docker recognizing the runtime — another way of doing it is, well, we're gonna get into that later, so I'll explain it a little bit then. Okay, so first, let's make a new directory, which is gonna be /etc/systemd/system/docker.service.d — okay, so we're gonna make a new unit drop-in file for systemd here.
A
Cool, let's make a new directory — so you guys see that — so yeah, we just made the new directory. Go back, and we're gonna cat some information here, so we're gonna tee kata-containers.conf, and then we just define a real basic, you know, file here. So let's grab this. Yeah, Eric — systemctl. I'm happy that you also appreciate the "system cuddle".
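The drop-in being created here, following the Kata install docs of the time, would look something like the following — the exact dockerd and kata-runtime paths are an assumption, not read off the screen:

```shell
# Create a systemd drop-in that registers the Kata runtime with dockerd
# and makes it the default runtime
sudo mkdir -p /etc/systemd/system/docker.service.d/
cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/kata-containers.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime
EOF
```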
A
Cool, so let's cat this — BAM — go back to our docs, and now we can restart the Docker daemon. So we're going to do a systemctl daemon-reload, and then we'll do a systemctl restart docker.
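Picking up the new unit file is the standard systemd two-step:

```shell
# Reload unit files, then restart Docker so it sees the new drop-in
sudo systemctl daemon-reload
sudo systemctl restart docker
```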
A
And I have a big mechanical keyboard — don't you guys, like, notice that? Yeah, but I'm like banging away on my mechanical keyboard, my sweet, happy keys — anyway, I bet we can now run — let's find our command in our bash history here — --runtime kata-runtime. I bet docker should now recognize this. Let's see what happens. Oh good, unable to find busybox locally — it's gonna pull it. Error response from daemon. Okay, so we have: OCI runtime create failed: qemu-lite is not installed on the system.
A
Okay, and I think we're gonna hit this when we look at running on Arch later — this was what I was trying to compile right before the episode, that I tweeted about. I think, if I remember as I was googling around for this, we can just apt-get install qemu-lite. So let's see what we have here: sudo apt-get install qemu-lite... qemu-lite set to manually installed, it's already the newest version. Interesting. Let's see this — not this memory, this error — one more time: docker... error response from daemon: OCI runtime create failed.
A
...as it would using any OCI compliant container runtime. And now we're actually getting Kata up and running, and we're starting to go through the process of running this container on top of our new custom kernel — and that's where qemu-lite comes into play. So let's do kata-runtime... I think it's kata-check. Okay, cool. So yeah, again, this is written in Go, so we see some familiar logs here. We have our "CPU property found", we have an error, "CPU property not found", description...
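The sanity check being run here is Kata's built-in host check:

```shell
# Ask the Kata runtime whether this host can actually run Kata containers
# (checks CPU virtualization flags, KVM kernel modules, and so on)
sudo kata-runtime kata-check
```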
A
...VT virtualization support, name=vmx, source=runtime, type=flag. And Lucas asked a question about networking in Kata, which — Eric, I'm gonna let you get to that first, and if not, we can talk about the container networking interface, CNI, and I'm curious if Kata uses that as well.
A
So it looks like we have another error here: open /sys/module/kvm_intel/parameters/nested: no such file or directory, name=kata-runtime, source=runtime. Interesting, interesting, and interesting stuff here — I was not expecting to hit errors along the way. Let's see if we can get any clues from our studio audience, and if not, I'm gonna totally just start hacking away here.
A
Steven says: yeah, this is what I was talking about before. Yep, it takes the veth inside the network — okay, so yeah, Eric is saying it does. This is going back to the networking questions — I know I'm jumping around a lot. Eric is saying Kata does take the veth, short for virtual Ethernet, inside the network namespace, and attaches qemu via macvtap. AWS does not support nested KVM virtualization...
A
...today, and that's what it's looking for. Okay, so I see what's going on here: we're trying to run Kata in a virtualized Ubuntu, and this is where, like, the kernel stuff really starts to get interesting, when we're running so many different layers of virtualization here. Steven is saying that it does not look like we're gonna be able to run a virtualized kernel in an already virtualized Amazon instance — so there's no nested virtualization support. Okay.
A
So this is why I wanted to bring my computer, and I'm, like, kind of bummed that I wasn't able to get it up and running beforehand with Arch Linux, but we can just totally move over to Arch and start hacking away if we want to. And I guess this brings up a good question — and Eric, I'm gonna pick on you a little bit here — since Amazon does not support...
A
...nested virtualization, is Kata — are Kata containers — going to be a viable option for folks who want to run, say, kubernetes up in AWS or GKE? And Stephen says: not sure about Azure, but you can run nested in GCP. Okay, so that kind of answers our question a little bit. So we can maybe try to get a GCP instance up and running and fly through this stuff really quick.
A
Eric says they have some bare metal offerings, as far as I know, and Azure does as well. Okay, so it looks like the one cloud I picked is the one that was gonna give Kata a little bit of a hard time here. And I think I can log in here — if I go to my console, hopefully we get an image up pretty quickly. I don't have a hefty GCP account. Oh my gosh, I can't believe I'm saying this, but I totally have an Azure portal.
A
portal.azure.com — sign in to Microsoft Azure — and I'll go quick through the install instructions so we don't have to jump through it, and just see if we can get a container running. Stay signed in? Yes. Cool. So I know we want to go here on the left — let's go to virtual machines. Where are those... add, virtual networks, virtual machines — and let's do create a new virtual machine.
A
So let's do an Ubuntu server. Eric says D2 instances, and Joe says: I had to run on Azure when I was doing Docker for Windows, as it needed to create a Linux VM under the covers. And then John says: before you do GCP, you need to use an image that supports nested. Okay, so we're gonna try to fire up an Ubuntu virtual machine in Azure, and then SSH into that, and see if we can't get the container runtime and get busybox up and running there.
A
You need some gcloud magic to enable VMX for instances — okay, so it looks like Steven is warning us away from using gcloud. Eric says: Azure is easy. Wow, Azure is totally dominating this one. This is the first time for me — kind of excited right now. Eric says DS4... v3, for example, works — so I think that's the size of the virtual machine that he's suggesting to run on. Sen says Kata containers supports AMD also — I mean, all major CPUs and ARM, yeah.
A
That's a great question: does Kata only run on Intel chips, or does it run across the board on AMD as well? So anyway, let's go back to getting our Ubuntu server up and running in Azure. So we can come here, we can do an Ubuntu server — you guys are gonna watch me stumble my way through the Azure portal here. So I think here on the right we pick — let's do 16.04, because it's LTS — and then I think we come down here.
A
We do create, and then let's give it a name — we'll call it tgik-kata. I'm given an SSD; username nova. We need a public key, so let's jump back in my terminal, exit, and we can cat .ssh — or, make sure I don't cat my — actually, you know, I'm going to do this in a different screen, just because I don't even want to risk — I want to make sure I'm not gonna cat my private key accidentally on YouTube here. So, id_rsa.pub, and let's grab this and copy.
A
So now, let's go back to Azure, and we can paste in our public key — BAM. So we need our Azure subscription; we're gonna create a new resource group — we'll call it the same as our virtual machine, tgik-kata — and we'll create it in East US, and we have an OK button. So that looks good. So Joe says: GCE, cloud.google.com/compute/docs — so I think what Joe is linking there is, for folks at home who want to run on GCE, how to enable nested virtualization.
A
Now, let's take our clue from our friend Eric, who said DS4 v3 is what we want to look for here. So we want DS4 — so DS12, DS4, CDS — let's see if we can search here. So DS4_: we have v2, we have Standard, and — I don't know what any of this stuff means — I don't think we have a v3, but a v2 is probably gonna be a good place to start. So let's just do Standard; it seems more standard to me.
A
Let's see what folks in chat are saying. Eric says: any of the DS sizes you use is close enough. Okay, cool. So let's do select, create this one. Let's do availability zone — I don't know what's going on here — availability set... what's an availability set? We'll do none; I hope that works. 30 gigs looks good. Virtual network, we have a subnet, public inbound ports — we definitely want SSH.
A
A
A
Okay, so while we wait for this to come up, let's jump back into our Kata documentation — and thanks, Tom, for the thumbs up — jump back into our Kata documentation and learn a little bit more about running on Ubuntu, and the requirements for the custom kernel that we would be building if we tried to run this thing locally.
A
So, let's go back to our Ubuntu — do I have that pulled up? Let's see where it was... this had installing Kata containers on Ubuntu, so I bet we can skip off, go through this stuff pretty quick when it comes time to run it. And then, yeah, of course, the last thing down here is docker run -ti busybox — we're gonna start the Bourne, or not-quite-Bourne, shell, and we don't have to specify the front-end command. So that's pretty straightforward for running on Ubuntu. Let's see what's going on in Azure here.
A
So real quick, while we're waiting for this to spin up: do you folks have any questions so far about Kata containers in general? What the use cases are, or how, like, the broader scope of things fits together — how Kata fits into docker, and how Kata could potentially fit into kubernetes moving forward? Happy to answer those as well. And somebody asked about the shim code earlier, so let's jump in and take a look at the shim code.
A
A
Nod says: what I've always loved about Kris in these sessions is that I learn from things that go wrong versus things that go right. Yeah, I mean, I think that's — you know, Joe and I talk about this a lot — one of the big, you know, values that we really like in TGIK: you know, things go wrong, we're doing some live debugging, and we get to explore and show people the way we think and how we would, you know, debug this stuff on the fly. And I really like watching other folks...
A
...do this. In my career, that's, like, how I learned Linux, that's how I learned kubernetes, that's how I learned, like, all of these broader concepts — actually seeing what people do when something goes wrong, and learning where people start poking around. So anyway, let's take a look at our shim code here and see what this thing does. "This project implements the shim for the Kata project. The goal of this component is to handle standard I/O and signals of the container process."
A
Okay, so this just looks like it's just forwarding system signals off to the container process after it starts running — which, of course, we'd be running with a nested virtual kernel, so we would need a port into whatever kernel we were running. Which — the Kata kernel, according to Anne, is a flavor of the Linux kernel, and I think — we think — it's a fork. Let's see if we can't find the source code here for this. Tom says: yeah, the whole reason why I watch it — the whole troubleshooting stuff is the best thing.
A
So, let's see what's going on in Azure. Okay — I think it's been so long since I've done this — let's go to virtual machines, where they go, and I think I can click on this one, and I think I get this public IP address. And let's go to my terminal and SSH — I think it's nova@ — paste our IP address. Yes. Clear that, and let's really quick install — I'm not going to go back and forth through the documentation.
A
...get Docker up and running. So we're gonna install our Docker dependencies, we're gonna add our GPG key, we're going to add our docker-ce repository — I'm going to copy this command and paste it over there with me, one second, BAM — and then we're gonna do an update, and then we want to do an apt-get install — I don't know why I keep saying "docker install" — an apt-get install docker-ce. So: sudo -E apt-get update.
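The sequence being pasted from the docs amounts to the standard docker-ce install for Ubuntu 16.04 — a sketch using Docker's documented repository URL, not the exact text read off the screen:

```shell
# Add Docker's GPG key and apt repository, then install docker-ce
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo -E apt-get update
sudo -E apt-get install -y docker-ce
```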
A
A
B
A
...just my last name, and what a lot of my friends call me. And then I feel like the word "nova" is usually easy to find whenever I do things in the cloud, so it's, it's like a handy string to grep for, 'cause you don't ever see it. So that's why I usually like to name things some form of "nova" or "k-nova" or something like that. Anyway, our update is done.
A
Let's do sudo -E apt-get install docker-ce — and I think we need to do capital E, which, for folks at home who don't know, the capital E is just passing whatever environment variables we have defined in our current shell up to the sudo process that we're creating when we run this command. So: environment variables will pass through is what you need to realize here. Okay.
B
A
A
If we could get a link to either the — I don't know if the cases are public on Amazon, but if they are public, if we could grab a link so that folks at home could follow along — or if we could check back maybe in a couple TGIK episodes and see what's going on and give an update there. But that's really cool that we're starting to push for that, I think. Perfect.
A
So it's probably important that we, you know, let the folks at Amazon know, like, hey, we're trying to do this, and this is important to us and to the container community here. Anyway — Nod says: sorry, my bad, I thought it had something to do with virtual image settings, kind of like it is for the Ubuntu AMI. Nope, Nod — when I was creating my virtual machine in Azure, it asked for either a name or a username, and just from working at Microsoft I...
A
...remember that it doesn't default to "ubuntu" like it does on Amazon — it defaults to whatever you name your virtual machine, or your user, or whatever it was. And that's how — yes, it's [inaudible] — anyway, Docker's installed. Let's install Kata, okay. So again, we're going to mutate our repository list here, then we're going to come in and we're going to add our GPG key.
A
Then we're gonna do another up— I guess I could have done both of these updates at the same time, whatever. So then we're gonna do an update again — doo doo doo — and last but not least, we're gonna install kata-runtime, kata-proxy, and the kata-shim. Which — we now know the kata-runtime is a CLI tool that is actually gonna try to pass a container process off to our new kernel.
A
The kata-shim is just going to handle the proxying of all of those operating system signals so that we can interact with it, and I think we're still waiting for Eric here to kind of mention what kata-proxy does. Okay, folks in chat — sure, once I get the answer, I will give an update. Okay, so yeah, looks like — again, I'm really sorry here.
A
Abdenour is going to try to keep us updated on what's going on with nested virtualization in Amazon moving forward, and it looks like we're waiting for the kata-runtime to install. So that wasn't too bad — we were probably at five minutes, and we switched over to Azure, and we're gonna be able to run the kata-runtime and get a container up and running. I'm gonna take this opportunity to grab some delicious La Croix while we wait for it. Oh.
A
A
Yeah, "La Crocks" is what I call it — if anybody wants an elaboration on that, we can probably go a little bit more into the proxy, and actually talk about what it means to multiplex something. But that might be a little bit advanced for folks at home. So it looks like we have kata-runtime up and running, so without further ado...
A
A
[inaudible] ...and then, since I've already kind of shown this, I'm just going to skip back and forth, because it'll make things go a little bit faster. And then we're going to cat this unit file to /etc/systemd/system, just like that — and then, did we get our EOF in there? No, we did not — oh yeah, we did. Okay, cool. I was making sure we got our EOF at the end of the file; otherwise systemd was gonna yell at us.
A
So, last but not least, we want to do a daemon-reload and then restart Docker: so systemctl daemon-reload, and now systemctl restart docker. And then — so, real quick, can I just take a minute, while I'm on the air and people are watching: the fact that init and systemd have different verb parlance — like, before you used to do "docker restart" and now you do "restart docker" — does that really just drive anybody else nuts? Because I'm constantly flipping them.
A
A
...docker run command here. So docker run -ti — a TTY, interactive shell — busybox, starting a shell. Deep breath... pulling the image... "KVM kernel module" — so close: error response from daemon: OCI runtime create failed: couldn't access KVM kernel module: no such file or directory. Oh, and chat just started exploding, so let's see what folks were thinking in chat really quick. Okay, let's scroll up. Okay — Eric says: can we sudo kata-runtime kata-check one more time? Yes, Eric, I'll...
A
...do that in a second, but let me just load what everybody else has said in chat into my personal memory, so that I know where we are and what's going on in chat, and keep up with other folks' thought patterns along the way. So Tim Carr says: I'm the worst. Jeff says: yes, it's a bit like my experience with... and John says: @Eric Ernst, I super didn't appreciate how much your Ansible does in terms of system setup, when I tried to get this going with Kata.
A
That looks like a little bit of what we're working on right now. You can both — you can use both virtio disks; virtio-scsi is the default. Tom says: yes, whichever way I want to try first is wrong. And John is giving me some commands here, I think, to help us debug our problem. So first things first, let's run that kata-check — I think that's going to be an important part of figuring out how we're going to get Kata running up here on Azure in Ubuntu. So we...
A
...kata-runtime kata-check. Okay, so let's see what we have here. We have this error, "CPU property not found" — I think this is what we saw before when we were running in Amazon. I wonder if I need to, like, enable nested virtualization in Azure in order to get this thing running. And then we also have this other error down here: /sys/module/kvm_intel/parameters/nested: no such file or directory.
A
So I think what we're doing here is just a quick check to see what the system looks like — to see if we can even shell out one of these container processes, or, I shouldn't say shell out — run one of these container processes using the kata-runtime, using our new kernel. And I'm gonna take John's advice here and see if we can't cat the CPU info and grep for vmx here. So we want to cat everybody's favorite directory — /proc/cpuinfo — and we're gonna grep.
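John's suggested check — does the virtual CPU expose the Intel VT-x flag at all:

```shell
# If this prints nothing, the VM has no nested virtualization,
# so Kata's hypervisor (qemu/KVM) cannot start
grep -o vmx /proc/cpuinfo | sort -u
```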
A
A
So, a lot of the times, we'll ask a question and we'll get some context back, and there's, like, that little bit of lag there, and you're not really sure what they're referring to. Yep — so anyway, John, you hit the nail on the head here. He says: yeah, no vmx, no Kata. Okay, so I think we're learning the importance of nested virtualization with Kata and how that's gonna work. But I think, like, the big takeaway here for folks who have never looked at Kata containers before is going to be...
A
Yes, there's going to be a custom kernel image somewhere on the filesystem — we can look at the documentation for that as we kind of close out the episode here — and yes, we actually want the docker CLI tool and the docker daemon up and running, so that we can actually create one of these container processes and send it out to the Kata runtime underneath. So there's a lot of layers going on. Because I'm running on Linux, I don't have my handy-dandy document cam, because I'm actually using that one for my face right now.
A
A
So, by the time we actually look at the kernel here, we're probably going to be scooting it pretty close to the end here. Yeah — we had Abdenour say: in GCE, enabling vmx is done by the "license" attribute, and I think that's the link Joe was pointing to earlier. So let's just maybe take a look and see — let me scroll up in chat — if we need to get this up and running. BAM: enable nested virtualization for VM instances.
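For reference, Google's documented approach at the time was to stamp a special license onto a custom image — sketched here from the GCE docs, with made-up image and disk names for illustration:

```shell
# Create a GCE image that carries the nested-virtualization (VMX) license;
# VMs booted from it can then run KVM inside
gcloud compute images create nested-virt-image \
  --source-disk example-disk --source-disk-zone us-central1-a \
  --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
```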
A
B
A
A
And we want to find our developer guide. Oh, and Simon says: looks like you need v3, or D or E servers, on Azure. Okay, so I think what Simon is saying is: the v2 that we actually picked to get up and running, that we thought was gonna work, does not have the nested virtualization we needed. So we could, you know, recreate a new server and get it up and running, but I...
A
A
B
A
B
A
A
Okay, so what just happened here? Joe says that Anne has been messaging people offline — I'm completely blind in here; if it's not in chat, I have no idea what folks are doing. Anyway, it sounds like Eric's got a virtual machine we can use. So let's see if we can't send Joe my public key, and Joe is gonna send me back an IP address, I'm assuming, where we can actually go and get the Kata runtime up and running. So, one second here.
A
Tom: awesome, best episode ever. Yeah, it's kind of cool, 'cause, like, Joe's actually right there — so, like, watching at home, it feels like you're isolated in here, but it's nice to know there's, like, folks outside the room. So let's see — that's gonna go over to Joe. It looks like — I'm sure he's gonna send that off to Anne, who, for folks at home who don't know, Anne works at the OpenStack Foundation.
A
good friend of mine here in Seattle. And Eric is on the call as well — not on the call, but...
A
...JK as well, who's been helping us out, and they're gonna get us a virtual machine for us to get the runtime up and running. Cool. So in the meantime — looks clean — I'm gonna go ahead and shut down this server, since we're good here. Eric says it'll be five minutes. Okay, so that's probably gonna be cutting it pretty close on time. I'm wondering — and we did this the other day with Kubicorn — where, after the fact, we just did a quick little demo and we kind of plugged it into the episode afterwards.
A
That said — like, here's what the user experience would be — I would really like a chance to, like, actually look at the kernel that it's running and see how the container process is interacting with it, and I'm sure folks at home are curious as well. And if it was going to be five minutes... I think that is enough, and we can splice in an example after the fact here. Unless folks have any other questions, I think we can call this one — call this one...
A
...an episode, yeah. And, like George said, you know, big hats off to everybody for helping us out. I just don't want to sit here and wait too much longer for a virtual machine just to get the nested virtualization up and running. So I guess, real quick, we can do a quick summary — recap everything we've learned today for folks at home and how it's all going to fit together — and then we can come in and we can splice in a quick demo of running busybox, or an arbitrary container, with the Kata runtime, with docker.
A
So, we learned that Kata has its own kernel that needs to be on the host system that it's running on, that it's gonna send a container process to. Let's go back to my terminal here. We have learned that you can use a tool like docker to create a new container process and shell out to a container runtime. We know that you can configure arbitrary container runtimes in kubernetes, and we want to find out how the kubelet actually interacts with creating a container — if it's gonna start its container runtime from the kubelet...
A
...or if it's actually going to shell out to docker — to see if we can't actually spoof that and get Kata containers running in kubernetes. Abdenour says that the nested virtualization shouldn't affect the performance, and Eric says it affects it a bit depending on what you're doing, but it isn't too bad. Okay. So anyway, we'll give it a few minutes, we're gonna say goodbyes really quick, and then we can do a splice-in with an example after the fact here. So thanks for joining, everybody — it's been great to see everyone, and we're gonna see everybody next week.