From YouTube: CNCF SIG Runtime 2021-01-07
A
Good morning.

C
Hey, good morning. Just let's give it a few minutes; let's see if some other people join. I think Australia is probably not the best time zone when, you know, you start collaborating with other places.
A
Yeah, I'm in other CNCF meetings and they're all at crazy times. We were in the US early last year, but then coronavirus forced us to go back; there was no way around it. The insurance companies said "we're not insuring you anymore", and there was no way for us to stay. So we'll see what we can do this year.

C
My parents got vaccinated two days ago, so that was good to hear.

A
Yeah, it works at the moment, I think.
B
Okay, good. I'll try and keep my voice at a level where people can hear me. Can you guys hear me, by the way? Yep? All right. What we'll take you through is going to be very technical; it's not the typical salesy presentation that we do, Ricardo. So I'll tell you a bit about the background of how we started the company.

Actually, in 2014 we built an interface that integrated into the VMware vCenter client to manage containers, and we were using Mesosphere, Chronos and Docker back then. We then pivoted the company and started developing a unikernel. Jens, actually, is on the line; he wrote the unikernel from scratch, and you can ask him about it, so it's good that we have him online. When we say from scratch, we truly mean from scratch, and it was a single-process unikernel. It could be up in about 10 milliseconds on most public clouds, and it had a 785-kilobyte footprint. But we also built packaging and deployment solutions around this unikernel, because we had to find a way to package applications, run them and, you know, manage them. We did this because we wanted to address container security, much like some of the other projects that are out there at the moment.
B
So we took it to our first customers, who were running the stuff for us, and they said: look, it's absolutely great that you built this. We love the fact that you can package applications the way you do, and the fact that you can, you know, migrate containers, but we just cannot trust a kernel that you wrote yourself, because the Linux kernel has years of bug fixes and everything in it. So we went back thinking, well, that's not great, but we listened to what they did say to us.

What we've done is very simple — the engineering is obviously a bit more difficult — but what we've built is the ability to run apps with the security isolation of VMs while taking the packaging and efficiencies of the container platforms, and I'll take you through how it works, what we do and how we do it.
B
Sorry — all right, so, very simple: when we present to our customers, what we talk about is the fact that we can take this stack and reduce it basically to just your application running on the Linux kernel inside a VM; we make the application the focus again. The three things we focus on when we talk to these customers are resources, complexity and security.

First, resources. The kernel itself is, I think, seven meg at the moment — yeah, it's about 7 MB, and that includes all our tools, all the configuration, everything that we've built into it as well. We allocate about 34 MB of memory for the kernel — for Vorteil itself — and on CPU, you can have a look: I think we run at about 0.002 percent CPU utilization when we're idling and not doing anything.
B
Second, it's about reduced complexity. There's a whole bunch of OS lifecycle processes that get eliminated if you don't have an operating system, because essentially that's what we have. As an example, you wouldn't normally do operating system patching, because when you patch your applications you're effectively patching the operating system as well: the rollout is us taking an application, binding it to the Vorteil kernel and then pushing it out as a VM.

Third, increased security. Obviously there's a lot we can talk about there. It really is the benefits of the virtual machine isolation technologies, but also, having stripped out — actually not stripped out, just not having — any of the components of a normal legacy operating system, you know, hardens a whole bunch of items on the CIS hardening checklists. So it really is about the three things we talk to our customers about: resources, complexity and security.
B
What we do talk about a lot is this idea that we codify the micro-OS. So instead of managing operating systems the way you used to — or even containers; it's the same principle as running containers — it's, you know, codifying the operating system, treating your OS as code. The application is the runtime: we make the application basically the runtime, and then we can push to any hypervisor out there.

We support all the major ones, like Xen, KVM, VMware ESX, Hyper-V, and all the type-2 hypervisors, like VMware Fusion, Workstation, Player, VirtualBox, QEMU — and Firecracker, which we actually use heavily. And then we have Kubernetes runtime integration as well: we manage our VMs via Kubernetes, and to Kubernetes it's just a simple container runtime integration. And that is really what we talk about — taking everything that you used to do for the operating system and, you know, building it as a code platform, codifying the operating system.
B
I'm not going to spend too much time on this. If you understand the principles behind it, you can understand that security becomes more critical, and it becomes fully portable. The packages are fully portable because we don't ship the kernel with any of the packages — a package is literally just an application package — and then at runtime we obviously save costs on infrastructure, software and operations.

The kernel itself, most customers will never see, because, you know, they just won't need to know about it. And then there's the build server. Our build server is like a repo server, but we also offload the build-server functionality into, you know, hosted clouds, private clouds or public clouds.
B
It
speeds
up
the
build
process
because
you
don't
have
the
latencies
of
uploading
and
downloading
from
your
desktop
and
you
can
instruct
the
build
server
to
build
and
send
into
an
environment
directly
and
the
build
server
is
distributable
across
any
public
or
private
cloud
and
it
runs,
and
we
depend
obviously
on
the
builds
of
it
to
push
the
images
out
and
you
can
do
it
from
your
local
desktop
as
well.
But
then
you
know
there's
latencies
and
uploads,
and
everything
else
to
public
clouds.
C
Yeah, so how is this — is it similar to something like Kata Containers?

B
Yeah — actually, Jens, you want to tell them?

A
The difference is that Kata Containers runs containers within a kind of predefined operating system — they just have a little agent inside which they use to start containers — whereas this doesn't have anything around it. It's your app, and that's it. There's no agent or anything on the machines; it's the app you define, and that's all that's on the machine.
C
Yeah, so how do you talk to the VM, or to the application that is running in the VM? Do you just let that happen, or what mechanisms do you use to communicate between the host and the VM? I'm thinking about a file system, for example — something like Kubernetes with volumes, right? So you have a volume on the host and you want to talk to it from inside this VM.
A
Yeah, okay. At the moment, file sharing has the same kind of issue that Kata has as well: the VM cannot share files between the host and the VM. We're working at the moment, with Firecracker, on a vsock implementation, where you basically have a vsock server running on the host and a vsock client on the guest machine, which can share files. So that's the idea at the moment.
B
That is obviously specific to the Kubernetes integration. For any of the customers we have that run it in a normal state, it's just virtual machines, so the same processes they have in EC2, the same load balancers they have in EC2 — that's specific to AWS, obviously — all of those processes stay the same.
B
Actually, all of our customers have implemented it so far by building the micro-VM in the Studio or the CLI, provisioning it using Ansible, Terraform or VMware's provisioning tools, and then running it. We've only had one customer that actually used the Kubernetes integration — we did it as a proof of concept for them. Most customers just use the VMs we set up for clients on the public or private clouds, with the public cloud management utilities that AWS, Azure, Google Cloud and everybody else has.
C
Got it, yeah. I have another question: this is a stripped-down kernel too, right? So you removed some of the device drivers, or some of the overhead that comes with the kernel, and then you can run most of the main applications. So I guess my question is — there's a kernel, right, so what kind of changes did you make?
A
It's the Linux kernel with a few minor changes — I mean, you can look it up; it's probably 500 additional lines, to start our own vinitd. It has a custom boot loader, because at the moment Linux usually has this 1.5-stage boot loader, and we just use a one-stage bootloader, so it doesn't do any of that — what's it called — yeah, it jumps straight to the Linux kernel. And our little changes are only to mount a custom file system where our stuff is, and then it starts up. So that's a minor change; they're not really changes to the kernel itself.
C
Yeah, I was thinking about this other project from IBM, Lupine, where they actually grabbed the Linux kernel, stripped down lots of parts of the code and recompiled it, and it became this really lightweight kernel that didn't support everything, but it allowed you to run most of the major applications.
B
Now, Ricardo, when we built this, we had to support everything right from the start. So with Vorteil we support any ELF 32- or 64-bit binary, and the most important part was: we couldn't go to any of the customers that we had, at least, and tell them that they had to recompile their software or their applications to run on Vorteil, because they just wouldn't use it.

That's why we eventually chose to use the Linux kernel: you literally just drop the binary in and it runs; there's nothing else you need to do. There's nothing special — I mean, obviously, the only thing you would need to add is the libraries that you need, but that was the whole principle behind this: you have full control over the libraries that you include in your Vorteil machine. We import certain libraries by default — it's just the DNS library; I think it's libnss_dns.
B
Oh — there's a command to import it, and I always ran that command for every single build, so it's done by default now. But yeah, that was the whole idea. I'll show you a quick demo of how it works — it's actually pretty straightforward — but the whole idea was just that, (a), we had to be able to go back to the customers and tell them not to change anything that they're doing now, and, (b), the principle was that we could offer them the security isolation of VMs.
C
Yeah, that makes sense. So typically, if you don't remove anything from the kernel, you can just run anything, right?

A
I mean, we stripped it down from the default kind of setup in Linux. You know, there are keyboard drivers we don't need, mouse drivers and all that — they come by default with the Linux kernel, of course. We removed all of that, and there are just the basic drivers to run on clouds and server applications.
B
This is vinitd. This is actually the most important part of all of this, and it's all public — it's on GitHub as part of the open-source project. Sorry, I'm losing my voice. We just go through the four phases: we do a pre-setup, the setup, the post-setup and launch. During launch we just monitor special file systems and, in general, you know, generic resources: NFS gets mounted, NTP, DNS — all the things that you would need just to start up. Our application gets launched by default.
B
We then inject information on demand based on the build-server requirements — so memory, environment variables, you know, all the things that you would need to run — and then we run it under Firecracker.

A
I mean, the vsock host file sharing we are still working on, but the rest should be the same experience. So the apps within your pod can talk to each other via localhost, and that should be pretty much the same.
C
Another question: containerd was actually working on some integration with Firecracker. Is that what you're using now?

A
No — I think the difference is that one of our things is that we can build disks very fast; we implemented our own way of kind of stream-building disks. So you will build a disk in seconds, and for a small disk it's under one second. We're just building whatever you need: the whole disk, with Linux, your app and all your stuff, within just a few seconds.
B
All right, let's have a look. So, Vorteil, our command line — I'm going to show you the command line; there's a Studio as well, which is like a graphical interface, but that's for enterprise customers. It's actually extremely simple to use. So the first thing I'll do is: let's go pick an app you want to run off hub.docker.com.
B
So, Docker Hub's tomcat. What we can do is vorteil projects — I think it's convert-container — tomcat, from Docker Hub, and obviously you can add different repositories; I'm just telling it to pull from the default repository, which is Docker.

There's no magic behind this, Ricardo. All we're doing is connecting to the tomcat repo on Docker, pulling down all the layers that we need, unpacking it and building the virtual machine. And yeah, we've got terrible internet in Australia.
B
We create this default.vcfg file. Essentially, the Vorteil configuration file is just the arguments — you could split it up into the binary and its arguments, the environment variables that we pass, the working directory.

We're not adding anything, so that's all the Docker container's own stuff. All we've done is pull it down, unpack it, convert it and create the default.vcfg file — nothing crazy about it. We can then use vorteil run on the converted tomcat project, and we'll just do this.
B
The most boring demo in the world — whatever you see here in the logs is what's happening. Obviously, there's a lot of stuff we've done that isn't easy to show you, so I'll take you through it. Localhost — there it is. That's just mapping to my localhost, running on VirtualBox. But we've done more things, so let's just terminate this.
B
So in this case it's a MySQL package that we converted from Docker again. You can see the entry-point script — that's the binary that runs — and we pass arguments; these are MySQL arguments, as an example. We build the database on the fly. But I'll take you through a couple of settings that we've added in the config file as well, for people to be able to use Vorteil more efficiently. We simulate users and superuser privileges: if an application absolutely needs superuser privileges, you can say privilege = superuser, or, you know, some other type of privilege. We can redirect standard in, standard out and standard error — everything — and there's the logfiles statement.
B
So
what
we
do
is
we
say
for
anything,
that's
written
to
var
log
mysql
star,
so
if,
if
mysql
writes
anything
to
that
directory,
instead
of
actually
creating
the
file
and
writing
to
the
disk,
use
the
logging
setting
and
stream
the
output
somewhere.
So
in
this
case,
what
we're
doing
is,
if
you
see
the
declaration,
type
equals
programs,
it's
going
to
read
the
log
files
comment
and
then
it's
going
to
send
all
the
log
files
to
whatever
config
you
give
it
here,
and
this
config
is
actually
a
fluent
with
config.
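The logging setup he describes might be sketched like this. Again, this is hypothetical: the key names are modeled on the narration (a logfiles entry per program, a logging section with type = programs and Fluent Bit-style output settings) and the collector address is made up.

```toml
# Hypothetical sketch: stream anything written to /var/log/mysql/*
# to a remote collector instead of writing it to the disk.

[[program]]
  binary   = "/usr/local/bin/docker-entrypoint.sh"
  logfiles = ["/var/log/mysql/*"]   # files to intercept and stream

[[logging]]
  type   = "programs"               # read the programs' logfiles entries
  config = [                        # Fluent Bit-style output settings
    "Match=*",
    "Host=logs.example.com",        # hypothetical collector address
    "Port=24224",
  ]
```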
B
Any output that's in here we actually support as a logging output. That means we can send all the log files there, we can send kernel messages there — so, you know, that makes it completely stateless, basically — and we can send system information there. So in this case I'll start it up, and I've got a Kibana instance running here.
B
It's just, you know, programmatically — well, actually, within the config file — being able to do some programmatic actions in the back end when the machine starts up. Then we have sysctl settings, you can set different file systems, and you can change things — what we've done for disk size, as an example, is very important.
B
If you add the plus, then we will build a machine big enough to house your application, plus the amount of disk space that you add with that plus directive. That means we try to minimize the disk usage of the machines. But also, if you have something completely stateless like Redis, or if you mount NFS file shares somewhere else, or if you have a secondary disk mounted so your data is stored separately from the virtual machine, then you only need to build the virtual machine big enough to hold the application.
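The "+" disk sizing could be expressed roughly as follows — a sketch only; the exact key name is an assumption, and the comments paraphrase the behaviour described above.

```toml
# Hypothetical sketch of the "+" disk sizing described above.

[vm]
  # "+64 MiB" means: size the disk just big enough for the application,
  # then add 64 MiB of free space on top, keeping disk usage minimal.
  disk-size = "+64 MiB"

  # A fixed value such as "4 GiB" would instead build the disk at
  # exactly that size, regardless of how small the application is.
```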
B
We'll get to that — and that's the config file. So what we can do is: we'll do vorteil run again.
B
So this is now building a four-gig disk, and you'll see how quick this goes, and we'll bring up VirtualBox. So that's all good — it started. It started my Zabbix agents, and it's starting my MySQL database. It actually initializes an empty database, creates the database with the config file that I have, and then starts — so that's MySQL started. And then, in Kibana —
B
And
you
can
see
with
this
whole
bunch
of
well.
Actually,
this
is
all
stuff
that
we're
testing
at
the
moment
you
can
see,
but
this
is
what
the
the
messages
look
like.
So
this
is
our
our
system
messages
coming
from
from
the
water
machines,
and
then
I
don't
need
to
show
you
the
pretty
grass
and
combine
that's
actually
pretty
simple
there.
You
go
some
cpu
memory,
disc.
A
And just as one little comment: you don't have to run a single app — you can run containers within that machine, if you want to use it that way.

B
We actually run k3s — we built k3s again inside — it's so early in the morning — it was inside... Rancher! No, no, goodness, sorry.
B
Yeah,
it's
around
you
guys,
so
we
run
the
the
kubernetes
integration
on
our
platform
and
it's
it's
so
easy
because
it's
just
you
know
we
spin
it
up
really
quickly
really
fast
and
there's
a
yeah
rancher
k3
is
sorry.
I
wrote
an
article
on
to
show
how
we
do
it
and
it's
basically
you
you
can
have
a
kubernetes
platform
from
scratch
in
like
a
couple
of
seconds,
because
you
can
just
follow
these
steps.
B
We
convert
kubernetes,
sorry
k3s
and
we
run
it
up
in
volton
and
essentially
we
have
this
running
in
like
a
mac
or
we
push
it
to
vmware
up
to
aws
or
google
cloud
and
those
places
and
people
just
use
it.
C
Yeah
yeah,
so
is
this:
I
mean
the
the
use
case:
edge
type
of
applications
like
101
k3s
in
the
vm
or
something
okay,
exactly.
B
Yeah
the
whole
reason
we
we're
doing
is
like
for
the
the
people
that
we
work
with
are
the
larger
isvs
who
they
they
don't
necessarily
want
to
run.
Kubernetes
to
you
know,
you
don't
want
to
run
connected
to
manage
for
the
full
containers
and
they
already
have
virtualization
platforms
built
on
it,
like
kvm
and
all
those
things.
B
So
you
know
this
is
a
extremely
lightweight
alternative
to
run
a
container
as
an
isolated
virtual
machine
or
isolation.
Of
course,
of
course,
isolation
is
a
big
thing,
but
it
also
you
know.
The
whole
premise
of
this
is
that
we
dragged
into
kubernetes
eventually,
so
you
can
run
a
mix
of
the
containers
and
the
virtual
machines
without
losing
the
interoperability
between
the
two.
D
All right — sorry, I have a question here, if that's all right. Okay, so, it's very interesting what you guys are doing. I've got a question in regard to the workflow: it's possible to get the image from Docker Hub and build the VM — I understood that. Is there anything you guys are working on in the other direction? For example, let's say that you make some changes to your VM.
A
Well, in theory you can always use it as a base and start with a Dockerfile in that directory, because in the first step we're just taking whatever Docker has as a file system and going through the same steps as when they start a container. You could add your Dockerfile there and start FROM — what is it called — the image with nothing in it, "from empty" or something, I think it is. But you can still do the conversion every time — so you push to Docker, and then you convert and run it. That would probably be the easier way. Although, once it's converted, you can again add your Dockerfile to that directory and just say, you know, ADD all these files, and that's it.
D
I'm
more
concerning
I
mean
I
guess
what
I'm
trying
to
get
at
is.
How
do
I
reuse
this?
Let's
say
that
you
have
that
vm
you
make
a
bunch
of
changes
in
that
game.
You
you
still
keep
cuddle.
Would
you
launch
your
cluster?
How
do
you
snapshot
that?
How
do
you
save
that?
I
guess
that
your
vm
image
is
already
there,
but
that
the
image
now
is
going
to
be
your
environment.
You
will
need
to
move
that
image
everywhere.
You
go
right.
That's
your
source
of
information!
At
this
point
correct,
oh.
B
Yeah
yeah
and
that's
why
we
had
the
build
server
originally
because
the
build
server
has
like
a
repository
built
into
it
as
well.
So
I
should
actually
import
our
for
the
tories
list.
B
Also
what
I'll
report
connections?
Yes,
there
you
go
so
there's
a
couple
of
repos
that
we
have
like
in
aws
or
there's
a
dev
repo,
and
you
can
push
packages
into
out
of
these
repos.
It's
the
same
as
a
docker
hub,
basically,
but
yeah.
This
is
stuff
that
runs
somewhere
outside
you
can
easily
just
download
them,
unpack
them
and
run
them.
I
mean.
D
This
is
a
repo
of
virtual
disk
images
that
you're
seeing.
B
Yeah,
it's
actually
not
virtual
disk
images,
it's
the
it's
the
packages.
So,
okay,
let
me
show
you
how
this
works.
If
I
take
this
my
scalp
as
an
example,
for
example,
what
I
can
do
is
packages
back
my
scale
and
what
it'll
do
is
it'll
actually
create
a
portal
mysql
package.
B
Well,
it's
going
so
yeah
that
that's
the
whole
principle
behind
this.
I
got
it
anything
else
I
mean
the
the
demo
is
pretty
it
is
it's
so
simple,
but,
and
it's
so
boring,
I'm
sorry,
but
it
is
it's
yeah.
It's
really
powerful.
Hang.
D
Here — is there any constraint in terms of the images that you can pull from Docker Hub and convert? Let's say that you have a privileged container image, or just an image that is expected to be launched as a privileged container, and it has Docker-in-Docker, for example, in there. Is there any constraint, you know, when you launch your converter, or your compiler, whatever you call it — any constraint in terms of which images you can source?
A
Actually, not that we are aware of — again, because we take whatever is in that image file on Docker Hub, and get all the commands it's supposed to run with, the environment variables and everything. And so far we have, I think, all the stuff in the Linux kernel enabled in vinitd that everything should need to run. You know, there might be something which doesn't, but so far I haven't seen any.

D
Okay, okay — thank you.
B
This
is
how
we
mount
nfs
as
an
example.
So
at
startup
we
mount
the
nfs,
and
you
know
you
can
write
to
your
heart's
content
on
the
nfs.
It's
yeah.
We
tried
to
make
this
as
bulletproof
as
possible,
actually
just
as
simple
as
possible
to
use.
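The NFS mount he shows might look something like this in the config file. This is a hypothetical fragment: the section name, keys and server address are guesses based on the narration, not Vorteil's documented schema.

```toml
# Hypothetical sketch: mount an NFS share at VM startup so the
# application can write its data outside the minimal local disk.

[[nfs]]
  mount-point = "/data"                        # where the share appears in the guest
  server      = "nfs.example.com:/exports/db"  # hypothetical NFS export
```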
B
We've
got
a
whole
bunch
of
apps
that
we've
tested
and
tried
and
used,
and
most
of
them
are
all
docker-converted
apps
just
because
we
don't
want
to
keep
rebuilding
things.
If
you,
if
you
do,
want
to
build
your
own
app,
let's
there's
things
like
s-trace
built
into
it.
It's
probably
better
if
like.
If
you
have
a
look
at
the
docks,
yeah
the
debugging
side
of
it,
you
can
run
shell
scripts
and
what
we
do
with
the
shell
scripts
is
we
we
actually
include
busy
box.
I
think
gents
yeah
with.
B
— BusyBox, if you run it with the shell command, it'll actually execute shell commands for you. We've built in strace, so if you start up the program with the strace option, it will actually run an strace on the app. So if you are missing a library, or, you know, there's some obscure library that gets called, we'll highlight it for you. If you know what strace does, you'll work out what we do with it and how we do it.
B
We've got something called import shared objects. If you run vorteil projects import-shared-objects on a Linux machine, what we'll do is import the shared objects from the Linux host that you're running it on into the Vorteil package, and those shared objects are typically, like I said, the lib libraries for DNS, the dynamic linker.
A
When you create your disk, there's nothing on the disk. We have the first partition, which has our stuff on it — you can't mount it or anything; that's where the magic is — and then on the disk you use, there's nothing else. So if you, for example, just use Go, do a static link and want to run this app, then only this app is on that disk. There's no shared library, there's no linker, there's nothing!
B
This is MinIO that we run, and you can see there's only a MinIO binary in there, nothing else. Literally, with MinIO, it's just a self-contained binary; there's absolutely nothing else on it.
B
So for us it's about running as lean and small as possible, and not putting the onus back on the user to try and work out what to do. There'll be outliers where, you know, you have to run strace to find some obscure library that gets called, but in most cases it's pretty straightforward.
D
One
quick
question:
I'm
not
sure
if
you
guys
covered
this,
but
is
it
possible
to
run
from
a
docker
file
import
the
image
not
from
docker
hop,
but
just
for
my
recipe
file.
A
You
can
import
from
your
local
docker
local
docker,
slash
container
d,
so
you
can
convert
from
your
local.
What's
the
service.
What's
it
called
yeah
from
your
local
docker
local
content,
id.
D
I
mean
I
was
not
referring
to
my
local
image,
I'm
just
talking
about
it.
You
can
convert
or
import
the
image
from
a
docker
file
itself
from
the
file.
You
know
from
the
recipe.
C
Yeah — unless anybody else has another question — with Kubernetes, do you actually have anything specific in the YAML configuration to run it, or is it just very straightforward, basically the way you normally configure it? You just don't need anything — it's actually transparent to users?
A
It's
it's
yeah.
There
are
a
few
things
in
when
you
run
this
when
your
machines
start
up
in
kubernetes,
which
you
don't
see.
For
example,
we
wanted
to
support
that
all
the
virtual
machines
see
each
other
as
local
house,
so
we
are
doing
some
some
some
magic,
of
course,
that
if
you're
in
the
virgin
machine
said
localhost
port
88
or
something
you
end
up
on
a
different
virtual
machine
on
port
88,
there's
some
some
magic
in
the
background.
But
you
don't
have
to
change
your
yaml
file.
B
Cool
all
right,
that's
it!
I
think
you
get
the
gist
of
it
though
yeah
you
can.
You
can
download
it
and
go
play
with
it.
It's
it's
all.
There.
C
Oh yeah, another question: are you planning to maybe donate some of this to a foundation like the CNCF, to get more traction, or are there no plans yet?

A
Well, it's open source — I didn't know that this was actually — yeah. I think we are just not aware of the different pathways into the CNCF.
C
Yeah,
because
some
I
mean
the
cncf
they
you
know
they
host
the
projects
right
and
there's,
or
there
are
different
stages
for
the
projects.
You
know,
there's
a
sandbox,
there's
activation,
there's
a
graduated
stage,
and
so
the
idea
is
just
to
have
a
project
hosted
on
a
neutral
foundation.
C
Of
course,
the
open
source
components
right,
not
not
anything
proprietary,
but
the
idea.
The
idea
there
is
to
you
know,
help
the
projects
you
know
get
gain
more
traction
and
more
contributors,
and
and
and
also
more
users
right,
so
yeah
so
and-
and
I
I
just
brought
it
up-
is
maybe
something
to
consider
right
in
the
future.
If
you,
if
you're
interested
yeah.
C
Yeah — so if you look at the meeting notes, there's a sandbox application process; you can follow the link and probably go from there. But if you have any questions, you can ask me, or anybody from the CNCF staff.
B
Thanks — okay, yeah, we'll take it away as a note. And what we'll do is — we're going to go back to bed now, sleep for another three hours — and then we'll send you an email and ask you how we get into that part of the program, yeah.
C
Yeah, I mean, it's something to consider, right, but I think it's good stuff — I mean, it's useful. I think, you know, people want to streamline how they run, you know, some of the isolated VMs with Kubernetes. I've been working with the Kata Containers project quite a bit too, but I see some of the differences here, where it's all packaged up, and then it may be, you know, more of a better use case for people who want to have maybe that faster experience, you know, when they have everything packaged, right? So —
A
— the requests within the pod go to your one VM, and then the rest is just what it was before, pretty much, right? The network setup was a little bit more difficult if you run —