From YouTube: CNCF SIG Runtime 2020-06-18
D: I'm just going to mention that the KubeEdge due diligence document has been prepared. I still need to go through it and make sure that it's complete and accurate, but I'm happy for the members of the SIG to take a look at it in parallel. My plan is to make sure that everything's in order there and then put it up for public comment, hopefully next week. That's great, yeah. The link is in the meeting notes. Sure.
A: Nice to meet you. All right, nice — I know I've met you before, Rico, but nice to meet you again.
A: Great, yeah. Thank you for asking me to talk, and for the interest in the work that we've been doing. So, a little bit about myself — can you hear me okay? Yeah.
A: How's that? Yeah? Okay, great. So I'm going to talk about some of the research work that we've been doing at IBM Research around what I guess we've been calling secure containers lately. Mostly this has to do with isolation between containers, and especially how virtualization fits into that picture.
A: So, okay — I know I don't need to tell this audience that containers are great. Depending on where I'm talking, sometimes that's something we need to talk about, but I think a lot of the benefits of containers are really obvious when you're doing a lot of development — things like their lightweight characteristics.
A: ...being able to reproduce the environment whenever I want. I think this is, without a doubt, a huge advantage that containers have brought. And I also know that the group I'm talking to now is perhaps one that also asks this question: okay, so if containers are great for a lot of this packaging stuff, and a lot of these build use cases and things like that, are they also a good candidate for the unit of execution that we might use for —
A: — you know, bigger software? A lot of the lightweight performance characteristics that they have are really attractive for that: the fact that they start up so quickly, the fact that they can share pages on the host for memory density. All of this sounds very attractive in terms of a runtime, a unit of execution.
A: However, one thing has been, I guess, a thorn in the side of containers from a runtime perspective for a while, which is the attack surface exposed to the host. This stems from the level of abstraction that the applications in the containers are using to talk to the host: they have the full 350-plus system calls in Linux that they can poke around at and look for vulnerabilities and other things.
A: The good news, though, is that when we have a large attack surface like this, we know some basic approaches we can use to reduce it. Namely, you take that shared functionality that's super highly privileged because it's in the kernel, you reduce its privilege somehow, and you unshare it, so not everybody has it anymore. Then you effectively get a much thinner interface. And of course, the most familiar way to do this is through virtualization.
A: The way that I'm positioning virtualization here — the way I'm thinking about it — is that you're essentially taking kernel functionality that was highly privileged and running it in a less privileged mode in a virtual machine. So a guest kernel, for example, is a less privileged way of implementing the stuff that the host kernel may have implemented.
A: It's the same abstract idea: take the privileged thing, deprivilege it, and unshare it. For example, gVisor has a user-space kernel, so in some sense the Sentry in gVisor is a way of taking some of this functionality that would be in the kernel and implementing it in a less privileged way. You could imagine doing something similar with User Mode Linux or something like that — that's another way to do it.
A: We looked at unikernels — I don't know how familiar you all are with them; I'll talk a little bit about them on the next slide. Unikernels have this philosophy of being only what you need: they're these virtual machines with only what you need inside of them, which comes along with all these lightweight characteristics.
A: So what we did in the past couple of years — some of the previous work that we had — was to try to take these unikernel ideas and apply them directly to containers, and Nabla containers were our effort to do so. So, just a little bit more about unikernels.
A: One way to think about unikernels is that they're just like virtual machines, except instead of having a guest kernel inside, it's just an application linked with only those library OS components that the application needs in order to perform. They still don't, but, you know, the ideas in them were very interesting, and I think a lot of people jumped on this and started to think about how to support more things in the unikernel case. And so several more legacy-oriented unikernels came about; one of them was called Rumprun, which is based on NetBSD.
A: Other ones, like HermiTux and OSv, even go so far as to claim binary compatibility with Linux. In the case of HermiTux and OSv, the kernels are written from scratch; they're not reusing legacy kernels like Rumprun is. So anyway — to wrap this up a little bit — unikernels: just think of them as these tiny little...
D: So does this approach — the unikernel approach — fundamentally prevent multi-process containers, essentially? The typical application has a bunch of related processes running on the same machine. Is that not possible with unikernels, or can they share this library? Assume these are friendly processes that don't need to be isolated from each other, but they do need to run in the same environment. Yeah.
A: So when you think about more general containers, things like running multiple processes start to be obviously pretty big concerns. But I don't want to say that there's no place where having a restrictive computing model like that makes sense, because I think there still may be some models where it does. Anyway — so we took the unikernel stuff and we tried to apply it to containers as best we could, with the Nabla containers.
A: These were based on Rumprun, which was one of the legacy-based unikernels. I'm not going to talk too much about Nabla containers today, but what we sort of learned through this process was that the virtual-machine-like characteristics — the virtualization-like characteristics —
A: — didn't really get in the way of these lightweight things. We could achieve very lightweight properties even though these things were little virtual machines — and virtual machines had a very heavyweight connotation at the time, less so these days. So we found that out. But, as you already recognized, it came at a high cost, right? We were paying a lot for it, and that was mostly through generality.
A: So the question that we started to ask was: can we run more normal Linux applications on these things in some way? And what I want to talk about today is mainly that: can we take some of these unikernel-like philosophies, or lessons that we learned from doing the Nabla container stuff, and apply them directly to normal Linux virtual machines? That's sort of the subject of this talk.
A: The project that we worked on is called Lupine Linux — it's a Linux in unikernel clothing. The basic idea is that we were trying to take a normal Linux VM and make it as unikernel-like as possible, so that any sort of distinction about whether this was a traditional virtual machine, or a library OS, or a user-space OS, or any of these deprivileged approaches, starts to go away.
A: Right. So if you look at virtual machines — I mentioned that at some point there was a lot of negativity around virtual machines for a while, about them being very heavyweight, and that started to be challenged. So, for one, the monitor process —
A: Since then, the same types of things have continued to come out, so that's one piece of the puzzle: if you want VMs that seem lightweight, the monitor is one thing that can become more lightweight, and people have indeed started to do that. The second piece of the puzzle, though, is also the guest — and people have done this too, to various degrees. You can think about it in the user space of your containers.
A: For example, people talk about running Ubuntu containers versus Alpine containers, the latter being lighter weight than the former. Down at the guest kernel level, if you're talking about virtualization, people have looked at different kernel configuration options — there's a project called Tinyx, and there's also the microVM configuration, if you look at Firecracker.
A: So it's almost there: they get rid of potential vulnerabilities that may be there — they reduce the attack surface. However, as we said, they suffer in particular because of this lack of Linux support. So, just to give you a little bit more detail about some of these things: HermiTux and OSv are two of the unikernels that claim binary compatibility with Linux. There's a bit of a caveat with that binary compatibility claim — for example, HermiTux supports ninety-seven system calls, which is not the full Linux set.
A: So if your application requires more system calls, you're a little out of luck there. OSv, on the other hand, has a whole list of caveats: if your application isn't compiled as PIE, if your application uses TLS, if your application is statically linked, if your application does fork or exec — these and a number of other little caveats make OSv difficult to use in general.
A: ...the belief that this will improve performance dramatically. So we also do that to Linux, through an existing patch called KML — Kernel Mode Linux — which I'll detail later. Now let me show how we put it all together and how we got those numbers that I mentioned. Okay — so, specialization.
A: If you think about unikernels — what is the primary philosophy that they're organized around? It's really specialization.
A: They only include what is needed for the application that is going to run, and that's by design. If you look at Linux, it's a very general system; it's not typically specialized for a particular application. However, it is extremely configurable. If you're familiar with Linux you'll recognize this — there's a fake screenshot here — there are about 16,000 options in Kconfig, and maybe more; I think there are more at this point.
A: It's always increasing — lots of them are for drivers, file systems, processor features, but there's also a lot more stuff. So what we started to think was: can we use that kernel specialization that already exists in Linux to tailor the kernel for whatever application we want, in the same way that a unikernel would tailor its library OS for a particular application?
A: What this picture here is showing is how we broke down all the configuration options and thought about them in terms of making a Linux kernel that would be specialized to a particular application. We started with an already pretty specialized kernel configuration, the microVM configuration that comes with Firecracker. In terms of how many configuration options — the numbers that are shown have to do with whether something is selected as config-yes.
A: So for microVM, there are basically 833 configuration options which have been selected as yes; the vast majority of the other ones are things like drivers, so cutting down that much shouldn't be too much of a shock. We then looked inside those 833 options and determined which ones we had to have — which ones were required for any application to be able to run as a VM — and we basically selected 283 that we just had to have to boot.
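The counts being discussed here (833 config-yes options in the microVM configuration, 283 needed to boot) come down to tallying `=y` entries in a kernel `.config` file. A minimal sketch of that tally in Python, over an illustrative snippet rather than the real Firecracker config:

```python
# Count which Kconfig options are enabled (=y) in a kernel .config.
# The sample text below is illustrative, not the actual microVM config.
def enabled_options(config_text):
    opts = set()
    for line in config_text.splitlines():
        line = line.strip()
        # Lines look like: CONFIG_FOO=y, CONFIG_BAR=m, "# CONFIG_BAZ is not set"
        if line.startswith("CONFIG_") and line.endswith("=y"):
            opts.add(line[: -len("=y")])
    return opts

sample = """\
CONFIG_64BIT=y
CONFIG_FUTEX=y
CONFIG_PROC_FS=y
# CONFIG_SOUND is not set
CONFIG_BLK_DEV_INITRD=m
"""

if __name__ == "__main__":
    opts = enabled_options(sample)
    print(len(opts), sorted(opts))  # only the three =y options count
```

Diffing two such sets (for example, a distro config against microVM) gives exactly the kind of option-count breakdown shown on the slide.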
A: You know, this is no longer a very black-and-white choice; it becomes a very slippery slope. So what we wanted to do here is get something that's as unikernel-like as possible, and then see how we could change that — how it would degrade over time — and I'll get to that a little bit later.
D: If I understand correctly: of the 16,000 Linux configuration options that you can turn on, you basically took 5% of them, and then you trimmed that down — you took 34% of the 5% — and then you identified that, I guess, 44% of those are actually kind of fundamentally useful to everything, multi-processing and hardware management. So you're down to 44 percent of 34 percent of 5% of 16,000.
A: It's not quite that. The multi-processing and the hardware management — we said these don't match the unikernel model. So if you're interested in running unikernels, for whatever reason, you don't need those, because they don't match your model. Furthermore, you may not need any of the 311 application-specific options. I'll go into a little bit more detail now about what each one of those categories is and why we categorized them that way.
A: The idea is that if you have a unikernel that does require them, you would put them back in, but we think that for the multi-processing and the hardware management, any unikernel that you have is not going to require those — that's the association we made. Okay, so, application-specific options. Just to give you a sense of what the application-specific options are: a very straightforward example is that some kernel configuration options toggle whether or not a system call is present in the kernel.
A: Similarly with a lot of these other system calls — you may not need them. Some of them, you'll notice, are ones that you pretty much always need, like futex. Pretty simple, but nevertheless, that is one class of application-specific options that we took out. The other types are things like: you might have an application that does not use proc at all, or sysctl. — Was there a question? Oh.
A: Okay — so if they don't use various kernel services, you can just not put those in, if you want. Similarly, there are a bunch of library functions that may be in the kernel library, as well as debugging and information types of functions. These are the types of things that we classified as application-specific options.
A: So every time you want to run an application as a unikernel, you would select a certain number of these for the application that you're trying to run. The other two categories are the ones we mentioned. This is the multi-process one: there's a bunch of features in the kernel that might go away there. If you think about the unikernel trust model, the thing inside the guest is kind of all one thing — unikernels are used to having it all linked together.
A: Yeah — and that's an important point that I'm going to come back to a little bit later. Okay, so the question comes up very quickly: how do you get that application-specific kernel configuration? How do you know what your applications may use? We don't have a great answer for this — it's quite a hard problem.
A: So yeah, it's a little bit more complicated than it seems, right? Depending on your application, it may load some library that will do some more system calls; some execution paths may not be triggered in whatever test you're doing. So we had a couple of things to do. We had to figure out the success criteria — what the test is, what the load on the application should be — so that we would have a representative run of that application.
A: Then, once we had that, we had to figure out what it used. We could certainly use strace to get all the system calls, but some of these other things were not syscall-based. Sometimes it would break: if we took out proc support, for example, and the application in some execution path decided it needed to look in proc, then that would be something we wouldn't get with strace. Do you know what I mean?
A: Yeah — I guess another thing to say is that if you're familiar with putting seccomp policies on applications, this is a very similar problem. Seccomp can be used as a way to specify system calls that can be allowed or denied, and by doing that, sometimes you deny a system call that some executed pathway uses — which, again, is a test coverage issue. In general, I think what happens is you end up with more permissive, more conservative policies.
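The seccomp analogy can be made concrete: a learning phase produces a syscall set, from which you can emit an allowlist profile. The sketch below uses the JSON shape Docker accepts for seccomp profiles (`defaultAction` plus a `syscalls` list); the syscall set itself is hypothetical:

```python
import json

def allowlist_profile(syscalls):
    # Deny-by-default: anything outside the observed set fails with EPERM.
    # This mirrors the structure of Docker's seccomp profile JSON.
    return {
        "defaultAction": "SCMP_ACT_ERRNO",
        "syscalls": [
            {"names": sorted(syscalls), "action": "SCMP_ACT_ALLOW"}
        ],
    }

if __name__ == "__main__":
    observed = {"read", "write", "futex", "exit_group"}  # from a learning run
    print(json.dumps(allowlist_profile(observed), indent=2))
```

As the speaker notes, a syscall missed by the learning phase shows up later as a denied call, which is why real policies tend toward the permissive side.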
A: It seems possible that perhaps some kind of static analysis could also help here, but in general, I think you're absolutely right — this is not a particularly new or different problem. It's a problem that comes up a lot, and it's just coming up again, whether it's test coverage or seccomp or whatever it is. Okay.
A: So, I had mentioned that there was a patch to Linux which allows you to run an application in kernel mode. It's called Kernel Mode Linux, and basically what it does is allow applications to run in kernel mode, so that system calls can be replaced with just regular calls.
A: Just so you can see how this changes the system calls: we made some small changes to the libc — this is musl libc — and you can see that instead of issuing the syscall instruction, we call to a particular location which is exposed by the kernel to the applications themselves.
A: Also — just so we're totally clear on this — this predated kernel page table isolation, so the user and the kernel were already in the same address space. The only thing that was really happening on those system calls anyway was the switch of processor mode, and that is what has been removed here.
A: ...is within the guest. So yeah, the isolation boundary — we have basically totally removed that isolation boundary that was within the guest. Okay. So, to put this all together: we start off with the Linux kernel source, and we have some unmodified app that we want to run, with some libraries. The first step was the specialization. The way we do that is we somehow get an application-specific Lupine configuration, and this is that process.
A: Your application might need the network to be initialized, or something like that. So there are initialization scripts — which we don't necessarily run here — but those can be application-specific. If your application does not require the network or the disk or whatever, you might not need the initialization script for those.
A
So
that's
another
piece
that
we're
gonna
add
to
the
picture
here.
So
we're
gonna
take
the
container
image
which
has
all
that
stuff,
but
then
we're
also
gonna
put
in
an
application,
specific
startup
script
and
then,
in
this
case
we're
getting
this
by
hand
again,
it's
possible
that,
depending
depending
on
whether
or
not
you
have
a
way
to
automatically
generate
the
application,
specifically
any
configuration,
you
could
potentially
also
do
the
same
for
for
the
startup
script,
because
this
is
basically
like.
A: Anyway, after getting that, we can take all those files from the container image, the startup script, etc., and create a Lupine root FS. So now we have the kernel image and the root filesystem, which can be run by a normal monitor such as Firecracker.
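Handing the resulting kernel image and root filesystem to Firecracker is a matter of describing the VM in its config-file format (`boot-source`, `drives`, `machine-config`). The sketch below builds such a description; the paths and sizing are placeholders, not values from the talk:

```python
import json

def firecracker_config(kernel_path, rootfs_path, boot_args):
    # Matches the structure of a Firecracker --config-file document.
    return {
        "boot-source": {
            "kernel_image_path": kernel_path,
            "boot_args": boot_args,
        },
        "drives": [{
            "drive_id": "rootfs",
            "path_on_host": rootfs_path,
            "is_root_device": True,
            "is_read_only": False,
        }],
        "machine-config": {"vcpu_count": 1, "mem_size_mib": 128},
    }

if __name__ == "__main__":
    cfg = firecracker_config(
        "lupine-vmlinux",          # placeholder specialized kernel image
        "lupine-rootfs.ext4",      # placeholder root filesystem from the image
        "console=ttyS0 reboot=k panic=1",
    )
    print(json.dumps(cfg, indent=2))
```

The same JSON could equally be posted piecewise to Firecracker's API socket; the config-file form just keeps the whole VM description in one place.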
A: Well — not that Firecracker is a super normal monitor, but this could be run by QEMU or Firecracker or something. In our case, because we were going for lightweight, we went with Firecracker. So, given all that, we then ran some experiments to see if we could start to match the performance that we were getting from the unikernels. We used basically just a single machine here, with Firecracker.
A: Okay, great — so yeah, I won't spend too long on these. There are a couple of interesting results that came out. The first thing we looked at was configuration diversity: how much this application-specific configuration actually varied across a bunch of applications that we tried. We took the top 20 popular applications on Docker Hub, and we went through this manual process of finding out what their application-specific configuration was. In this graph, the x-axis is —
A: — the support for the top X apps, and the y-axis is the number of configuration options: the union of all the options that were necessary to run all of those top X applications. And what you see is that when you get to 20, you only need 19 configuration options in addition to the Lupine base.
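The curve being described is just a running union of per-application option sets. With toy data (not the real Docker Hub measurements), the computation looks like:

```python
def union_growth(per_app_options):
    """For the top-X apps, how many distinct extra options are needed in total?"""
    seen, curve = set(), []
    for opts in per_app_options:
        seen |= set(opts)
        curve.append(len(seen))
    return curve

# Hypothetical extra options for four apps, in popularity order.
apps = [
    {"NET", "INET"},   # app 1 needs two extras
    {"NET", "UNIX"},   # app 2 adds only one new option
    set(),             # app 3 needs nothing beyond the base
    {"INET", "TUN"},   # app 4 adds one more
]

if __name__ == "__main__":
    print(union_growth(apps))  # → [2, 3, 3, 4]
```

A flattening curve like this is what suggested a single shared configuration (lupine-general in the talk) instead of one per application.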
A: We only did 20 because, again, this was a manual process for us, but this gave us a sense that maybe there's a more general configuration — one that doesn't need to be specified per application — that we could use. The one which has all those 19 options for the top 20 apps is what we call lupine-general in the evaluation. So for the specialized configurations, we have lupine-base.
A: This graph is showing the union of all those options, so all 20 of the top applications can be run with this same configuration, which has 19 options in it. Okay — this table, which I know is quite small over here, has the actual number that we found for each one of them. You can see that the most any of them requires, it looks like, is 13, and some of them require 0 in addition to the lupine-base.
A: Yeah, the 19 options. So in some sense this is a little bit promising, because the application-specific approach has all those issues that have to do with state exploration and coverage and all that stuff, which are difficult problems. If there is something general that we feel confident supports a lot of things —
A: — this would be much easier for people to get behind. But that question of whether or not it's general enough is always going to be there. So, in the interest of time, I'm just going to go through these pretty quickly. We basically measured the kernel image size.
A: In this case, HermiTux is the smallest; OSv and Rumprun are slightly bigger. I believe this is because OSv and Rumprun are a little bit more extensive in their support than HermiTux is — if HermiTux evolves to support more things, it will probably start looking more like OSv and Rumprun. In any case, Lupine is competitive with them on image size. A similar story happens with boot time.
A: Here you see that we're actually doing better than some of them, although there are various configurations of the unikernels which really change their performance. With OSv, for example, we read the literature and it said sub-ten-millisecond boot time, and when we ran it, we were getting 50 milliseconds or something. When we looked into it, it had to do with the file system choice: if you change it to a read-only file system, you can get sub-10 milliseconds. But again —
A: — I think the overall story to tell here is that Lupine is in the same ballpark as these unikernels in general. Memory footprint is another one — again, a similar story. System call latency — this one gets a little bit more interesting, because this is where we start to see the benefits of having that KML patch. If you look here, we're comparing Lupine without KML and Lupine with it. This is the advantage that you get by running —
A: — without that processor mode switch for system calls, and again, compared to the unikernels it's very comparable — better in some cases. This is a system call latency microbenchmark, which is actually the best case for this Kernel Mode Linux overhead elimination, and that KML benefit goes away very quickly: if you have stuff happening in between your system calls, that tends to diminish the benefit that you're going to get.
A: This is another interesting one. We're very limited, by the way, in what we can use to evaluate these things — mostly by what you can run on the unikernels. But here what we end up with is a 33% advantage over microVM, and having looked a little bit more into this, we think this is because a lot of those security options — like kernel page table isolation, or seccomp, or things like this — are fairly expensive. If you have a single trust domain, they can be removed.
A: That gives you some more performance. Takeaways: specialization of the guest kernel seems very important — we saw big improvements even over microVM, which already has some degree of specialization. However, it does seem like specialization per application may not be super important; this is the difference between lupine-general and lupine.
A: First, that difference was a bit surprisingly small, especially because when you go from the microbenchmarks to the macrobenchmarks, you get very little overhead. And I guess the other takeaway here is just that, by using Linux, a lot of these common problems about not being able to support applications just go away. And to this point that we made before — this is a really important point — Lupine is still Linux, and so you get sort of a graceful degradation.
A: If you decide that you need more — when we started measuring these things, adding in separate processes, especially control processes that don't have high context-switch rates, had virtually no overhead that we could measure. When we started looking into running multiple processors on these things — which also is not typically supported in a lot of the unikernels — these also had fairly low overhead. So again, it becomes sort of a slippery slope, but you have some choice.
A: I'm going to fly through here because I'm out of time, but there are a bunch of benefits — I'm not trying to say that unikernels are not good for anything; that's not what I'm saying here. There are benefits that don't compare, especially language-based unikernel benefits: when you get to use the language, you get a lot of benefits from that. But yeah, I think I've got to stop there, unfortunately. The next bit I wanted to talk about was how to get this into the container ecosystem, but anyway. Yes.
A: So, what we're looking at now — I'll just give a little teaser of this next piece that we're working on. We think that there's a big advantage in having a lighter-weight guest in some of these microVM types of approaches to containers — Kata Containers or something like that. What we're trying to look at is this tension between —
A: We're trying to look at how pods fit into the picture, because some of these benefits that we're getting have to do with getting rid of that trust domain inside the guest. You asked about whether we're throwing away all the protections inside the guest —
A: — and that's okay to throw away if everything is in the same trust domain. Once you have a pod with sidecars that may not be in the same trust domain, or agents that may not be in the same trust domain, that gets a little bit more tricky. So really, the question is how this can apply in the context of pods. But I think if we get a good grasp on that, then we can get this into —
C: Great — I think this is really useful, and I think it can be used as a replacement for some of the runtimes. Kata Containers, for example, use their own kernel, and they might be able to use a stripped-down kernel for these applications to improve performance. Yeah.
A: Yes, yeah — certainly a target would be to say: hey, in your kernel, can you put these things into your configuration? The problem that we're having now is that we're not sure if the agent design — the fact that you have the agent inside the guest in the way that you do, and the way that pods are supported —
C: And then, yeah — I think they're working on a lighter-weight agent, an agent in Rust. Yeah.
A: Yeah, I saw that — I think it's really cool. Part of what I would like to be able to do is identify what it means — for the kernel configuration, for example — when you say this is lighter weight, because really it's about taking out code. Sometimes it means that you need fewer things from the kernel; sometimes it means that you don't, depending on what it is. So if we have more insight into what it is —
A: — that gives you these performance benefits — which parts should you really try to cut out, and which parts don't really matter if you have them — unfortunately, it seems like the answer to that question is probably going to be something around security. Things like seccomp, kernel page table isolation, things like this: if you need to have those security domains within your guest, those are expensive. Yeah.
C: And the other question is how this work can actually become some sort of project by itself, right? Kata is its own project, but then maybe kernel specialization would be its own unique kind of domain, right? So I'm trying to see how this can be something separate from Kata Containers or Firecracker, so that it has some of its own trajectory — so it can become sort of a separate project. Yeah.
A: Some of the stuff that we did for the paper that we wrote — a lot of these things that I'm talking about, like the scripts to run this stuff and a lot of the configurations — is open source. But right now we don't have anything kind of concrete that we feel we can contribute to the community.
C: You just run it, yeah. But in your case, I guess you're using containers — you're pulling container images, right?
A: Yeah, so in this case, for this work, what we're doing is running Firecracker directly — we're not doing it through the OCI runtimes or anything. What we are using the containers for is to pull the images: we get the tarball of the image from Docker — or containerd, for convenience — but we're not actually running them that way. In the stuff that is ongoing —
A: — we're looking into the difference when running it with Kata Containers, which has the Kata runtime, as well as something called runq — runq stands for "run QEMU," in the same way as runc — which is a more lightweight form of virtualized containers, not as fully featured. We're trying to understand the difference between running a full pod inside and not running a full pod. Yeah.
A: Yeah, right — I mean, there's been a lot of work where you have a learning phase: you run applications and you strace them, and you figure out what they're doing, and over a certain number of runs — like I said, a learning phase — you can start to develop a seccomp profile.
A: These things — I don't know, I guess to me they always feel like they're only as good as your learning phase was. And I think because of that, it leads most projects, from what I can tell, to be not as strict about their seccomp as they could be. But yeah, that's definitely related, for sure.
C: And the other thing that I just thought about is that maybe this is a good fit for a working group or something, so that a lot of the community members can collaborate and maybe come up with a better solution for isolation, and also for trimming down the Linux kernel in that situation.
A: Yeah — it feels like that's a train that's coming, and I don't know what the Kata people are thinking about it. But I was wondering if there were any groups that are kind of talking about that as well, because it seemed very related. Yeah.