From YouTube: IETF95-NFVRG-20160406-1620
Description
NFVRG meeting session at IETF95
2016/04/06 1620
Close the doors, both of them — would you mind closing the doors, please? Rafael, would you mind closing the doors? Thank you.
Well, for those of you who are not aware of it: this is the first session of NFVRG. Welcome — and if you don't know what this is about, this is not your room, I would say.
Let's start; we have a very packed agenda. This is the Note Well notice. On the IPR policy, note well that, although this is a meeting of the research group and not an IETF working group, we follow the same IPR policies that the IETF and IRTF follow.
Well, just a very short introduction on the administrivia. You have most of today's and tomorrow's slides at that URL, though we are still making a final adaptation of some of tomorrow's slides, so wait till tomorrow to be sure of those; today's are stable. Is the remote participation up? I don't see anyone here, but we now have 45 people participating in Meetecho.
Oh, that's good — and precisely for them, the remote participants, you have the mailing list information and the usual web and wiki; nothing new, I'm sure.
Since we have a note taker, now I need a jabber scribe. So you could not escape: we would need at least one jabber scribe. Let me say this: it is a very easy task — a very easy task.

— Okay, I would try, but the problem is that I have my own slot, so no, I can't even try.

— I mean, in the worst case, if nobody can even try to work as jabber scribe, I will; but anyway, I have the intention to note in the minutes that this is a rather uncollaborative community.
According to our schedule, I'm going to be the first one speaking anyway. Just a note: in the previous meetings we have had a slot for announcements of ongoing research events related to NFV, but the problem is that, well, there are too many of them. The slides would have become completely crowded by the sheer number; it would not be accurate and it would not be useful.
So we will refrain from doing that in the chairs' slides here and in the coming meetings; but if you want to share your work, you're welcome to distribute on the list any call for papers or whatever is related to research events. And let me insist on this, because what is not acceptable — and will be stopped by the chairs — is, for example, a call for any kind of commercial event, or a tool availability announcement, whatever. We're talking about research events: research conferences, special issues of scientific publications, etc.
Please be careful about that: the RG list is not for commercial announcements of any kind — events, products, whatever. Okay.
Finally, this is the agenda for this session. Apart from this welcome, which is about to finish — and the question whether you have any concern about this part of the agenda — we will dedicate approximately 55 minutes in total to an introduction to the Open Source MANO project, which I do myself; then we have Felipe talking about their experience with the performance of different kinds of virtualization technologies.
Okay, now I am acting as presenter, so it's a continuation. This is about introducing this project, Open Source MANO — OSM — which essentially consists of building an open source community around some seed projects, where one of them is our OpenMANO framework, about which we presented some ideas before, in Dallas.
If I remember well, that was the place. So the idea, basically — what is it? The idea is to work on an open source MANO stack that is, first, open source and, second — and this is very important — aligned with the information and data models agreed by ETSI NFV, though not necessarily with all of the functional decomposition; we will see later on what we're talking about there. And that is committed to applying these models in real operation and providing feedback to the ETSI NFV community.
In this application we are very much focused on something that, you know, has been one of our main goals: precisely achieving predictable performance. When you make a deployment on a diverse infrastructure, you want to be sure that the network functions behave according to a stable and predictable performance — not necessarily the highest possible, but predictable — and, well, as you can imagine, enabling an ecosystem of solutions that are based on this model-driven approach.
That's one. So it is an open source community that, at the end, is hosted by ETSI — it is hosted by ETSI as part of their strong commitment to NFV — and that provides us with a very easy and natural alignment with the ETSI NFV ISG. Within that, we intend to make it driven by service-provider requirements. You can see there the list of the current participants.
In fact, this is not completely up to date, because while flying here I learned that a couple of other companies have joined; but anyway, that will help you get an idea. The ones at the top are the ones that are supposed to be driving the process — not that they are special in any sense, it's that they are users of the MANO stack.
The ones at the bottom are supposed to be precisely the supporters, and the community is open to whoever wants to either join it formally or, as usual, contribute a piece of software, contribute ideas, contribute experiences of using the platform. That's one. So, the essential requirements, apart from providing orchestration support, etc., and addressing the functions of a MANO stack inside the NFV environment: the first one is the capability of using Enhanced Platform Awareness (EPA).
So the idea is that when you make a deployment, you don't make it in, let's say, a vanilla cloud style of "here you are, cloud orchestrator, do whatever you want with my functions, because I couldn't care less." Instead, I want to be sure that the performance is going to be within a certain range, so I want to know where the orchestrator is going to deploy my function and under which conditions, and apply those conditions to guarantee a certain performance.
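In orchestration terms, that EPA requirement boils down to matching a VNF's declared platform needs against what each candidate host exposes. The following is a minimal illustrative sketch — the field names (`capabilities`, `hugepages`, `cpu_pinning`, `sriov`) are invented for the example and are not OSM's actual descriptor schema:

```python
# Hypothetical sketch of EPA-aware placement: keep only the hosts whose
# platform capabilities cover every requirement the VNF declares.
def epa_candidates(vnf_requirements, hosts):
    def satisfies(host):
        caps = host["capabilities"]
        return all(caps.get(feature, False) for feature in vnf_requirements)
    return [h["name"] for h in hosts if satisfies(h)]

hosts = [
    {"name": "site-a-host1",
     "capabilities": {"hugepages": True, "cpu_pinning": True, "sriov": True}},
    {"name": "site-b-host1",
     "capabilities": {"hugepages": True, "cpu_pinning": False, "sriov": False}},
]

# A data-plane VNF asking for pinned cores and SR-IOV can only land on the
# first host; one asking only for hugepages fits on both.
dataplane = epa_candidates({"hugepages", "cpu_pinning", "sriov"}, hosts)
controlplane = epa_candidates({"hugepages"}, hosts)
```

A real orchestrator would of course also weigh capacity and placement policy; the point here is only that EPA turns placement from "anywhere" into constraint matching.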
Looking at that, what is our real experience? In the diversity of organizations like Telefónica — I cannot speak for the rest of the operators in the world, but Telefónica is extremely diverse and has a very wide footprint across very diverse countries, despite the fact that most of them speak Spanish. Believe it or not, the differences between Argentina and Spain are huge, and the differences between Argentina and Chile, being neighbors, are even larger. So that requires, for example, these two "multi" things. What is that about being multi-VIM?
It's difficult to mandate "you are going to use OpenStack, and only OpenStack, and a particular version of OpenStack, all over our whole footprint." And that gets worse if we talk about our customers, etc. So the idea is precisely to be able to cover a wide variety of different VIMs.
Multi-site, by definition, means that it will work from the beginning with several sites: a single OSM instance will work with several sites from the beginning. And with the idea of rethinking the architecture, we identified two main building blocks: on the one side, what we call service orchestration, and on the other, what we call resource orchestration.
That's one. So we have decided — as I said at the beginning — not to start from scratch. We are not rewriting the whole thing again, because several of the project partners had pieces of code that they were willing to contribute, adapt and reintegrate into the first release of OSM. So we have a resource orchestrator, OpenMANO; we have Juju charms to manage precisely the VNF modelling and to do VNF configuration; and we have the Rift.io Launchpad for service orchestration, workflows and so on.
This approach has some advantages. Among other things, before the formal start of the project we were able to demonstrate some results already of what we wanted to achieve, and, well, we are in a position to show by example in which direction we want to go: before writing grand plans or whatever, we have running code to show in which direction we want to move the project. But these components are only an initial starting point from which everything will evolve.
When it comes to lifecycle management — lifecycle management being about what happens when you need to do something with the state of a certain component that has been virtualized — we believe that all lifecycle management, at the end, is affecting the service, and so it has to be built with a service view. It doesn't make that much sense otherwise — and this is a point that we can discuss — it doesn't make sense, in our view, to say "no, I have a beautiful virtual machine that has to scale." Come on.
You don't have to scale the virtual machine; you have to scale up the service. The goal is the service, not that beautiful machine. You have to inform the service that you are doing something in the virtual machine, because that will affect the function in general; it will affect the attachment points; it will affect, at the end, how you are interacting with the outside of the service. So all of these decisions are something that we want to be taken at the service level.
The objection you can raise is: well, the VNFM has lifecycle management as part of it, and so does the NFVO, and you have resource orchestration in both parts — why not unify them in a single functional component? For sure, that may be true; but that implies that, if you're using OSM, you would want an object that you can point to and say: this is the VNFM, I can pull it out and plug in something different.
But let me insist: we are trying to achieve fully functional environments. The focus is much more on function, and not necessarily on architecture. That's one. And, again, it is a model-driven approach.
This is something that you have probably heard several times when talking about things like ODL, but essentially the idea is that you keep consistently using the same models for development, for testing, and for final deployment
and service provisioning. And that implies that you can also support those so-fashionable things like DevOps and continuous integration. And we are strongly committed to contributing back to the ETSI NFV ISG. Next one, please — trying to accelerate a little bit. This is a list of the challenges we see; there are several slides on this. One is when you talk about resource orchestration.
Well, I would say that we should not be thinking any longer in terms of boxes; we should be thinking about which are the functions that we can virtualize — and a function is a forwarding plane, a filtering function, a GTP processing function or whatever, rather than the boxes we were used to thinking in. We're talking about functions, about building services by combining those functions. And, let me insist, current nodes have several functions: there is the decomposition into functions and there is the local resource orchestration, and maybe this is one of the essential challenges that we have — we have to support that new decomposition. That's one, when it comes from the service side.
That, on the one hand; and the second point is facilitating a direct integration with OPNFV, so that we are part of that ecosystem. Next one, please.
This is just to finish: we ran a demo at the recent Mobile World Congress in Barcelona this February, using — I can tell you — a very realistic scenario of a virtualized EPC using VoLTE for an enterprise environment. It is completely automated; it runs using the Rift.io Launchpad interface, and it served to prove essentially the main concepts that are behind OSM when it comes to full end-to-end service automation, support for EPA, the multi-site and multi-VIM capability, and the combination of the different VNFs depending on their performance requirements. And, well, it has helped us a lot in identifying limitations and potential future issues. Next one, which I guess is the last.
You have there a couple of videos: on the left-hand side, what the demo was about; on the right-hand side, if you are more curious and you have enough time, more detail about how the demo infrastructure was built, at that address. I won't show anything — don't panic about that — today. But if you're curious, do you want to have a look?
I will be more than happy to try to address any question you have, either here or when you go back home, whatever. And yes, the last one, please. This is simply to remember: if you are interested in having a look at what we are doing, how we are doing it and how we are planning to continue, and you want to join the party, there you have the pointers.
Okay, thank you. And also a reminder about the blue sheets: in case you haven't yet, please do sign them. Thank you.
Okay, thanks. Good afternoon, my name is Felipe Huici. I'll be talking for the next 20 minutes or so about some experiences we've had while doing performance tests with different virtualization technologies, ranging from VMs, unikernels and tiny distributions of general-purpose OSes to containers.
Next, please. Okay, so in the beginning the VM king was in charge of everything, and everything was good — that was the only game in town. And then what happened was we got a couple more options around virtualization, for running VNFs and for other things as well.
One of them is what people are calling Linux tinyfication: basically taking sort of stripped-down kernels and distributions and building VMs around those. Click, please. Along with that, we have something called unikernels, which are specialized VMs on top of basically minimalistic OSes — so not a general-purpose OS like Linux or FreeBSD, but some sort of minimalistic one — basically single-application VMs, and I will explain these in more detail later on. Quick. And, of course, containers. And so what happens is that the VM king is not happy anymore.
He's not the only game in town anymore. But, click, the question is: where are we in terms of performance? And performance can be many different things, so we took a number of different metrics. One of them is how big the virtual machine image size is, because it matters in terms of shipping it out and lifecycle management. Then memory consumption, of course; we have things like the VM creation time, destruction time, migration times; we have delay, which is especially important for NFV; and we have, of course, throughput. Next, okay.
So we have a sort of line that stretches between higher overhead and lower overhead. Click. We probably have the standard VM towards the higher-overhead range of things; then we have these tinyfied VMs, which probably have lower overhead; then we have these specialized VMs called unikernels; and then, all the way on the right, we have containers. Right? But the question is — probably this is the order, but how far right or how far left are they? Next, right?
So it could be that unikernels are actually close to containers, or not — click — or it could be that unikernels, for some metrics, are ahead of containers, or not. So the question is: can we actually quantify some of this? I already mentioned the metrics; let me say a little bit about the methodology. For VM image size and memory consumption we use standard tools like ls and top; if we're on Xen, then we use xl.
For the VM creation time, what we do is create the VM, then run a SYN flood against it, and then measure when we get a RST back — basically RST-based boot detection. For throughput we use iperf, and modified versions of iperf: iperf will run on a Linux VM, okay, and it'll run in containers, okay, but if you're running on a minimalistic OS that doesn't have the same API as Linux, for instance, you need to slightly modify iperf to run on those. For RTT…
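The SYN-flood boot-time methodology described above can be approximated from user space without raw sockets: a TCP connect attempt that fails with "connection refused" means a RST came back, i.e. the guest's network stack is up. A rough sketch — our illustration, not the speakers' actual tooling — assuming the probed port is closed on the guest:

```python
import errno
import socket
import time

def time_until_net_ready(host, port=1, timeout=5.0, interval=0.01):
    """Probe with TCP SYNs until the target's stack answers. ECONNREFUSED
    means a RST arrived (stack up, port closed); a successful connect also
    counts. Returns elapsed seconds, or None on timeout."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(interval)
        try:
            err = s.connect_ex((host, port))
        except OSError:
            err = -1
        finally:
            s.close()
        if err in (0, errno.ECONNREFUSED):
            return time.monotonic() - start
        time.sleep(interval)
    return None
```

Against localhost this returns almost immediately; to measure creation time you would call it right after issuing `xl create` (or the KVM equivalent) and take the elapsed value as the boot-to-network-ready time.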
A mic problem? We'll get there eventually. No? Okay. So, as I said, on that line we had sort of four points. For standard VMs we use a Debian-based Linux VM. For the tinyfied VMs we use something called Tinyx, which is essentially a sort of stripped-down Linux kernel plus a very small distro — essentially BusyBox and iperf and nothing more — and I'll give you more details about that.
For unikernels on Xen, we use MiniOS as the minimalistic OS and we put something called miniperf on top, which is a modified iperf, basically; and for KVM we use OSv as the minimalistic OS and then iperf. Containers is Docker. Click. So now I'll speak a little bit more about these: containers you're aware of, and standard VMs you've been aware of for maybe 25 years; unikernels are a little bit more complicated, and I'll also say a few things about the tinyfied VMs.
Okay, so a standard VM is an application on top of a distro. You have a lot of stuff — different layers, the kernel, services, libraries; you have a lot of elements. Quick. But ultimately you only use a few of them: in this case, if you want to run nginx, there are only a few libraries you use, only a few kernel modules you use, and the rest is sort of just sitting there idle. Okay.
So the idea — and this is not our idea; there are a lot of people working on what they call Linux tinyfication, especially from the embedded world — is to sort of automatically build the smallest distros and kernels possible while still keeping the Linux ABI, because then you keep compatibility with applications. And so what we call Tinyx is a tailor-made distribution — this is an example with nginx, but we used one with iperf, of course. Click. And just to give you an idea:
this is what the ps command looks like on Tinyx. It looks busy, but actually — click — most of those are just kernel threads and processes, and — click — again, it's only the bottom part that's the user-space stuff. And if you look, you have four nginx threads, you have an SSH server so that we can log in for convenience, and then you have a shell, and nothing more. So there's really nothing much running on there. Okay, click — and click again, please. Okay, click!
So that was the tinyfied VM. Unikernels — go ahead, please, once more. Okay, so what's a unikernel? It's a specialized VM: normally a single application plus a minimalistic OS. These minimalistic OSes generally have a single address space and a cooperative scheduler, so there's very little overhead — there's no context switch, there's no preemption, things like that. And, of course, there's no extraneous code. Click.
So what about what we used? To do the benchmarking we built a unikernel that has just iperf on top of it. So on Xen — click, please — the application is iperf and it sits on top of MiniOS; and on KVM we have iperf and then OSv at the bottom. OSv is just an open-source minimalistic OS — not only for KVM, but originally for KVM. Yep.
Okay. You should know that there are quite a few optimizations to Xen and KVM when we run our small VMs and unikernels — I don't have time to go over those — whereas the container numbers are just off-the-shelf container numbers; we haven't gotten around to optimizing those. Okay, so some of the results. The first one: image size. How big are they? These are the blue bars — you can see it's in megabytes — and at the bottom, on the x axis, you have all the different setups, from left to right.
You have the standard VMs on Xen and KVM; you have the containers; you have unikernels on KVM and on Xen; and then you have this Tinyx small VM on Xen and KVM. So the blue bars are image size, and sort of the takeaway message here is that the unikernels — the unikernels on MiniOS — are the smallest, with containers coming in close; and then you have things like Tinyx, which is still pretty small, 3.7 MB, and so forth. For memory usage you can see that containers, unsurprisingly, win at 3.8 MB, with unikernels next.
For VM creation time — and this is with unoptimized containers — containers come in at 1.7 seconds, and then we have Tinyx beating that at 400 milliseconds; the smallest are the unikernels on Xen at 31 milliseconds, and we can also get that down to under 10 milliseconds. Next. For delay you see the containers win here, at about four milliseconds; unikernels are not far behind at five and nine milliseconds on Xen and KVM, and then come Tinyx and the standard VMs. Not surprisingly, some of the standard VMs have the same timings, because the kernel itself is basically the same. Next: throughput, TX and RX — I'm just really summarizing here.
We've optimized TX on Xen, so we get the highest throughput with the miniperf on Xen; containers are not far behind. RX is sort of the same story. Okay, so the conclusions. Basically, what people repeat is that VMs have really good isolation but are heavyweight, so you have to choose between either containers or VMs depending on whether you want isolation or performance. What I'm trying to suggest with these numbers is that it's a lot more nuanced.
It's not such a clear winner one way or the other, depending on the metrics and depending on what you mean by a VM. So the sort of takeaway message is: if you thought this was sort of clear and easy, it's actually not. Things like unikernels and sort of tinyfied VMs provide you with other points on that line, which may make them a viable alternative to containers —
for instance in multi-tenant deployments where isolation is a must, at least until isolation in containers catches up. The last one — yeah, the VM king is sort of happy again. And the last one: I'm not going to read through this, but this is a group of potential contributions that we could make towards this containers draft. Thank you.

Thank you.
Yeah, we haven't looked at optimizing containers because we just haven't gotten to that. So you may say this is a somewhat unfair comparison, because we've optimized the heck out of Xen, KVM and the unikernels and we've sort of taken vanilla containers — and that's a fair enough comment. You should take orders-of-magnitude comparisons away from this, rather than the actual numbers.
Very nice work. So, one of the things — we have an active track here on containers, of course, and we are actually right now investigating the issues — and what we're finding, I think, is that the security implications are one of the biggest blockers to container adoption, besides the other issues around, you know, VNF vendors moving slowly and also the OS support. So what is your take on the security front? Have you done any benchmarking work there so far?
So, in essence, what I'm saying is: for example, you know, VMs are secure by nature. When you go to the container landscape, say through cgroups and all of that, we can add those additional security hooks — but how will that impact the performance? That's what I meant. Basically, anything you add to make the container deployment more secure — how does it then start comparing on performance? Yeah.
Okay,
so
want
to
talk
about
performance,
high
performance
nfe.
Give
you
a
bit
of
background
of
where
we're
coming
from
is
that
we've
actually
been
using
building
our
network
appliances
using
a
commodity
office,
commercial,
off-the-shelf,
Intel
processors.
For
about
14
years
now,
inside
our
appliances,
we've
been
achieving
horizontal
scale
using
load
balancing
techniques
and
something
that
was
kind
of
like
a
service
Cheney,
and
recently
we
demonstrated
what
one
terabit
per
second
in
ten
rack
units
of
commercial
off-the-shelf
hardware,
with
in
conjunction
with
with
Dell
and
Intel.
So
next
slide.
Please — I'll tell you how to get information on that later. So the way we're coming at this is we're looking at being, you know, a transparent middle box — think of it like a bump in the wire, possibly multiple wires — in which the traffic is being routed in an asymmetrical manner. I'll talk a bit more about that later too.
Now, one of them is to think about breaking your jobs down into multiple independent threads, locking those threads to physical cores where they get the whole core to themselves, connecting those threads to the physical hardware so that they're, you know, really closely connected to the drivers, and using the software technique of zero-copy forwarding. Next, please.
We found that any use of semaphores between threads, even if they're not contested, really cuts down on your memory bandwidth. So our approach is to slice up the network data so that each thread can work independently on its chunk of data. So, if you're an Internet service provider, a natural way to slice this is to put individual subscribers' traffic on different processors and not have to have any crosstalk between these.
Other approaches: if you don't think about subscribers, think about using IP-address hashing or something like that. And when I say thread here, by the way, there are different kinds of approaches you can take: one approach is, you know, a lightweight thread in a process, or you can have independent processes, or even individual machines — there are different ways of breaking it down. Next, please.
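The slicing idea above can be sketched in a few lines — this is illustrative only, not the appliance's actual code: hash a packet's subscriber/source IP to pick an owning worker, so each worker touches only its own flows and no locks or semaphores are needed.

```python
import ipaddress
import zlib

def worker_for(subscriber_ip: str, n_workers: int) -> int:
    """Stable mapping from a subscriber/source IP to a worker index.
    crc32 is an arbitrary choice for the sketch; any stable hash works."""
    packed = ipaddress.ip_address(subscriber_ip).packed
    return zlib.crc32(packed) % n_workers
```

Every packet from the same subscriber then lands on the same worker for the lifetime of the mapping, which is what makes the lock-free per-thread state possible.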
Okay, we found it also important, when assigning these threads, to really lock them down to the cores that they're allowed to use. So in a virtual environment this is two things: on the host, you want to dedicate specific hardware cores to specific virtual machines; and then, within the virtual machines, you have to lock your threads to specific cores — if you've given your virtual machine more than one core. The reason for this is you get the benefit of caching — instruction caching and data caching — and also you want the packets to arrive at the right core.
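On Linux, the in-guest half of this pinning can be done with the standard affinity call; a sketch (not the speaker's code — and note the host must separately pin the vCPUs to physical cores, e.g. via libvirt's `vcpupin`, for it to mean anything):

```python
import os

def pin_current_thread(core: int) -> None:
    """Restrict the calling thread/process (pid 0 = self) to one core.
    Inside a guest this pins against vCPUs; it only helps if the host
    in turn pins those vCPUs to real hardware cores."""
    os.sched_setaffinity(0, {core})
```

After this call the scheduler will never migrate the thread, so its working set stays warm in that core's caches.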
The other thing worth pointing out is that on some of these blades, for example, that have multiple CPUs on them, not all the CPUs are connected to the hardware in exactly the same way — that's the NUMA architecture. There are also different memories, and what's closer is faster, differently on different processors. So we found that sometimes one of the CPU sockets will be connected to the PCI bus for the interfaces and the other one is not, so it's actually more expensive to get the traffic to the other CPU.
Another thing that we found very important is to use physical-function passthrough, or SR-IOV. Now, everyone may not know what this is. Physical-function passthrough, in a nutshell, is when you give the device driver — you give the actual device hardware — to the virtual machine: it owns it, and it disappears from the operating system of the host.
SR-IOV is a technology in which the hardware takes one physical interface, gives it multiple MAC addresses, and sorts the packets into different queues for each of your threads. So now the hardware is sorting the packets for you, and you don't need a piece of software sitting there between the physical function and the virtual machines. I think I've explained that now — thanks.
Typically you say: I want to allow a flow in if I saw the SYN packet go out, and then I may allow the traffic. If you're only seeing the incoming traffic, you can't make the right decision. And anyway, as we know, in the Internet packets can take multiple routes — they can take multiple links — so we want to bring those together for processing. So, next slide, please. This is a typical way we put together a solution; the devices are on the left.
Those would be the links you're intersecting — so these would be, like, transparent: you want to make a bump-in-the-wire view there. So, traffic coming in, say, on the top one, the red path: traffic comes to a device — and I'm showing this like a service function chain now — where we hit a classifier, choose a virtual machine to send the traffic through, and the virtual machine returns the traffic to the link it's supposed to go out on.
I can have another link on the bottom, shown by the blue path, with traffic in the opposite direction being sent to the same virtual machine, because it's the same subscriber, or it's the same IP address. And you can think of it as there being two things going on here: there's load balancing — we're trying to make use of all the virtual machines in a fair way — and also we're consistently removing the asymmetry from the traffic. Next slide, please.
So, something I also want to talk about is what we think of as the east-west bottleneck. Every time the traffic goes in and out of a virtual machine and over the switch fabric, you're using up interface bandwidth, and, as I mentioned earlier, because the software generally can keep up with the interface rates, you really become bottlenecked on the bandwidth. I'd also point out that, as you encapsulate traffic, you make the traffic bigger, and traffic that comes in full on a link can't all go out once it goes to the service function.
So maybe this is kind of obvious, but if you have a two-touch solution — if your packet goes through two machines — you really need twice the gear of a one-touch solution. That's okay, I guess, for the functions themselves, but if you're adding extra components just to touch the traffic in order to forward it, it becomes an extra cost. Okay, next, please. So, one of my suggestions for service function chaining — everyone here may not be familiar with it, but there's a service function forwarder, the SFF, and there's a service function.
These are identified as architectural components. We think it's really a good thing to put them in the same thread. So, even though architecturally they are distinct components, we suggest putting them together, because if you have a separate software component to do the service function forwarding, you're dedicating at least a core — or an interface — to do the forwarding, you're adding extra latency and extra queuing, and also, if it's on a different device, it's consuming your east-west budget. Next, please. So:
this is a service chain — one service chain, which we think of as one packet through two functions — with the service function forwarder inside the function itself. So a packet can come in, be classified, and a virtual machine chosen; the virtual machine, because it's got the SFF function — (okay, five minutes; okay) — can forward it to the next virtual machine, and then it can be forwarded the final hop. Whereas, next slide:
if there's an external forwarding function, it has to go to the SF, to the SFF, back to the next SF, SFF — so, right, that function on the bottom really has every packet going in and out twice, and it's really chewing up a lot of interface bandwidth. Next, please. So, my point about minimizing encapsulation overhead: it's not really an MTU question to us, because we can control the MTU in this environment.
H
It's the problem that every time you encapsulate, you make the packet bigger, which reduces the effective bandwidth, because your packets are bigger: you know, 10 gigabits in would need 10.1 gigabits out, so you may not be able to take 10 gigabits in. So in the service function... sorry, in an NSH environment, in the kind of architecture I was showing you, we would choose direct encapsulation on top of Ethernet: we would take the MAC header followed immediately by the NSH, when it's attached on the same layer 2 segment.
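The encapsulation tax described here is simple arithmetic; a sketch, where the overhead byte counts are illustrative assumptions (roughly 8 bytes for NSH directly over Ethernet versus roughly 50 for a VXLAN outer header), not measured values:

```python
def effective_goodput(link_gbps: float, payload_bytes: int, overhead_bytes: int) -> float:
    """Payload bits per second actually delivered on a link of link_gbps,
    once each payload_bytes packet carries overhead_bytes of encapsulation."""
    return link_gbps * payload_bytes / (payload_bytes + overhead_bytes)

# Illustrative overheads (assumptions, not the speaker's numbers):
nsh_over_ethernet = 8   # NSH base + service path header, no metadata
vxlan_outer = 50        # outer Ethernet + IPv4 + UDP + VXLAN headers

print(effective_goodput(10, 1500, nsh_over_ethernet))  # ~9.95 Gb/s
print(effective_goodput(10, 1500, vxlan_outer))        # ~9.68 Gb/s
```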
H
We would resort to using an IP encapsulation only if the devices were separated by an IP network. And there are different metadata types proposed for NSH, and we would take MD Type 2 when we don't need the extra space for the metadata. Next, please.
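As a hedged illustration of why MD Type 2 saves space when no metadata is carried, here is a sketch packing a minimal NSH header per RFC 8300; the field layout follows the spec, but the helper itself is hypothetical, not anything the speaker showed:

```python
import struct

def nsh_header(spi: int, si: int, ttl: int = 63, next_proto: int = 0x3) -> bytes:
    """Pack a minimal NSH header (RFC 8300): MD Type 2 with no metadata
    TLVs, so only the 4-byte base header plus the 4-byte service path
    header are emitted (Length = 2, counted in 4-byte words).
    next_proto 0x3 = Ethernet inner frame."""
    md_type, length = 0x2, 2
    # Base header: Ver(2)|O(1)|U(1)|TTL(6)|Length(6)|U(4)|MDType(4)|NextProto(8)
    word0 = ((ttl & 0x3F) << 22) | ((length & 0x3F) << 16) \
            | ((md_type & 0xF) << 8) | (next_proto & 0xFF)
    # Service path header: Service Path Identifier(24) | Service Index(8)
    word1 = ((spi & 0xFFFFFF) << 8) | (si & 0xFF)
    return struct.pack("!II", word0, word1)

print(nsh_header(100, 255).hex())  # 0fc20203000064ff
```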
Okay, and then, I think almost at the end, I was reflecting on the question everyone asks, which is: okay, so how many VMs do I need? And unfortunately, the answer right now is up to you.
H
Does your infrastructure support that? Am I able to assign cores, to assign threads to cores? Can my virtual machines accept that configuration of which cores to run the threads on? And then, you know, the real hard part comes in thinking about the path of your packets: how many interfaces do they have to go in and out of to satisfy your solution, and are the switching and the interfaces bottlenecking your solution? So it may not just be the VM performance that's bottlenecking it. Next, please. So.
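The thread-to-core assignment mentioned above can be sketched at the OS level; a minimal Linux-only illustration (the helper name is made up, and this is the kernel-level operation that, for instance, libvirt vCPU pinning or a DPDK core mask ultimately performs):

```python
import os

def pin_current_thread(cores):
    """Pin the calling thread to the given CPU cores (Linux only)."""
    if not hasattr(os, "sched_setaffinity"):
        return None  # platform without affinity control (e.g. macOS)
    os.sched_setaffinity(0, set(cores))  # 0 = the calling thread/process
    return os.sched_getaffinity(0)       # report the affinity now in force
```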
So: zero-copy packets; use your hardware acceleration, pass-through, SR-IOV; slice up the traffic; really think about the east-west traffic and try to minimize that; minimize the number of touches, and to do that, put your forwarding decisions in the thread of the function itself; and keep the packet overhead low. I think my last slide is the next one, and I've included a link to our blog post on the terabit-per-second performance.
H
It's, you know, marketing material, but it does have a lot of details on all the differences: you know, exactly what hardware we used, how we assigned functions to cores, how we wired it up, and all the different parts we purchased and put together for the demonstration. And some of this is also discussed in the draft, in the SFC section, but that's just something I contributed recently. If you don't have any questions, thank you.
D
H
I didn't think about that, but yeah, I think you could think about overlays there as well. So if you can remove, you know, an extra thread whose only job is to, say, encapsulate and do the next hop, I think that's a good performance choice. Now, it may be, or maybe you may want to trade that off with other objectives, but from...
C
D
So what I meant was, essentially: with what you're seeing in the lower layers combined with such an idea, you don't get the full deployment benefit. That's all I meant about overlays.
A
F
And the second one is: we've been doing performance optimization for VNFs for a while, and performance measurements. So a lot of the things you touched on are certainly tricky. Another one that's even worse is that a lot of the NFV functions are dependent... how expensive they are is completely dependent on the traffic matrix coming in, on the traffic; I'm thinking of DPIs and things like that. How do you cope with those?
H
A
This is the kind of question I would love to see on the list, please. I mean, if you have some particular patterns that point to valid ideas, this is the kind of thing that would help us make this evolve and make a real contribution. Now, I'm sorry about this... no, go ahead.
G
Then, one thing that more and more, especially the data center operators, are complaining about is, you know, the power consumption and the heat dissipation. When you're looking into the performance, are you looking at any of those? You know, what is the power cost per bit, and the heat dissipation cost per bit? Because this is also something, those thermodynamics, that is giving them more and more headaches. Yeah.
H
Power, as you know, is one of my second criteria. If you go to the report, you can get the exact numbers that we talked about; we talked about watts per gigabit, or gigabits per watt, I can't remember which it was. We found it just a little bit more than an appliance, and you can see some comparisons that are made there. Good.
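The gigabits-per-watt metric mentioned is a one-line calculation; the numbers in the example are made-up illustrations, not the report's figures:

```python
def gbps_per_watt(throughput_gbps: float, power_watts: float) -> float:
    """Energy efficiency of a forwarding platform: useful bits moved
    per second per watt of electrical draw."""
    return throughput_gbps / power_watts

# Made-up illustration: a 1 Tb/s demo drawing 500 W -> 2 Gb/s per watt.
print(gbps_per_watt(1000, 500))  # 2.0
```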
A
Thank you. The last... the last one should be pretty quick, because Ronke asked explicitly to make it very short. So, there you go. Oh.
D
Yeah, I'll just go to the next slide, please... fade away... yeah, thank you. So when we started off NFVRG, one of the key goals was, I mean, not just drafts and ideas, but influence: right, how to influence real implementations that are happening. And the third thing: we wanted to make a direct impact on the most relevant open source projects, which include OpenStack, OpenDaylight, and OPNFV. And essentially, what we did was: we saw that the policy-based resource management work item has had the maximum interest in the community.
D
You know, I think 10-plus, in fact close to 15, drafts, and, in fact, specifically on the resource management and policy, it is something we've been working on with the NFVRG team and also the OpenStack community for almost the last four months, refining a proposal, and the output of it is what I'd like to call a policy-driven platform-aware scheduler.
D
Essentially, if you look at, you know, the OpenStack scheduling framework: several presenters talked about the need for platform awareness. You know, Diego talked about it, Dave talked about it; I mean, basically, all of those, and hardware acceleration: how do you bring them on board? So some of the key problems we see with the current OpenStack scheduling framework are that the framework is not extensible. I mean, if you want to add a new feature, you have to wait six months, right?
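The scheduling pattern being extended can be illustrated with a toy, Nova-flavored capability filter; the class and method names here are illustrative assumptions, not the actual Nova BaseHostFilter API:

```python
# Toy capability filter in the style of an OpenStack Nova scheduler filter:
# keep only hosts whose advertised platform features satisfy the request.
class PlatformAwareFilter:
    def host_passes(self, host_caps: set, required_caps: set) -> bool:
        # A host passes when it advertises every required platform feature.
        return required_caps <= host_caps

    def filter_hosts(self, hosts: dict, required_caps: set) -> list:
        # hosts maps host name -> set of capabilities, e.g. {"dpdk", "sr-iov"}
        return [name for name, caps in hosts.items()
                if self.host_passes(caps, required_caps)]

hosts = {"node1": {"sr-iov", "dpdk"}, "node2": {"dpdk"}}
print(PlatformAwareFilter().filter_hosts(hosts, {"sr-iov"}))  # ['node1']
```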
D
Basically, you know, with the next OpenStack release. And also when it comes to usability, in terms of, you know, whether you made the right placement decision: how do you actually verify whether things happened? Even there, there is a gap. And also the other big problem you see is, like, you know, the lack of a single representation for monitoring and placement. That means, essentially, you know, when resource utilization changes, you want to go verify and see whether, you know, this is still the right placement or it should be something else.
D
You know, and, you know, Tacker (OpenStack Tacker, which is another OpenStack orchestration project), or, okay, any other component... you know, it can be anything, because the way we are defining the northbound API is through an OpenStack Heat template, and we're not constraining the consumption model; it can be easily consumed. So that's all I had.
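A hedged sketch of what that Heat-template northbound API could look like, expressed as a Python dict: the resource layout follows Heat's template format, but the "placement_policy" metadata key is a hypothetical policy hint for illustration, not an actual OS::Nova::Server property from the proposal:

```python
# HOT-style template body as a Python dict. "placement_policy" below is a
# made-up policy hint, not a real Heat/Nova property.
template = {
    "heat_template_version": "2015-10-15",
    "resources": {
        "vnf_vm": {
            "type": "OS::Nova::Server",
            "properties": {
                "flavor": "m1.large",
                "image": "vnf-image",
                "metadata": {"placement_policy": "colocate-with:dpdk"},
            },
        },
    },
}
print(template["resources"]["vnf_vm"]["type"])  # OS::Nova::Server
```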