From YouTube: IETF99-BMWG-20170717-0930
Description
BMWG meeting session at IETF99
2017/07/17 0930
https://datatracker.ietf.org/meeting/99/proceedings/
This is our working group meeting for IETF 99. If you're not subscribed to the mailing list and you'd like to do so, I've grabbed these slides and put the link in, so join us on the mailing list. How many people are attending BMWG for the first time? Quite a few. Thank you, about seven or eight. Very good, so welcome. You'll find that this is a really easy group to join.
If you have some testing background, there isn't that much literature you absolutely need to have read to join our discussion. A few RFCs will come up as frequent references, and those are the ones you can start with to get a good background; and please come and talk to either one of us afterwards.
We have several documents here that describe the rules for IPR disclosure regarding contributions. Contributions are oral statements, or written or electronic communications, made at any time or place, which are addressed to any of these bodies: the IETF plenary, the IESG, any IETF mailing list, any working group (such as the one we are in right now), any birds-of-a-feather session, as well as the IAB, the RFC Editor, and the IANA. All contributions are subject to RFCs 5378 and 3979 (as updated). So we basically ask that everyone be aware of these things; when you registered for the meeting you checked the box that said you were aware, and as a result we're expecting everyone to follow the rules.
All right, so on to the real work. Here's our agenda. We asked for volunteers as note-taker. Okay, Marius, thank you. Anybody else? We usually try to get two, so that while one note-taker is busy another can help out. Anybody else want to volunteer? All right.
Well, we'll hope to get some help with that, and in fact feel free to join the Etherpad, the details of which I can't easily provide right now because my PC is locked up here, but we can get that going. So here's our plan. Oh, we still don't have a Jabber scribe, right? Okay, Sarah is going to take care of jabbering. We've talked about IPR, and the blue sheets are circulating; I noticed that you need to sign the blue sheet, please.
It's right up front here. Try to keep the sheet in the back, and if anybody walks in the door, hand it to them and suggest that they sign it; that would be great. All right, so here's our agenda: we're first going to talk about the working group drafts and the status of the working group.
We had good support at the last meeting for adopting it; many people said on the mailing list that they would support it, but only a few said they would actually review the document when I reminded people that that's part of supporting a document. Writing one word of support in an email reply doesn't get it over the edge. The truth is, we need expertise in these areas to be able to complete the work in a manner that's going to be useful to the industry.
That's the kind of thing we're looking to see demonstrated in review comments: when you say you support a draft, that you've reviewed it and you've raised questions about it. That's how you demonstrate it. So we've got that draft, and it is moving forward toward adoption, because that's the best way to put it. Then we've got several other drafts which have been proposed.
One is on service function chaining; Kim is here to present that. Then we had this other one, considerations for benchmarking network virtualization platforms in the data center environment, and that's unfortunately going to be taken off the agenda today; the presenters weren't able to join us, and there hasn't been an update on that either.
We had hoped they would come through and join us. So then, after the drafts we discuss today, we'll have a rechartering discussion: we'll put down our ideas, get some feedback from the group, and then discuss a schedule for rechartering, as we have very nearly completed all of our chartered work items, and that's a good thing. The question is: do we have enough work to continue?
The final thing we'll do today, and this is the kind of thing we've been able to do a little of when we have a long session like this, is to take a look at some related presentations and topics from other organizations. For example, last summer we heard from a software-based traffic generator design team, the MoonGen traffic generator team.
They were here with the Advanced Network Research Workshop last year and stayed on to give our group a talk about the MoonGen traffic generator. We also heard from the FD.io CSIT continuous-integration testing team about the work they were doing to test the FD.io open-source VPP virtual switch. So this year, what we're going to hear is a talk about the OPNFV VSPERF project.
We've heard about that project here many times, because that group has made contributions to our work in the area of test setups and configuration parameters for repeatability. Now we're going to hear a bit about what they've done with recent testing and the results they've collected, and there are of course implications for our future work; I'll try to emphasize those as I go through.
Those drafts went through the IESG the same week, which made it very interesting, with lots of traffic on all of this. Then most recently we've gotten IESG approval of the data center benchmarking drafts, which we'd been working on here for quite a while, and that generated a lot of commentary and additional feedback. So now the working group is really seeing all this feedback. It's really quite extensive, and it informs us in ways we need to know in order to get our drafts through more efficiently.
And look at the very next one that's going to be announced, probably sometime today I would guess, or this week at least: RFC 8172, on VNF (virtual network function) and NFV (network function virtualization) infrastructure benchmarking considerations. This was the first step in our working group charter pivot to begin taking on the benchmarking of virtualized network functions, which was a key step in our recharter
C
Our
during
activity,
the
last
time
around
we've
previously
done
a
benchmarking
of
physical
network
devices
with
their
dedicated
hardware,
but
now
we're
expanding
our
focus
into
this
virtual
network
function
and
their
infrastructure
potential.
So
this
was
first
Stefan
and
glad
to
get
that
complete
if
you're,
if
you're
beginning
to
work
in
this
area.
This
is
an
excellent
draft.
There are considerations and contributions there to take into account as we begin this. So we'll have a discussion of rechartering, which is our charter update discussion, and very soon we'll have our supplemental BMWG page announced. Sarah's been working on that and has created the place where we'll keep our
interesting information: for people who haven't been able to attend a meeting, the kinds of things I said when we got going here, how easy it is to join the group and what you need to do first, second, and third, things of that nature. In the past we've also included reviews of particular documents there, analyses of comparisons, and so forth. So it's been useful to have our own supplemental page, and unfortunately Comcast decided that they would no longer provide web pages for residential customers.
So we're going to have this discussion. I think I will hold the presentation there and ask if there are any questions about the status of the working group or its direction. No? Good. Okay, so it's been a very successful last year or so in getting all of these drafts moved forward, and that helps us make space, in our attention span and on our working group agendas, to be able to adopt some good work. That's why we're beginning to consider these things in earnest.
With feedback from Al: the last time this went out for working group last call, there were some comments that came in, and those were addressed when we uploaded the draft. Al caught something that maybe I should have, which was: it's not clean. When I do reviews for the Ops Directorate I'm usually the first one to pick out nits, so I found it very ironic.
So in any event, we've gone back through and cleaned up those nits. That was submitted, I believe, two weeks ago, so it was under the deadline, and at this point I think we're asking the working group to do another working group last call. It's been a long time coming; this draft has been thoroughly reviewed and worked on for quite some time. So if you're new, you might be wondering, wait, it's only going to working group last call but it's been around for some time.
So, first off, any comments from the attendees about this SDN controller benchmarking work? Okay. Well, as I mentioned to Sarah before, I have a couple of comments, and one of those is related to one of the drafts that we've just gotten approved, basically all the way through the publication process: the considerations for benchmarking of VNFs and related infrastructure, which will be RFC 8172.
In that draft, the way this metric matrix works is this: we have different phases of communication listed in the first column, activation, operation, and deactivation, so you can think of it as setting up a path and then evaluating the performance of that path in terms of its throughput (the transmission rate), latency, cost ratio, and reliability.
Here is the additional column. This was the matrix that comes from an ANSI standard, X3.102; it was originally designed for data communications evaluation a long time ago, back in the eighties, but it's been very effective to apply it to our work here, and we've done that in two drafts: the SDN controller draft, and also in the VSPERF contributions from OPNFV into BMWG, into our work here.
That covers the tests that this project has designed. So, in making sure that those citations were correct, I looked into the controller performance draft, and this is one of Sarah's tables that she's just discussed here. What I noticed was that the accuracy column was missing, and I originally thought that was just a synchronization issue between this, the terminology
draft, where the benchmarks and other terms are carefully defined, and the draft on methodology, which is what this one is. So here's the section in Sarah and her author team's draft on controller asynchronous message processing. Imagine a case here where you have many switches connected to a controller, and they all
establish an association with the controller, where the controller is going to accept their requests to establish a flow and then give back the information necessary to insert into their flow tables, so they can accommodate that flow in the future, on a switch-by-switch basis. So as a flow enters the network controlled by the controller, the controller could see many of these packet-in messages, and it's important to know, for a given number of switches, how many of these flow requests (packet-in messages, in the OpenFlow case)
it can accommodate. But one of the things we learned in the process is that many of the original controller benchmarking tools didn't pay attention to the packet-in (which is like a flow update request) message loss ratio. If a request came in and was dropped or not honored, then a flow would be sitting there wanting to continue to send packets with no way to do it. That's really bad for network operators, and yet somehow this was overlooked.
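The loss-ratio metric described here is simple bookkeeping; as a minimal sketch (with a hypothetical tester, not any specific benchmarking tool), it might look like this:

```python
# Toy sketch of the packet-in loss-ratio idea discussed above.
# A tester emulating N switches offers flow-setup (packet-in style)
# requests and counts controller replies; anything unanswered is loss.

def packet_in_loss_ratio(requests_sent: int, replies_received: int) -> float:
    """Fraction of flow-setup requests the controller never honored."""
    if requests_sent == 0:
        raise ValueError("no requests sent")
    return (requests_sent - replies_received) / requests_sent

# Hypothetical trial: 10,000 requests offered, 9,942 replies observed.
ratio = packet_in_loss_ratio(10_000, 9_942)
print(f"loss ratio: {ratio:.4f}")  # 0.0058
```

The counts here are made up; a real trial would take them from the tester's transmit counter and the observed controller responses.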
So there are really two kinds of parameters here, and the point was to try to capture both of them. The old benchmarking tools have typically characterized the maximum message processing rate. What we'd like to add, and we've begun to do that based on comments here, is to also characterize the rate at which there's no message dropping, the zero-loss-ratio rate, and that's always going to be less than the maximum; at the maximum there could be as much as 10 percent or 20 percent frame dropping.
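Finding that zero-loss rate is usually done as a search over offered rates, in the spirit of an RFC 2544 throughput search. A sketch, where `offer` stands in for a real trial that returns the observed loss ratio at a given rate (the DUT model here is invented for illustration):

```python
# Binary search for the largest message rate with no observed loss,
# in the spirit of an RFC 2544 throughput search. `offer(rate)` is a
# stand-in for running one trial and returning its loss ratio.

def zero_loss_rate(offer, low: float, high: float, steps: int = 20) -> float:
    """Largest rate (messages/s) in [low, high] with zero observed loss."""
    best = low
    for _ in range(steps):
        mid = (low + high) / 2
        if offer(mid) == 0.0:   # trial passed: try a faster rate
            best, low = mid, mid
        else:                   # loss seen: back off
            high = mid
    return best

# Hypothetical DUT that silently drops above 70,000 messages/s.
dut = lambda rate: 0.0 if rate <= 70_000 else 0.1
print(round(zero_loss_rate(dut, 0, 100_000)))  # 70000
```

In practice each `offer` call is a full timed trial, so the number of search steps is a cost/precision trade-off.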
I think they're really good changes. I don't think the other authors are going to have an issue with this, so I think it's minor to add, but especially if you have new folks reviewing, I think it's a lot cleaner to go in with a clean draft. Yeah, fine; it'll only take a day or two and we'll upload. Okay.
All right, so our next item is a draft on benchmarking methodology for Ethernet VPNs (EVPNs) and provider backbone EVPNs (PBB-EVPNs). Sudhin Jacob is here from Juniper, and we have a related draft here that's run through many iterations. So, Sudhin, I invite you to come up and present the latest.
Hi, good morning everyone. This is benchmarking of EVPN and PBB-EVPN. This is the new RFC that came about a year and a half back, RFC 7623. It's now widely deployed in the provider arena, and the main feature of EVPN and PBB-EVPN is that you can have both routers forwarding, compared to VPLS, where you have to run either spanning tree or the routers as active and standby.
So, to explain: the comment from the last IETF was that they wanted Type 5 routes included, because Type 5 was not part of the base RFC; it came as a separate draft, because there are a lot of drafts going through the BESS working group. The base RFCs are like the vanilla EVPN and PBB-EVPN feature set.
On top of that they add a lot of other things; for example, they want to run VPWS over EVPN and different kinds of services, and so they spin off a new draft. The Type 5 route itself is a draft that was adopted by the IETF BESS working group, so people want that also to be benchmarked as part of EVPN when we benchmark EVPN; they asked, why can't you do that?
So I incorporated that as one of the parameters, because we have defined the parameters. Now, the problem is that EVPN is implemented by different providers, and it is very difficult, in the test community as well as when you buy this product or the services rendered by different vendors, to make an apples-to-apples comparison. You don't know whether what vendor X or service provider X is giving you is good, so how do I rate it? There should be some standard.
Some extra core parameters have to be measured, so that's the motivation behind this draft, and the Type 5 route was asked about at the last IETF, so we incorporated it as one of the parameters. This is the test setup: the DUT is one of the multihomed PEs. This is a typical EVPN deployment scenario, and it will be active-active, because that is the most deployed scenario, where both routers are forwarding.
Yes, that's what I'm saying. In a test setup you have to have a DUT; you run the test and you measure the parameters, but to place the DUT you need other network elements there, and you need bidirectional traffic being pumped. That's why router testers like Ixia or Spirent will be there. So the DUT is the reference point.
It's mentioned in that test setup: the reference point is clearly stated, and the traffic pumped is unidirectional or bidirectional, with the frame rate and all the details given clearly, I think. This was raised two IETFs back, so we added all those points, the frame rate and all those things, which are clearly mentioned; we say layer 2 frames is what we are doing. So I looked at the procedure.
It may be clear to you; I'm just saying that, for me, it's not.

No, actually, that section, the test setup, it was mentioned there, the reference point. When the question came, we added it explicitly. If that is not clear, if you look into it once again, we'll definitely look into it as well, because that is where the test setup explanation is given.
One of the solutions I can think of is to put a frame around everything that's beside the DUT, a box that says "tester", and then explain that the tester is a multitude of machines: router R1, RR, PE, CE, and MHPE. That's how I would try to take care of it, because otherwise it's really confusing.
To add to that, Sudhin, a couple of things. Having your R1 in your CE-1/tester: I think a lot of us, when we see multiple routers on a diagram, think those are all being tested, and I see your point that they're not, but I also see Marius's point. I think we talked a little bit about that yesterday.
Read back through the draft with your tester hat on, with fresh eyes, and I realize that can be a little harder when you're the author. But read back through asking, if I were going to sit down and test this, does this make sense? I suspect you'll get a little more appreciation for what Marius is saying.
Point taken. For that test setup and the DUT details, we will make it more explicit with bullet points. Okay, thank you, thanks Marius; we definitely appreciate your points, and I think I can move to the next slide. Thank you. So these are the parameters which we defined for the basic EVPN.
So, MAC learning. There are different types: local learning and remote learning. When the MAC is sent from one direction, that is local learning; then there is the MAC coming from the remote router. We measure the time taken to learn a certain number of MACs when you are sending bidirectional traffic.
N
And
what
is
the
amount
of
time-
and
this
is
you
know,
repeated
in
interval
and
it
is
plotted
because
this
Mac's
are
advertised
by
BGP.
It's
unlike
VPLS.
It's
a
data
plane
learning,
but
here
is
the
Mac.
Details
is
advertised
to
other
routers
as
a
type
to
route
because
they
have
defined
certain
NLRs
in
EVP
on.
Actually, the tester here acts like a bridge. Local learning is for the traffic that ingresses here and reaches the DUT; because it's a multihomed scenario, the traffic will be reaching here. We measure the time taken to learn the local MACs and the time taken to advertise them to the remote router, because the locally learned MACs go into the local MAC table, from there they have to go to BGP, and BGP has to advertise them.
That depends on the outbound path the BGP update takes, and that is where the delta comes into the picture: the learning rate and the advertisement rate will be different. For learning the MACs we follow the previous RFC, which explained how MACs have to be learned, and from the BGP advertisement we measure how long it takes to advertise N number of MACs, or X number of MACs, to R1. So that's it for local learning.
Give me the next one: remote learning. There, traffic is sent directly to R1, because the tester is connected directly to R1, so R1 will learn the MACs and advertise them to here. The same thing works in reverse: the MACs on R1 will be advertised via BGP, so they arrive here as BGP advertisements, and then from the EVPN database they are populated into the local MAC table. Okay.
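The learning-time benchmark being described reduces to timestamp arithmetic: offer frames with N distinct source MACs at a known start time, watch the control plane for the corresponding Type 2 (MAC/IP) advertisements, and report the time until the last one appears. A sketch with made-up timestamps (not the draft's exact procedure):

```python
# Toy illustration of the MAC learning-time measurement described
# above: the learning time is the interval from the first offered
# frame until every MAC's Type 2 route has been seen advertised.

def mac_learning_time(start: float, advert_times: dict) -> float:
    """Seconds from first offered frame until every MAC is advertised."""
    if not advert_times:
        raise ValueError("no advertisements observed")
    return max(advert_times.values()) - start

adverts = {  # MAC -> time its Type 2 route was first seen (seconds)
    "00:00:5e:00:53:01": 0.12,
    "00:00:5e:00:53:02": 0.35,
    "00:00:5e:00:53:03": 0.29,
}
print(mac_learning_time(0.0, adverts))  # 0.35
```

A real tester would fill `advert_times` from captured BGP updates; per-MAC times also give the advertisement rate the speaker contrasts with the learning rate.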
So then the fundamental answer to the question which I think Marius was raising is that really all we need to do in order to exercise MAC learning is to generate packets with MACs that need to be learned, and then, to measure the benchmarks, we're going to be looking at the control-plane interactions between the devices in your setup in order to determine when a MAC has been learned and when it has been advertised. Is that accurate?
Yep, that's the same way I'm doing it, because BGP itself has serialization delay, and then the NLRI, as per the RFC, has to be packed up and sent. So that is where the actual benchmarking comes in: how fast the device, across the various vendors' devices, can work through it.
This test is about MACs. The DUT is the reference point, so at the DUT you have the local learning, and then the remote routes coming from R1 I have to populate into my MAC table, because they come as Type 2 routes: R1 will be advertising to the DUT as Type 2 routes, then I have to receive those routes and put them into the MAC table.
The control plane is good. So, one other comment about this: in the multihoming active-active environment, you may never get a MAC advertisement from the MHPE, just the ESI advertisement that is equal to the one on the DUT. So when you benchmark how quick this is, this is where, if you remove the ESI from the MHPE, you invalidate all the MACs that have been learned against it.

It's possible in both cases to get the MAC advertisement, the Type 2, from both. Yep.
That would be one of the triggered benchmarks. This test is a learning rate; bringing down the ESI means the withdrawal comes, but because this is learning, it's a vanilla test, so it's not a trigger, and that is not added. Yes, withdrawal is not added.
So you'll hear the feedback that maybe you should consider the withdrawal scenario, and I think you should also explicitly consider active-active and non-active-active in both scenarios, because, I hear you, this is about learning, but there is no learning without withdrawal at the end of the day anyway. I agree that learning might be its own discrete test, but I think measuring withdrawal is also very important. We all do it anyway: when we're testing for vendors, we all do it for our customers.
Jim, actually we added that as a link failure, because normally in a provider network link failures are frequent, so we added link failure, local and remote, for benchmarking, because the ESI will be withdrawn there. What we have added as the link failure case is where the MAC flush is a parameter; I'm coming to that next. The local failure is as good as an ESI failure, local as well as remote, and the question is how fast it is getting flushed. That is the point.
Now I hear you; just go back to the picture a second. As an operator, this is what I sort of want to know: if I had an active-backup scenario where I was using the DUT as my active, sending traffic to R1 and R1 to the DUT, and MHPE2 is my backup, and then I crash the DUT, how fast does my service come back up?
That is covered in the link failure scenario, which is what happens in a provider network: the ESI getting cut off. Cutting the link indirectly refers to the ESI cut-off, so for MAC flush the two scenarios are covered: the local failure, which is frequent in the provider environment, in the metro ring, and the remote failure. The question is how fast it flushes the routes, because the Type 2 withdrawal is important; otherwise traffic will be blackholed.
What I recall reading is that you're covering MAC flush on the DUT explicitly; I think that's not what you mean, and again, we can take it to the list and talk about it afterwards, because I think we're starting to rathole a little here. But this is all from the perspective of the DUT, and that's not what he's saying, right? It's for the other PE to go through the flush.
I think that's the conversation we should maybe have afterwards, if you're open to hanging around for a little bit, and we can talk about how to potentially add that in. But I do think that putting out the draft as an RFC without that in there is something I'd like to hope we can convince you to reconsider. Okay.
Okay, fine, and I'll get back to you; let me complete and I'll get back to you, just give me a second. So now, MAC aging. MAC learning we covered; MAC aging is a normal aging scenario. We scale to N MACs (as a tester, it is up to you to set the reference point N), so: scale, then stop the traffic.
Then, how long will it take to flush that N number of MACs from the table? The same applies to the remote side: you learn them as Type 2 routes from the remote, so when the remote traffic is stopped, the MACs have to age out on the remote router, and it has to send the Type 2 withdrawal; when the Type 2 withdrawal comes, the DUT has to flush.
It has to signal the local MAC table and remove all those MACs from the table. That is one important parameter, because if the withdrawal doesn't come, there is a problem: even though the traffic has stopped, the MAC entry remains there, and blackholing will occur.
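The aging check being described is again a timing measurement with a failure case: stop the traffic, then wait for the withdrawal that should flush the stale entries, and treat a missing withdrawal as the blackholing problem. A sketch (timestamps and timeout invented for illustration):

```python
# Sketch of the MAC-aging measurement: the aging time is the interval
# from traffic stop until the Type 2 withdrawal that flushes the
# remote MACs. No withdrawal within the timeout means stale entries
# remain and traffic to those MACs would be blackholed.
from typing import Optional

def aging_time(stop_t: float, withdraw_t: Optional[float],
               timeout: float) -> Optional[float]:
    """Seconds from traffic stop to flush, or None if never flushed."""
    if withdraw_t is None or withdraw_t - stop_t > timeout:
        return None  # withdrawal never seen in time: the problem case
    return withdraw_t - stop_t

print(aging_time(10.0, 310.0, 600.0))  # 300.0 (a 5-minute age-out)
print(aging_time(10.0, None, 600.0))   # None  (blackholing risk)
```

Reporting `None` as an explicit outcome keeps the benchmark honest about the case the speaker warns about, rather than just recording a very large time.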
Once they age out, right. I would just say, again, back to the optimization, like the ESI optimization: if a whole bunch of MACs time out associated with a certain Ethernet segment, you're not going to get individual withdrawals for them; you're going to get the withdrawal for the ESI, and then you're going to invalidate everything, and your MAC table has to invalidate those MACs. That part, okay.
The MAC-plus-IP concept is there, so it will advertise the MAC and IP to the remote routers, so that unnecessary ARP flooding and all of that is avoided, which is not the case in conventional VPLS; ARP suppression, MAC mobility, and a lot of other features depend on this. And it depends on the box how many you can hold, because it depends on the MAC plus IP in the NLRI payload.
Before you keep going, can I ask, for the next meeting, when you bring these slides back with updates, presuming you're going to present next in Singapore: could you put the diagram that you had originally in on the side and then highlight what's where? Because a lot of the time you're saying "the sender", and while I think I can guess what the sender is, it'd be nice. I think that's part of the problem too, Marius.
I realize now that you'd need to do it next time, but I'd just take one of these slides that you've been going through, where you're talking about learning and flushing, and specifically call out where and what. Through the questions that Jimmy's asking, you're doing it with a highlighter on the screen here, but unfortunately for the Meetecho folks it's not carrying through.
The point is that the actual measurements and benchmarks should be understandable to a person executing them. And if you can reference the documents from the RFC that you're referring to, if there are figures there, then it may be possible to incorporate those figures so that everyone can understand these test styles. I think we all sort of understand the concepts here, certainly, but the unique details of EVPN are not my expertise.
Right, but Sudhin, I'll point out that there's a lot of pushback to the feedback you're getting, and if you're getting the feedback, I think it tells you that the way you're describing things here, and the way it reads in the draft, is not coming across as crisply and cleanly as you expect. And if we're struggling with it, then other folks who read the RFC would struggle with it too. So let's circle back to this, based on the discussion we had yesterday.
It's fine-tuning; the things have to be, as Marius said, fine-tuned, and it will be done, because, as Jim also said, the fine-tuning will be done on our side, and in the presentation I'll give a couple of details on the sender and receiver as well. Okay.
Sudhin, not to nitpick, but I think it's a little more than just fine-tuning. I think a good chunk of the methodology, the what's-what and what's-where while you're doing the test, is missing from the test cases. So I think it's a little more than fine-tuning; I think there's a good amount of surgery.
I'll also point out that I don't know that I'm as familiar as Jimmy is, but this is not my first rodeo with EVPN, and I was still a little unclear as to where you were going with a good number of the test cases. Sometimes it's just not clear and I'm making assumptions, and sometimes, when you're talking here, they turn out to be the right assumptions, but I shouldn't have to assume. The document should be very clear and straightforward as to what you should be doing.
So, I'm just trying to echo Sara here: it's not just fine-tuning, it's like a spring cleaning. It needs a lot of changes, not just fine-tuning. It has to be clear to you that the draft at this point is not in a stage to be adopted. It's unclear, and it sounds unclear to people that are very familiar with EVPN; it's unclear to everyone else as well. So it needs a lot of changes, not just fine-tuning.
Sorry if I'm rude, but I just want to know: like you said, it has to be clean, so quote me an example, because that will be helpful for me. Which viewpoint is it? Because you state, okay, it has to be clean, it has to be done; that's a word. I accept the feedback.
What I meant is that it's like fine-tuning as we're cleaning all this up; that's the term I used, so don't misquote me. I mean, I respect your feedback, but to me fine-tuning means all the things have to be corrected; that is what I understand the English to mean. I respect your feedback, and definitely it will be done; that's a commitment.
And if you can point it out to me: I have noted that these are things that have to be done; I respect that, and it will be taken care of, because I have taken every piece of feedback very seriously; I've gone back and addressed it. So I will go through that separately, because the EVPN test teams which I sat with, given the parameters which I provided, were able to understand it, but the feedback which I got from you guys is different.
F
I remember it was a mess, and it took somebody really saying, very frankly: wake up, here's how it's done. So could I ask, in the spirit of him being new, and alleviating some of the pushback that we're getting on the feedback here that I've agreed to, and I'm wondering if you would, because I think you're a fantastic writer and you've gone through the process really well: could you take a look at the first diagram in the document and provide surgical feedback around that, here's what doesn't make sense to me, here's what does? Because I think he's really struggling; what I'm hearing is, "I'm not entirely sure where, give me some examples of what you're talking about." So if you take the first diagram, I'll take the first use case, and I think between the two of those, and we can even talk before we send it over, sure.
M
I can come up with a solution to what I am trying to correct. Of course, I don't have the EVPN background, and maybe that is very critical, but I can also ask for help; we have people that are experts in EVPN. And the thing is, I think, as you are saying, even for people that know the stuff, it doesn't make sense, right?
N
Sure, in a constructive way, I mean. Then it would be easy, because he will be thinking something in his mind, so I am assuming, okay, this is what he assumes. Because I was thinking of the actual EVPN scenario which I am testing, so I was assuming; maybe that is the delta between us. If you can come up with it, you don't need to go deep into the EVPN, but this is the delta which is there in, you know, transforming the idea into the draft.
M
It seems like, from the first time I reviewed the document, I told you pretty simple things; it wasn't very technical, it was mostly about almost anything, and there was a lot of pushback. So I think if you can try to, I don't know, dig deeper into the feedback, consider it some more, and then make sure that it actually got covered, because you were saying this got covered, this got covered.
N
And then, okay, see, this is give and take. I mean, if something is missing, then that is an action item on us, and we are doing that. So that is the reason, yeah, as feedback I'm saying, okay, this was an action item on us, so we have covered that. So that's why, yes.
C
Let's do as much of this as possible over the exchanges. Well, I was going to say, if you want to type stuff up this week and get that on the list, that's great, but it would be really good if we can make use of the face-to-face time and availability here as well, to be sure that what we're writing down gets communicated.
C
On both sides, I think that's the kind of thing that will really help this week. So, Sabine, I think that now, if you give us a sort of summary of some of the other benchmarks that you plan to measure, that would be a good use of the rest of the time. With your presentations, there's a lot here today, and we'll accept that there's lots of feedback coming, by the way.
C
If we adopt this, then you and your co-author become agents of the working group, and what that means is that you're representing the working group's opinions in the draft. So you've got to be very receptive and efficient in providing the changes that address the comments, because the comments are a gift. Yep.
N
To finish up with what we've got, and again, let's now move on. So, you know, this is the type-5 route, which is added from the latest comments which we received, because this is not part of the base RFC; this came in a separate draft. So this is where we are testing, you know, what scale of type-5 routes a particular DUT can support.
L
So type 5 is essentially an IPv4 route with no associated MAC; type 2 is MAC plus IP. Both of the IP routes have to be stored, regardless of whether it's type 5 or type 2. So maybe, you know, I don't know what you're looking for here exactly with type 5, as opposed to a generic type 2, you know?
L
An IPv4 prefix, got it, yes. Say you're testing to see how many of these prefixes you can store on a DUT; that's correct. So in many ways, why is type 5 different from type 2, different from IPv4, different from VPNv4? What essentially are you going for here? And as a provider I would be like, well, this is good, but what about my type 2s?
L
You know, what are you trying to tell me: how it stores the IP prefixes when they come from an EVPN address family, versus a VPNv4 address family, versus an IPv4 address family? So this is not about EVPN as much as it is about capturing updates with IP prefixes from the EVPN address family and storing them. So this is a capacity metric, is that it?
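The capacity question being debated here, how many type-5 (IP prefix) routes a DUT can install, reduces to a fill-until-refused loop. The sketch below is hypothetical: `Dut` is a stand-in for a real device API, and a real harness would advertise routes over BGP and confirm installation in the RIB/FIB rather than counting acceptances locally.

```python
class Dut:
    """Simulated device under test with a fixed route-table capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.installed = 0

    def advertise(self, n):
        # Accept routes only up to the remaining table space.
        accepted = min(n, self.capacity - self.installed)
        self.installed += accepted
        return accepted  # number of routes actually installed


def route_capacity(dut, batch=1000):
    """Advertise type-5 routes in batches; stop at the first batch that
    is not fully installed, and return the total number installed."""
    total = 0
    while True:
        accepted = dut.advertise(batch)
        total += accepted
        if accepted < batch:  # table full: capacity reached
            return total
```

For example, `route_capacity(Dut(250000))` reports a capacity of 250000. The same loop works for type-2 (MAC plus IP) routes, which is the comparison the reviewer is asking for.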
L
I understand that part of it, but I guess what I'm trying to say is that, regardless of how you're using the route, in this capacity test, with IRB or MAC plus IP, you're testing how many routes you can store, not the IRB, not the functionality of EVPN. "This is not the functionality, because this is the scale, actually." I understand that, but why is the scale specific to type 5 and IPv4?
N
So this is a high-availability test. You know, on failover, the ideal case is zero packet loss, but in high availability what we expect now is that the routes should not be withdrawn; there should not be any change in the DF elections, or in the type-2, type-5, or type-1 routes. So that is the high-availability test, and this is a scale test, where we scale the number of EVPN instances.
N
That context costs money, I mean. So what is the context, and in each context, what is the capacity? Because for a provider, that is where the money comes in. So this is the convergence, because of the BGP updates which are going on, and this is the convergence where we scale it to N.
N
You know, N EVPN instances as well as the MACs. So we measure the flood, because the main benefit of EVPN is that the flood towards the core is reduced, minimized, unlike the data-plane learning of VPLS. So we measure, scaled to N EVPN instances and N MACs, with bidirectional traffic, how fast it learns and starts forwarding the bidirectional traffic, and how quickly the flood is prevented.
F
I'm sorry, go back, yeah. So what you just said doesn't match the slide at all. Yeah, I mean, I'm not entirely sure I can even read that last sentence, to be honest; it doesn't quite flow for me. So is what you've written what you're saying you measured, or what you think you've measured?
N
We measure the time period of the flood. You know, this is the flooded traffic: the DUT has to advertise this to the remote, and the flood should be reduced. The flood we are measuring is in the DUT. So we are measuring the flooded traffic: it should not be sent to all the others, only sent to the remote that needs it, okay? So that is what we are measuring. The flood has to be reduced; that is the important part in this particular test.
N
The flood has to be reduced: how fast it learns this and programs the MAC, and it has to send unicast; in both directions it has to establish that and avoid the flood. So that is the flood we are measuring: how fast it is learning and reducing the MAC flood.
Q
My name is Taeki, and I came from Korea Telecom; it is nice to meet you. Last time, in Seoul, I wrote a benchmarking test method for SFC performance. Well, as you know, SFC standardization is not finished, but we at KT have to prepare for 5G, so we are trying to have service in end-to-end network slicing and enterprise services using SFC. So we do not have enough time to wait for these things, and before benchmarking performance, we have to find out whether the SFC reliability is okay or not.
Q
So, from the perspective of an operator, we think the reliability of SFC means that SFC operations must be done at the right time and on the right path. But before this session started, I discussed the meaning of SFC reliability with Al, and he gave me the comment that "reliability" of SFC, in those terms, may not be the fitting topic.
C
SFC establishment, so, kind of the first row of the matrix. Those are the kinds of things that you can measure: SFC creation; and then you have SFC deletion, that's actually the bottom row. So then, when you're deleting an SFC, do you delete the right one? That's accuracy. How fast can you delete? That's the speed. So actually, for all of these categories, creation and deletion and modification, there is the potential to evaluate the speed, the accuracy, the reliability, and also the scalability.
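The matrix the chair describes, SFC operations crossed with evaluation dimensions, can be sketched as a simple results table. This is only an illustrative structure, not anything from the draft itself, and the example values recorded at the end are placeholders rather than measured results.

```python
# SFC operations (rows) crossed with evaluation dimensions (columns),
# as described above: each cell can hold a benchmark result.
OPERATIONS = ["creation", "modification", "deletion"]
DIMENSIONS = ["speed", "accuracy", "reliability", "scalability"]


def empty_matrix():
    """Build an operations-by-dimensions results table, all cells unset."""
    return {op: {dim: None for dim in DIMENSIONS} for op in OPERATIONS}


def record(matrix, op, dim, value):
    """Store one benchmark result, rejecting unknown cells."""
    if op not in matrix or dim not in matrix[op]:
        raise KeyError(f"unknown cell: {op}/{dim}")
    matrix[op][dim] = value


m = empty_matrix()
record(m, "deletion", "accuracy", 1.0)   # e.g. fraction of correct deletions
record(m, "deletion", "speed", 0.42)     # e.g. seconds per deletion (placeholder)
```

Filling every cell of this table is exactly the "for all of these categories" program the chair outlines: twelve distinct benchmarks from three operations and four dimensions.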
Q
Yeah, so, the scope: as I said before, we have to launch the enterprise services and the network slicing in 5G. Even though it seems like a PoC, we do not have enough time, as I mentioned. In this draft, at this time, we do not use NSH, so this draft does not consider NSH. And, you know, in 5G network slicing, end-to-end network slicing, there are multiple domains, such as the access domain, transport, and core, or the premises, access, and core. So for that environment we develop the service.
Q
Actually, in this topology there is just an SDN controller, but we are expecting an SDN orchestrator, then on top of that an NFV orchestrator, and then some domain controllers ruling the individual domains. Especially in Korea, these are small regions, so this does not matter that much, because we do not have much delay, much latency, from propagation. But I expect this is going to be standardized, so I think it matters for broad regions, such as America or China.
Q
So at this point, as I mentioned, the scope is targeting my own network, but I want to try to test multiple regions, so if someone is interested, I want some collaboration with you guys. Anyway, I started the other tests, yeah. So, actually, this is the whole set of items of this test; it's very simple, there are only five parameters at this time. These are the configuration parameters for benchmarking the SFC accuracy, and the first is the type of switches.
Q
Now there is a lot of hard work going on around virtualization versus physical, but both still exist, so the type of switch, whether it is a virtual or a physical switch, is going to affect the accuracy of SFC. The second one is the number of switches in the target SFC domains, and the third is the usage of the flow tables of the target switches, such as TCAM usage or flow-table entries.
Q
That is going to have an effect when the SDN controller sets up the rules on the SDN switches; this state of the switch is going to affect the accuracy of the SFC. The fourth is the physical distance between the controller and the switches, but, as I mentioned before, in Korea that is not very significant, because Korea is small. The fifth, the last one, is the traffic load on the target switches.
Q
It is similar to the state, the stress, of the switches. And then we are testing: the first thing is the rule activation time, because, as I wrote in this draft, SFC reliability means SFC operations must be done at the right time and on the right path, but some switches don't listen to my rule, okay? So the rule activation time was one of the important things that we have to test. So, the rule activation time, in the three-by-three matrix.
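The "rule activation time" metric just described, the delay between pushing a flow rule and the switch actually honoring it, can be sketched as a push-then-poll measurement. Everything below is a simulation: `SimSwitch` stands in for a real SDN controller/switch API, and a real test would install the rule via the controller and detect activation by observing redirected traffic or the switch's flow table.

```python
import time


class SimSwitch:
    """Switch that 'activates' a rule after a fixed programming delay."""
    def __init__(self, delay):
        self.delay = delay
        self.pushed_at = None

    def push_rule(self):
        self.pushed_at = time.monotonic()

    def rule_active(self):
        return (self.pushed_at is not None and
                time.monotonic() - self.pushed_at >= self.delay)


def rule_activation_time(switch, timeout=5.0, poll=0.001):
    """Push a rule, then poll until it is active; return the elapsed
    seconds, or None if it never activates within the timeout
    (i.e. the switch 'did not listen to my rule')."""
    start = time.monotonic()
    switch.push_rule()
    while time.monotonic() - start < timeout:
        if switch.rule_active():
            return time.monotonic() - start
        time.sleep(poll)
    return None
```

A `None` result is itself a data point: it is the reliability failure mode the presenter is describing, a rule that was accepted but never took effect in time.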
F
Can you remind me? I did not read the current version of the draft; I saw there were a bunch of things that changed, and I meant to go pull it to see what had changed, and I ran out of time. But can you remind me: your slides reference physical switches as well as virtual switches, and you start to measure things like TCAM usage?
C
That's very good; thanks, thanks so much for coming again and presenting your work. Okay, it's very interesting. I think that, as you mentioned, the network service header, which you weren't able to incorporate in your own work: we have others who do have some expertise in that area. Maybe they can see how the tests fit with NSH, and then we can incorporate it either in this draft or as a second part, something like that. So.
C
Our current work proposals look like this. We've got these different work areas proposed, and we kind of discuss their status along these dimensions: whether we have a written proposal, whether it is in the scope of the charter (and that's just sort of my guess), whether we've got drafts available, whether we've seen significant support at meetings, significant support on the list, and any notes on dependencies. So we had a draft which was on performance monitoring events.
C
It was actually focused on the ITU-T Y.1731 recommendation for Ethernet OAM, so we liaised with ITU-T Study Group 15, and we got a response back from them. They own the protocol, and they said this isn't a worthwhile benchmark. So I summarized that as "don't". I think that's valid feedback, and so we've sort of dropped this work from the proposals. Now, we've just discussed EVPN and PBB-EVPN, where we believe we're on the way to adopting this draft.
C
So I've made that one green, and you can see the result here: we've got significant support at meetings and on the list. I will next time mention that we're still looking for additional clarity before adopting. We had a proposal on virtual benchmarking as a service which seems to be dead; it was more like a research project. I see the look on your face. Yes, it was really sort of research-oriented, and we kept challenging them to make it more about engineering.
C
...than about research, and I think the answer is an empty mic, so I think that's pretty much dead. But we've got a proposal from our two friends at VMware, Samuel and Jacob, which had good awareness and many comments at IETF 98, but no review on the list since, so I'd like to have folks take a look at that. Here's Taeki's proposal on SFC reliability: accuracy and performance in setting up flows and these things.
C
The issue that's been raised with their method is that they've got measurement devices sharing resources with the virtual switches that they've been charged with testing; we keep raising this problem and we haven't got a good answer. And it's kind of overtaken by events: the contribution from the OPNFV vSwitch performance project basically said, let's take the measurement systems out of the picture, get them outside the server's physical interfaces, and actually test our network function virtualization in an environment that actually looks a lot like our deployments.
C
We will revise that, but I've mentioned all three of these, and so now I'd like to open the microphone for any additional proposals or ideas that folks are thinking about, where they might like to write a draft, bring it in, and drive it toward the Singapore meeting. And Marius has responded to the challenge, yeah.
M
Marius. I had a short discussion yesterday with Al about something that we are interested in, and that was Wi-Fi benchmarking, something like benchmarking for WLANs. Actually, I was looking to see if there are any other people that are interested, to see if we can start a discussion on writing the draft. Al also sent me the draft from Alexander, which was a proposal that at some point was in BMWG, and for some reason it didn't get very far; it only had two iterations, yeah.
C
He was working with 802.11 on their testing; he saw some gaps. We actually exchanged a liaison with 802.11T on testing, and they said, sure, go ahead and do this. And I was the only person who reviewed the draft, with the possible exception of Scott Bradner, who actually became a co-author because of Tom's acknowledgment of his contributions. But Tom...
F
I was going to say, it's been a while since I worked on enterprise Wi-Fi, and I used to do the Wi-Fi Alliance, and I'm wanting to say, hey, let's check with them to see if they're already doing anything down this path. I don't think so, which means we're clear. But separately, there's a whole school of folks who do the protocol stuff here for Wi-Fi, and so we'd have, like, I already have, a couple of contacts we can solicit to see, hey, if we were to do this kind of stuff...
F
...can you guys help? Because I don't do enterprise Wi-Fi anymore; otherwise I would take the stuff home, and we could do the benchmarking there too. But I'm certainly happy to at least look into a draft with you, and then we can connect with a couple of folks, who I've just checked are here this week, to see, hey, what do you think?
C
Yeah, oh yeah, I mean, we were like that for all the meetings. So Tom raised this point in trying to get people to read this draft, saying, back at IETF 71, that hospitals were beginning to pick up Wi-Fi for the communications between their systems, and that if the Wi-Fi didn't work properly, lives might be at stake, yeah.
F
I think that's because we don't have Wi-Fi expertise in this room, but I think we should get those folks. There's a whole set of folks who do all the AAA: what we do on wireline, they do on the wireless side of the house. So I just think we need to get the cross-pollination there for the benchmarking.
F
Right, we don't know where it's going yet. If you read it and it goes down a path where you don't care, or it's not your thing, you can always withdraw; it's not a big deal. But get folks to contribute ideas, even just as a user: when you're in the hotel room with the crappy Wi-Fi, you care. So give it a read.
C
That's not the vSwitch stuff. So we had a draft on that as well; again, that was the kind of thing that didn't garner enough interest in the working group, and we weren't certain that we had sufficient expertise, or connection with industry, to take that up. If anyone's interested in that topic in particular, I can certainly find and share with you the previous work on that.
C
But we won't do that unless somebody really wants to champion it. And, let's see, oh yeah: connected with the talk I'm about to give, which is on the OPNFV vSwitch characterization that we've done, with actually measured results, there are a couple of ways in which we could update RFC 2544, and I'll get to that in a few minutes. It has a back-to-back frame test, which I think needs a correction factor, so that's a spoiler alert, and obviously we need to do something with latency.
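For background on the RFC 2544 update being discussed: the throughput benchmark there is defined as the highest offered load at which the DUT forwards with zero frame loss, commonly located by a binary search over trial loads. A minimal sketch, with a lambda simulating what would really be a fixed-duration traffic trial against the DUT:

```python
def rfc2544_throughput(trial, line_rate, resolution=0.001):
    """Binary-search for the highest zero-loss load.

    `trial(load)` runs one trial at `load` bits/s and returns True for
    zero frame loss. Returns the highest passing load, to within
    `resolution` as a fraction of line rate.
    """
    lo, hi = 0.0, 1.0  # search interval, as fractions of line rate
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if trial(mid * line_rate):
            lo = mid   # zero loss: search higher
        else:
            hi = mid   # frames lost: search lower
    return lo * line_rate


# Simulated DUT that forwards losslessly up to 7.3 Gbit/s on a 10G link:
measured = rfc2544_throughput(lambda load: load <= 7.3e9, 10e9)
```

The "searching algorithms differ from system to system" complaint later in this talk is about exactly this loop: step size, trial duration, and loss threshold all vary between implementations, which is one reason the measured numbers do too.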
C
The latency measurement RFC 2544 comes back with is based on a single packet. Of course the manufacturers do more than that, but we should have a standard that captures what we really think of as latency, so I'm going to tell that story. In terms of time between now and the next meeting, I'd like to discuss new work items intensely on the list, and see the proposals and drafts prepared and presented to the group.
C
If we had to have an interim meeting, we could certainly arrange that, but the target timescale, I think, is that we would have proposed text for the next charter by IETF 100, which is in November this year in Singapore, and then have a face-to-face discussion about that to really finalize what we're going to do and clean it up, and then in the next meeting timeframe go forward with that text and attempt to get it through.
C
Absolutely right. So, you know, at the end of the day, Warren, it sort of comes down to the way that you'd prefer to see charters written. The feedback we had in the earliest fifteen or so years of benchmarking methodology was that we had this general charter, and we decided what fit within that charter and what didn't.
C
It increases our visibility to do it the current way, and nails down some of the things that we're doing, but we've always had proposals come in, like the IPv6 transition technologies, where we went to our AD and said, can we do this under our general charter, and back then Joel said yes. So maybe our charter exercise, and this is something we can discuss this week; I didn't get a chance to...
C
...you know, to do this before Monday morning. But if your preference is to go back to the general charter, under which we, as a leadership team together, evaluate what work fits and what work doesn't, and not worry about the specific bullet items, then that's a lot shorter exercise. So let's think about that as well. All right, very good.
F
Be consistent, because, looking at the charter, I realized, holy crap: like three quarters of our charter is just paragraphs on work that we've already done, yeah, and some of it's been done for some time. And with the way the tool is set up now, with milestones underneath, anybody who wants to understand what we're doing under a general charter would see it anyway, yeah.
C
Yeah, yeah, but at the same time, we should have a good idea of what the milestones are going to be, sure, and so we need the same discussion; it's just that we're not making ourselves crazy writing it all down. Love it! All right, that's the most awesome feedback I could have imagined. Thank you; that makes our jobs easier, all around the world.
C
All right, can everybody hear me? Yeah, okay, great. So this is a talk about data-plane performance and capacity benchmarking in the vSwitch performance project at OPNFV. My co-authors are Trevor Cooper, who is the project team leader for this benchmarking-vSwitch project, and Sridhar Rao from Spirent Communications, who arranged and conducted a lot of the testing we're looking at here, during our plugfests and at other times, using our dedicated Intel-supplied testing pod and so forth.
C
So here's the agenda: there's some background here on data-plane performance measurement, then we'll have the results and analysis, and the last part is looking at where the gaps are and what VSPERF looks like in the future. So here's some background. The network is going to have, let's see here...
C
The thing is that, and if I point to it there you can't see it; there we go. So the network currently has lots of dedicated devices, and we're replacing them with these servers and virtualizing our network. And so we have end-to-end performance that all the folks with these end devices need and expect, to support their applications in the current environments. We've got the three-by-three matrix they're looking at; actually, that's a three-by-four matrix.
C
Looking at the service performance indicators, we use that there as well. In networking we're going to have all these different SLAs, and important ones are packet loss, packet delay, and packet delay variation. When we look at the end-to-end performance, it's got to support these applications, and what that means is we have to allocate performance to specific devices across the network, so that when their performance is concatenated, with accumulated impairments and so forth...
C
...we'll have an accurate summing up of all these small contributions, in order to support the end-to-end performance. So a vSwitch, which is a small component inside a server inside the network, is only going to get a really small allocation of the end-to-end performance budget, and that's the challenge here. We're only looking at one switch at a time, and we're going to be very demanding of these switching points.
C
Okay, while the audience reshuffles: so we have these deployment scenarios, which I mentioned here, and you can see the vSwitch is represented in red. We've always got our test devices external to the vSwitches, so they go through physical ports in and out, when we've got what we call a physical-to-VM-to-physical path...
C
Yeah, I didn't name it, but I love the name. So this is what VSPERF does: it basically takes all these steps. It sets up various things, sets up the workload, the traffic generator, and so forth; executes; collects the data; generates the test statistics. We've got the full enchilada here. VSPERF is one of the OPNFV projects in the testing working group.
C
So it's located there in the circle, and you can see that we have other projects: Functest, for functional testing, that's way off at the left here; Yardstick is general performance for the platform; storage performance, obviously; benchmarking for the server-specific stuff; and Bottlenecks looks at analyzing tests in order to find bottlenecks.
C
So here are all the data-plane performance-testing options, and this is kind of an eye chart; there's lots of detail here. What's marked in green is what we emphasized in this testing. So what you'll see is that we've got the capabilities; for example, under traffic generators we've currently got an Ixia.
C
We also have virtual commercial generators available to us, and we have three open-source generators and receivers that we install on bare-metal hosts and, again, connect through physical interfaces to the device under test; that's key information for this. We can also do OVS and OVS with DPDK, and also the VPP versions of the vSwitch, and some other things.
C
So here are the example test results and analysis, and we're going to do all three of these: we're going to compare OVS and VPP, different traffic generators, the impact of a noisy neighbor, and the back-to-back frame testing. I want to be sure to get to that today, because that's actually the thing that has the most implication for our work here.
C
So let's look at... the reason I made these slides available on the OPNFV site is that if they get converted to PDF on the IETF site, you won't be able to see these builds, and there's actually some animation here, with fly-ins that will cover up other information. So you want to look at the original slides, which build like this. So here we've got a comparison of both OVS with DPDK and VPP, using RFC 2544 throughput testing, bidirectional.
C
One key point, though, is that we're testing a single flow at this point, right? And in green we've got the theoretical maximum based on the interfaces; I think these are all 10-gig interfaces. And for a single flow, what we see is that the blue OVS throughput at different packet sizes pretty much matches VPP, and that's consistent with previous results, so we expected to see that, and they're all about the same level with respect to the theoretical maximum.
C
In fact, where we're really getting some limitations is only at the lowest packet sizes, so we're seeing packet-header processing limits, mostly at the smallest frame size, 64 bytes. So let's build them, okay, and that's the circle that emphasizes it. So let's click this one three times and see what happens. Oh yeah, okay. So now we're looking at average latency here, and what we notice is, let's say, we've got...
C
Green is OVS and brown is VPP, so the message that's being delivered here is that the delay is pretty minimal for the low packet sizes, but we do see a lot of variation there. Note that the maximum delay goes way up from the minimum, and that really only happens at 64-byte frames; then we get fairly consistent results across the maxima for the different frame sizes.
C
But we see the large-packet-size averages going up here and getting closer to the max as we increase those sizes. So that's a different form of characterization, and, as I mentioned, right now this kind of latency examination isn't asked for in RFC 2544. So this is one of the areas that I think we definitely need to expand. Sorry, did...
C
I think that's the instability of the maximum. In other words, when you run these tests, you probably see some variation there, and that's going to be dependent on the stochastic coincidence of processes that affect the packets going through. I mean, we're talking about the delay of one packet, that's the max, right? And so I think that's what that's all about.
C
What are we getting at here? VPP throughput can go up to 70 percent higher than OVS; that's what we learned, and that was consistent with previous reading and with the European Advanced Networking Test Center (EANTC) testing. But there are some inconsistencies here: 4000 flows had lower throughput than 1 million flows. So that's what we're looking at here; we've got 1 million flows, and there are some cases here... in fact, I should probably click it one more time, and we'll see that, yeah: OVS actually had better throughput for 1 million flows than for 4000, yeah.
C
It doesn't make sense, but we saw this over and over again, so there must be something here that we don't fully understand. Of course, it's at the smallest packet sizes. In other cases... well, actually, in the case next to it, OVS was still higher there. So this is something that needs to be chased down, but this is the kind of thing that you can do with these tools.
C
So here are the possible reasons: the packet-handling architectures, the actual construction of the packets in the test generators, and whether the test traffic is actually fixed-size. These are things to look at. Oh, and cache missing; this is an internal measurement. I'm sorry, Sarah, go ahead. No?
C
We rejected it because it's currently not a capability of VSPERF to do it, okay? And we're still sort of arguing about how we're going to do that generically, so it's kind of more of a development thing, because it turns out that when you specify IMIX, the different generators all want it specified differently, yeah. So that's a pain for us; we want to...
C
We want to do it one way and then have all the generators adopt that, and we kind of put it on the back shelf. But it's not a generator limitation; that's more of a VSPERF limitation, so well spotted, my hat's off to you. All right, so, cache missing: people thought that cache misses were supposed to account for the difference in performance between VPP and OVS.
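To make the IMIX point concrete: an IMIX mixes several frame sizes in fixed proportions, and one commonly cited "simple IMIX" uses 64-, 594-, and 1518-byte frames in a 7:4:1 ratio. The interoperability pain described above is that each generator realizes such a mix differently (sequence order, per-stream weighting, and so on). This sketch only expands the ratio into one cycle of frame sizes a generator could repeat; the specific mix is an assumption, not VSPERF's definition.

```python
# (frame size in bytes, weight) pairs for a commonly used simple IMIX.
SIMPLE_IMIX = [(64, 7), (594, 4), (1518, 1)]


def imix_sequence(mix=SIMPLE_IMIX):
    """Expand an IMIX spec into one cycle of frame sizes."""
    seq = []
    for size, weight in mix:
        seq.extend([size] * weight)
    return seq


def average_frame_size(mix=SIMPLE_IMIX):
    """Weighted mean frame size, useful for converting frames/s to bits/s."""
    total_weight = sum(w for _, w in mix)
    return sum(s * w for s, w in mix) / total_weight
```

Two generators can both claim this 7:4:1 mix yet interleave the sizes differently, which changes burst behavior at small frame sizes; pinning down one canonical expansion is exactly the "do it one way" goal mentioned above.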
C: So I'll pass that along in the updated agenda. So, the results are use-case dependent; there's going to be impact from the deployment scenarios and so forth. Obviously, there could be realistic and more complex tests that may impact the results significantly; IMIX is, as we mentioned. The searching for the throughput maximum may be a factor here, and the search algorithms differ from system to system. The device under test always has multiple configuration dimensions.
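The point about search algorithms differing from system to system can be illustrated with a minimal sketch of an RFC 2544-style throughput search: a binary search on offered rate for the highest rate with zero loss. This is an assumed, simplified version; real implementations vary in trial duration, loss tolerance, and termination rules, which is precisely why results differ. `send_trial` is a hypothetical stand-in for running one trial at a given rate.

```python
# Sketch of an RFC 2544-style throughput search (binary search on the
# offered rate). send_trial(rate_fps) -> frames lost in one trial; it is
# a hypothetical callback, not a real generator API.

def throughput_search(send_trial, max_rate_fps, resolution_fps=1000.0):
    """Return the highest zero-loss rate found, to within resolution_fps."""
    low, high = 0.0, max_rate_fps
    best = 0.0
    while high - low > resolution_fps:
        rate = (low + high) / 2
        if send_trial(rate) == 0:   # no loss: remember it, try higher
            best, low = rate, rate
        else:                       # loss: back off
            high = rate
    return best
```

Two implementations with different `resolution_fps` values or different loss criteria will report different "throughput" for the same device, which is one of the consistency problems being discussed.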
C
Hardware
and
software
can
limit
the
performance
and
it
may
not
be
in
obvious
ways
and
the
and
the
metrics
can
be
deceiving.
You
know
without
proper
consideration
to
all
people,
so
now
we're
going
to
look
at
a
comparison
of
the
bare
metal,
but
software
based
the
traffic
generators,
so
they're
comparable
to
the
hardware
reference
for
larger
packet
sizes
in
general,
but
the
small
packet
sizes
show
inconsistent
results
between
VPP
and
o-p-s
and
between
the
single
and
multi
stream
scenarios.
C: That's exactly what... yeah, yeah. In fact, I think what we're going to end up with, when we finally do a grand RFC 2544 update, is many more specifications on the generators themselves. We really have to nail down what we mean when we are asking for a consistent, continuous-bit-rate traffic stream, which wasn't specified enough in the early days. But now let's look at these results and you'll see what I mean.
C: Okay, is there a thing here? No, let's see... oops, nope, I didn't want that. Okay. So now we're going to look at four types of traffic generators. Here we've got the hardware generator that I mentioned before and three bare-metal generators, and what we notice in this is: for OVS, throughput here, and over here for VPP. And here's the comparison I want you to make, for the red bare-metal generator.
C: B... we've actually got significantly lower throughput here at 64-byte sizes, and even at 128, when looking at OVS. But with VPP, now the red generator is doing as well as the hardware-based one, and actually it's the C generator that has some reluctance to produce packets in a way that results in... well, I mean, it's pretty close to its previous throughput with OVS, but it's greatly reduced now, here. And of course, with the large packet sizes...
C: It's not such a big deal to evaluate, but these are the important ones, because this is where we're really seeing queuing, where we're really seeing packet-header handling limitations. And so the characteristics of the generators here are going to be a factor, and I'll say, tipping off some future slides: when I looked at this, I said, okay, these are bare-metal generators; they may not be producing packets in a constant-bitrate stream as carefully as some of the others.
C: Three... these are all 25 million frames per second maximum? 25 million, okay, all right, yeah. That was one of my complaints about these things too: I can never count all the zeros. Yeah, yeah, thank you. So that's 25. So we have something else here too, which comes up like this.
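Quoted rates like "25 million frames per second" are easy to sanity-check. As a hedged aside (these formulas are standard Ethernet arithmetic, not from the talk): every frame on the wire carries 20 bytes of fixed overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte minimum inter-frame gap), so the theoretical maximum frame rate follows directly from the link speed and frame size.

```python
# Theoretical maximum Ethernet frame rate. Each frame occupies
# (frame_bytes + 20 overhead bytes) * 8 bit times on the wire:
# 7 B preamble + 1 B SFD + 12 B minimum inter-frame gap.

def max_frame_rate_fps(link_bps, frame_bytes, overhead_bytes=20):
    return link_bps / ((frame_bytes + overhead_bytes) * 8)

print(max_frame_rate_fps(10e9, 64))    # ~14.88 million fps on 10 GbE
print(max_frame_rate_fps(10e9, 1518))  # ~0.81 million fps
```

This is where the familiar 14.88 Mfps figure for 64-byte frames at 10 GbE comes from; zero-counting mistakes usually show up immediately against these numbers.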
C: So let's see... oh, now we're looking at the four thousand flows, and we're looking at one hardware generator and two software generators on bare metal. And now, for some reason, we're seeing really improved throughput for the software generators. The hardware generator is really suffering here, for some reason, on OVS. And yet, look: over here they're all pretty much... everybody's happy. The only difference here is OVS with DPDK versus VPP. So this is a really great cause of concern.
C: What do we got? Oh, these are tests with the traffic generator as a VM. It's mostly restricted to the lower packet sizes, where you've got a difficult comparison here with the theoretical, and this didn't actually achieve the same throughput levels as the others. So this VM-based thing, even though it's running separately... you need bare metal; it's not working, this path.
C: And this is a complicated thing; probably nobody can read it, so I'll let you take a look at this. And the other thing, for latency: this is the one that produces pretty good measurements of latency in the VM; it's just that you can't get the throughput levels that we would want. So here's the summary of the lessons learned: inconsistent results seen across the different ones; packet-stream characteristics may affect it; bursty traffic is actually more realistic. So we're going to look at the back-to-back tests.
C: All right, so we also did some noisy-neighbor tests. What happens if you've got a stressing VM? It's going to affect your performance, obviously. And the CPU configuration and the NUMA isolation (that's, you know, the non-uniform memory access nodes, where the CPUs are in a ring) can protect you from the majority of the noise, but the last-level cache is the key to creating that noise, because you can't get segmentation or partitioning there. So that's the general finding, and I don't think that's a big surprise.
C: The goal of the test is to seek the maximum burst length, where the packets are sent with the minimum packet spacing on Ethernet. So they have a minimum Ethernet inter-frame gap and a preamble; that's the only thing that separates them, and they're transmitted through the DUT without loss. And so you're increasing the size of the burst, and if you see it go through without loss, then you go to a higher burst, and so forth. So it's another form of searching, okay.
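The back-to-back search just described can be sketched in a few lines. This is an assumed simplification of the RFC 2544 back-to-back benchmark idea (grow the burst while the DUT forwards it without loss); `send_burst` is a hypothetical stand-in for one trial of n minimum-spaced frames.

```python
# Sketch of the back-to-back (burst-length) search: binary search for the
# longest burst of minimum-spaced frames forwarded without loss.
# send_burst(n_frames) -> frames lost; a hypothetical trial callback.

def back_to_back_search(send_burst, max_burst_frames):
    """Return the longest loss-free burst length found."""
    low, high = 0, max_burst_frames
    while low < high:
        mid = (low + high + 1) // 2   # bias up so the loop terminates
        if send_burst(mid) == 0:
            low = mid                 # burst survived: try longer
        else:
            high = mid - 1            # loss: try shorter
    return low
```

As with the throughput search, different implementations step or bisect differently, so reported burst lengths can differ between tools even for the same DUT.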
C: So we did this with the hardware traffic generator. We did it in the Phy-to-Phy configuration that I've reproduced up here, just with OVS with DPDK. And what we recognized is that we have tests running in what we call continuous integration, which is where we test our software every few days and make sure that it's still running, make sure that we haven't done something in the test code that causes the results to vary horribly. And so we're performing exactly the kind of tests...
C
That's
got
Brandner
envisioned
when
he
ran
the
original
back-to-back
tests
and
was
unable
to
get
consistent
results
in
some
circumstances.
This
is
I
really
wish.
We'd
done
this,
while
Scott
was
still
here
and
I
plan
to
have
a
kind
of
an
offline
discussion
with
him
anyway,
but
I
I
recognize
that
our
CI
testing
basically
makes
a
perfect
place
for
us
to
evaluate
these
results.
C: So in the particular pod that we tested in, Intel pod 12, we have results every few days from February through May this year, and what I've plotted here is the time series of results, ordered by frame size. So you see all the 64-byte results, then all the 128s, 512s, 1024s, and 1518s. So time actually goes to the right here, and that was just the easy way to represent this.
C: So here's the interesting thing that happens. You've basically got this model of the traffic generator sending traffic into a buffer; that's handled by a header processor, and then the burst goes out to the receiver. Now, if the header processing is able to keep up with the traffic generation for packets of various sizes, then that means, basically, that you don't create a buffer backlog. And what you're actually seeing... if you go back, in your mind, to the throughput results that we showed for the higher packet sizes...
C
We
aren't
seeing
a
throughput
limitation
at
in
the
PvP
case
at
any
of
these
packet
sizes,
one
128
512
518.
In
fact,
we're
sort
of
seeing
it
a
little
bit
here
and
the
and
the
test
generator
is
reporting.
These
incredibly
large
burst
sizes
that
are
accommodated
in
the
buffer
quote
quote
of
the
test
device.
That
can't
be
true.
It
can't
they're
the
these
these
all
these
burst
sizes
correspond
to
30
seconds
of
packets
having
been
sent.
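A toy model makes this argument concrete. The rates below are illustrative assumptions, not measured values: when the header processor drains the buffer as fast as the generator fills it, the backlog never grows, so the "longest burst without loss" is bounded only by the sender's time limit (the 30-second cutoff), not by any real buffer.

```python
# Toy model (illustrative rates, not measured): peak buffer occupancy
# during a burst is (offered rate - drain rate) * duration, floored at
# zero when the drain keeps up.

def peak_buffer_frames(offered_fps, drain_fps, duration_s):
    backlog_rate = max(0.0, offered_fps - drain_fps)
    return backlog_rate * duration_s

# Large frames: header processing keeps up, no backlog; the reported
# "burst" is just 30 seconds' worth of frames from the sender cutoff.
print(peak_buffer_frames(8.4e6, 14.8e6, 30.0))  # 0.0
# Small frames: processing cannot keep up, so a real backlog builds
# and the buffer limit is what actually ends the loss-free burst.
print(peak_buffer_frames(14.8e6, 11.7e6, 30.0))
```

This is why 30-second "burst sizes" at the larger frame sizes say nothing about buffer depth, while the 64-byte results do.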
C: So we know that that's actually a limitation of the test device: it stops sending after 30 seconds. But it took some sorting out to go through all this and figure that out. In fact, if I jump to the next slide... so back here, in red, here are all the different frame sizes plotted over the same time scale, and what I've plotted there is the buffer time, in seconds, that you would assume comes from the length of the burst...
C
That's
been
sent
and
look
at
this
they're
all
30
seconds
down
to
512
128,
so
it
shows
some
variability
but
they're
on
the
order
of
3
4
seconds.
We
know
this
isn't
true
either.
This
is
this
is
really
just
you
know
having
some
trouble
characterizing,
the
throughput
there,
and
once
it
looks
like
the
128
packet
rate
was
on
the
cusp
of
just
sometimes
being
accommodated
by
the
Packer
hit.
I
can
hit
her
processing
and
sometimes
not.
C
So
that's
why
we
see
some
inconsistency
here,
but
we
see
really
consistent
results
for
64
byte
and
those
were
the
cases
where
throughput
was
limited.
It
was
always
less
than
the
maximum
theoretical
throughput
at
64.
Well,
I've
got
this
one
up.
We
had
an
old
pod
that
we
used
to
run
these
tests
on
back
in
2016
and
there
we
see
consistent
results
too.
So
changing
the
hardware
changing
the
networking
changing
a
lot
of
things.
We
still
got
consistent
results
for
the
back-to-back
testing
so
backing
up.
C: So here's the main point: we have this average burst length that we've accommodated, which was twenty-seven thousand seven hundred frames. But like I said, there's header processing going on here all the time. So if what we're trying to do is measure the size of the buffer, then we have to account for how many frames have been pulled out of the buffer by the header-processing operation.
C: When we've sent, you know, twenty-seven... twenty-six thousand seven hundred frames, it turns out to be more than half of them. So when I correct for the throughput that we've measured, the corrected buffer size is actually 5,713 frames on average, or only 0.384 milliseconds, and that seems to make a lot more sense with the hardware we thought we had. Now the trouble is (and with similar results for the other Intel pod), the trouble is that none of these corrections are mentioned in the back-to-back frame discussion in RFC 2544.
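The correction just described can be sketched as follows. This is a hedged reconstruction of the reasoning, with an illustrative 64-byte throughput value (the exact measured rate isn't in the transcript): during a loss-free burst of B frames, the header processor drains frames at the measured throughput rate, so the buffer only ever has to hold the difference between what was sent and what was drained.

```python
# Sketch of the back-to-back correction: subtract the frames drained by
# header processing during the burst to get the frames the buffer
# actually held. The throughput value below is illustrative, not the
# exact measurement from the talk.

def corrected_buffer_frames(b2b_frames, throughput_fps, line_rate_fps):
    """Frames the buffer actually held at the end of a loss-free burst."""
    drained_fraction = throughput_fps / line_rate_fps
    return b2b_frames * (1.0 - drained_fraction)

def buffer_time_s(buffer_frames, line_rate_fps):
    """Express the buffer size as time at line rate."""
    return buffer_frames / line_rate_fps

# Illustrative: 26,700-frame burst, ~11.7 Mfps measured 64-byte
# throughput against the 14.88 Mfps 10 GbE line rate.
b = corrected_buffer_frames(26700, 11.7e6, 14.88e6)
print(b, buffer_time_s(b, 14.88e6))  # a few thousand frames, ~0.4 ms
```

With numbers in this range the corrected buffer comes out near the ~5,700 frames / ~0.38 ms figure quoted above, far smaller than the raw burst length, which is why the correction matters.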
C: So this, and the latency measurements, I think, are parts of that which we should update soon, and that's why I'm proposing this work for our next charter. And I see some nodding; that's good, yeah. So I will, of course, discuss this with Scott, our chairman emeritus, who's kind of retired from all this, but I still see that he's monitoring his emails. So he's still with us in spirit, and so I'll share this with him.
C: So, I mean, you guys can take a look at this. We're running out of time, but any comments? Anything else? I really appreciate the feedback I got today, and like I said, on the list you'll see a link to our wiki, where we've tried to explain especially this calculation of the correction factors in back-to-back testing, which I went over very quickly. I show that for all the packet sizes, and why the throughput is a critical factor there. Marius?
C: It may be on the... it'll definitely come out on the mailing list, so please subscribe to BMWG. And if not, just give me your card and I'll get you these things quickly. All right, thank you, very good, thanks. Everybody, I really appreciate your good discussions today; we made a lot of progress here. Thank you.