From YouTube: IETF102-BMWG-20180717-1330
Description
BMWG meeting session at IETF102
2018/07/17 1330
https://datatracker.ietf.org/meeting/102/proceedings/
A: And all we ask is that during the course of the meeting, someone occasionally captures all the text, because there are experiences in our not-so-distant past where the whole thing has disappeared. But it's really easy. We're going to stick to this agenda; it's all right here, and, for example, you just stick your cursor there, press return, and then you're typing notes.

B: You are next on my list. Good try, though; but yeah, would you mind backing up and taking a second for the notes for us? Thank you. Alrighty, okay.
A: This is Sarah Banks, and we are the co-chairs, and we're going to entertain you as much as we possibly can today with all the things that we've prepared, and hand off the mic to our friends and neighbors who have worked so hard to prepare slides and drafts and do research at home to make this an interesting meeting. Our AD advisor is right here in front: Warren Kumari. Welcome back, Warren; incidentally, did you ever get your benchmarking setup running? Cool, cool.

A: Obviously, all our slides are available on the meeting materials page, so feel free to grab all of them. There are actually going to be, probably, especially in my decks of slides, backup slides that you might want to refer to later, so feel free to do that. Okay, the Note Well: it's still early in the week. We have an IPR policy; it means that you have to disclose all your IPR in a timely manner.
A: Here's our agenda. First we do status, and we're going to look at our new charter, and then we've got this one draft, which I think has been working-group accepted; actually, at least that's what I did the last time, but the draft name doesn't reflect that. That's the one on EVPN and PBB-EVPN. We're going to look at the VNF benchmarking methodology, which is now about automation; according to Rafael the title has changed, and I didn't change it here, sorry about that. Then considerations for benchmarking network platforms; actually Jacob is going to talk about that.

A: In the online agenda I fixed that. Benchmarking for modern firewalls: is Tim Winters here? Yes. Tim, welcome, thank you for joining us and for filling in for the rest of NetSecOpen, much appreciated. I'll talk about the back-to-back frame benchmark, which is followed up with some real testing, and this is what gets everybody excited here, when we really talk about laboratory work that's been done. And right behind me is Paul Emmerich, who's also going to talk about real stuff and the implications of the real testing on traffic generator calibration, accuracy, and precision.
A: So that's our planned agenda. What I'd like to do is, I'd really like to get to item 7 at about twenty after two, something like that. That gives us about forty minutes for the testing stuff, which I think is going to get really interesting and raucous, so let's leave plenty of time for that, and two minutes again for Warren to tell us about his benchmarking experience.

A: Okay, any questions about the agenda? Any bashing needed? Everybody's looking at their laptops; let's move on. Okay, so here's the quick status: the SDN controller drafts for benchmarking are approved, hooray, congratulations to the authors; Sarah was one of them. And I suppose that process, not the approval process but the editing process, will continue now, so you'll eventually be contacted by the RFC Editor to clarify a whole bunch of stuff; I'm sure nothing goes unscathed there. So the proposals keep coming; I've already talked about these industry discussion topics.
A: This is what's going on everywhere in our industry: we're talking about buffer sizes and the assessment of them, search algorithms, and traffic generation calibration. It's actually happening everywhere, but we've got some good talks on all of this today. So here are our current milestones, boom.

A: Excuse me. Let's see; the one that we originally planned to get going very quickly was the methodology for next-generation firewalls, in August 2018. I think we're probably going to be a little bit behind and miss that, but that's all right; all of these were really aspirational. And the next step here: I wanted to mention, you know, our rechartering discussions.
A: Basically, two things happened. One is that we codified our permanent attention to virtualized network platform benchmarking; that's now a written part of our charter. It was a bullet item for the last three years, and now it's an explicit part. Where is it? Oh yeah, the one in the middle there: the draft on selecting and applying models for benchmarking, to IESG review. That was actually the item in our charter where we got the most comments; it actually blocked the approval of our charter for a while. And the reason that happened was that, in fact, there are lots of models being prepared for network services, and possibly applicable models being developed around the IETF, that we need to pay attention to. So we have this draft on the table. It was updated back in May, actually, but the author, Shaun Wu, didn't send anything to the list about it.
A: So it actually kind of surprised me to see the update there, but we're still looking at this topic, and what we need to do, though, is look a little more broadly than the proposal we've received. That's the feedback we got from the IESG, and it's good cross-IETF feedback. Otherwise we might get all the way up there with something that is just going to get crushed with DISCUSSes, blocking comments. Any questions about the milestones? I think, actually, that's that draft on selecting and applying models.

A: We sort of keep track with the work proposals. We've adopted the work on EVPN and PBB-EVPN, but we still have to have, you know, more review of this and more comments and so forth before we go further with it. It's definitely on the charter, though, and so that's good. The rest of this stuff is kind of represented in the charter as well, although we haven't seen much on SFC, service function chaining, lately, and actually the VNF draft is going to be retitled "VNF Benchmarking Automation."
A: We also have a standard paragraph that helps people who review our drafts from outside the area understand them. This is kind of: this is testing for the laboratory; security people, don't flip out because we're saturating interfaces and stuff like that. It's just advice to the world that, if you haven't read our charter, it's kind of encapsulated right here in these few sentences.
A: An absolute requirement: MUST is an absolute requirement, and when you see those capitalized words, that's exactly what it means. That's now clear with RFC 8174, and there's a slightly new version of this typical terminology paragraph, so I encourage everybody to adopt that. I just had two drafts go through the IESG and got dinged on this for both of them, so I went back and fixed all of my stuff, and I encourage all you draft authors to please do that. And that's it for the chairs, unless you want to add anything there.
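For reference, the updated BCP 14 boilerplate being referred to, the RFC 8174 version of the terminology paragraph, reads approximately as follows:

    The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
    "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
    "OPTIONAL" in this document are to be interpreted as described in
    BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
    capitals, as shown here.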
H: So EVPN and PBB-EVPN became RFCs in the BESS working group, and they are widely deployed in service provider networks. It started as a draft around 2014, then became an RFC in 2016, so it is widely deployed. But there were no benchmarking criteria to rate these particular services when deployed in different service provider networks, namely EVPN MPLS and PBB-EVPN. So we defined certain parameters to benchmark this.

H: I caught it correctly; I was pressing the other key, okay, yeah. These were the comments received from the previous IETF: the expansion of the terminology section, and the ordering of the test cases and objectives. So, you know, the placing of the terminology section, and that the topology needed some more digging into. Those were all the comments, so we have addressed that. And these are the basics: we had defined certain parameters, eight parameters, to benchmark these services when deployed.
H: How to differentiate, when we are testing one particular box against another, how do you differentiate this? So, based on these particular parameters we defined, we benchmark this: it is based on the MAC learning, the MAC flush, the MAC aging, high availability, scaling, the scale and scale convergence, and the soak test.

H: So these are the parameters we defined to benchmark these services. Based on that, the tests will be conducted, the measurements will be taken, and it will be plotted and rated on, you know, how each box is performing. So, acknowledgements: thanks to Sarah for helping us, and Al for the support and a lot of feedback and back-and-forth changing this; thank you, Sarah and Al, for the support. And next, questions.
B: Good. Does anybody in the room have routing experience, testing-wise? Besides the folks presenting, will you raise your hands? It would be very helpful for the folks who have the experience to read this. We'll definitely cross-pollinate this outside of the working group, but from an editor perspective, on helping Sudhin sort of lock the different steps into place, I think this draft, this version and the next version Sudhin has coming, will be really approachable, and I think you'd be able to go on and read this and say, yep.
E: Jim Uttaro, AT&T. Just a question: EVPN encompasses a family of services. One of them is VPLS, there's access, there's a number of different ones; sorry: EVPN VPWS, EVPN FXC, EVPN VPLS. So when you talk about benchmarking, is it specific to the EVPN VPLS, with the multihoming and all of those features, or are you intending also to do these other aspects of EVPN? No?

E: 7432 actually articulates the control plane to realize not only the creation of a VPLS but also a VPWS, and also an FXC, so I'm just curious; maybe you might want to consider it. I mean, what I'm seeing is many people using EVPN for a lot of different use cases. One of them, certainly, is in the data center with active-active multihoming, and that's a huge thing, but there are a lot of people also using it for access, like: bring me circuits across, you know, a layer 3 network and land them somewhere. That's...
E: I guess, if you can: FXC takes VLANs and treats them as state, advertises that state to an index, and says these VLANs live at this next hop, in this context of FXC. So, you know, if you're looking at programming that on a box, and that's part of the EVPN scope, that might be useful to people, but...

H: Currently, anyway, that's a good point, because that cannot be taken into the VPLS per se, because of the parameters we defined. That's a good point, because the FXC came later; it's still running as an IETF draft, version 00. Earlier it was an individual draft; now they've adopted it, version 00, geez, I'm sorry, yeah. This is basically benchmarking this RFC, because it's widely deployed since 2014. So we want to make this the platform, and then these add-on services come; so we are benchmarking the base RFCs with this. So...
B: I think what I'm hearing Jim say is: if we're not clear, you're not clear on what your scope is in the draft, and could you clarify it? Would that be correct? Slash, I think he's also maybe potentially saying you may want to consider a larger scope, and I think, as an author, it's your right to not do that.

B: However, you also made the point, so my comment as a personal contributor: if you're going to say it's been widely deployed since 2014, I think Jim just laid out two very typical use cases you may want to consider. One, I agree, define the scope; but I do potentially think maybe you should broaden that scope, because it's been out there for four years. So if we're going to put out something that says "here's how you benchmark it," you'd want to cover the use cases of what folks are deploying, right?
H: At this point I cannot comment; I need to think. Because the parameters... because the services Jim mentioned are totally different: EVPN is kind of a, you know, super-VPLS, and this is a point-to-point service with VLAN features and all, so the parameters will change. But I need to think; I cannot comment. Jim, I take your point, but I cannot comment at present, because...

H: I really appreciate your point; just give me some time, because, you know, I need to think. I'll get back, because at present, I really apologize, I can't give you comments on that. The parameters... I need to think, because, you know, you're right, the services are different. That's one of the key points. Give me some time, sir, and I'll get back. Thank you.
A: That's a good ending point, I think. We'll sort of leave it to you to think about this. There are two ways to proceed: we could add this stuff to your draft, if you prefer to go that way, Sudhin, but it could also be an additional effort that folks propose, if they see a need for that. So there could be, you know, a sort of part-one-and-part-two kind of approach here, and I think that could work well. Yeah.
C: So, some of the changes. One thing that I think Samuel, the co-author of the draft, did: we went off to the NVO3 working group and kind of talked a bit about this. So one thing that's actually coming in this next revamp is quite a bit of changes, actually, to some of the scope as well as some of the terminology.

C: Aligning with what's already there in the NVO3 working group will make this draft a bit more clear about what exactly we're going after in terms of benchmarking, and then once we update the actual terminology as well, it'll all be aligned. And then one thing, the split NVE, is one thing we haven't covered that we're adding in; it's kind of a work in progress. The split NVE... the NVE, for those unfamiliar, is the network virtualization edge.
C: Exactly, that's what the NVE is. That's exactly why we want to kind of define, in each test, here's if it's co-located with the hypervisor versus not, and have kind of different considerations for each, and that's kind of the big update that we're going after in this. Specifically, I mean, if you look at RFC 8394, there's some ambiguity in how things come to an agreement.

C: There's literally the one sentence; it's like, okay, "when they come to some sort of agreement," and it's like, okay, hey, that's interesting, yeah, okay. So we need to kind of come up with benchmarks to say: okay, I get it, this is ambiguous and every implementation is different, but how do you benchmark it so that you know exactly what's happening when it happens?
A: So, Jacob, this is where you start talking about the overlay layer? Yes. In the previous slide, which I don't even want to try to go back to, yes, there you're talking about application-layer benchmarks. Yes. So we've got kind of this: you've got overlays and an application layer, and I'm trying to get the scope clear in my head here. It's got application-layer stuff and overlay stuff, which is... I don't know, I mean, I'm not sure how that fits together yet, I think.
C: The application, meaning you could call them application-layer benchmarks, but I think the goal is to also define some benchmarks for when you're benchmarking from a server itself: how do those benchmarks change? So, instead of just having a traffic generator that's blasting traffic at it, non-stateful, you can actually go in and say, okay, here are the application-layer benchmarks you can do to actually benchmark this in a different way. I think that's the... okay.
C: So yeah, even in, what is it, 8014, they kind of call that out as well: these things could be used for non-hypervisor workloads, like just container workloads, but it's very specific to hypervisors today. Yeah, but I think they'd have to do even further work to get to that point where it's containers; but I mean, they're actually in the world today, so it will just have to work its way in there. No, yeah, that's pretty much it.
K: Along the same lines, my only concern is, like... I'll say, initially, I think I read the draft like six months ago, but my main concern was about approaching not only VMs but also containers. So I thought that the draft was very focused on a hypervisor, but if you bring that world from hypervisors to containers, it might change the terminology a little bit, yeah.
C: Absolutely, I think, yeah, definitely. I don't know if it's in the scope of this document or if it's, like, then containers, or other workloads outside of the hypervisor; and if there is no hypervisor, what does that change? Because we've been very specific around, like, okay, for the splits there's the tNVE and then the nNVE, and this external NVE thing, and there are very specific things like, okay, how the hypervisor does this, but they're... so, I mean, it's...
I: Well, correct. So this is next-generation firewall benchmarking. This is actually the work of a group called NetSecOpen, which is a group of about 15 individuals who are working together; we get together once a week, have a call, and talk about these types of test cases. So we've done two drafts; this is a third rev, and I'm really going to focus on the differences. If you want to ask any questions about any of it, feel free; there might be some of the test cases where I'm slightly less involved, where I might defer, go back to the group, and come back to you if you have any specific questions. So, the first thing we did in the third rev of this draft: we reordered it.
I: So if you look at the diff, it looks like a mess, because we deleted entire sections, moved them to the bottom, and moved them around, so it's just kind of hard to read. Worse... I won't do it again; this was a one-time thing. So there's a lot of reordering, which makes it look like there were massive changes. The only other section we touched, we editorialized: it was worded kind of funny because of the different authors, and this kind of gave it one voice. So sections 7.1 through 7.7 got a little bit of a rework. Those are mostly editorial changes, not major changes to the detailed test cases.
I: In section 7 we had a bunch of test cases, in particular HTTP throughput and HTTPS throughput, and we removed the binary search requirement. After talking to some of the test and measurement guys, this was kind of overly complicated; it was just easier to take out the requirement that we always do a binary search, so we removed that from the version-two draft. We also cleaned up the objectives of 7.2 and 7.3.

I: In particular, we had a spirited conversation about object sizes, and those are the ones we settled on. If anyone has any feedback about those types of things, feel free to read the draft and send comments or get up to the mic; those are what we've chosen for now. The second version of the draft did not have any real test cases for 7.3 through 7.6.
I: Those are all brand new in this draft, where we wrote down the individual test cases, so we're closing in on getting some of these wrapped up. Some of them, also, it's sort of a rinse-and-repeat situation where, once we wrote it for one technology, we could just put in the different parameters, which made it pretty easy. So once we got the general setup, we were able to update those test cases.
I: What was complicated was what the test and measurement guys were very concerned about. In particular, we work with this group that includes Ixia, Spirent, those types of test and measurement vendors, and they said the binary search would really complicate some of the results that we were going to have to get out; there are easier ways for us to get to the throughput, instead of actually making a binary search be done over all of the sets of results.
I: So, you know, going forward, I don't know how many people have read the draft. Obviously we would love to get some on-list review. I will say there is lots of review going on inside of this group before it gets listed; like I said, there are somewhere between ten and fifteen participants who are reading these test cases as we write them, but the more the merrier.
A: If people in this group were willing to join our mailing list and to share comments during the development of this, that would look a lot more... yeah, it'll look a lot more engaging to our group, to the folks who are kind of just being presented with a big document once in a while. Yeah, and activity on the list is the best way to get some attention paid.
I: Yeah, I think we're getting to the point now where the test cases are stabilizing, so I think we should start to move the conversations over. I think before, we were working out lots of details, and it might have killed the list, but maybe after this next revision, absolutely, we should start to do all of this stuff on the list. Great; again, like you said, just to show interest, because there are people that are interested, absolutely.
I: So I will make that suggestion. Yeah, we're working right now; one of the things missing before we think this is ready to go is that it doesn't have the security effectiveness. In particular, you're talking about vulnerabilities and the things you need to do; the list is closed. We have an idea of what we want to say about this, of how we're going to go about getting that list, so that'll be in the next revision of the draft. And then we missed a couple...
A: Thank you. Rafael, I completely blew the agenda; you were supposed to go before Jacob and Tim, and I'm now going to demonstrate how it happened. Let's see here. So I've got to back this thing up, and I thought I had them all in order here, but... now, see, that's mine, that's mine, and somehow that one got mixed up, missed. So, let's see here. Yeah, that's right, right there, it's that one. All right, so this magic button helps, and then go... this magic one, else...
K: Yeah, okay, go. So I'm here to talk about the updates that we made in this draft, starting with the title. I think we had a lot of reviews in the last meeting about: okay, you're addressing generic ways of VNF benchmarking; and it's like, based on the research behind this draft, there's a lot of focus on automation. So we said: yeah, we agree with that.

K: I think that's some consensus we had, and then we changed the title to "Methodology for VNF Benchmarking Automation." So, why we updated: basically the main changes are about automation, as we discussed in the last meeting. I mean, there's the new title, the scope has changed, and there are some modifications in the methodology. The idea is that we do not aim to explore all the VNF benchmarking methodologies; rather, we aim to create this kind of approach, a methodology for automating a VNF benchmarking methodology:
K: so if you provide all the tools, and kind of a generic methodology for that to be automated... we understand that the developer can do that better than us, and I think that newly coming VNF benchmarking methodologies might use this draft as a reference implementation for that. We also have new contributors from Paderborn University that are doing similar work to what we are doing; we have their implementation also added to the draft.
B: Let's stay on this for just a sec. For me, the biggest problem I have with what's written here: a large part of what we want to be able to do in BMWG is have repeatable results. If I execute the test, and you execute the test, and we were to execute them against the exact same thing, we come out with the exact same results. The problem is, if you leave the metrics and methodology up to the developer, I'm not entirely sure I'd trust my developers. I mean, I love my developers, but trust...

B: ...you can't just trust, you have to verify, if I follow this methodology. So I have some sympathy; I think I've been at the receiving end of these comments, I think Al gave them to me once upon a time. So I definitely have some sympathy there, but I do suggest maybe we take a look at how to have at least something be automated, and give yourself some wiggle room, but something needs to be there so that I could repeat this and get the exact same apples-to-apples comparison. Okay.
K: Okay, let me get there, okay, all right. No, no, I mean, this is a generic... I think it's a major concern about the draft, how generic it can be, or how specific to be about the automation. But, I mean, here we are trying to restrict it: to have this developer, or, I don't know, the creator of the benchmarking methodology, come here and specify what we call the benchmarking descriptor, and this brings us to the main one of the changes in the draft.
K: Then we focus on, say, the main focus of the draft, which is to obtain a VNF benchmarking report in an automated way, and we do this by having the benchmarking report composed of two parts: one is the benchmarking descriptor and the other is the performance profile. The descriptor is the one that I give to you, and you can say, like, now you can repeat it; it gives some kind of boundaries and, let's say, a strict kind of environment and requirements, parameters, so that you can say:
K: if you put these in the environment that you describe, it will give you the same performance profile. So that's what we're trying here; I know, it's like, we are at a 02 version, but we will get there. So then, in the benchmarking descriptor, mainly we describe the procedures, the configuration, the overall description of the automated benchmarking scenario, and the target information about the VNF itself: I mean, the target VNF, the version, the model,

K: the specifics of the VNF. We have the deployment scenario, which is basically (I forgot the tab here) the topology, the requirements, and the parameters for each one of the components that we have described in the VNF benchmarking setup section in the draft. And then we have the VNF performance profile, which is the result of the execution of the benchmarking descriptor; I mean, it gives you the execution environment, meaning where the benchmarking descriptor was run and how it was executed to create that.
K: So it gives you hardware specifications and software specifications, this is also detailed in the draft, and also the measurement results. We put the measurement results as active metrics and passive metrics: the active ones come from the direct interaction of agents with the VNF, and the passive ones are the monitoring, or the inferred metrics from the VNF, when that's possible. So here's the basic, I mean, scenario. I'm...
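As a rough illustration of the two-part report structure just described, a benchmarking descriptor plus a performance profile, a sketch in Python follows; every field name here is hypothetical and chosen only for illustration, not taken from the draft itself.

    # Hypothetical sketch only: field names are illustrative, not the draft's schema.
    benchmarking_descriptor = {
        "target_vnf": {"name": "example-vnf", "version": "1.0", "model": "example-model"},
        "deployment_scenario": {
            "topology": "agent -> vnf -> monitor",
            "requirements": {"vcpus": 2, "memory_mb": 4096},
        },
        "procedures": [
            {
                "agent": "traffic-agent",
                "prober": "moongen",          # probe attached to the agent
                "parameters": {"rate_mpps": 1.0, "packet_trace": "example.pcap"},
            },
        ],
    }

    vnf_performance_profile = {
        "execution_environment": {"hardware": "host-spec", "software": "stack-spec"},
        "measurements": {
            "active": {"throughput_mpps": None, "latency_us": None},   # from agents exercising the VNF
            "passive": {"cpu_percent": None, "memory_mb": None},       # monitored / inferred from the VNF
        },
    }

    # The report ties the two together so an experiment can be repeated and compared.
    benchmarking_report = {
        "descriptor": benchmarking_descriptor,
        "performance_profile": vnf_performance_profile,
    }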
K: It is specified in the considerations of the draft; like, we classify the components as manager, agents, and monitor, so it would be a parameter for the agent, like the packet tracer being used, the rate, and all the parameters of that. And this goes further, because we have this idea that the agent can have probes attached to it, we call them probers, with specific interfaces to specific technologies, like MoonGen, for example, yeah.

K: Then, in this case, you just need to specify how to attach MoonGen as a probe into an agent and put in the parameters to configure that, and that goes as, I mean, a requirement for the test itself. I mean, you must have an interface to an agent with MoonGen, and you must have the required parameters to execute it, with certain, I'd say, generic guidelines and packet traces. So...
K: I totally welcome new contributors now; I mean, we have a lot of... yeah, I mean, if you find the draft interesting, give us the ideas. And the terminology was changed from the last draft, and I think it will come to change again, so nothing is written in stone in a draft. So basically the idea here is, as I told you: you have this picture where you give the VNF benchmarking descriptor as the definition of the method, how to benchmark the VNF.

K: You have the benchmarking process that generates the benchmarking report, which contains basically the descriptor and the performance profile, and with all of that you can compare and repeat experiments given this benchmarking descriptor. In this case, we consider in the draft that multiple steps, or procedures, of the benchmarking itself can be automated, and we define them: the orchestration of the components itself; the placement, which can be done in a manual or automated way; the management and the configuration of the components also; and then you have the execution.
K: After the execution of the benchmark itself, for the output we consider also possible ways of parsing the metrics, in a raw format or in a specific format, extracting some analytics, for example clustering metrics, and that might be specified in the VNF performance profile. And there are many issues that are unresolved in this draft; I mean, at least now we, the co-authors, understand that we have a good skeleton, I think, that we can work from now on. For example, we need to clarify the automated benchmarking procedures; I mean, what is happening, actually, in each one of the bullets that we have in the draft. We need to specify clearly each one of the particular cases in subsection 5.4: for example, the case of a noisy neighbor.
K: Or the case of failures, or the case of a flexible VNF, where inside the VNF itself there might be multiple components that scale depending on the traffic workload; this must be represented in the VNF benchmarking report, yeah. And so, what's still missing: we think that we must detail these interfaces, like, as you say, of the agent and monitor, with the prober and listener (this terminology might be updated); we need to specify how these interfaces actually work, in the draft.

K: Also the actions that each one of the components, the manager, agent, and monitor, might take on the messages, and how they parse the actions that they must take to run the benchmarking stimuli and parse the metrics. And the possible issues of the automation approach: I think we need to find those out. For example, if you are going to put, say, a binary search as some kind of procedure in this automation scope, I think we need to say: hey...
K: ...this might be a problem here, here, and there, I mean, as a consideration for the reader of the draft. In parallel, we, really, the co-authors, are developing an information model, because we have these reference implementations: there is Gym, the one I coded, and the other one, tng-bench, from Manuel, the other co-author. We are doing tests side by side, and we are creating this information model that might be useful further on.

K: For the whole draft, I don't know; as we talked earlier, it might be in the scope of the draft or not. We still need to see how we're going to put it inside the draft, or as an, I don't know, informative reference for the draft. And now, I put that RFC 2119 reference in, but it's now 8174, so yeah, we are going to update the draft.
A: I had another question here: what about, I mean, Gym, the project Gym, do you...?

A: Right, well, thank you, Rafael. They did good work.
A: All right, so this is the draft on the updates for the back-to-back frame benchmark. This is where we're desperately trying to measure the size of the buffer of the device under test, and RFC 2544, our most fundamental RFC, specifies a method to do this. The thing we're trying to measure is actually the longest burst of frames that a device under test can process without loss; it's intended to examine the extent of data buffering. The material there was extremely concise in terms of the procedure and reporting.

A: In other words, there wasn't much there; it was like one page. So we ran some tests in the VSPERF project, as part of the Open Platform for NFV, last year, published the results at the summit, and what we basically decided there was that quite a few considerations could be improved. I've skipped over a lot of those here, but the main one is this idea that, when we measure the longest burst that a device can accommodate, while that burst is being transmitted into the device, the device is also sending packets.
A: Basically, it's forwarding frames, and the previous calculation didn't account for that at all. In other words, by the time you try to assess the size of a buffer and you count the number of packets which have been sent, some of those packets aren't in the device anymore; they've actually gone out. So that's what the correction factor is, simple, and that's what it's all about accounting for. So when we tried to calculate the number of packets in the buffer, we calculated the average length of this burst, the number of back-to-back frames.

A: We divided it by the maximum theoretical frame rate, because they've been sent at the back-to-back rate, and that gives us basically the time that is represented by all those back-to-back packets put together. But, and now I've explained the second part of this, basically we know that forwarding at the measured throughput over that amount of time is going to have extracted some number of frames. So that's why we have this correction factor here.
A: So we have the implied buffer time, and we correct it by this fraction involving the measured throughput over the maximum theoretical frame rate, and it turns out that reduces the buffer size quite nicely, to something that's probably a little closer to accurate. Now, this test was only ever intended to work with a single egress port where we're sending traffic, both a single ingress and a single egress port, so it's completely different from the test that was described in the data center benchmarking RFC, 8239 I think it was.
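As a rough numerical illustration of the correction being described, under the simple assumption that the DUT keeps forwarding at its measured throughput while the back-to-back burst fills the buffer, so that only the excess frames actually occupy it, a sketch might look like this (the exact formula in the draft may differ):

    def corrected_buffer_estimate(b2b_frames, frame_size_bytes, link_rate_bps,
                                  measured_throughput_fps):
        """Sketch: estimate DUT buffer occupancy from a back-to-back frame
        measurement, correcting for frames forwarded during the burst.
        Assumes Ethernet framing overhead (preamble + inter-frame gap = 20 bytes)."""
        max_frame_rate_fps = link_rate_bps / ((frame_size_bytes + 20) * 8.0)

        # Time represented by the measured back-to-back burst sent at line rate.
        implied_buffer_time_s = b2b_frames / max_frame_rate_fps

        # The DUT drained frames at its measured throughput during that time,
        # so only the excess accumulated in the buffer.
        corrected_frames = b2b_frames * (1.0 - measured_throughput_fps / max_frame_rate_fps)
        corrected_time_s = corrected_frames / max_frame_rate_fps
        return corrected_frames, corrected_time_s

    # Example with made-up numbers: 10 GbE, 64-byte frames,
    # measured throughput of 7 Mfps, average burst of 100,000 frames.
    print(corrected_buffer_estimate(100_000, 64, 10e9, 7e6))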
A: So, we had some questions in the draft for discussion. We asked: should a particular search algorithm be used? We think yes, the answer is yes; we think it should probably be binary search. And also: should the search include trial repetition? It should include trial repetition whenever there's a loss observed, to avoid the effects of background loss unrelated to buffer overflow.

A: So now we're thinking of two kinds of sources of loss here: a kind of transient loss that might be present on the links, really high-speed links, or in the device itself, if it's a virtualized device. And so yes, we think the answer is yes there as well. We're going to see some results about that in a moment. So, the next steps: who's read the draft? One hand, thank you. Oh, two hands, thank you.

A: So if you have comments, that would be great to hear about now, but otherwise we need more readers before we can adopt this. We've got a milestone on our charter, but we can't adopt this if... of course, I'm speaking as a participant now; I'm glad to qualify that. So other ideas are always welcome here for doing this kind of benchmarking; we'll hear a little more about that later, possibly. So, comments?
B: We're running into this a lot from customers who are asking us about buffers and how things are run, and there's generally no box left anymore with memory per port; it's a shared pool. And so, while I applaud this, and I think we should have this refinement, don't get me wrong, I do think it sort of begs the question: how big is the pool, and how big is the buffer, when it's shared across N ports, right?

C: Go ahead. Yeah, that was pretty much my question. Yeah, okay, it's kind of like... I think the assumption here, maybe, correct me if I'm wrong, is that the device is not actually capable of doing full line rate, so that's where we're worried about that. But I do agree with your point: okay, what about multiple ports going at the same time, with one ingress, one egress, so...
A: We've got devices that can't perform at full rate, and their single-port testing is enough, and that applies to a lot of the virtualization world, frankly; I mean, that's why benchmarking is cool again, all this virtualized network function stuff. All right, so that's a good clarification; be sure you type it there so I can see it again sometime. We've covered your point, but my next slide here, procedures... now, don't go away, Jacob. Oh yeah, so Yoshiaki sent this to our list.

A: So now we're getting into the test results here. What Yoshiaki is showing us is: he's measuring latency per frame, and he's got this combined set of ingress ports at a rate that slightly oversubscribes the egress, by about one percent, and so he's tracking the delay there, and when he first sees a loss, he's saying: okay, that's 24 frames, so that's the latency. Any thoughts on this? Yeah.
C: I've seen some devices that have four ports per buffer, so when you add a fifth port, it completely changes everything else, because the ingress of the fifth is taking more of the buffer. I mean, same thing with this, right? So this is exactly what we want to see, actually, yeah, these results.

A: Yeah, that's exactly right. And the case I'm worried about is where, if you put a hundred percent of line rate here, now you see losses immediately, so you've got to back down from that, and that's where the burst test is worthwhile. So, two different categories of things, yeah.
A: Right, good. I think that was it, other than reporting; well, you know, we've got good reporting, I already took care of that, all right. So that's it; any other questions? I'm very thankful for the discussion on this, and also for Yoshiaki, who got in touch and is really doing some of this stuff; glad to get this feedback. Thank you. So I'm going to follow up now with what I threatened to do, which was the OPNFV hackfest/plugfest results.

A: So now we've kind of reached the point where we're not talking about drafts anymore; we're talking about measurements, which are going to influence our future development and so forth, and it's always good to bring this stuff in. We're always going to try to fit this stuff in; if you've got measurements as well, this is what we get excited about here. So let's do that.
A: So, Sridhar is the project team leader now in OPNFV, the Open Platform for NFV, network functions virtualization; it's a Linux Foundation open source project. He's the project team leader for the VSPERF, vSwitch performance, project there, and we've been contributing; I'm a contributor and committer on that project almost since it started, and we've been contributing to BMWG for a long time, with a couple of drafts as a result of it as well. So it's really his work, my work in the kibitzing and designing, and the VSPERF project team's work keeping everything up and running; I mean, it's all of us, really, getting this stuff done.

A: So here's the quick problem statement: there are lots of ways in which our test results and repeatability can be sort of threatened, when we test over time, over minor changes, on identical nodes, with different test management tools and so forth, using different test equipment. We actually examined this as part of the Danube plugfest and presented some results here last summer and at the OPNFV Summit.
A: So now we're covering the time aspect here. We were planning to do it with multiple nodes, if possible; we did do some testing with multiple tools, but that's yet to be included in our summary here. So our initial focus was on search algorithms, looking at the search algorithms in ways that we could improve them, so that we might improve our repeatability.

A: So, a quick view of the test combinations: we looked at some of the test configurations where we're already running continuous integration tests, to give us some view of the potential issues we might see, but we used the T-Rex traffic generator in order to do this testing. The reason we did that was because we could easily control the search algorithm there.
A: So here's a quick view of our deployment scenarios, which I will call Phy2Phy, as labeled at the bottom there; PVP, when it includes a VM; and PVVP, where it includes two VMs. The simplest scenario is this Phy2Phy case: it goes in and out of the vSwitch. But notice that our test device is always a physical test device (it could be MoonGen, don't worry), always separated from the device under test by these physical interfaces, and that's to avoid the possible, you know, conflation of the workload with the test traffic generator.

A: We've stayed with that and, believe me, these are realistic NFV deployment scenarios as well: you have one or two virtualized network functions, VNFs, or a VNFC and another VNFC, that are working together, and then out the physical port. That's it.
A: We've collapsed many trials at different offered loads into a single test, which has a measurement goal: trying to find the maximum throughput at zero loss, something like that. If we're going to repeat those tests, and actually we do that in this study, then we have sets of tests, and when we change any of the parameters, then that's part of a method. So I've illustrated it here as a set of multiple frame sizes, but we test with fixed frame sizes. So this is actually, Rafael,

A: something that you could include, this "set" terminology, in your hierarchy of testing; I think that would be good. We're about to see the value of repetition at the test level. So, doing well on time, okay. This is binary search; this is a day in the life of a classic benchmark.
A: This... oh, here we go. So we start out at a hundred percent of the maximum frame rate, that's the maximum theoretical throughput for a given frame size, and then, when we see loss, lost frames, something like 42 million there, you reduce it by half; that's the way the binary search works. But we have to go all the way down, in this case all the way down to six point two five, and, you know, that's actually fairly low throughput.

A: This is the Phy2Phy case here. We see this loss in the 700 range, interesting, so we search around and we end up with a value that's four point two eight percent of the total maximum offered load.
A: And one of the other things that we saw here was the duration of the test. What that really means is that we were sending a constant number of frames to set our trial duration, and in fact the device, I see Paul laughing, the test device, didn't deliver them at the maximum rate, so this duration actually exceeded the fifteen seconds that we planned. So it's good to have these checks, you know.
A: Basically, every time, you're kind of changing your search area: if you see no loss, you search above it; if you see loss, you search below. And if you have a transient impairment along the way here, like this one (I'm going to tip the scales: this was actually a transient, and some of these are transients too), then, when we see that, we greatly influence the future search. And how are we going to handle that? That's the problem.
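A minimal sketch of the classic binary-search-to-throughput loop being described, assuming a hypothetical run_trial(rate) helper that sends one trial at the given percentage of maximum offered load and returns the number of lost frames:

    def binary_search_throughput(run_trial, resolution_pct=0.5):
        """RFC 2544-style binary search for the highest zero-loss offered load.
        run_trial() is a hypothetical helper, not part of any real tool."""
        low, high = 0.0, 100.0          # percent of the maximum offered load
        best = 0.0
        rate = high                     # first trial at 100 % of line rate
        while True:
            if run_trial(rate) == 0:
                best, low = rate, rate  # no loss: search above this rate
            else:
                high = rate             # loss: search below this rate
            if (high - low) <= resolution_pct:
                return best
            rate = (low + high) / 2.0

A single transient loss during any one trial pushes the whole search below that rate, which is exactly the sensitivity the loss-verification variant sketched later tries to remove.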
A: That's... except for... that's kind of a tip that there's some sort of transient thing happening here, so we're going to combat that, but well spotted. So, I should have said this first; in fact, I planned to switch it around here. When you have a search space, you divide it up into steps of, kind of, your minimum tolerance, and in this diagram, at the top of this thing here, I've divided 12 mega-frames per second into equal 1-mega-frame-per-second steps.

A: So this creates the search array, and what we're going to test for is whether a loss occurs or not; that's our indicator of whether the resources have been exhausted in our device under test. So, when I wrote this part in the draft about retesting and checking test results, and whether we should repeat the results, I went deeper into this, after writing some of those sentences and talking to people about it, and what I found is this tiny little reference here at the bottom.
A: Unfortunately... it's "Searching Games with Errors: Fifty Years of Coping with Liars," and that's exactly the situation we're confronted with when we have two different sources of loss in our device under test. So, our questioner: I've drawn the model up here that sort of fits this graph, what they described in this paper.

A: The questioner asks a question; the question is delivered error-free to the responder, the device under test, and what we're basically asking there is whether the resource limit has been exceeded by the offered load that we're generating. And when the resources have been exceeded, the answer we get is loss: whether loss occurred or not. So if loss occurred, that's "true"; if the loss didn't occur, that would be "false." Now, there's a background error process, and that's the liar here, which changes the answer that the responder gives back.
A: So an error converts a zero-loss, or "false," response to "true." But we're working with a system of half-lies, and that's terminology from the Pelc paper: only one of the answers can be influenced by this lying phenomenon. So if, in fact, we've seen loss due to resource exhaustion, and a background error process jumps in and causes more loss, that's not going to influence our search results. That's actually a very happy outcome, because it means that we're not going to have to repeat all kinds of outcomes.

A: We can trust all the "false" ones; we only have to retest all the "true" ones, the cases where we have loss. It's a time saver, and that's a good thing. And this is the timing of it: what we're hypothesizing is that these transients that cause loss appear over time, near the red vertical arrows, and if they occur during a trial, then that's when we're going to see loss due to them; on the other hand, if we repeat the trial fairly quickly, with the right duration, now we'll see...
A: Now we have a chance to see whether, you know, that loss was true, and it's really resource exhaustion, or whether it was caused by a transient phenomenon. But what's also clear in this diagram is that you have to choose your trial duration very carefully, and that means you have to do some long-duration testing; I'll try to get to that in a moment.

A: So, a long slide here, but basically here's what we chose in this testing: a minimum step size of 0.5%; it works a little differently than the fixed-frame-size thing, since we were looking at percent of the maximum. The maximum number of times that we would repeat a trial is two, and we also created a loss threshold on repeating trials in the v2 version of the algorithm. We tested 30-second and 15-second durations, but that, as I said, is actually best determined
A: after doing some long-term tests. We did 64- and 128-byte frame sizes, and the maximum number of repeated tests we did was actually four in this work, though mostly what I'll show today uses fewer. So we also calculate these interesting metrics about our tests: the number of repeated tests where the loss outcome changed, so basically this state right here, and we also look for a metric on the consistency of the results,

A: basically to be able to tell: is this repeating of trials actually doing anything good for us? So here are the long-duration tests that would be planned; we haven't actually done them yet, and we actually need a better tool to do this. You can't just look at the results at the end; you've got to be able to see how much loss occurs and where it occurs in the narrow time space, to be able to process it accurately. So we've got more to do with that.
A: So here are some results for the 64-octet and the 128-octet cases. I'll work through the search algorithm details with you; in fact, I'm going to do it over here, you know, I'm going to do it right here. Sorry, you can see it there, but I'm basically going to do it with a pointer, because that way it gets recorded. Where's the pointer? All right. So this is the Phy2Phy, what we call the Phy2Phy case, and here we're looking at binary search, with the number of received frames counted.
A: This is the percentage of the maximum offered load, all along here. So these are repeated trials, sorry, repeated tests, at 64 octets, after we've done the binary search; this is the outcome, and the binary search always took twelve iterations, which was true across all of these cases. All right, so let's look at binary search with loss verification. Basically, what we do is exactly what I showed: if we see loss, we test again, and if we see no loss, we trust that outcome.
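A minimal sketch of that loss-verification rule, trust a zero-loss outcome, re-run a lossy one to see whether the loss was a transient, using the same hypothetical run_trial() helper and the "repeat at most two trials" and loss-threshold parameters mentioned earlier:

    def verified_no_loss(run_trial, rate, max_trials=2, loss_threshold=None):
        """Loss-verification sketch: a zero-loss trial is trusted immediately;
        a lossy trial is repeated, and a zero-loss repeat reverses the outcome.
        Very large losses can skip the repeat (clearly resource exhaustion)."""
        lost = run_trial(rate)
        trials = 1
        while lost > 0 and trials < max_trials:
            if loss_threshold is not None and lost > loss_threshold:
                break                   # massive loss: re-testing adds nothing
            lost = run_trial(rate)      # re-test the suspicious lossy outcome
            trials += 1
        return lost == 0                # True means "treat as no loss" in the search

Plugged into the binary search sketched above in place of the bare run_trial(rate) == 0 check, this is what produces the "reversals" counted in the results that follow.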
A: So, the interesting thing is that, with the Phy2Phy case (you remember, this goes from the physical port up through the vSwitch and back out again), we're not seeing very much difference here. In fact, these are pretty consistent results in terms of the percentage for the binary search, and we're also seeing consistent results here for the binary search with loss verification. And the devil's in the details, isn't it: in how many cases did we see a reversal when we tested a second time?

A: None. So our results for the Phy2Phy case are really troubled by this transient phenomenon, and that's actually very interesting news, because it tells us that when the transients come around to bother us, they're coming from someplace else in the architecture. So this was a very valuable result. So... oh, go ahead.
A: We've seen this once in the... actually, we've seen it occasionally in the continuous integration testing: once in a while, for some reason, 128 octets goes through at maximum throughput, it actually happens, and we've seen it in the burst testing as well. Occasionally, 128-byte packets, for some reason, make it through the architecture that we're testing here. So it's a real result, but it's a platform variability that we don't fully understand.

A: Those are always interesting tests to run, and I think that's worthwhile doing. In fact, that's what the new spec we're writing in ETSI, TST 009, recommends; so absolutely, absolutely worth a shot. So let's look at... okay, I meant to mention: these are all 30-second trials.
A: We chose 30 seconds because it was what we were currently running in our continuous integration tests, but we hadn't done any long-term testing yet, so we really had no idea whether we fit in between the transients or what was really happening here. Okay. So now we tried PVP, and now we've got something really interesting. We've got, for the 64, where's my cursor, there:

A: for the 64 octets it's very consistent, 4.5% of offered load, for binary search; with loss verification there's some more variation, but also much higher values. And clearly, I mean, this is fairly consistent, although you're seeing some jumping around here, and quite a bit higher percentage of offered load for the binary search with loss verification, fairly consistent there too. And notice, now the reversals are happening; they're occurring in each one of these cases, so the loss verification is doing some work to try to characterize the resource exhaustion.
A: All right, so let's press on. So we went about investigating this: I plotted here the number of losses for each of the binary search trials that were at six point two five percent of max offered load, remember I showed that as a special case before, and all of them had packet loss counts on the order of 700.

A: When we ran loss verification, sometimes we saw that too, but other times zero. Loss verification again: this was the first trial, it saw zero; another one here, a loss verification at six hundred forty-four; and then the loss verification repetition seems to have seen two of these transients, roughly accounting for about seven hundred packets. So this... so this...
A: We're still searching for that; that's what Sridhar Rao is at home doing right now, so we're going to track that down. And this is the trial where we saw loss twice during loss verification; it's a really low value. So we've obviously got some improvement to make here in terms of trial duration, because we're not really avoiding the transients.

A: We looked at this with a 15-second trial duration, and now look what happens: binary search saw this nominally-700-frame transient about half the time, and look at the effect on throughput. When you saw it, it's 4.5%; when we didn't see it, it's 9.1%, and so on and so forth. And, interestingly, almost all of the loss verification cases didn't see it; in some cases we actually saw zero loss at 12.5% of offered load, and there was the one case where we did see it.
A: So then we looked at 128-octet frames; maybe this frame count, this lost-frame count, is somehow related to the frames per second. Well, it turns out it looks like it's not: I mean, these are actually a little bit higher, but they're all in the 700-frame loss-count range, and we're seeing the same kind of thing with the 30 seconds. All the binary searches were bothered by it; loss verification saw it once, but the repetition took it away. And if we go on to...

A: So I've got slides here that show, kind of graphically, how the binary search works. One of the things we saw was that when we tested twice at a hundred percent of maximum offered load, we were kind of wasting our time; as I said, we're seeing something like 50 million frames lost. So we put a threshold in, to reduce the number of repeated trials when we see loss over a certain threshold.
A: This is for a PVP setup, 128 octets, and this is binary search with loss verification. So we're seeing some losses in kind of the onesie-twosie range here, and some are greater than 2000; they're not plotted, they're above the range. So we may have more phenomena in this PVP case that we need to account for; it's not the same kind of thing we saw at six point two five percent.

A: Another similar thing, I'll just skip over that. So, let's see here... oh, and this is a 30-second trial for PVVP; this is where we've got two VMs in the chain. And we're seeing about the same consistency here for binary search with loss verification, and really bad consistency for binary search alone; it's almost doubling here in some cases. So, you know, obviously, again, binary search with loss verification is doing some work; we're seeing some reversals.
A: I looked at the histograms of lost frames for each of the cases we examined, and so we're seeing kind of the difference between 30-second trials at 64 octets and 15-second trials at 64 octets. And what we see is that some of the instances of counts in the region where we expect a lot, here near 700, right, we saw a lot of those at 6.25%,

A: those all went away; they're kind of gone now. They've gotten distributed, you know, maybe down in here, or maybe even lower, so there may be more than one transient causing those nominally-700 counts. And, unfortunately, there were multiple more trials in the 15-second case, so you can't really compare the exact numbers, but the big ranges kind of show the difference. Same thing here for... which one is this, though? This...
I
A
So, our next steps. I asked already, but we still need to identify the processes that caused the loss. We need a better measurement tool that can help us characterize the bursts and run in this long-term mode. I'd like to add a heartbeat check to our test VM, which just loops the packets back; I think that would be valuable, because that way we can tell when the VM itself has been interrupted, and also observe the logical interface drops on the sort of northbound side of the OVS.
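As an illustration of the heartbeat idea, here is a minimal sketch assuming the loopback VM can echo a small UDP probe; the address, probe interval, and gap threshold are made-up values, not part of the actual test setup.

```python
import socket
import struct
import time

PROBE_ADDR = ("192.0.2.10", 9000)   # assumed address/port of the loopback VM
INTERVAL_S = 0.01                    # send a probe every 10 ms
GAP_THRESHOLD_S = 0.05               # report pauses longer than 50 ms

def heartbeat_loop():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(INTERVAL_S)
    seq = 0
    last_reply = time.monotonic()
    while True:
        sock.sendto(struct.pack("!Q", seq), PROBE_ADDR)
        seq += 1
        try:
            sock.recvfrom(64)
            now = time.monotonic()
            gap = now - last_reply
            if gap > GAP_THRESHOLD_S:
                print(f"suspected VM interruption: {gap * 1e3:.1f} ms without replies")
            last_reply = now
        except socket.timeout:
            pass                      # no reply this interval; the gap is reported above
        time.sleep(INTERVAL_S)

if __name__ == "__main__":
    heartbeat_loop()
```

A low-rate probe like this would not disturb the throughput trial much, but a long gap in its replies would line up with the ~700-frame loss transients and help attribute them to VM interruptions.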
A
B
A
A
L
Okay, thank you. Well, I'm a researcher, and this is joint work with my colleagues Sebastian, our student Alex and our professor Georg, and we are looking at packet generators in particular: researching packet generators and whether they are reliable. Oops, this is the wrong mode; it's scrolling the wrong way.
A
L
Okay, and it starts again. So, packet generators. Yes, there are expensive packet generators; they, of course, also may work perfectly. Then there are cheap packet generators on commodity network interface cards: are they awesome? We don't know, maybe they are, but that's something we are looking into. So the first question is, of course, what are the metrics that you are looking at? What do you want from your packet generator? What do you expect your packet generator to do? That would be functional features or reliability features.
L
Then, can a packet generator even be reliable, and how do you measure that? How would you validate a packet generator, whether it works as advertised? If you have a new packet generator, can you test it to see if it works as specified? And is it precise, and is it accurate? One thing you'll notice: I am asking a lot of questions; I don't have a lot of answers.
L
I just have a lot of questions, to maybe get something started about how to answer them. So, the first question was: what should your packet generator do? Here I just looked at the ETSI specification that was previously mentioned a few talks ago, and it actually has a section on requirements for the packet generator to be used for the benchmarking there, and its first section states the main requirement.
L
It says it has to accurately generate constant frame rates at specified rates, so you configure some rate and it sends packets; okay, that seems like a very reasonable requirement. Then it also wants you to generate bursty traffic at a specified rate and a specified burst length; okay, nice. And there are some more requirements I haven't listed here, like multiple flows and so on. And then the third main requirement is that it's supposed to do accurate latency measurement, and the timestamp is applied, and this is very crude, "as close as possible" to the actual transmission, whatever that means.
L
If "as close as possible" is somewhere a millisecond away, and they just can't move it closer, then it would be okay for that specification, because it was as close as was possible. Is that a good packet generator? No, it would just be a bad packet generator. And one thing I want to talk about is that the specification here only mentions accuracy and never precision.
L
I just want to give a short picture of these two terms that are often mixed up: accuracy versus precision, using the example of latency measurements, though it can be applied to other metrics as well, and they are two different things. For example, you have a packet generator here and we just connect two ports of the packet generator with a cable, so you're measuring the cable; the ideal packet generator would handle that well, and the cable, of course, doesn't lose any packets and so on.
L
But if you measure the latency of the cable, it should be the same value every time, or at least as close as possible to the same value. So precision here would mean that the deviation between individual measurements of timestamps is low, meaning each time of flight of a packet is measured as X nanoseconds. And the typical source of a measurement error in the packet generator here
L
is that it includes some delay on the generating or receiving side, if the timestamp is taken in software before the packet is handed to the NIC of the packet generator. Then the second thing you can look at, accuracy, is whether it reports the correct latency as well. The correct latency would only depend on the length of the cable, and the typical error source here is that some processing time is included, some static overhead. And the big question here is: what is the ground truth?
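To make the distinction concrete, here is a small sketch of how precision and accuracy could be quantified from per-packet latency samples; the sample values and the assumed 150 ns ground truth are illustrative.

```python
import statistics

samples_ns = [160.8, 161.2, 160.9, 161.5, 161.0]   # reported cable latencies
ground_truth_ns = 150.0                             # estimated true cable latency

# Precision: how tightly the individual measurements cluster together,
# independent of whether they are near the true value.
precision_ns = statistics.stdev(samples_ns)

# Accuracy: how far the reported value sits from the (estimated) ground truth.
accuracy_error_ns = statistics.mean(samples_ns) - ground_truth_ns

print(f"precision (std dev): {precision_ns:.2f} ns")
print(f"accuracy error (offset from ground truth): {accuracy_error_ns:.2f} ns")
```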
L
What is the correct latency of our cable? You can estimate that, and we've done this measurement with the MoonGen packet generator. We have actually done it with a few different cable lengths; here we have just attached a 30-meter cable to the packet generator and we send packets through it, and the idea is the cable should always show the exact same latency. On this graph, the x-axis is the load applied to the cable, so we apply an increasing load with 64-byte packets, and the y-axis shows the measured latency.
L
The precision, I think, is better than you usually need, and the accuracy is more tricky to measure, because what is the ground truth? How fast should a cable be? This we can estimate: we can say, okay, the propagation speed in a single-mode fiber optical cable should be around 2/3 the speed of light.
L
You can estimate it should take about a hundred and fifty nanoseconds for a packet to pass through that, and the reported latency here is 161 nanoseconds, so it's kind of close, but we can't really quantify how good it is, because we can only say the difference should be low; but how low should it be? One thing that could be done is to use different cable lengths and see if the difference between cable lengths is somewhat reasonable, and so on.
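The 150 ns figure is easy to reproduce as a back-of-the-envelope check, using the 30 m length and the 2/3-of-c propagation speed mentioned above:

```python
# Propagation delay of a 30 m single-mode fiber at roughly 2/3 the speed of
# light in vacuum, the values quoted in the talk.

C_VACUUM = 299_792_458            # speed of light in vacuum, m/s
cable_length_m = 30
propagation_speed = (2 / 3) * C_VACUUM

delay_s = cable_length_m / propagation_speed
print(f"expected cable delay: {delay_s * 1e9:.0f} ns")   # ~150 ns

# The packet generator reported 161 ns, i.e. roughly 11 ns of offset left
# to attribute to transceivers, timestamping point, and so on.
```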
L
But it's a tricky thing and I don't have an answer to this. Maybe one can use a very short cable and say, okay, this should be basically no latency: does it report basically zero, or does it report some absurdly high value? And this is also something where lots of packet generators will already fail. We have also done this test with software timestamping that we implemented as carefully as we could, taking all the precautions, and just in this test
L
it fails completely: there are like microseconds of latency for the cable, and sudden jitter, and the latency increases, and so on. Then, there are two things I basically want to talk about: the first one was this latency measurement, how can that be measured well; and the second thing is traffic patterns, or not only traffic patterns, also the rate at which packets are sent out. The main traffic pattern that you see is the typical constant bit rate pattern, which is defined as the same spacing between all the packets, meaning
L
if you want to send a thousand packets per second, you configure it to do that and tell it to do CBR, and then you expect it to insert a gap of one millisecond between each packet. Then there is of course bursty traffic: some packets are sent with no gap between them, and then a larger gap. A nicer one is the Poisson distribution, where there is an exponential distribution of the gaps between the packets. Then, what is usually used? Well, CBR; RFC 2544 just says that by default you use CBR.
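To illustrate the three patterns just described, here is a minimal sketch of how the inter-departure gaps could be generated; the rates and burst size are arbitrary examples, and a real generator would apply these gaps in hardware or in a tight send loop.

```python
import random

def cbr_gaps(pps, n):
    """Constant bit rate: the same gap between every packet (1000 pps -> 1 ms)."""
    return [1.0 / pps] * n

def bursty_gaps(pps, burst_size, n):
    """Back-to-back packets within a burst, one large gap per burst."""
    gap_per_burst = burst_size / pps
    return [gap_per_burst if i % burst_size == 0 else 0.0 for i in range(n)]

def poisson_gaps(pps, n):
    """Exponentially distributed gaps with mean 1/pps (Poisson departures)."""
    return [random.expovariate(pps) for _ in range(n)]

if __name__ == "__main__":
    for name, gaps in [("CBR", cbr_gaps(1000, 5)),
                       ("bursty", bursty_gaps(1000, 4, 8)),
                       ("Poisson", poisson_gaps(1000, 5))]:
        print(name, [f"{g * 1e3:.3f} ms" for g in gaps])
```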
L
But you can also do more tests with other traffic patterns if you want to; I don't think anyone does that. Then the ETSI NFV TST 009 specification explicitly wants CBR and bursty traffic, which is fine, because bursty traffic is really easy to generate: if you tell your software packet generator you want some traffic
L
it will give you bursty traffic by default, because it's easier to implement; that's just how all the APIs and all the hardware are optimized: they're optimized to work on bursts of traffic. The problem is, sometimes it will give you bursty traffic even if you explicitly asked for constant bit rate traffic, which is bad for your results. Then CBR: it sounds like the easiest pattern, because just using the same gap every time sounds easy, but it's actually the hardest and most annoying to implement. In particular,
L
it's annoying to scale to multiple cores, because then one core would have to know when the last packet was sent; without hardware support that's annoying. Poisson traffic has a few nice properties: for example, if you take Poisson traffic from two different links and just mix it through a switch, or a network adapter's multi-queue, or somehow multi-processing mixes stuff together, adding two Poisson streams gives you a new Poisson distribution. So you can easily scale up, and Poisson is arguably the most realistic traffic pattern for real-world traffic.
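A small sketch of the merging property mentioned above: combining two independent Poisson streams yields another Poisson stream with the summed rate, which is why per-core or per-queue generation composes without coordination. The rates and duration here are arbitrary.

```python
import random

def poisson_departures(rate_pps, duration_s):
    """Departure timestamps of a Poisson stream with the given average rate."""
    t, out = 0.0, []
    while t < duration_s:
        t += random.expovariate(rate_pps)
        out.append(t)
    return out

# Two independent streams, e.g. generated on two cores or two queues.
a = poisson_departures(rate_pps=500, duration_s=10)
b = poisson_departures(rate_pps=1500, duration_s=10)
merged = sorted(a + b)

gaps = [t2 - t1 for t1, t2 in zip(merged, merged[1:])]
mean_gap = sum(gaps) / len(gaps)
print(f"merged rate ~ {1 / mean_gap:.0f} pps (expected about 2000 pps)")
# The merged gaps remain exponentially distributed, so the combined stream
# is again Poisson; CBR streams, by contrast, do not merge back into CBR.
```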
L
But let's look at whether that actually matters, because it sounds like such a minor distinction: who cares how the packets are really spaced on the wire, packets are packets and it's fine. So this is just an example measurement setup, really simple: one host sending packets, one host running Open vSwitch, no fancy Open vSwitch, just the standard Open vSwitch in the kernel, no fancy optimizations, and we apply an increasing load to two ports where it forwards packets with just a static OpenFlow rule, no magic there.
L
We are using the Intel ixgbe driver, which has some dynamic interrupt throttling, and we have the kernel NAPI poll mode. What you can see in this graph is: we apply an increasing load, measured in million packets per second on the x-axis, and measure the latency of the traffic in microseconds on the y-axis. So what happens here is that everything seems kind of fine, the packet rate increases, the latency also increases, but then suddenly something happens down here; it turns out that it switches into a different mode.
L
It goes into polling mode, and it's complicated; we have a paper about it, and I don't want to go into it, but it's basically just an example of what a measurement could look like. And what you can now do is repeat the same measurement, and the only thing we change is the traffic pattern, meaning now we don't use a constant gap between the packets, but we just use a random number generator
L
Yeah, and the point is we get a completely different response from the device under test just by changing one parameter at the packet generator, and it's a parameter that is often ignored by packet generators or not supported well. It can be a problem if you're looking at latency measurements; we found it to be not a big issue for throughput measurements, but for latency measurements
L
it's kind of a big deal. So I have more questions here on the two things I've mainly talked about. Say, for example, we wanted to specify how to benchmark a packet generator; that would probably define several tests, starting with, say, this packet generator supports latency measurement, so let's measure how good the latency measurement is by measuring the latency of different cables of different lengths of a specified cable type, and report
L
what latency the packet generator reported. And then this test needs to be repeated with different types of traffic applied to the cable, because changing anything in the packet generator can affect how well it performs; especially increasing the load can affect your latency measurements if it's not done properly, because queues might fill up. And the open question is: what is the ground truth to use for the latency of the cable? Can you just use the shortest cable you can find and say it should take around zero? The same goes for the traffic pattern.
L
That's what's really hard to measure, to quantify what is really going on. We did that once, two years ago, and it was really annoying to do, because it's really hard to measure on commodity hardware if you receive a lot of packets; it's really hard to take a timestamp of each packet on just your normal commodity hardware with the necessary precision. We have done it using an FPGA, and that did work, but it was a quite annoying setup, and realistically speaking,
L
it's not that easily repeatable, because most people don't have FPGAs lying around in their servers. And one more thing: since I've just heard that packet generators apparently fail to even generate the packet rate that was specified, that should also be tested; I had just assumed they get the basic stuff right, but then, well, I actually knew better than that
L
but I had forgotten about it over the last year, or well. The last open question I had here is the traffic pattern, as in this graph comparing CBR and Poisson traffic, and the specifications often call out or want CBR: is it really a good idea to have CBR traffic as the default, given that it's not a realistic thing that your device will encounter in the real world? The benchmark should somehow model the real world.
L
Yeah, I'm also going to show one more backup slide that I had here. This is an example that we got with the FPGA setup that I mentioned. We had a few packet generators here, and this is a histogram where we configured each of the packet generators to send constant bit rate traffic, and in the histogram the buckets are the actual inter-packet gaps, and the line in the middle is one microsecond, so the target was hitting exactly that line.
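A minimal sketch of how such a histogram of actual inter-packet gaps could be built from captured per-packet timestamps; the input format, example timestamps, and bucket width are assumptions, not the actual FPGA tooling.

```python
from collections import Counter

def gap_histogram(timestamps_ns, bucket_ns=50):
    """Bucket the inter-arrival gaps of a sorted list of capture timestamps."""
    gaps = [t2 - t1 for t1, t2 in zip(timestamps_ns, timestamps_ns[1:])]
    return Counter((g // bucket_ns) * bucket_ns for g in gaps)

# Example: a generator configured for 1 Mpps should put every gap at ~1000 ns.
example_ts = [0, 1000, 2010, 2990, 4005, 5000]
hist = gap_histogram(example_ts)
for bucket in sorted(hist):
    print(f"[{bucket}, {bucket + 50}) ns: {hist[bucket]} packet gaps")
```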
L
But it's usually quite close, except some packet generators just seem to ignore the configuration and do their own thing: when we increased the packet rate here, we increased it to four million packets per second, and then one of them just decided it would send bursty traffic anyway, because that's easier, and the other ones are not that close either. The worst results are with software; everything else we have actually hit that quite precisely.
L
That's what it should look like, and this is what it usually looks like. So that would be something that could be reported for a packet generator, but it's hard to measure, so I'm not sure if it's a good idea. So, with that: does someone else have questions, or maybe even answers?
E
B
A
It seems to me that one of your tests, which is very simple but also very informative, is something that appeared in the IPPM framework a long time ago: the idea of testing the measurement system with a simple cross-connect cable. And it seems to me that we could make a series of recommendations like that for people putting together their test setups, instead of a lot of hand-waving.
D
Warren Kumari. This is probably kind of a tangent, but I'm just interested: for the 30-meter cable you had 0.66 c. Did you get that just from a lookup table, or did you actually look at the cable? Because you can get cables which, you know, actually report what their propagation speed is.
D
L
Yeah, this specific measurement was actually a more complicated setup, where we had fiber taps before and after the cable and had the packet generator and the latency measurement separate: the packet generator inserted sequence numbers, and then we had two tap ports on the fiber taps going to a packet capture device and took the timestamps there. So, assuming that both fiber transceivers on the receiving side have about the same delay, that would cancel out.
C
L
A
Well, thanks very much for bringing this information on testers to us and making some recommendations about how we can do this work better. I hope you can imagine, sometime in your future, following up with some of these things, work in this calibration-recommendations area; I think you'd have some welcome reviewers here and maybe even some welcome co-authors.
A
Has everybody signed the blue sheets? Everybody? Okay, because we need to get every name on there so we don't get assigned a tiny little room; that's the evidence of participation. So thank you. Okay, please, everybody, read all the drafts; that's how we get good work done here, and we really appreciate you doing that. Thanks for joining us, folks who joined us for the first time, and we'll see you on the list.