From YouTube: IETF108 BMWG 20200727 1410
Description: BMWG session at IETF 108, 2020/07/27 14:10
B: So, I actually... you asked me if I could see your screen; I actually don't see anything from your screen at this point.

A: Okay, do you see the... oh wait. Wait, wait.

B: I was just going to say it kind of looked like facing mirrors.
A: Let's see. I noticed that Sarah hasn't been able to join us, but Rob Wilton has. Rob is our new management area director; welcome, Rob. And just so everybody knows, it's going to be a bit of a challenge for me to track the chat as well as the participants. Okay, that's a hum; I don't expect us to have to do any hums today.
A: All right, so let's start up slowly, but I'll do this little tutorial first. I'm going to share all the slides today; I think that's probably the safest option. So when you're speaking, please tell me to advance the slides. Then there's a microphone: basically four buttons right below your participant view here. The button all the way to the right will give you audio access immediately, for some reason, when we have a mic line going; then you can join the queue by pressing the microphone next to that, and that, of course, is what we usually do for questions during talks. So, without further ado: this is the Benchmarking Methodology session.
A: Sarah Banks is my co-chair; I'm Al Morton. If you'd like to join our mailing list, you're welcome. It's very early in the week, so I'll touch on the Note Well: here we work as individuals and try to be nice to each other, and that's particularly true this week, because the chairs are all using a brand new version of Meetecho and we're trying to run the sessions as best we can.
A: There have already been some surprises this morning, and about 10 or 12 messages to the working group chairs list that were very useful.
A: So then, basically, any activity that you partake in here at the IETF is considered a contribution, and as a result those contributions are covered by the IETF's IPR disclosure policy. We ask that you make your IPR disclosures early and often. We have lots of other policies, which are listed in the BCPs here: anti-harassment and so forth, and copyright. So, by participating, please be aware of these, and if you need advice, talk to me or to the area directors. If you have questions, go for it, on our mailing list or with the other materials that are available. Any questions?
A: Okay, hearing none, I'll proceed. So then we've got our agenda today, with status; we'll talk about all three of our working group drafts, and we've got proposals on two of them. We've got a new proposal to update the benchmarking methodology for network interconnect devices, and there were some questions on the multiple loss ratio draft, and some email discussion of the network service function intensity, but the rest of them were not updated in time. So I'm sort of considering that maybe just the top three proposals will be the ones we cover today, and the last two probably not very much. Any questions, or bashes to the agenda?
A: Hearing none, let's accept the agenda as written. I will note that Tim Carlin is taking notes today. If anyone out there can help monitor the chat or the Jabber, that would be appreciated; we don't usually get any Jabber traffic during BMWG, so it's not a hard job.
A: Sarah, can you be heard?

E: Oh, gotcha, gotcha. I just wanted to let Tim know: hey, thanks for taking notes, I really appreciate you. This is almost better than being in the room, where we have to beg, borrow and steal; we had you signed up in advance, so thank you. And I'll give a hand on the Etherpad as well, to cover you when you're presenting.
A: Okay, so let's proceed on to the working group status. The EVPN draft is back with the working group; that's Warren's decision. We've had some review of that on the list, and of course the new proposals keep coming. We've got a long list here; you know, we've got to develop some momentum behind a few and decide which ones the working group wants to take up at this point.
A: Over the weekend I updated the milestones, and you'll see that we've got three here for August: basically the next-gen firewall, back-to-back frame, and EVPN benchmarking. I think these are actually all achievable in August, at least to get them to AD review. That means having working group last calls and things of that nature. We've got promised updated drafts for two of them and, in general, if there are any comments today, updated drafts for all of them; and then we go out to December to pick up the additional items. Any questions about that?
A: Okay, hearing none, that's it for the chair review. Very good. So I've got a quick status on the EVPN draft. The author is unable to join us today; he did not register for the IETF, and I'm not sure what the barrier was. He's registered for and attended most other IETFs, but this time around he's not here. The bottom line is, I reviewed it after we were asked, as a working group, to do some more editing for clarity in this draft. We're still looking for volunteers to help with editing some of the sections there; anything after section two needs help.
E: So, for folks that are newer to BMWG: this draft in particular is super approachable, even if you come at it without EVPN expertise; just look at the flow of the document, the way the test cases are written, and then, of course, some of the basic editing. This is a draft where, I think, just a bit of that extra editorial help would be valuable. It has gone through a decent amount of technical review, but the editorial part is what could use some help here.
E: So: just plain, vanilla work on how the test cases are approached, along with some basic formatting things; the tool I think they were originally using is doing some funky things. This is one of the documents that we've earmarked for August 2020 in the milestones that Al just put up, so I think one way we could really get there, and help pay it forward, is to put a couple of extra eyeballs, and hands, on this draft.
E: You'll hear Al say, or you'll hear me say, often, that the easiest way to get people to review your draft is to review theirs. So if you could help out with this draft in particular, I think BMWG and the draft itself would be in much better shape, and we'd be super thankful for that. So please: if there's one draft you were considering taking a look at, don't be worried about the tech part; this would be the one for you.
A: Great, thanks, Sarah. Okay, so, yeah: Tim, I'm gonna bring you out of the queue.
B: Hi, I just had a quick question on the previous point, for EVPN. I wasn't sure if there was a preferred platform; I haven't taken a really deep dive into the draft, but I can possibly take a look at some of it, editorial-wise. I know some groups are using GitHub; I don't know if there's a preferred format for sending changes, or if just a simple diff is okay.
A: Yeah, we're not using GitHub for anything at the moment, and we've been relying on communication via the email list. Pretty much any commenting format will help: the OLD/NEW format, or a text file where you've made some corrections, and then you do a diff and send the author the text file too. However it works for you is fine for us, and I'm sure fine for the author as well.
E: And just a small point specifically on this draft, Timothy. Everything Al just said is perfectly correct, and that is exactly how we do business. However, with this author, Sudhin, if you find yourself wanting to make a lot of changes, like I did initially, I actually just pinged him and said: hey, would you share the document? He's generally okay with me making edits in line; it's text, it's very approachable, and he's very good at doing the diff on his own. So it's another way: for me, sometimes I find it tedious to say "in section 1.3, please consider changing the sentence blah blah"; sometimes it's easier just to make the change, and he's really good at doing a diff to see what the changes were.
A: All right, so next on the agenda is the next-generation firewall benchmarking; Tim, that's you. So let me find the right talk here. I think it's this one... yep, okay, and you're in. Good.
B: There I am. All right, thank you, Al. So: Tim Carlin, from the University of New Hampshire InterOperability Laboratory, speaking today about the NGFW draft. As a reminder, this is the draft that sort of guides the NetSecOpen certification program, so I'm wearing multiple hats, as a participant, a lab, and sort of a supporter of that program. I'm not an author of this draft, however, so for detailed questions about it I'll do my best to answer, but I think we do have some experts on the line here as well, so we'll all do our best to answer any questions, and I can give a high-level update of where we stand. So, first, or next, slide.
B: The current draft is -03. I know we are a little bit late on providing an update, so hopefully we'll be able to get that in pretty soon. There are links in the... I used to call it Etherpad; I guess it's not Etherpad anymore, but CodiMD... to a couple of mailing list archive emails that we've put out there, which I think got a little bit of traffic.
B: We'll try to resolve those, at least for the next version of the draft. As far as additional updates, they revolve primarily around the addition of the network IPS section; previously the draft focused more on the performance benchmarks. I'll try not to read the slides verbatim; you can see here what is getting updated: the CVE tests, the security effectiveness (that's pretty much the running theme here), and then those KPIs, or key performance indicators: blocked CVEs; bypassed, or not blocked, CVEs; allowed CVEs; the background traffic, basically seeing what happens to it during the testing; and then the statistics following that. So, next slide.
B: Other additions here, for the security features: SSL inspection; anti-malware, which is a big one that we're working a lot through here; anti-botnet, of course; logging; identification; inspection; evasion. All of these relate to the network IPS section; there's not too much change in the security features section regarding NGFW.
B: So, the next one: again, not too many changes as far as test equipment for NGFW, but for network IPS there's a fair bit of change coming here regarding the configuration, talking about the distribution of HTTP and HTTPS. First off, the requirements that are already there are pretty much unchanged: they remain an even distribution of HTTP and HTTPS, and TLS traffic over HTTPS.
B: And we'll base those on the maximum throughput, or on the results determined in previous sections of the test, and then also on CVE traffic transmission rates: trying to figure out how many CVEs per second are able to be supported, and determining those values to help configure the equipment.
B: Here, for the validation criteria: we're specifying that the number of failed transactions in the background traffic should be less than 0.01% of attempted transactions, and for the number of terminated TCP connections we specified the same value: it should be less than 0.01% of the total initiated connections in the background. And, of course, there are the different phases here: during the sustained phase the traffic should be forwarded at a constant rate, and there should be no false positives.
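The thresholds described here can be written down concretely. A minimal sketch, assuming simple transaction and connection counters; the function name and structure are illustrative, not from the draft:

```python
# Hypothetical validation-criteria check for the background traffic,
# following the thresholds discussed above. Names are illustrative.

def background_traffic_valid(attempted, failed, initiated, terminated,
                             max_ratio=0.0001):
    """Return True when failed transactions and terminated TCP
    connections each stay below max_ratio (0.01%) of their totals."""
    fail_ok = failed < attempted * max_ratio
    term_ok = terminated < initiated * max_ratio
    return fail_ok and term_ok

# 500 failures out of 10,000,000 attempts is 0.005%, below 0.01%:
print(background_traffic_valid(10_000_000, 500, 10_000_000, 800))   # True
print(background_traffic_valid(10_000_000, 2000, 10_000_000, 0))    # False
```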
B: So this is all text and changes that are coming. The best way to do this is for us to submit the text and have folks take a look at it, which we'll be doing, so that hopefully some of this starts to become clear as we do that. We can move to the next slide, I think, which brings us to the measurement itself and the KPIs. Basically it comes down to blocked CVEs, unblocked CVEs, the background traffic, and the statistics.
B: Right, so it will be one of the following: no impact, minor impact, or heavily impacted, basically adjusting the traffic to handle that, and then all the statistics for each of these things. Of course, there are no changes here for NGFW for measurement. And the last slide, again, just documents these things around the test procedures and the expected results for background traffic, and how to go about doing the CVE emulation.
B: If anyone has any comments lingering on those specific things, please let us know; it would be great to hear any thoughts about how to handle any of these. We always encourage more review, or questions. Be on the lookout for text here, as I mentioned, in the coming weeks, and of course, if there are any questions now, we'll be happy to take a stab at answering them.
B: They're working on... yeah, I think the idea is to add the IPS information, which would be on each of these topics, in each of these sections, to the draft; we had some conversation about this, so maybe it's fair to bring it up again. So the question we sort of had is: should we add it to this draft, or possibly create a new one? We seem to prefer, and lean towards, adding it to this draft, though maybe not in the August time frame; I'm sort of speaking off the cuff here in terms of timing.
A: Yeah, I kind of think that, speaking as a participant, I would first wonder whether the IPS topics fit under the original scope that we adopted as a working group draft. You know, obviously it's kind of a gray area, right? Sure. And then, with my chairman's hat back on: we've already got a pretty large draft here, and that's going to mean a big load, in terms of number of pages, for people who are going to walk in and try to help this draft advance with their reviews.
A: So, speaking as working group chair, I would rather we bit this off in a couple of different chunks. But let me open the floor up here and ask how other people feel. I see Sarah has joined the queue.
E: Yeah, thanks, Al. On one hand, you know, I think you could put it in this draft; but, speaking as a participant (excuse me), I do think the notion of a network IPS is, on its own, a huge feature, and whether or not every NG firewall actually supports it, or supports the same breadth or depth of it, potentially gets a little messy. So I tend, I think, to agree with Al, specifically because the IPS notion itself is pretty large. Having a separate draft for that might be nice, and would make this one a lot more readable, because the current draft is pretty large and that is a pretty hefty feature. If that makes sense.
C: Not too sure... can you hear me now? (Yes, we can hear you now.) Okay, great, yep. So: what if we were to split out the security effectiveness side of things into separate drafts? We would have security effectiveness requirements for NGFW, and certain security effectiveness requirements for network IPS, and then have this draft. That would serve the purpose of decreasing the size of the current draft a bit, and those are the areas that are less likely to change significantly; then we can move forward with the security effectiveness work in the working group. Would that make sense?
A: I think that's a path forward that might be successful too. I mean, like I said, it's already a pretty big draft, so in some sense, however the authors feel most comfortable doing the split is what you should probably recommend to the working group.
C: So what I think we'll do, then, is that Tim and I will take this back to the group at NetSecOpen that's working on it, including the authors, and discuss it with them.
A: Okay, so we'll leave the future split of topics to the authors' recommendation, and look forward to an -04 version, after we've discussed that recommendation on the list, that implements it. Yep, okay. Because officially this is now a working group document, so it's good to make a recommendation and then get some working group agreement. No, not at all; just trying to emphasize that point. Okay.
E: We are super, or at least I am, speaking as a participant, really excited to see a network IPS get that coverage, so please: I'm looking forward to reading it and giving you feedback. Thank you.
B: Yeah, thank you, everyone. We appreciate the feedback and, as Brian said, we'll see if we can try to make as minimal changes as we can to this, maybe even trim down the draft, and see whether spinning off another one, a new draft, makes sense for IPS. So we'll come back with some options.
A: Okay; more email from the working group chairs. All right, so next up is Al Morton, with his participant's hat on. This is the third working group item, on updating the back-to-back frame benchmark in RFC 2544, and this is my minimal title slide. So here's what's happened, and although I've shrunk it down to one slide, it's a lot.
A: One of our active participants, Vratko Polak, sent comments in late 2019, and then, when we had our May interim meeting, Vratko had to remind me that I hadn't seen his comments, or hadn't addressed them, at least, and I have to apologize to Vratko again. This is the kind of thing that happens at the end of the year, when a bunch of stuff comes in all together, and it was actually a fairly stressful time for me.
A: So I was able to get his comments, and then Vratko actually sent more comments to the list after our interim meeting; those are his May 2020 comments, and they resulted in this whole list of resolutions here: buffer sizes are expressed in time, as in the original tests; we've defined the buffer time and the buffer filling rate; the average of repeated tests is a topic we covered in more detail; and there is a revised time correction factor to calculate the buffer time, again.
A: We've had support for this expressed over the last two years, really. There was a call for support early in 2019, and the support has popped up many times. In fact, the reason for the reviews we've been getting recently from Maciek Konstantynowicz and Vratko is that they're actually using the draft and its techniques in the context of their benchmarking, and their feedback comes from some very important testing.
A: And I'll add that this is one of several updates to RFC 2544, which is our foundational benchmarking specification for network interconnect devices. Several of the previous ones are listed here, and actually several RFCs that updated it talk about state-of-the-art latency; we're making a buffer time measurement here that's important in that topic as well.
A: So I'm going to go into some detail here. This is the kind of comment that we've been dealing with. Basically, the mistake I often make in writing a draft like this, especially in the first version, is in how I cite a reference. In this case the reference was the testing and the slide deck that we presented at the Open Platform for NFV summit, with the results of the vSwitch performance project of OPNFV. We presented slides at the summit, basically describing this measurement problem with the current RFC, and the solution, and we did that all in about three slides, including the measurement results, and everybody that looked at it nodded their heads and said, great. But that was in the context of updating the RFC 2544 procedure, which was very concise; in other words, it was actually less than a page. We described what the problem is with the current measurement, talked about how we found that problem, and basically what we're doing about it, and the comments all along the way have asked for just a little more text to explain what happened, and so on and so forth.
A: So that's one of the sections that was added here, and I guess I should give a little bit of background. We have a traffic generator, and this little text diagram kind of explains what happens. The traffic generator generates a burst of back-to-back frames, which goes through the ingress links to the device under test, and the frames are stored in a buffer. What normally happens is that there's frame header processing, and as that happens the frames are removed from the buffer and go out the egress to the receiver. So the strategy is to keep increasing the number of frames that are sent back to back in a burst.
A: The frames are sent with the minimum interframe gap and preamble on Ethernet. Then, on the receiver, we look for the longest burst where there's zero loss, and from calculations based on that burst size the original RFC 2544 buffer time was calculated. What we noticed right away is that the original calculation didn't account for the frames that were leaving the buffer, because the DUT is alive: it's processing headers, it's removing frames from the buffer, and as a result the buffer is not only filling but also being bled off. So there was an important factor, the header processing rate, to include when seeking information on the exact buffer size. And of course, as I mentioned, there are many buffers involved in the device under test, but this is the model that we're using; it's sort of a one-buffer model, and it has so far not failed us.
A: So that's a quick background. When we were looking at the referenced testing in the OPNFV vSwitch performance project, we noticed calculations that produced buffer sizes of 30 seconds for some of the frame sizes, and that was clearly wrong. It turned out that 30 seconds was the limit at which the test devices involved would send a burst of traffic; they would just stop, with zero loss, and so they would report 30 seconds. That means the header processing rate is not a limitation there, and you can't really measure the buffer size with those frame sizes.
A: So we've added that information into the procedures now, to restrict the testing to the cases where the header processing rate is challenged; only with those packet sizes can you make this measurement.
A: More clarifications here, just adding some more text: it's apparent that the device under test's frame processing rate empties the buffer during a trial (that is, one burst), and that tends to increase the implied buffer size estimate, because many frames have left the buffer by the time the burst of frames ends. So we now have a calculation of a corrected buffer size estimate, and then some more clarifications: we're assuming that the packet header processing function operates at the measured throughput, and we're just making that aspect clear. Vratko said, yeah, this is fine, and noted that it's the best approximation we have, and that's exactly right. So all of that has been added.
A: The buffer filling rate is the difference between the maximum theoretical frame rate on ingress and the maximum throughput, the header processing rate, on egress. To bring the picture back up: we've got our generator generating back-to-back frames, at the maximum theoretical frame rate, and the slower header processing rate at the maximum throughput means that this buffer is going to fill up over time.
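The fill rate just described can be put into numbers. A minimal sketch of the one-buffer model, assuming the buffer size is known in frames; the function name and the 10GbE example figures are illustrative, not from the draft:

```python
# Sketch of the single-buffer model described above: the buffer fills at
# the difference between the ingress frame rate and the DUT's header
# processing (throughput) rate. Names and figures are illustrative.

def buffer_fill_time(buffer_frames, max_theoretical_rate, measured_throughput):
    """Seconds until a buffer of buffer_frames frames overflows when
    frames arrive at max_theoretical_rate (frames/s) and drain at
    measured_throughput (frames/s)."""
    fill_rate = max_theoretical_rate - measured_throughput
    if fill_rate <= 0:
        return float("inf")  # the DUT keeps up; the buffer never fills
    return buffer_frames / fill_rate

# 64-byte frames on 10GbE: ~14.88 Mfps theoretical; with 12 Mfps measured
# throughput, a 100,000-frame buffer fills in about 0.035 s.
print(round(buffer_fill_time(100_000, 14_880_000, 12_000_000), 4))  # 0.0347
```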
A
And
if
you
know
the
size
of
the
buffer,
you
can
calculate
that
and
if
you
assume
a
buffer
size,
then
you
can
check
some
of
your
calculations
and
so
forth,
which
is
exactly
what
we
did
so
then
some
more
changes
here.
The
original
text
in
the
original
definition
in
rfc
2544,
the
longest
burst
of
frames
as
determined
process
and
buffer
without
frame
loss,
is
determined
from
a
series
of
trials.
A
So
that's
what
we're
that's,
what
we're
working
with
and
then
we
added
this
new
section
5.3,
which
has
to
do
with
the
test
repetition
and
so
we're
we're
saying
that
the
test
should
be
repeated
at
least
50
times,
with
the
average
of
the
recorded
values
being
reported,
that's
from
2544,
and
so
therefore
it's
it's
an
average
burst
length
and
and
we've
clarified
that
and
there
there
was
a
small
clarification
on
the
corrected
buffer
time
and
that's
basically,
what
we
had
included
in
the
text.
A: What we had included in the text was this correction factor; it's a little bit funky here to try to see exactly what happened, but we've now got a full equation for the corrected buffer time, which, as I said before, was the most important thing to measure and report. Based on Vratko's comments, the two of us got together and did some sample calculations, and we've agreed that this is correct.
A: So then, the next steps. It's been very quiet, though, other than Vratko's few comments, so, as the author, I'm going to suggest that we trigger any concluding reviews with a working group last call, and we've got a draft that we can work with for that. So, Sarah, with your chair's hat on, you can take it from here. Thank you.
E: Yeah, thank you, Al. I totally agree. I realize, as a participant, I've actually been terrible, in the sense that I read it, thought it was extremely well written, and didn't have feedback other than "hey, Al, it was really well written, thank you", so I should probably drop that on the list. But to Al's point, I agree: I think we are sort of at the end of this. It's well written, it's been reviewed, it's got feedback. I would like to have this draft go into working group last call; any objections to that? So I guess I would ask: I'll drop the note onto the list, and, folks, please, especially as drafts go into working group last call, it's really good to give feedback, a quick read-through. I think it's extremely well written, and again, it's another approachable draft by the working group, so please take a look. I'll take the action to put that into working group last call after this meeting.
A: Thank you. So that concludes our three working group items. Just in summary: we're waiting for an update on the EVPN draft; we're waiting for some more discussion amongst the co-authors, and a possible division of responsibilities, on the firewall draft, but again, another draft is coming; and we've agreed to start a working group last call on the back-to-back frame benchmarking draft. Very good. Okay, so we've got the remainder of our time to discuss proposals today.
A: I've teed up the newest proposal for discussion, and Gábor Lencse will be giving us that presentation today. Gábor is not new to the working group; he worked with our friend Marius on the IPv6 transition technologies benchmarking draft some years ago, and he's come back to us now with a new proposal, and you'll actually see lots of references to that work here. So I hope you've all read it. Let's see... so, Gábor, if you can press the microphone button, I will advance the slides for you. Okay, you're in the queue; I'll take you out of the queue, and you're good to go.
A: It's probably a little dangerous, but okay. If it's dangerous, why don't you give us the updates verbally, and then you can send me another deck, which I will update as a revision.
F: So, thank you very much for the opportunity to present our draft. On the second slide we have listed four things which could update the old RFC 2544, but instead of reading them, let us jump to the third slide and start talking about them. So: RFC 8219, of which I'm a co-author, though it is fair to say that I contributed just a small part, about DNS64 benchmarking; the rest is the work of Marius.
F
The
rest
is
the
work
of
mario's,
so
in
in
those
in
that,
in
that
rsc
mario
recommended
several
good
things,
and
I
I
think
it's
it
was
accepted
and
and
approved
by
the
working
group.
So
I
think
they
are
good
things,
and
now
I
would
like
to
mention
some
differences
between
rfc
2544
and
the
new
rfc
8219.
F: However, in some cases it might be an oversimplification if we use just a single number instead of a set of different numbers. In the past, when we used hardware packet forwarding devices, that was usually not a big deal; but currently, when we use software devices, it can be even more dangerous to use just one number. A smaller difference is that the median was used instead of the average, and the median is less sensitive to outliers.
F: So it's a bit different from the average. And the main difference, I think, is that Marius recommended using the first and 99th percentiles, or, of course, if we have fewer than 100 measurement results, the minimum and maximum.
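The summary statistics listed here can be sketched as follows; the percentile indexing below is an approximation and the function is illustrative, not code from RFC 8219:

```python
# Summary statistics along the lines described above: median rather than
# average, plus roughly the 1st and 99th percentiles, falling back to
# min/max when there are fewer than 100 results. Illustrative sketch.
import statistics

def summarize(results):
    s = sorted(results)
    n = len(s)
    if n >= 100:
        low, high = s[n // 100], s[(99 * n) // 100 - 1]  # ~1st / ~99th pct
    else:
        low, high = s[0], s[-1]                          # min / max
    return {"median": statistics.median(s), "low": low, "high": high}

print(summarize([10, 12, 11, 13, 11]))   # median 11, low 10, high 13
```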
F: I think they're very good to use, because they can be a good measure of the dispersion of the results: whether they are consistent or scattered. Also, higher statistical reliability was achieved by requiring at least 20 tests; and another difference is that the original RFC 2544 used only a single timestamp, whereas the new RFC requires at least 500 timestamps. So I think these are good and, if we could go to the next slide: I would recommend using these new things and backporting them to general network interconnect device testing, because RFC 8219 is just about IPv6 transition technologies, but we can test routers and other things in the same way; we can use these new things in the old context. So why not update RFC 2544 to do so? We also have a tester, which was written by me. It is siitperf, which was intended to be a SIIT tester; it complies with RFC 8219, but it can be configured to benchmark IPv4 and IPv6 routers, too.
F: As you know well, RFC 2544 uses the classic throughput measurement procedure: we send a number of test frames at a constant rate and count the sent and received frames, and, after we finish sending, we continue receiving for two more seconds.
F
So
if,
if
we
would
like
to
to
be
a
bit
tricky,
we
can
say
that
the
die
mode
for
the
first
frame
is
62
second,
and
for
the
last
minus
two
seconds.
If
we
do
a
60
second
long
test,
it
is
usually
okay,
if
you
use
hardware
forwarding
devices,
but
there
may
be
a
problem
if
we
use
some
software
solutions
because
in
case
of
software,
there
can
be
some
some
large
buffers
and
interrupts,
and
it
may
happen
that
some
of
the
frames
are
selectively
laid.
F: So we made an experiment, which was a very tricky case: we delayed only one percent of the test frames, by 100 milliseconds. Of course this is not noticeable by a classic RFC 2544 throughput test, but we found that this little trick decreased the HTTP download performance by more than 50 percent.
F
So I think, keeping the good old throughput test, we could define an additional one; I don't know which word is good, improved or advanced, or whichever name could be used, but a different throughput test which is sensitive to delay, and I believe that it can be valuable. There was a discussion on the mailing list, so there's a link there, where I try to explain why it can be better; but of course, whether it's really better or useful can be decided by experiments, so in real life.
F
So in RFC 8219, several times, at least four times, it is said that at least 20 measurements should be done, so the test should be repeated 20 times. On the one hand, it would be a good guideline for throughput tests to have a fixed number, so we should repeat the binary search at least 10 or 20 times; but on the other hand, it's dangerous to say so, because the binary search is costly in terms of execution time.
F
So a fixed number is probably not a very good solution; so on the next slide we recommend that there should be a heuristic algorithm that performs a few benchmarking tests, and maybe after five or ten it just checks how smooth the results are. So if the results are consistent, we may stop before 20 repetitions, but if the results are scattered, we can go further, and not 20 but maybe 50 or 100 repetitions are necessary to have statistically relevant results. So that is our third recommendation, and we have a fourth one, and maybe some of you will not like it.
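The stopping heuristic described above could, for example, work as sketched below. The stopping rule (relative spread of the results, i.e. standard deviation over mean) and all thresholds are assumptions chosen for illustration, not something any draft specifies:

```python
# Sketch of an adaptive repetition count (assumed stopping rule):
# repeat the benchmark, and after each run past a minimum, check how
# scattered the results are; stop early if they are consistent, keep
# going (up to a cap) if they are not.
import itertools
import statistics

def adaptive_repeat(run_test, min_runs=5, max_runs=100, max_rel_spread=0.05):
    """Repeat run_test() until the relative spread of the results
    (stdev / mean) drops below max_rel_spread, or max_runs is hit."""
    results = []
    for _ in range(max_runs):
        results.append(run_test())
        if len(results) >= min_runs:
            spread = statistics.stdev(results) / statistics.mean(results)
            if spread < max_rel_spread:
                break
    return results

# Consistent results stop early; scattered ones run to the cap:
steady = itertools.cycle([100.0, 101.0, 99.0])
noisy = itertools.cycle([100.0, 150.0, 60.0, 120.0])
print(len(adaptive_repeat(lambda: next(steady))))  # 5 (stops at minimum)
print(len(adaptive_repeat(lambda: next(noisy))))   # 100 (never converges)
```

This captures the trade-off in the proposal: a consistent device finishes after a handful of repetitions, while a noisy one automatically gets the 50 to 100 repetitions needed for statistically relevant results.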
F
Maybe some of you will, I'm not sure. But there is the fourth one on the next slide, and it is an optional non-zero frame loss acceptance criterion for the throughput measurement procedure; and of course there are arguments for and against it. We have listed some arguments for it: packet forwarding is often implemented in software nowadays, unlike 20 years ago, when routers were more or less hardware.
F
Today they are often software things, and it's not feasible to require absolutely zero frame loss; our applications got used to having frame loss and tolerate some low frame loss rate, for example 0.01 percent. And some commercial testers, like Spirent, have a parameter, sometimes called loss tolerance, which professionals use and set to non-zero values many times. If that practice is real, then I think it would be much better to recognize it as an optional test, and also to define a required reporting format where it is mandatory to state the loss tolerance that was used. So it's a much better thing to allow it and to make it mandatory to report what loss rate was used. So these are our ideas, and on the last slide we just say thanks for listening to us. We are really eager to hear your comments and your ideas, and our question is whether you find any of them useful. These were just four ideas, and if you really want us to continue this research, we are happy to do so, even if you find just a few of them useful.
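The optional criterion being proposed amounts to a small change in the trial acceptance test. A minimal sketch, where the function and parameter names are mine rather than the draft's:

```python
# Sketch of the proposed optional acceptance criterion (illustration,
# not draft text): a trial "passes" if the loss ratio stays within a
# stated loss tolerance, which must then be reported.

def trial_passes(sent, received, loss_tolerance=0.0):
    """Classic RFC 2544 throughput corresponds to loss_tolerance=0.0;
    the proposal is to optionally allow a small non-zero tolerance
    and make reporting the value used mandatory."""
    loss_ratio = (sent - received) / sent
    return loss_ratio <= loss_tolerance

# 50 lost frames out of a million is a 0.005% loss ratio:
print(trial_passes(1_000_000, 999_950))                       # False at zero tolerance
print(trial_passes(1_000_000, 999_950, loss_tolerance=0.0001))  # True at 0.01% tolerance
```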
A
Okay, all right, well, I'll get things started a bit here. Gabor, I guess this is the most controversial one here, the optional non-zero frame loss acceptance criterion. One way that we've been able to do this is to not disturb the throughput benchmark, but to add consideration for metrics phrased as follows:
A
Oh yes: capacity measured at x percent loss. So it's something that is included in the related metric, but the true benchmark of throughput stands the same way it has stood, basically, since it was first defined in the mid-90s.
A
Right, right, and some other folks in other drafts here have tried to use other names as well, which kind of answers one of the questions on the multiple packet loss ratio draft, which we now probably don't need to cover. What we were calling capacity measured at x percent loss, they were calling partial loss ratio measurements, so PLR; and then they have another level that they measure called NDR, non-drop rate. And so there are other schools of thought about the naming, but so far we've tried to maintain that throughput is the thing, throughput is defined as it was early on for this group, and we'll try to maintain that definition, I think.
E
A
There are really a lot of concerns, in that we have this very solid definition; and a lot of people I've talked to, currently working in the industry, for example Maciek Konstantynowicz, who does this testing all the time and is a co-author on the multiple loss ratio draft, he says: I'm 100 percent with keeping throughput at zero loss, I have no problem with that.
A
Also, if you remember our farewell meeting for Scott Bradner, one of the past co-chairs dug up an old email from when the working group was starting to test at gigabit-per-second rates, and he asked on the mailing list: is it time for us to allow a little bit of loss here? And Scott sent back a one-word answer.
A
And so I think, you know, we're coming up on 30 years of history here with the definitions that we've got; and the truth is, in measuring the virtualized environment,
we noticed that some of the problem with measuring non-zero frame loss came from the fact that there were now transient interrupts in the device under test that were very difficult to set aside. Our way of testing in the past had been one where we tried to get all the transient activity and startup, address learning, routing updates, all that sort of stuff,
out of the way before we measured throughput. And when we're testing with general-purpose computers, or even high-quality computers, they have interrupts that are fired all the time, that need to happen to ensure the health of the system.
A
So what we created was a new form of binary search where we distinguish between steady-state resource limitations and measurements which produce some losses due to transients, and we try to measure those two things separately. So that's a better solution than changing our loss threshold here, I think, and that's the one that lots of people have agreed with in the context of writing the ETSI NFV (network functions virtualisation) test working group's benchmarking standard. So I think, with a little more background... I think I shared some of these links with you, right, Gabor?
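The transient-versus-steady-state distinction Al describes might be sketched like this. It is a rough illustration in the spirit of the ETSI NFV test approach as characterized here, with the retry policy chosen arbitrarily, not the standard's actual algorithm:

```python
# Sketch (assumed logic): when a trial shows loss, repeat it; if the
# loss does not recur, treat it as a transient event (e.g. a system
# interrupt) rather than a steady-state resource limit, and handle
# the two cases separately in the search.

def classify_trial(run_trial, retries=2):
    """run_trial() returns the number of lost frames for one trial.
    Returns 'zero-loss', 'transient', or 'steady'."""
    if run_trial() == 0:
        return "zero-loss"
    for _ in range(retries):
        if run_trial() == 0:
            return "transient"   # the loss did not recur
    return "steady"              # loss recurred on every retry

losses = iter([3, 0])            # one transient loss, then a clean run
print(classify_trial(lambda: next(losses)))  # prints "transient"
```

A search built on this can keep the zero-loss throughput definition intact while still reporting rates where only transient losses occur.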
A
Very good, very good. So at least we've talked about these things some, and I see our area director is in the queue; so, Rob,
D
You've got the floor. So this is a comment as an individual, not as an AD, not wearing any hats. But basically, I think I sort of agree with what you're saying here, Al. I'm looking at the comment saying that packet forwarding is often implemented in software.
D
Where I sit, it's actually sort of the opposite: on all the platforms I work on, you try to get nearly everything to go through hardware where possible. So I think there are very much separate classes of devices in play here, and you want to make sure that you don't optimize for general CPU-based ones against ones which are doing lots of hardware forwarding, where nearly everything is forwarded in hardware, and if stuff gets punted to software, then it's very, very strictly rate-limited as to what you would do with that packet.
A
Well, thanks for your comment, Rob, I appreciate it. You know, we're looking for more perspectives at this point, and I'm glad you inserted that here. Thank you.
E
So that first bullet under "arguments" was why I asked you the question, Al, because I don't know that I necessarily agree that it requires you to accept zero percent frame loss. But I do think, with experience on shipping products that do packet forwarding in software, on the CPU, without the benefit of hardware, I do find that we're often asked for both.
E
So I have a bit of a soft spot for this, although I cannot argue against any of the points that you just raised, because I think they're all very good points too. So I kind of liked the fact that it was an optional one; but I guess, to your point, Gabor, maybe if you consider positioning it, excuse me, along the lines that Al had mentioned for previous drafts, maybe you accomplish what you're looking for in a way that still holds true to BMWG history. And not just for history's sake: I think those are prudent arguments that have served us well, and unless somebody is willing to stand up and say we should change it, and if that's the case it's a conversation we should have as a working group, then I think amending these, as Al pointed out, is good. It's just that there's something about "it is not feasible to require zero percent frame loss" that sits a little funny with me.
A
No, I think your perspective is just as valid as anyone else's, of course, Sarah. I think Gabor is asking for a little flexibility here, and I simply described one of the ways in which we attacked that problem: separating out the metrics very clearly, with different names. That seems to have made everybody who was a user of the ETSI testing spec happy.
A
So let's look at some of the other ones too. I've got no issue with providing input on the statistically relevant number of tests; that's perfectly good information, and I'd love to see more of that. In fact, I think we had a couple of presentations a few years back by... oh, his name escapes me now, but he was kind of adding his perspective on this topic as well. So it's kind of an action item to look back through our past agendas and see if we can find some of his presentations; I can't remember who it was, but he also gave some, actually. So then, there are a couple of other questions here, which are related to using the timeout for throughput. I think, if you're actually able to measure the delays on every packet.
A
With your tool, then, you could report the classic throughput, which has these delay criteria (62 seconds for the first packet, two seconds to collect the last packet), and then characterize the delay distribution at the throughput level of load. And once you do that, then you completely understand the delays, which are going to be in the hundred-millisecond range, and how often they occur in the distribution, basically replacing the now very weak...
A
I'm just trying to find it here. Oh yeah: the very weak latency measurement procedure which is in RFC 2544, replacing it with a much better one. That's obviously a great idea, and by the way, all the hardware generators do that today; but instead of the 1st and 99th percentiles,
A
I think they report min and max delay and the average; you're asking for the median here. So, you know, I'm fine with all of that.
A
But I wouldn't trouble the throughput definition with it. I would treat these things as orthogonal benchmarks, where you've got a latency benchmark and then a more complete measurement of the distribution.
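The reporting change under discussion (median and 1st/99th percentiles, instead of only min, max, and average) can be sketched with a simple nearest-rank percentile. This is an illustration with invented names, and it also shows the point made at the start: with fewer than 100 samples, the 1st and 99th percentiles coincide with the minimum and maximum.

```python
# Sketch of a per-frame delay summary (hypothetical, nearest-rank
# percentiles): report median, 1st and 99th percentiles alongside
# the traditional min and max.
def delay_summary(delays):
    s = sorted(delays)
    n = len(s)
    pct = lambda p: s[min(n - 1, int(p * n))]  # nearest-rank percentile
    return {
        "min": s[0], "max": s[-1],
        "median": pct(0.50),
        "p1": pct(0.01), "p99": pct(0.99),
    }

# With fewer than 100 samples, p1 and p99 fall back to min and max:
summary = delay_summary([1.0 + 0.01 * i for i in range(50)])
print(summary["p1"] == summary["min"], summary["p99"] == summary["max"])  # True True
```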
F
May I just ask a question? Of course, in RFC 8219 the number 20 appears many times. How was this number selected? Why not 10, why not 50, but exactly 20? Why do I need just 20 tests, not 51? I don't know how this number was calculated.
A
Yeah, well, I can answer that question with a silly question. Gabor, do you still have Marius's email address?
F
Well, I asked him, and he wouldn't say anything; he couldn't give a good reason why 20 was selected. He said that he didn't know of any considerations for why 20 was selected.
A
Right, right. Well, I mean, there are some rules of thumb about the number of tests and so forth that have been floating around for years, but the truth is, as you said, you really have to look at the variation and from that decide whether you've got a consistent result or not, and that's hard to codify.
F
So when I did some tests, especially with DNS64 testing, it happened sometimes that in ten measurements there was no outlier, but when I performed an additional ten, so in twenty, there was one outlier. So maybe 20 is a good number, but I cannot say why.
A
Okay, well, if we do an update on this topic, let's try to say something about why. I'm just quickly checking our time here: we can go to 50 minutes after the hour. Okay, all right, good, all right.
A
So... oh, I closed the wrong thing here; let's open it up again. All right. And of course there are definitions for packet delay variation and inter-packet delay variation; I think RFC 5481 is a good source for those, and the power and advantages of these two definitions are carefully compared in RFC 5481.
A
Right, that one comes out of the IP Performance Metrics working group, where we make measurements on the production backbone network, and of course those definitions are equally applicable here. They each have their strengths and weaknesses, and RFC 5481 is an applicability statement for the various forms that we looked at and did some analysis on.
F
I tested 10-gigabit links with 84-byte packets, not 64 but 84, because of the translation between IPv4 and IPv6. So with 84-byte packets, it did 7 million packets per second instead of 14 million.
A
Okay, okay. And are you using UDP?
F
I just used a single flow, so just a single test frame was always sent. And I also did, because RFC 2544 requires testing with a single address pair and also with 256 different networks, so it can also do 1, 2, 4, 8, and up to 256 different networks. But I didn't use various ports, because I just used the test frame format from RFC 2544.
F
And since then we have had a discussion with you about changing the port numbers, so I am going to implement the changing port numbers, but it's not yet implemented. So it was one flow, or up to 256 flows. And if I increase the number of flows, the performance somewhat decreases if I do a self-test, but the performance increases if I do a routing test, because if I just use a single address pair, then just two CPU cores work on the router.
A
Right, right. And a lot of people want to run the multiple-flow tests anyway; so it's good to know the single-flow limits, but also then the multiple-flow limits. And I guess you haven't tested this with 25 or 40 gig links?
F
So I plan to implement this multi-flow test within a few months, and if I add this feature to siitperf, I will report the new feature on the benchmarking working group mailing list; in a few months I will do it. Very good, very good. Okay, what do you recommend with this draft: to keep the first three and abandon the fourth point, or just to rename it to capacity measurement at x percent loss, or what would you do?
A
I guess that's where I ended up, but I encourage other folks to comment here as well with regard to the way forward.
A
All right, so we'll try to get some more comments on the list. And then, as you said, Gabor, you've got a really short draft at the moment, where you obviously plan to do more development and so forth, and I think you've got some feedback today on where to put your emphasis for the next one.
A
So then, on the topic of the multiple loss ratio search: as I mentioned earlier, there were some questions from Sudhin on the topic. Just checking to see who's here... I don't see Maciek, or
A
Vratko. Okay, so without the authors, I'll simply mention that there are a few questions on the list for the multiple loss ratio search. This proposes a search algorithm where you basically reuse all of the tests that you've conducted in the attempt to determine the throughput levels that match multiple loss ratios, like zero,
A
You
know
one
percent,
five
percent,
ten
percent
and
and
so
forth,
so
that
I
I
think,
gabor
that
would
be
interesting
reading
for
you
and
then
on
the
network
function,
service
density,
masiak,
the
kind
the
the
author
here
was
asking
that
we
sort
of
revisit
the
whole
problem
space
and
explore
some
tighter
collaboration
options.
A
I
exchanged
some
email
with
him
on
this
topic,
but
I
haven't
received
any
response
from
anything
recently
from
masiak,
so
unfortunately,
we
can't
move
that
one
forward
until
we've
got.
You
know
kind
of
like
all
the
groups
commenting
here,
but
this
is
one
where
the
opnfv
vs
perf
project
is
is
working
as
well,
and
massieck
was
looking
for
kind
of
a
across
standards.
A
Development
organization
across
industry
meeting
where
we,
where
we
talked
about
this
problem
space,
the
main
problem
is
that
you
know
when
you
go
to
the
container
world
and
and
container
networking,
there's
just
so
many
options
that
yeah
we
kind
of
like
some
help
to
pare
things
down.
A
I
think
that's
beginning
to
happen,
but
we'll
you
know
we
want
to
try
to
accelerate
it
a
little
bit
so
so
that's
the
the
short
status
of
network
service
function,
density
and
then
no
updates
on
the
probabilistic
packet
loss
ratio
search.
Veraco
reported
last
time
that
was
still
kind
of
experimental.
A
The
other
evpn
drafts
we're
we've
asked
sudin
to
hold
these.
While
he
gets
some
experience
trying
to
get
a
draft.
His
current
evpn
draft
through
the
iesg
processing
all
the
way,
so
you
can
fully
appreciate,
what's
involved
there
and
and
there's
actually
another
evpn
draft
working
around.
It's
the
one
from
me
and
my
colleague,
jim
utaro,
which
we
haven't
updated
either,
but
you
know
we're
we're
hoping
to
get
this
first
one
out
of
the
way
before
we
see
what
faces
us
next.
A
Any questions on that quick status?
A
All
right,
well,
I
think
I
kind
of
think
this
was
a
meeting
that
was
affected
by
vacations
and
the
registration
fee.
At
least
we
heard
that
that
from
two
people
on
the
way
in
and
also
we
we
we
didn't,
get
all
the
updates
that
we
were
hoping
for
either.
So
it's
a
it's
another
reason
that
our
our
our
meeting
time
was.
Our
agenda
is
a
little
light
this
time
around.
Let's
put
it
that
way.
Is
there?
A
Appreciated, and happy to help. Right, and thanks to everybody who joined; it was, I think, a pretty good turnout, and we'll expect to see, on the mailing list, development of some of these working group items and our new proposals, which are all still very relevant.
A
Okay, everybody, we'll let you have a few extra minutes back in your busy week. So thanks again, have a good day.