From YouTube: IETF101-BMWG-20180320-0930
Description: BMWG meeting session at IETF 101, 2018/03/20 09:30
https://datatracker.ietf.org/meeting/101/proceedings/
A: So, we're going to get going now, folks; good morning. This is the Benchmarking Methodology Working Group, and I'm Al Morton. Thank you for working so hard to find this location; it's one of the more difficult rooms to find in the London Metropole. I hope you all brought your coats, because occasionally it gets really cold in these dungeon rooms.
A: All right. So, as I mentioned, this is the Etherpad, where we're going to collectively create our meeting minutes. I've got two folks, Pierre and Carsten, who are going to be primarily in charge of that, but anyone who poses a question at the microphone, and would like to be sure that their name is spelled correctly and that the phrasing of their question is exactly what they want, should feel free to type it on the Etherpad.
A: Thank you also. It helps, and what I'll do right up here at the top is say that Al is this purple color, so if I type anything in that way, folks know that I have pasted it. Other folks can do that as well. So there's Pierre going in; see, this works just like magic. Great, and here's Carsten. Fantastic.
A: All right, so here we go. We also currently have seven participants on Meetecho, and they can see the slides, but they will only be able to hear you if you speak directly into a microphone. So when we're conversing in general in the room, please use a microphone, to make it more possible for these folks to participate.
A: All right, so this is our meeting at IETF 101. As I said, I'm Al Morton. Sarah Banks is intending to join us remotely today; she was unable to travel here, but we will see her shortly, I suppose, and you'll certainly hear her participating as we go. I have the suggestion to move close to the front, because we're usually a small crowd, but feel free to sit wherever you want.
A: Hallway conversations, certain hallway conversations, are also covered by our IPR policy. What we ask is that you disclose any IPR in connection with those comments in a timely fashion. Actually, the truth is that now everyone who's registered has read this, because you have to click that you acknowledge it when you register for the IETF. So this is mostly a reminder, and if you're unfamiliar with any of these best current practices documents, that would be a good thing to take a look at as well.
A: All right, Warren Kumari, our area director advisor, is sitting up front here, so he's playing both the important role of AD advisor and Jabber scribe; thank you. I've just mentioned the intellectual property rights statement, and the blue sheets, where we all sign in on attendance, are moving around now.
A: So here's our room, and here's our planned agenda. We're going to do the working group status, and the charter and the milestones briefly. Hopefully, assuming Sarah joins us, we'll have a quick status of the SDN controller benchmark drafts; if not, I can provide that. Then Sudhin Jacob is with us, and he's going to be presenting the EVPN and PBB-EVPN draft updates. After that, we have a whole list of continuing proposals here, for some of which the authors may not join us.
A: The last two items are working group discussion. We have the ETSI NFV liaison on NFV benchmarking, a normative specification; we're going to take a look at that. And our last item is to sort of close on the topic of rechartering the working group. We've circulated charter text, we've circulated charter milestones; let's finish that up and move it on to the next stage of approval.
A: On working group last call, we got quite a few comments, and the authors have revised those drafts. They are now on the agenda for the April, first week of April I believe, IESG meeting. So that means we'll probably get some more comments from ADs shortly, and hopefully that will all go smoothly and we'll get those approved in short order.
A: We had an interim meeting back in the last week of February, or actually the first day of March; I think it was March 1st. That's where we really saw, in some detail, the new proposal for benchmarking modern firewalls; we had a real chance to go through that specification and had some good comments on it. There have been updates, so we'll be talking about that again today. And then our status is really that the proposals keep coming, as you saw on the agenda.
A: All of these are now active proposals, so we will consider those, and milestones for them, as we consider rechartering. Any questions on the working group status, at least this part of it? This is sort of part one. We have no new RFCs this time, but that's not bad, because at the last meeting we announced about six, so we've been making really good progress.
A: We'll update the charter as soon as we can, and we do have this nice supplementary working group page, which has now been moved to one of Sarah's administered websites. So we have that available: if you want a quick heads-up or a getting-started guide to joining the working group, this is the place to find it. I wrote this up some time ago and it's still fairly valid.
A: All the links are good. Well, so let me ask the question: who is new to the Benchmarking Methodology Working Group? Who's attending our session for the first time? Everyone's a repeater? That's great, very good. All right, well, welcome back, then, everyone. So I think that concludes the working group status, except for the milestones, and here's our status on milestones: basically, they're all done, which is why we're rechartering.
A: All right, so we always have this quick discussion about a standard paragraph, which I developed some time ago and like to see incorporated in our drafts. The reason for it is this: we have a scope which is laboratory testing only, an isolated test environment, and for years folks in the Security Directorate or other directorates would look at what we were doing and just absolutely flip out: "you can't send traffic like this over the network!" They had never read our charter.
A: The scope of our charter, lab-only testing, was, you know, not generally included in the drafts, because we all knew what we were talking about. So we sort of jointly developed this paragraph for the introduction and/or security considerations sections, and you're welcome to modify it, or to use it wholesale if it applies to your work. We've effectively reduced the amount of difficulty that directorate reviewers who've never been in the room with us have with this kind of thing. So, just a word to the wise that that is available.
A: We don't have Sarah, and we do have Bala now; that's good. Okay, so I can sort of see who's joining us, and then so can you, in fact. So we don't have Sarah, and we don't have Bhuvan, who generally speaks about the SDN controller performance, but I've already given a brief status on that.
D: I've moved the slide, but I see it's still coming up on my end; let me check out there, so hang on. Okay, thank you. Thank you.
D: So all the sections are taken care of, and we have modified the draft based on all the comments; these are the highlights of the comments, and all the sub-comments we have incorporated in the draft. So then, moving to the next slide; yep, we're there, yeah, perfect, thank you. These are all the high-level benchmarking parameters we have defined for these services: the MAC learning, the MAC flush, the MAC aging, high availability, the ARP and ND scaling, scale, convergence, and the soak; so it's based on eight parameters.
D: Moving to slide four. Oh, thank you all. So, special thanks to Sarah for guiding us; she was very helpful in giving the comments and the total outlook of the draft. And thank you all for the valuable feedback given at IETF 99 and the offline comments; we really value that and really appreciate it, and thanks, Al, for the support.
A: Good. Well then, thanks for updating the drafts again this time, Sudhin. I think your last slide basically says "next steps: request for adoption," so I think we're basically three-quarters of the way toward adopting this draft. On the summary of proposals, I've already got this one marked green.
A: This item number five is one that we don't have a presentation for this time, but I'm going to give a quick status. This is the NSLAM, the network service layer abstract model, where we had a presentation on this remotely at the Singapore meeting, IETF 100.
A
So
what
we've
challenged
the
author
to
do
is
to
is
to
make
this
modeling
effort
more
specific
to
benchmarking
and,
and
so
that
is
something
he's
working
on,
and
he
correspond
with
me
by
email,
basically
to
say
that
the
the
works
in
progress
and
he's
expecting
to
complete
the
draft
this
month
with
this
benchmarking
example
built
in
and
so
we're
looking
forward
to
that.
But
at
the
moment
we're
kind
of
on
hold
waiting
for
update,
so
so
just
so
I'm.
A: It's defined in RFC 1242 as the longest burst of frames that a device under test can process without loss, so tests of this parameter are intended to determine the extent of data buffering in the device. Now, once we learned that intent, and also the test that was involved...
A: With somewhat further investigation, we found ways in which we could actually improve on this quoted sentence, and I'll describe that briefly. But one other thing that we noticed is that in RFC 2544 there's an extremely concise wording of the objective, a very concise procedure, and very concise reporting; by concise I mean brief, and perhaps at some point it would be worthwhile to add some additional detail.
A: If the device was able to accommodate the frame rate at the given frame size chosen, then you would see basically no buffering in the device: no ability for a burst of packets to exceed the buffer size, because packet header processing was fast enough as the packets were moving through. There may have been some buffer occupancy developed there, but it simply was not possible to overflow the buffers on a regular basis. Every once in a while you might see something, and that was generally the cause of inconsistency.
A: So some of the test equipment reported burst lengths which were extremely, unexpectedly long, but that turned out to be just a limit of the test equipment. It would send a 30-second burst of packets and they would all pass through, because they were at a fairly high frame size, and therefore a low packet header rate, and the header handling rate of the device under test was sufficient to handle those packets. So you basically couldn't create a loss in the device under test.
A: We've got different packet sizes here: 64 bytes, 128, 256, up to 1518. The most important bar here is the green bar, which is the maximum theoretical frame rate for the device under test with its interfaces and so forth. We had a test with a couple of different vSwitches: the OVS vSwitch in blue, and the VPP vSwitch, vector packet processing, in red, and we see throughput in frames per second on the y-axis. With 64-byte frames, we could not attain the maximum throughput.
A: That's not all that surprising; it's very common with the smallest frame sizes. At this frame size we were able to very accurately estimate the buffer size, because here the packet header processing rate was less than the theoretical maximum throughput of back-to-back frames, so we always saw some burst length which had some loss in it. Now, with the 128-byte frames we're kind of right on the cusp of the maximum theoretical, so here we saw some variation, but that's completely explainable.
A: Sometimes it would handle the burst, sometimes it wouldn't; there would be some variation in the results. By the time we get to 256-byte frames, their throughput is equal to the maximum theoretical, so now the frame processing rate of the device under test is preventing the buffers from growing. This is the effect that we felt was really important to be able to handle, and actually to cover as part of the test procedure. So what we're recommending, then, is this prerequisite test of the RFC 2544 throughput.
A: The trial requires sending a burst and counting the forwarded frames, to be sure that none of them have been lost, and we're seeking the longest burst length with zero loss. The test outcome is the burst length found after the searching: you have found the longest burst that you can send without loss. Then we repeat the test N times with the searching, and the burst lengths with zero loss are subsequently averaged; the average length is the benchmark over the N trials.
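A minimal Python sketch of that procedure as just described, assuming a hypothetical search function supplied by the tester (one possible search is sketched a little later); nothing here is normative text from the draft:

```python
# Sketch of the back-to-back benchmark reporting step described above.
# find_longest_zero_loss_burst is a hypothetical callable standing in for
# whatever search the tester implements.

def back_to_back_benchmark(find_longest_zero_loss_burst, n_trials=50):
    """Average the longest zero-loss burst lengths over N searches."""
    lengths = [find_longest_zero_loss_burst() for _ in range(n_trials)]
    return sum(lengths) / len(lengths)  # the average length is the benchmark
```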
A: So let me talk about the updates now. We've got a little bit of background here: we've basically clarified the text describing what is measured when we report this length, and the corrected burst length; I'll talk more about that in a minute, but let's just stare at this for a moment. "Knowledge of the approximate buffer storage size (in time or bytes) may be useful to estimate whether frame losses will occur if device under test forwarding is temporarily suspended in a production environment, due to an unexpected interruption of frame processing." And then this parenthetical: "an interruption of duration greater than the estimated buffer would certainly cause lost frames." Now, you can't really estimate the minimum...
A: In practice, you can't really guarantee the number of frames that could be accommodated during a short interruption of forwarding, because you really don't know the state of the buffers when forwarding is interrupted. The truth is that all the buffers could be almost full when forwarding is interrupted briefly, and that would cause the additional burst that could be absorbed, or the short interruption time that could be tolerated, to be zero.
A: There's lots of detail, in terms of background, in these two references. What we've basically got here is a presentation from last summer at the Open Platform for NFV (OPNFV) Summit, and that's where some of these tests were analyzed in detail. We've also got a wiki that supported that testing, which explains the calculations in even more detail than the slides in this presentation.
A
So
I
I
think,
for
the
sake
of,
for
the
sake
of
time,
I'll,
let
you
read
the
draft
go
over
the
calculations
yourselves,
but
effectively
with
this.
When
you
have
the
the
implied
buffer
time,
it's
it's.
The
average
number
of
back-to-back
frames
that
you
can
send
in
a
burst,
divided
by
the
maximum
theoretical
frame
rate
for
the
interfaces
that
you're
using.
A
So
that's
the
so
that's
the
implied
buffer
time
the
length
of
the
burst
divided
by
the
rate
at
which
the
the
burst
is
sent,
but
when
we
want
to
correct
that,
what
we
need
to
recognize
is
that
with
packet
header
processing
in
progress,
some
of
the
but
some
many
of
the
packets
that
have
been
sent
into
the
device
under
test
have
been
processed
and
passed
out
of
the
device.
By
the
time
the
burst
eventually
causes
a
loss.
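As a hedged illustration of that correction (a sketch of the reasoning only, not necessarily the exact formula in the draft): if the burst is offered at the maximum theoretical frame rate and the DUT forwards at its measured throughput during the burst, the buffer only absorbs the difference between the two rates:

```python
def implied_buffer_time(avg_b2b_frames, max_theoretical_fps):
    """Implied buffer time as stated above: burst length / offered rate."""
    return avg_b2b_frames / max_theoretical_fps

def corrected_buffer_frames(avg_b2b_frames, max_theoretical_fps, measured_fps):
    """Hedged correction: subtract frames forwarded while the burst arrives.

    Frames leave the DUT at measured_fps during the burst, so only the
    surplus (max_theoretical_fps - measured_fps) accumulates in the buffer.
    """
    surplus_fraction = (max_theoretical_fps - measured_fps) / max_theoretical_fps
    return avg_b2b_frames * surplus_fraction
```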
A: Should we... my guess is that we should always start from a burst length which we're fairly sure the device under test can accommodate, and that way the burst will show zero loss; then we can begin to step up in a linear fashion to reach the burst size which produces a loss. So that's my thought on one search algorithm, but there could be others.
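A sketch of that search under the assumptions Al states (start from a burst the DUT can surely absorb, then step up linearly until a trial shows loss); send_burst() is a hypothetical tester hook, and the default lengths are illustrative only:

```python
def find_longest_zero_loss_burst(send_burst, safe_start=1_000, step=1_000,
                                 max_length=10_000_000):
    """Linear step-up search: one possible algorithm, as discussed above.

    send_burst(length) is assumed to transmit `length` back-to-back frames
    at the maximum theoretical rate and return the number forwarded.
    """
    length = safe_start
    longest_zero_loss = 0
    while length <= max_length:
        forwarded = send_burst(length)
        if forwarded < length:       # loss observed: previous length stands
            break
        longest_zero_loss = length   # this burst passed with zero loss
        length += step               # step up linearly
    return longest_zero_loss
```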
A: And then let's consider this other one, because it's also related to searching: should the search include trial repetition whenever a frame loss is observed, to potentially avoid the effects of background loss? With virtualized devices under test, virtualized switches and so forth, every once in a while you have this unexpected mode of operation, where some extremely important background process has to be handled, like a file system flush, something that's completely non-maskable; you have to have it, but it could...
H: Carsten Rossenhoevel. I would like to pose the question the other way around: how do we get to a realistic measurement result if we only test at a given time? Because, you know, these kinds of things happen from time to time in virtualized systems; we see them as well, but sometimes they happen only once an hour, and I'm not sure what the intention of this methodology is. Should it actually yield the optimum result, or a realistic result, or the minimum result?
A: Well, if the phenomenon that causes the occasional once-an-hour loss is not the phenomenon we're trying to measure, the capacity of the buffers in the device, then it would be really good if we could somehow separate those two effects. One way to do that is to perform long-term testing, let's say at the throughput level, and to look at the results every five minutes during the hour, and then find out how often some background process causes loss and influences the result.
A: Processes that would not necessarily be running, truly unnecessary background processes: I think disabling those would be a legitimate thing to do. But then some of these other things, like the file system flush, have to be lived with. So then your point is: let's encourage those to happen before the test.
A: I think that the draft currently encourages people, in general, to operate this test under circumstances that appear to be the kind of test environment we've been able to count on in the physical world, where, basically, if you see unexpected loss, you troubleshoot your environment and try to scare that stuff out. I agree it's going to be much more difficult to do that here, but I think we should still encourage it.
A: Okay, well, let's see. As far as next steps go, I'd like folks to take a look at this; it's not that long a draft. We could create a milestone for this work; that's actually part of the proposal for the rechartering. I'm hoping, at least, for a working group adoption of this draft. We can't ask anything about that today, because, who's read the draft, by the way? Anybody? No? Okay, so we can't ask that question today, but I really encourage folks to read it.
A: We could still propose a milestone if folks are interested in this topic, and this is one of the ones where I think we could probably really learn a lot about the environment that we're focusing on, this virtualized testing environment, through benchmarking like this. So I guess that's it. Any further questions or comments on this topic? No? Okay, well, good. Thanks for your attention, and we'll move on to the next thing.
H: Okay. So, actually, my colleague Bala Balarajah has done most of the work, but he can't be here today because of another project, so I'm here as the co-author. We've worked together with a group of people called the NetSecOPEN group to create this next-gen firewall performance benchmarking methodology document. Has anybody here attended the interim meeting two weeks ago? Okay, cool, thanks. So I don't want to spend too much time on the background of how this group has been formed; it's outside the IETF. What we decided is that...
H: ...we would really like to submit, contribute, our test plans to the IETF BMWG. So the idea is to have a new methodology, a benchmarking methodology and terminology, for next-gen security devices: firewalls, but not only firewalls; basically all of the network security solutions out there today, like IDS, intrusion detection, unified threat management, web application firewalls, and all sorts of other beasts.
H: And, of course, there is a pre-existing RFC for firewall benchmarking, sorry, testing, but that's about ten years old now, and we really want to strongly improve the applicability for today's solutions. We also want to improve the reproducibility and transparency of this kind of test, because the old RFC, as they used to do, provided some guidelines; the question is how people interpret these guidelines in the same way, so that two labs would come to the same conclusion.
H
So
one
of
the
ideas
and
requirements
for
our
work
is
also
that
multiple
labs
come
to
the
same
conclusion
and
as
a
footnote.
In
the
end,
we
want
to
create
a
certification
program
about
this,
so
just
a
quick
walk
through
of
the
draft.
Currently
we
are
in
version
zero
to
version
zero.
Three
is
ready,
but
I
didn't
want
to
upload
it
last
minute,
so
it
will
be
uploaded
after
this
meeting
and
so
basically
introduction
scope
and
so
on.
Like
normal,
there
is
a
pretty
extensive
section
on
the
test.
H: ...setup: testbed configuration, how to make sure the testbeds are all configured in the same way; some guidelines for the DUT configuration, and test client configuration as well; and then a section called testbed considerations: how to make sure that things become reproducible, that everybody has the same kind of testbed setup. Section six is reporting guidelines, basically definitions of the key performance indicators, and we found that even basic things like, I don't know, throughput or something like that...
H: There will probably, in a few years from now, be no unencrypted web traffic anymore, so firewalls need to be tested specifically with SSL/TLS, HTTPS, in mind. The test setup is kind of as expected: there is a solution under test or device under test in the middle; there can be some aggregation switches or routers if the test equipment needs them; and the test equipment is expected to emulate clients and servers on both sides, and routers if needed, from the configuration perspective of the DUT.
H: So far, no surprises. The surprises come, rather, when we talk about what we actually need to test and what's actually in scope, because of the wide variety of different solutions that exist out there. There is no single set of test methodologies that can be applied to everybody anymore, and also the number of tests required to validate more advanced solutions is growing explosively.
H: So we looked at what kinds of test areas there could be. On the left-hand side are basic things like web filtering, antivirus, SSL inspection; actually, they're not the basic things, it's more like starting with more advanced things. And we looked at what we can actually provide, and, of course, that would become a very long document in the end, so we said we're going to focus on a certain subset of test methodologies...
H: ...for now, that we can actually manage. And of course any contributions are welcome, both in this leftmost column area, which is the initial scope from the point of view of the current authors, and also in the future scope. The more we can get done, of course, the happier we'd be, but we wanted to confine our scope to things that we can do ourselves with the existing set of contributors. So the next-gen firewall future scope would be web filtering...
H
Ddos,
denial
of
service
certificate
validation
and
these
are
all
functions
that
are
implemented
today,
but
they
are
not
implemented
by
everybody,
and
so
we
thought
we
focus
on
that
in
the
future.
And
then
there
are
all
of
these
other
security
functions
which
we
haven't
started
discussing:
next-gen
intrusion
prevention,
service,
web
application,
firewall
breach
prevention
systems,
as
brokers
and
I
forgot
again.
What
80
was
so
so
there
are
lots
of
different
different
functions
and
the
matrix
is
still
open
and
ready
to
be
filled
as
soon
as
we
can
get
some
contributors
for
that.
H
So
the
KPI
definitions
that
we've
that
we
have
so
far
is
on
some
basic
things
like
on
the
TCP
on
the
HTTP
layer
and
things
like
time
to
last
by
10
to
first
byte,
and
actually
it
turns
out.
People
have
a
good
understanding
of
all
time
to
first
byte,
but
we
already
started
to
having
some
discussions
about
type
2
last
byte
and
that's
just
one
example
so
time
to
last
byte.
Just
just
to
give
you
some
insight
means
like
how
long
does
it
take
to
deliver
the
whole
HTTP
response
or
HTTP
response?
H: ...all of the content back. In most cases people have only looked at how long it takes for the first byte to arrive, and then the assumption was, well, the rest will just trickle through the pipe in some way. I think that's not what we see all the time, and so time to last byte is also interesting, but then discussions come in.
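For concreteness, a tiny sketch of the two KPIs as described here, computed from per-transaction timestamps; the tuple layout is an assumption for illustration, not the draft's reporting format:

```python
def ttfb_ttlb(transactions):
    """Compute TTFB/TTLB lists from (sent, first_byte, last_byte) stamps.

    TTFB: delay until the first byte of the response arrives.
    TTLB: delay until the whole response has been delivered.
    """
    ttfb = [first - sent for sent, first, last in transactions]
    ttlb = [last - sent for sent, first, last in transactions]
    return ttfb, ttlb
```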
H: You know, this depends on the size of the request and response, and some people, the vendors, always like to create responses that are only one byte large, because that improves the performance of the device; of course, the operators want to have realistic sizes, and the test labs sometimes want to have very large response sizes. So these are the kinds of discussions we've been having in the last few months. We basically started discussing this around a year ago, and we started the detailed test plan discussions...
H
Brian
would
know
better
what
is
listening,
I
think
six,
seven
eight
months
ago,
so
I'd
like
to
just
walk
you
through
one
of
the
test
cases
as
an
example,
there's
7.1
throughput
performance
with
the
traffic
mix,
and
so
the
first
question
is:
what
do
we
want
to
do
here?
We
just
want
to
determine
the
average
throughput
performance
of
the
next-gen
firewall
and
that
test
case
has
already
been
in
3511.
So
what's
the
difference
here?
H
First,
we
define
a
specific
application
traffic
mix,
so
it's
not
a
uniform
stream
of
frames
or
packets,
that's
coming
in
with
arbitrary
content
or
no
content.
In
fact,
these
are
all
HTTP
HTTP
requests
which
follow
a
certain
distribution
of
URLs
of
certificates
and
so
on,
and
that
has
been
well-defined.
Then
there
are
a
couple
of
variable
test
parameters
that
the
user
can
choose
like
what
how
many
clients
they
want
to
have
how
many
servers
they
want
to
have.
What's
the
traffic
distribution
between
v4
and
v6
I
think
defining
something
statically.
H
It
would
be
unwise
because
v6
traffic
is
growing
over
the
years
and
we'd
have
to
adopt
to
the
reality
and
also
the
initial
end
target
throughput.
So
let's
say
we
start
with
10
percent
of
line
rate
and
then
with
the
target
throughput
would
be
hundred
percent
layer
it
and
we
need
to
see
what
is
the
actual
performance
of
the
solution,
and
so
that
requires
some
binary
search
in
that
case.
So
we
start
in
the
test
procedure
to
run
with
10
percent,
for
example,
then
go
to
100
percent
and
then
go
through
a
binary
search.
H: That's why we want to limit the TTLB deviation: we say the difference between the maximum and the minimum time to last byte shall not exceed a certain threshold, and in the same way the maximum TCP connect time is also controlled in this test case. Let's say the device gets loaded, and then it takes more and more time to actually start responding in any way, so it takes time to connect new TCP sessions; that's sometimes a way for a device to manage very high load.
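A minimal sketch of that procedure, assuming a hypothetical run_trial() hook and illustrative thresholds (the draft's actual numbers may differ): a trial at a candidate rate passes only if the failed-transaction rate, the max-minus-min time-to-last-byte deviation, and the maximum TCP connect time all stay under their limits, and the binary search keeps the highest passing rate:

```python
def search_max_throughput(run_trial, line_rate,
                          initial=0.10, precision=0.01,
                          max_ttlb_deviation=2.0, max_connect_time=0.5,
                          max_failed_tx_rate=0.0001):
    """Binary search for the highest offered rate whose trial is acceptable.

    run_trial(rate) is a hypothetical hook returning a dict with keys
    'failed_tx_rate', 'ttlb_max', 'ttlb_min', 'tcp_connect_max' (seconds).
    The thresholds here are placeholders, not values from the draft.
    """
    def acceptable(rate):
        m = run_trial(rate)
        return (m['failed_tx_rate'] <= max_failed_tx_rate
                and m['ttlb_max'] - m['ttlb_min'] <= max_ttlb_deviation
                and m['tcp_connect_max'] <= max_connect_time)

    low, high = initial * line_rate, line_rate
    if not acceptable(low):
        return 0.0                  # DUT fails even the initial rate
    if acceptable(high):
        return high                 # passes at 100% of line rate: done early
    while (high - low) / line_rate > precision:
        mid = (low + high) / 2
        if acceptable(mid):
            low = mid               # keep the highest passing rate so far
        else:
            high = mid
    return low
```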
A: But when we say binary search, do we all know what we mean when we say that? Do we all know whether we're starting high or starting low? Do we all know that there's a step size involved, no matter how you search? I'm thinking, and I see you're nodding, that there's a possibility for us, in one of our works, to define this with a little more specificity. Do you have any thoughts on that?
A: And then I'm sort of thinking that this repetition of checking certain results would be important to have. If we agree that that's a valuable thing at some point, it would be easy to see how we would build it into binary searches and linear searches and so forth, if we have a good degree of specificity.
H: The problem is that binary search is a very time-consuming method. So, let's say we need ten iterations to get to a precision of 0.01 percent failed transaction rate, or... actually, no, that's not the point; I don't actually know what precision we defined here, but I remember that for one percent precision an average of around seven iterations is required, unless, of course, the device just passes the initial maximum rate with no loss at all, in which case it's only one or two iterations, initial and target.
H
But
if
that's
not
the
case,
then
around
seven
iterations
are
required.
Let's
assume
each
of
those
runs
two
minutes,
then
we
already
have
15
15
minutes,
but
normally
you
want
to
run
the
test
case
a
little
longer.
So
if
you
you
have
half
an
hour
and
then,
if
you
want
to
repeat
these
the
whole
series
of
tests
multiple
times
it
easily
becomes
days
of
testing
just
waiting
for
results,
yeah,
sometimes
a
little
difficult
in
the
lab
sure
so
I'm
actually
open.
A: The tests of the past should be able to help us fine-tune the parameters for the searches of the future, and there's basically always some correlation between results across tests that exercise the same configuration, so there might be a way that we can improve test time. Test time is always one of our big constraints.
A: Very good. So, another question I had, or comment, Carsten, is this phrase "test results acceptance criteria." We have to be really careful with the wording on this, because we've always said, for our benchmarking, and really any performance measurement in the IETF, that we don't declare a kind of pass/fail criterion.
A
We
we
seldom
set
performance
objectives
here
in
a
numerical
way.
We
may
we
may
recognize
that
they
exist,
but
but
the
ITF
itself
hasn't
standardized
them.
We
allow
others
to
say
here's
the
target
rate
and
and
and
then
you
use
the
ietf
procedure
to
find
out
whether
you've
met
that
or
not
so
I
mean
we.
At
the
same
time,
we've
always
had
a
kind
of
acceptance
criteria.
A
Rfc
2544
throughput
is
run
with
zero
loss
and
and
and
and
that's
a
numerical
sort
of
acceptance,
threshold
or
target
objective
for
the
test
that
you're
running-
and
this
is
this-
is
additional
dimensions
which
have
the
same
I
mean
they
have
the
same
feel
but
I
I
think
we've
got
to
be
careful
about
how
we
word
these,
and
it
may
be
that
we
may
be
that
we
asked
the
testers
to
provide
the
the
point
o
1%
themselves
or
maybe
suggest
this
said.
This
set
of
this
set
of
criteria
has
been
used
in
in.
H
Thought
from
the
beginning
of
my
career
that
the
zero
loss
in
RC
25:44
was
a
strong
and
very
helpful
statement,
and
over
the
last
few
years
we've
jointly
suffered
from
you
know
when
is
trying
to
deviate
from
the
zero
loss
in
the
virtual
world
and
the
subsequent
attempts
to
standardize
any
non
zero
loss
right.
So
I
think
there
is
great
interest
in
these
numbers
and
I
don't
want
to
shy
away
from
them.
If
we
completely
remove
these
targets
from
the
draft,
I
think
it
would
lose
a
lot
of
its
value
because.
H: ...the way to make sure that the numbers, if any, here are representative and realistic is also to bring in some users, and we have this set of weekly calls, aside from the working group calls; they're basically just working calls. Currently we're bringing in some users from the financial sector, a mobile service provider, a mobile operator, and the automotive sector, so just some enterprise users, and it's vital to make sure that we actually fulfill their needs.
A: You know, in some way, include these participants directly in BMWG work, if we can. Maybe there's the opportunity for an interim meeting in the future where, sort of jointly, BMWG and NetSecOPEN get together with some of their industry participants, because just yesterday in the Operations Directorate group we talked about how we get more user input and make our specifications more relevant, and this is right on point, if you guys, if NetSecOPEN, are able to help us.
H: Sure, we're happy to. So, it goes on with the other test cases, and I don't want to bother you with more test case discussions here. In 7.2, concurrent TCP connection capacity with HTTP traffic, it's a similar question, you know, how to get to a good description, and we also defined some things here, like the HTTP object size and the rate at which this capacity test has to be conducted, to make sure it's actually realistic.
H: One of the main motivations for this whole work has been that the data sheets of firewall vendors have shown less and less connection to the reality of the actual throughput of their devices in real-life scenarios, and one of the reasons was that this test case, for example, is typically carried out with zero traffic. So, basically, the vendors just open the session, and maybe send a request to get a one-byte response in the beginning, but then they leave the session open, and they change the timing...
H
Also
that
the
sessions
are
in
keeping
kept
open
and
then
sessions
sit.
There
says
this
sit
there
and
it's
basically
just
only
a
memory
game.
You
know
how
much
memory
can
I
put
in
my
firewall
and
how
long
does
it
take
to
fill
that
memory?
But
if
and
then,
usually
we
came
in
as
an
independent
test
lab
and
we
would
try
to
reproduce
this.
It
doesn't
work
because,
of
course,
we
keep
sending
requests
over
all
of
these
connections
or
at
least
a
subset,
and
often
these
can.
H
These
connections
are
dysfunctional,
so
the
the
device
is
completely
overloaded
and
it
doesn't
know
what
to
do
anymore.
It
has
no
capacity
to
process
packet
anymore.
It's
only
busy
opening
more
and
more
and
more
connections
and
that's
what
we
wanted
to
get
away
from
and
make
sure
that
this,
the
result
from
this
test
case
yields
a
realistic
number
of
maximum
connections.
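One way to capture that requirement in sketch form, with hypothetical tester hooks (this is not the draft's normative procedure): opened connections only count toward the capacity benchmark while they remain demonstrably functional.

```python
def concurrent_connection_capacity(open_connection, send_request,
                                   batch=1_000, max_conns=1_000_000):
    """Count concurrent TCP connections that still answer HTTP requests.

    open_connection() and send_request(conn) are hypothetical hooks;
    send_request returns True only if a valid response comes back. The
    point, per the discussion above, is to keep exercising open sessions
    instead of letting them sit idle as a pure memory game.
    """
    conns = []
    while len(conns) < max_conns:
        new = [open_connection() for _ in range(batch)]
        # Re-validate every session, old and new: an idle table entry that
        # can no longer carry a request must not count toward capacity.
        if not all(send_request(c) for c in conns + new):
            break
        conns.extend(new)
    return len(conns)
```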
H: This has been worked on for, I don't know, around a year, and so they basically defined a modern enterprise perimeter traffic mix. We already discussed this in the interim meeting, and I remember Sarah and you and some others saying: yeah, this is wonderful; not specifically this distribution, but in general the idea; please add more traffic mixes for other scenarios, because an enterprise perimeter mix is important, but it's not the only traffic mix, and, for example, a mobile operator one would look very different. So this is a blend, currently, of 70% encrypted and 30% unencrypted traffic.
H: But what is important is that the certificates don't change at the same frequency, and that the type of traffic distribution, like how many connections the client opens to actually use Office 365, also doesn't change so frequently; and the number of URLs used in this perimeter mix also doesn't change so frequently. So there's always a few heavy applications and a long tail, and these are quite important to use to create a more realistic scenario.
H
So
there
are
10,000
unique,
URLs,
1002
main
names,
400
certificates,
and
this
makes
sure
that
the
final
work
cannot
be
optimized.
The
configuration
cannot
be
optimized
by
a
vendor
just
to
survive.
Ok,
let's
dump
all
this
memory.
Let's
reduce
our
certification
store
to
1
and
just
send
all
the
same
certification
again
and
again,
buffer
it
and
catch
it
and
then
optimize
what
throughput.
So
we
really
want
to
have
a
realistic
results
and
that's
one
of
the
means
to
getting
there.
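As an illustration only, the mix could be captured as a declarative profile that an emulator consumes; the structure below is a hypothetical sketch, not the draft's format, and the counts are simply the ones quoted in the talk:

```python
# Hypothetical profile structure; numbers are those quoted above.
ENTERPRISE_PERIMETER_MIX = {
    "encrypted_fraction": 0.70,  # HTTPS share of the traffic blend
    "plaintext_fraction": 0.30,  # unencrypted share
    "unique_urls": 10_000,
    "domain_names": 100,
    "certificates": 400,         # a large cert set defeats single-cert caching
}
```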
H
So
those
traffic
mix
is
already
implemented
in
at
least
one
emulator
and
I
think
another
vendors
getting
up
to
speed
and
we're
also
see
trying
to
see
how
we
can
actually
implement
this
mix
in
a
open
source.
Emulator
the
t-rex,
the
t-rex
emulator,
which
would
then
just
be
a
capture
and
replay
I'm,
not
sure
how
this
is
possible
certificates
but
we'll
see.
So
that's,
basically
all
I
wanted
to
say
about
the
content
proposed
schedule.
I
promise
to
upload,
oh
I,
think
I'm
actually
made
a
mistake,
so
this
should
be
draft
or
true
and
zero.
H
Three
four
five,
six
seven
so
I
need
to
fix
this.
So
I
have
an
offset
of
one
in
this
graph
numbers,
so
draft
zero
two
has
been
uploaded,
I
mean
before
the
deadline
and
zero.
Three
will
be
uploaded
tomorrow
with
the
traffic
mix
and
X
has
been
edit
in
two
more
test
cases
and
subsequently
I
think
we
want
to
proceed
quickly
and
an
important
milestone
would
be
in
June
to
add
security
effectiveness
section
so
so
far
we
only
focus
on
benchmarking,
but
of
course,
you
just
those
more
advanced
security
devices,
especially
intrusion
prevention,
intrusion,
detection.
H
Of
course
there
is
no
point
of
testing
the
through
word.
The
important
point
is
to
test
different
types
of
attack
vectors,
and
there
is
a
nice
database
which
we're
looking
into
probably
having
around
1000
attacks
that
we
want
to
test.
We
won't
define
them
all
in
the
document,
but
we'll
probably
point
to
the
NIST
database
and
select
some
that's.
H: And then, separately, we're running some concept testing; with a couple of solutions we will be running proofs of concept, let's put it that way, especially for the performance benchmarking test cases. Those PoCs are scheduled to be completed at the end of June, and then, hopefully, in July, before the deadline for Montreal, for IETF 102, we'll have a stable draft to be submitted. A question back to you, because I'm not too familiar with the whole process: how do we get to an RFC, and what other steps do I need to observe?
A: So it would be really good if we adopted our charter within the next two weeks. I mean, I think we can make steps toward doing that and get some milestones agreed, and then, assuming Warren can successfully negotiate the new charter with the IESG, which hasn't been a problem since 1989, then, you know, we will be in full go-ahead mode.
A
Now,
the
with
this
schedule
trying
to
get
a
stable
draft
by
July
we're
basically
we're
basically
you're
looking
at
trying
to
get
interim
meetings
together
to
advance
this
work
as
quickly
as
possible
and
I'm
gonna.
And
if
so,
if
that's,
what
you're,
asking
and
and
that's
what
the
working
group
is
is
ready
to
to
do,
then
we
could
probably
get
pretty
close
to
July
and
I.
H: We want to get it done because we think there's some urgency, and specifically because the results are wanted for certification within the NetSecOPEN group. So we're progressing the work anyway, and if an interim meeting is as easy as a one-hour conference call, like we staged two weeks ago, I think we can manage that.
A: I suspect that that's possible, as long as we obey, you know, our meeting announcement, sort of pre-announcement, rules, things of that nature, and as long as it's accessible to everyone in the IETF and we publish the minutes and so forth, all the usual interim meeting stuff; then we're quite good to go on that. I think it's probably reasonable to maybe have, let's say, working group consensus by July, if we have multiple interim meetings and so forth.
A
I
mean
then
there's
our
area,
director
review,
which
has
which
has
gone
quickly
and
smoothly
for
good
drafts
and
then
and
then
there's
the
IES
them.
There's
the
IETF
last
call
which
takes
about
two
or
three
weeks
and
and
then
another
couple
weeks
to
get
it
on
iesg
agenda,
so
I
think
I
think
by
the
time,
by
the
time
you're
looking
at
ie
ie
sge,
our
governing
body
approval,
we're
probably
we're
probably
looking
at
sometime
in
August
for
those
steps
to
take
place.
H: That's all good. And, I mean, of course, we're a bit selfish in terms of the two reasons we are actually putting this work into the IETF: one, of course, we think it will be much better recognized in the industry; and we think, if we spend a whole lot of work creating a good test plan, it makes sense to give it to the best organization in the industry.
H: Okay, so: request for adoption, please review, and please feel free to contribute. For each of the sections we have assigned contributors, for the test case lists that I explained before; and, in addition, contributions are specifically requested for new traffic mixes and security effectiveness test methods. Okay, and if anybody...
A: We should plan on getting a security area advisor, someone who is sort of willing to look over this from the Security Directorate, like for an early review or things of that nature, just so that we have the benefit of the IETF's security expertise right from the start, because that's basically what we're talking about: the intersection of benchmarking and security. That's right.
A: Right, well, when we get to the charter, we'll see that, you know, I've obviously recognized this as a very active draft, and so we have a milestone proposed for it. We may need to revise that milestone, though, based on your most recent version of this work and the fast-moving meeting schedule, so we'll try to remember to do that when we get there. All right, let me drop this, let's see, and get to the next slideshow. All right.
K: So, basically, what we know so far about benchmarking VNFs, from our concepts and from the research; I'm doing research at a university on benchmarking VNFs. We know that there are different concepts for a VNF, from the definition to runtime, like a P4 program, to different ways of compiling and optimizing VNFs, and what you might find in the state and the data plane of the VNF. There are different compositions of the VNF itself, like Project Clearwater, and some of the VNFs are truly programmable via SDN concepts and OpenFlow, which...
K: We know also that there are different motivations for benchmarking VNFs from different actors, like VNF developers, service providers, infrastructure providers: to compare VNFs with physical network functions, to know the footprint of VNFs, and to have some analytical development of VNFs as well. We know that there are different factors that the performance of the VNF depends on; basically, it's all the blocks here with the green config box. We have done tests to evaluate that, and this is basically the storyline I started.
K: Then we came to BMWG, and we had, like, a stopover there; we were quite naive at that time about making progress with the draft. We saw that the considerations for benchmarking that became RFC 8172 were still being formed here. So we took our time, we developed some code, ran experiments, we came out with publications, and now we are back here to see how open the group is to receive this work again, and we have the open source to be released this year, too.
K: The draft is actually intended to be kind of a solid foundation for a VNF benchmarking methodology itself, as a generic framework, and we think that specific VNF benchmarking methodologies could be derived from this document. We aim to approach it through the publications that we are producing, and we've seen in the literature that there are other groups also interested in this, that we have been exchanging emails with; most of that activity is also inside ETSI NFV.
K: The scope of the document basically considers the VNF as a black box and defines the methodology from there, but this is something that we need to discuss as well, because, as I showed in the beginning, there are many VNFs that are open source and are open for internal instrumentation. So we think, and this is open for discussion in the group, that we consider white-box approaches as a particular case, with some proper considerations of internal VNF instrumentation. This is something that we need to discuss. The terminology base comes from the ETSI NFV framework.
K: Most of the references in the draft itself come from there; we also refer to RFC 1242 and RFC 8172. We don't have any other reference for an NFV definition, I think, inside the IETF, so we need to grab most of these definitions of the VNF from ETSI. This is also something that's open for discussion.
K: We have a generic VNF benchmarking setup specified in the draft, with generic components; I'm going to show it. We have definitions for what the deployment is, and for the influencing aspects of the VNF performance itself to be considered. So, this is what we consider as a generic benchmarking setup.
K: Not all of these components are mandatory, and we consider also that these components can be aggregated in only a single white or black box. We consider also, in describing it in the draft, the possibilities of these components influencing the performance of the VNF itself and the test. For the general description in the draft, we consider the definition of two terms, to make it generic enough for other VNF benchmarking methodologies: a VNF benchmarking layout...
K: Ideally, we see that, with the benchmarking layout specification and the configuration definitions that are also specified in the draft, it's possible for a user, or any other person that has the skeleton of hardware and software components deployed as discussed, to reproduce the whole deployment scenario and repeat the experiments defined by the procedures that are themselves defined in the draft. It's just an initial proposal; we have a lot of work to do on that, as we are seeing from the work done so far, and here we end up...
K
Do
you
have
how
you
can
accept
specify
more
and
more
details
about
the
procedures?
So
we
didn't
come
with
an
orthodox
proposal
for
procedures
were
more
open-minded
to
see
how
the
community
thinks
about
it,
but
basically
those
for
the
deployment
and
see
the
expression
of
the
of
the
features
step
by
step
but
and
the
testing
procedures.
I.
Think
this
terminology
here
is
align
it
with
the
the
DEA's
own
proposal
and
see
we
have
a
trial.
That's
basically
a
one
iteration
to
extract
a
single,
singular
measurement
from
the
net
which
markings
matrix.
K
We
have
a
test
that
defines
the
particular
components
to
define
this
this
trials,
so
one
task
and
then
fight
multiple
trials,
and
we
have
a
method
that
is
basically
a
set
of
parameters
that
can
compose
a
range
of
parameters
for
for
for
multiple
configurations
and
define
various
tests,
and
we
consider
that
we
must
define
particular
cases
for
as
defined
in
the
RFC
81,
72
and
methodologies
for
those
cases
as
well.
Currently,
they're
just
specified
some
considerations
that
we
must
take
in
for
these
cases
for
the
noise
behavior.
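A minimal sketch of that trial/test/method hierarchy in Python; the field names are illustrative assumptions, not the draft's schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Trial:
    """One iteration extracting a singular measurement of the metrics."""
    measurement: Dict[str, float]

@dataclass
class Test:
    """A fixed configuration of components; one test runs multiple trials."""
    configuration: Dict[str, str]
    trials: List[Trial] = field(default_factory=list)

@dataclass
class Method:
    """Parameter ranges composing many configurations, defining many tests."""
    parameter_ranges: Dict[str, list]
    tests: List[Test] = field(default_factory=list)
```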
K: But, from our perspective, we just understand that a benchmarking report for a VNF would consider the function, the structure, and the function parameters that were defined in the benchmarking layout. We also define that it must aim at the statistical significance of the trials and the iteration over many tests, as was defined by Al in the back-to-back frame draft.
K: In virtualization scenarios we don't have strict boundaries of the VNF, and we don't have a strict execution environment, so there are different behaviors: one change in a line of code can change the whole operation of how the VNF works. So we need to repeat experiments many times, and we aim to see statistics defined in different ways in these reports. We're actually looking at how we could propose some metrics, like the definition of possible metrics and even the specification of outliers, in this report.
K: This is research that is being done, and we think that also the performance profile must be associated with the 3x3 matrix coverage of the BMWG. So we have an open-source reference implementation for that; there are two publications. The PDF of this presentation contains links to the papers, and the first one defines how this framework was designed and coded, and the design principles defined there.
K: So we think this framework can be a reference implementation that realizes the VNF benchmarking methodologies in the document. Currently, I'm reviewing the code, because this was developed in a partnership with a company, and we got the approval to release the code open source, so now I'm documenting and refactoring the code to release it; we think it will be released by the second half of this year. So the idea is that the draft and the open-source code walk side by side.
K: This is, I think, one of the interests in the working group as well, in the IETF, and we have a lot of work to do. We know that we came here with an open-minded view of what a VNF benchmarking methodology is; we have open source running, a reference implementation, for what we consider it to be.
K
We
think
the
draft
can
be
a
common
grout
for
for
DNF
niche
market
methodologies.
What
we,
the
list
that
things
that
we
need
to
do
are
much
bigger
than
what
you
have
done
and
basically
I
think
we
need
to
refine
the
scope
to
see
if
we
consider
white
box
ENS
as
well.
I
think
this
is
something
that
we
need
to
discuss.
We
need
to
assert
the
terminology
considering
if
the
draft
is
considering
a
proper
terminology
are
taking
most
of
it
from
the
Etsy
and
Nephi.
K
We
consider
also
is
a
exemplifying
the
benchmarking
procedures
and
parameters,
or
maybe
referencing
the
open
source
code
or
maybe
itself
in
the
in
the
drafting.
As
generic
a
generic
format,
we
are
going
to
explain
that
for
sure
explain
each
particular
case
as
a
subsection
defining
the
objectives,
the
procedures
and
the
reporting,
the
possible
reporting
format
for
that,
and
also
the
the
definitions
of
the
report.
K: We are doing research to see what the best approach is, whether to have a report format or not, and how it would be built. We need to adjust the draft to be in conformance with RFC 2119, to define what are the MUSTs, the CANs, the SHOULDs of it. And possibly, we think that in the future we can have a liaison statement to ETSI NFV for this approach. So I think this is all, and we would like to see your comments, suggestions, critiques; they are more than welcome. Thank you all.
A: Thank you, Rafael. Before we start with the questions, let me ask: where are the blue sheets? Has anyone not signed the blue sheet? There's a gentleman in the back there; right, good, okay, so that's good. We have to keep track of that administrative thing. All right, so let me open the floor to questions for Rafael.
K: Totally agree, and the thing about putting Open vSwitch there is like this: the second publication was more about testing the automation of the framework itself, of the source code, and the perspectives of automating the benchmarking tests. I know Open vSwitch is, as you said, used mostly as an infrastructure element.
K: So the configuration of the frame, for example the frame size, is where we mostly, for the agent part of the framework, would express the infrastructure, and later on have a reference to get that as a result and correlate it in a database. I can show you how it works later, but...
K: We have a proposal for a demo late this year; we're probably trying to put it at SIGCOMM, but I can show you the code and how it's run. Actually, I'm finishing up my PhD, so I'm short on time to make this code happen by, like, the end of the year.
C: Since you're doing open source with this, have you considered working with OPNFV? They're a pretty good testing outfit for stuff like this, and I know they're looking into VNF testing, and they'd be interested in this as a project. So that's the first thing. Second thing: any liaison to the ETSI NFV test working group will get attention, I guarantee you. Okay.
K: For the first one, about OPNFV: when I was developing the source code, I checked the projects inside, like Yardstick and VSPERF, and what I saw is that there was even another framework for automation of tests, and I saw that I could make it happen through the OPNFV APIs, but I saw it mostly attached, somehow, to OpenStack; the orchestration and configuration were kind of tightly attached to each other.
K
That's
what
one
of
the
reasons
I
put
it
outside
up
up
in
FDIC
totally.
The
the
initial
proposal
of
the
of
the
the
code
was
to
be
a
benchmarking
as
a
service,
so
it
provides
interface,
the
API
that
an
orchestration
or
any
open
FP
testing
function
could
make
use
of
it.
This
is
the
the
main
target
of
the
source
code
as
well,
and
for
the
liaison.
A: All right. Well, I think what we'd want to do is sort of get a good understanding of the work in your draft; that means getting people in the community to review it, and then we'll consider sort of writing a liaison to ETSI on this specific topic. We've got, later in the agenda, a response to ETSI that we need to consider here, and that's a different one.
K
The
open
source
code
is
is
not
published,
yet
it's
going
to
be
published
by
the
the
second
half
of
this
year
and
it's
a
full
framework
develop
it
in
Python.
The
first
publication
here
I
can
show
you
later.
It
says
how
the
framework
is,
define
it,
the
specification,
the
design
issues
and
how
we
benchmark
it.
V
a
V
RMS
of
project,
work,
Clearwater,
so
yeah,
it's
it's
stable.
K
The
is
totally
component
is
a
micro
service.
The
idea
is
that
we
have
all
the
messages.
As
rest
rest,
we
have
a
REST
API
and
full
flexible
messages
among
the
components,
programmable
messages,
yeah
I'm
happy
to
share
the
code
as
soon
as
possible,
but
and
also
to
explain
it
and
share
the
publication's.
So.
A: Thanks. So I think I understood you to say, Rafael, that when you saw heavy reliance on OpenStack in projects like Yardstick at OPNFV, that was kind of a negative for you guys; you wanted to do the management, and the control and configuration, yourselves, so that kind of puts it in the same class of project as the VSPERF project at OPNFV. But I think...
A
That's
that's
interesting
that
that's
like
that
sort
of
aligns
it
with
testers
or
one
as
well,
where
the
we're
just
some
mechanism
was
imagined.
You
know
over
on
the
management
side
that
would
put
the
VNS
together
and
and
and
set
them
up
ready
for
for
a
characterization
so
that
that's
interesting
you're,
basically
taking
taking
that
management
role
away
from
OpenStack.
But
that
allows
you
to
get
very
explicitly
what
you
want
from
from
the
system.
Yeah.
E: So what is a VNF here? There is P4 Runtime, there are these kinds of library things; do you also think about the P4 pipeline as a VNF? You can program some functions with this API; is that also the VNF, or, taken separately, is it only something that we wrap these things around?
K
I
that
that's
good,
the
the
P
for
runtime
api
is
like
just
an
example:
I
have
colleagues
working
with
developing
basically
a
compiler
before
compiler
and
they're
working
also,
if
implementation
use
case
for
that
and
I
have
a
colleague,
for
example,
working
to
develop
to
make
a
broadband
network
gateway
where
you
have
multiple
DNS
inside
the
vnf
in
Southwick,
let's
say
vnf,
components
and
I
think
this
is
truly
possible.
The
idea
was
just
to
put
but
an
example.
K
What
what
what
event
F
can
be,
how
V&F
can
be
abstracted-
and
we
are
seeing
this
closer
now
to
this
programmable
packet
program,
allowing
dependent
protocol
pipelines
like
before
and
and
the
idea
is
that
we
are
also
having
benchmarking
scenarios
with
before
I,
say
vnf
like
before
define
about
before
programs,
and
now
we
see
that
this
can
be
changed
at
runtime.
I
mean
the
pipeline's
themselves,
so
this
exemplifies
as
a
DNF,
can
be
abstracted
and
how
we
can
consider
it
inside
the
draft
itself.
A: Let's see, let's get rid of that, and now we're back to this. I don't see Samuel on the list of remote participants, but he's got a draft on benchmarking network virtualization platforms. It kind of follows on from the work that Jacob Rapp, who's his co-author, did on data center benchmarking. That was...
A
Lucien Avramov and some others, so I encourage people to take a look at that. We're not going to hear about it today, and I'm gonna quickly bash the agenda here. My inkling is that the most important of these next two topics is rechartering BMWG; we'll get to the liaison after we go through that. But my thought is that we probably ought to consider the rechartering discussion like right now. So let's do that. Let's.
F
A
A
A
A
Signaling control gateways and other forms of gateways are included. Benchmarks will foster comparison between physical and virtual network functions, and also cover unique features of network function virtualization systems. Also, with the emergence of virtualized test systems, specifications for test system calibration are also in scope.
A
Now, I had a little discussion with one of our new folks here, Doug, during the week, which lines up very closely with this, Doug, I think, and I see Doug nodding. So we may get some new proposals right along these lines, and that would be good. So then the rest of the charter is pretty much as it's always been. You know, each recommendation will describe a class of network function and a set of metrics that aid in the description of those performance characteristics that we decide on as our benchmarks.
A
A
We could stand to do a little more outreach in that area at the various operator groups, but we'll ask that folks do that in the fullness of time. And we're distinguished from other initiatives in the IETF because we're doing characterization of these technologies in the lab environment, and that clearly allows us to do more than what folks can do on the production network. And we're striving for vendor independence and universal applicability to a
A
given technology class, though demands of a particular technology may vary from deployment to deployment.
A
A
That correctly, and then provide a forum for development of advanced measurement techniques with insight from the operator communities. Okay, any comments on the text of the charter? We've reviewed this on the mailing list several times; we discussed it at the interim meeting and at the meeting in Singapore. I'm now asking for any final comments.
A
A
Read it? Okay, okay! Well, that's fine. I have sent it to the list in a message that I'm sure I can find a link for. But if we make any comments today, I will be resending this text to the list, and it's possible that, with our area director at the microphone, we may be making some changes right now.
A
I think we should continue to encourage it, and I almost had a chance to go to an operator group meeting, I think it was last September, but it ended up... The reason I wanted to go was to get input that I could kind of deliver to the quick interim meeting that was taking place, and it turned out they were on the same days.
A
So I do have some intention at times to kind of try to follow through on what we talked about in getting this user feedback, and I think it also seats us very well in the Operations Directorate, given that we're looking for input from these organizations. So I don't.
A
Given the working group meeting in mid-July, and the area director review that could potentially follow that for a couple of weeks, by the time we get this to last call and IESG review, I'm thinking August 2018 is probably the more realistic deadline for this methodology for next-generation firewall benchmarking. I see Carsten nodding, so that's good. So let's note the other milestones that we're proposing here: the update to RFC 2544 back-to-back benchmarking; we saw a presentation on that today.
A
Methodology for EVPN benchmarking: we saw a presentation on that today. We've got considerations for benchmarking network virtualization platforms; this is one we didn't see today, but for which there is an active draft. The network service layer models, and, as I said, automated VNF benchmarking. Rafael, is that a reasonable sort of description for your Gym open source tool? Or maybe you'd like to edit the wording here a little bit, please.
K
Step to the mic, if you... wait. No, sir, I think it fits the current draft, but I think also we could make it a little bit more generic, because I think we can add automation as a consideration into the current draft, as a consideration and a recommendation if possible, and as a reference for the open source code. Yeah, I would like the VNF benchmarking, only if it's possible, to be made more generic, and the part about the automation would come together with it.
A
C
A
Well, my thought on that is: is this in our scope? And we'll talk about this specification in a minute. For those who are wondering, the background is that it's benchmarking for the NFVI, and I think a 2544 update ought to include not just the NFVI but the world of physical and virtual, and so that's a bigger update work, which we're currently proceeding on.
A
If you look back at our last five or six years of history, we've been updating RFC 2544 sort of a section at a time, and I think we ought to have an update in the near future that pulls all of that together. It will pull in considerations from TST 009, once we obtain agreement on that and share it with the... I mean, actually, it's shared.
A
That a requirement at this point... we can, I mean, what we need is a set of solid milestones, each of which we can point to a draft for, to get this charter approved. We can have the rest of the discussion we just had once we get there. This has started, come on, man, this has started to happen in the latter half of the meeting here, and I wonder if it's somehow... oh, look at that, there's a nice new island.
A
H
To those extensive discussions... anyway, I agree, I mean, it needs to happen. So, one comment on this general VNF benchmarking: if we substitute VNF by PNF, you know, would this have any chance? To have a draft on the general benchmarking for all the physical network functions out there, you know, routers, switches, fixed network devices, BNG, core network, IMS, anything that we can imagine in our world. Would you agree to have a draft, one standard, that says it covers all?
H
It doesn't make sense from my point of view. And, you know, actually the selection of, like, which class, BNG, IMS... I mean, I have all respect for your work, but I think it does not belong in one draft. It's like having an ice cream shop that also sells falafel; it doesn't make sense, you know.
A
Carsten, that's an excellent point, which I was sort of mulling over myself, and I think, now that I think back to it, it was why I sort of inserted the word "automated" there, as a possible distinguishing attribute of the work that Rafael and his team are approaching here. So I.
A
I really resonate with the idea that you cannot write a single draft that includes all the relevant metrics for all the different PNFs and VNFs in the world and get all those metrics right and useful. I mean, we have typically done that on a per-technology basis. So how do we distinguish this milestone and make it worthwhile, make it something that you consider in the scope of your work, but not make it a boil-the-ocean kind of milestone? I.
K
Specifications like the vIMS or Open vSwitch, which could come and be addressed based on these procedures, the generic procedures of the generic VNF benchmarking methodology and the definition of the generic setup. There is then a specific case for that, where certain conditions are addressed, certain configurations, certain procedures, and certain target metrics are extracted. That was my initial idea, but I'm open for discussions about that. I understand where it sounds vague, the general-nature argument; it's up to the group.
A
A
H
A
A
A
D
A
I
A
A
A
One of the problems is that this terminology of "network service" has been a real mess in ETSI NFV. They used it to mean something which no service provider would ever have agreed to, and yet somehow it got approved there. So I think the terminology is just too ambiguous to throw it down now. Now we're gonna have to tackle the problem a little bit in this network service layer.
A
J
J
That having it or not having it was gonna change things; I don't think that, you know, if there were a services document, whatever that means, and it wasn't explicitly listed, anyone would say, no, you can't work on that. So I don't think it's worth the bother. Good, thanks, yeah.
A
Okay, so I think we've had a decent discussion of this, especially the new text. So let me ask the group. I'm gonna give you three opportunities to hum. They should be mutually exclusive: you only hum in one of the three categories. There'll be a hum for support for the new charter, a hum if you object to the new charter, and a hum if you have no opinion whatsoever. All right, we'll start with the first one. Please hum now if you support and agree with the new charter. I.
A
Please hum if you have no opinion. All right, so what I heard there was support for the charter, no objections, and no abstentions. Thank you very much. All right, so we will of course ratify this on the list with the changes we made. I'll probably give about a week for comments, but we'll take care of that in the fullness of time. Thank you very much for your time on that very important step. So, really quickly, in the last 12 minutes, we have a... oh, this'll
A
do. Yes, here we go. So we have a liaison from ETSI NFV, and it's on the specification with this tremendously long title, "Specification of Networking Benchmarks and Measurement Methods for NFVI", and that's the network function virtualization infrastructure. They're currently up to version 0.6 of this work, and it's coming from the ETSI NFV Testing and Open Source working group, which our colleague with us today, Pierre Lynch, chairs. Thank you for joining us, Pierre.
A
We've started to write normative specifications in ETSI NFV, and this is one. The idea is that consistency and repeatability are critical. So here's the table of contents, with the topics that are most important flagged on the left. Here we've got a general framework for benchmark definitions; it's kind of like the template we use in our definition documents.
A
We go into great detail on the test setup, some configuration, and test device and function capabilities. So now, here's where we're asking the vendors: here are some new capabilities we want, and this is an easy place for them to find that. Then traffic generator and receiver requirements, and the general functional requirements. So we actually have two benchmarks in there, on throughput and latency. Leaving the last meeting,
A
we agreed to write benchmarks on packet delay variation, and that'll take two forms; that'll be a new benchmark. And there's also going to be a benchmark on packet loss, just the classic measurement of loss ratio. So there are going to be at least two more benchmarks in this. And in the methods of measurement section, we've got a great deal of detail to organize the hierarchy of testing, where a trial is like a single measurement under the test conditions.
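
Both of those metrics are simple to state from per-packet records. A minimal sketch, assuming one-way delays have already been measured; the PDV form follows the RFC 5481 style of delay minus the minimum delay, and the numbers are illustrative:

    # Sketch: loss ratio and packet delay variation (PDV) from records.
    def loss_ratio(sent, received):
        # Classic loss ratio: fraction of offered packets never received.
        return (sent - received) / sent

    def pdv(one_way_delays_ms):
        # PDV in the RFC 5481 sense: each delay minus the minimum delay.
        d_min = min(one_way_delays_ms)
        return [d - d_min for d in one_way_delays_ms]

    delays = [1.20, 1.25, 1.21, 2.10, 1.22]   # illustrative delays in ms
    print(loss_ratio(1000, 998))               # 0.002
    print(max(pdv(delays)))                    # worst-case PDV, about 0.9 ms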
A
When you're seeking a goal, like the binary search we talked about, then you're going to run a series of trials within a test. If you repeat tests, then we have sets of tests; so that's the set hierarchy, which wasn't in your list, Rafael. And then we have a method above that; the method changes significant parameters for the lower levels of the hierarchy to examine, like frame size and protocols, encapsulations, things of that nature.
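
A schematic of that trial/test/set/method hierarchy, with the actual measurement stubbed out; this is only an illustration of the structure, not the normative procedure text:

    # Sketch of the testing hierarchy: trial -> test -> set -> method.
    # run_trial() is a stub standing in for one real measurement of the
    # device under test; everything else shows only the structure.
    def run_trial(frame_size, offered_load_pct):
        # Pretend the device loses packets above 60% offered load.
        return {"loss": 0.0 if offered_load_pct <= 60 else 0.01}

    def run_test(frame_size, lo=0.0, hi=100.0, resolution=0.5):
        # One test: a series of trials driven by a binary search for the
        # highest offered load with zero loss (a throughput-style goal).
        while hi - lo > resolution:
            mid = (lo + hi) / 2
            if run_trial(frame_size, mid)["loss"] == 0.0:
                lo = mid
            else:
                hi = mid
        return lo

    def run_set(frame_size, repetitions=3):
        # One set: the same test repeated under identical conditions.
        return [run_test(frame_size) for _ in range(repetitions)]

    def run_method(frame_sizes=(64, 512, 1518)):
        # The method varies significant parameters (here, frame size)
        # and drives the lower levels of the hierarchy for each value.
        return {size: run_set(size) for size in frame_sizes}

    print(run_method())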
A
So that's the hierarchy that we're working with in the methods of measurement. And, what I should have said first, but it's worthwhile saying now: we started out this work in ETSI with a survey of the current published benchmarking results, the test campaigns where people had learned a lot of things, and we put down the key learnings and talked about how we would address them in the document, and we've been working through all of these sections.
A
Looking for opportunities to address these issues as they've come up; so that's sort of the way this has worked. So, among the known issues and specification gaps are our search algorithms, and, you know, here again, this is on my mind, so I was talking about it before, but we have to do some better,
A
little better specification of this after we've talked about it some more. And we also need some sort of automated way to monitor the infrastructure.
A
This is currently done during benchmarking, for example, in OPNFV right now: we've got one of the daemons, collectd, that runs, and it can collect CPU utilization and some of the other platform metrics while the benchmarking is taking place. Of course, you then have to test to be sure that your platform collection isn't influencing the results.
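
A small sketch of that calibration step; the numbers and the collector toggle are hypothetical, and a real version would start and stop something like collectd around the run:

    # Sketch: check whether platform monitoring perturbs the benchmark
    # by running the same benchmark with and without collection enabled.
    def run_benchmark(monitoring_enabled):
        # Stub for a full benchmark run; a real version would start and
        # stop the platform collector (e.g. collectd) around the run.
        return 9.74 if monitoring_enabled else 9.80  # Gbit/s, illustrative

    baseline = run_benchmark(monitoring_enabled=False)
    monitored = run_benchmark(monitoring_enabled=True)
    overhead_pct = 100.0 * (baseline - monitored) / baseline
    print("monitoring overhead: %.2f%%" % overhead_pct)
    # A non-negligible difference means the collector itself is
    # influencing the results, and the two kinds of runs need to be
    # reported apart.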
A
So you have to rerun it without that; that's a form of calibration. And here's one of the tool gaps: we'd love to have automated collection software for the software and hardware configuration in all the meaningful dimensions. If we could pull that out of a platform and then make intelligent comparisons with that same information from another platform, then we could easily see the differences and what might be significant to explain different results when we see them. And so that's kind of a Holy Grail that we need to fulfill.
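
In the spirit of that gap, a small sketch of snapshotting a few configuration dimensions and diffing two platforms' snapshots; the dimensions collected here are just examples of what automated collection might cover:

    # Sketch: snapshot a few platform configuration dimensions and diff
    # two snapshots to spot what might explain differing results.
    import os
    import platform

    def config_snapshot():
        # Real collection would cover many more dimensions (BIOS, NIC
        # firmware, hypervisor, kernel tunables, and so on).
        return {
            "kernel": platform.release(),
            "machine": platform.machine(),
            "python": platform.python_version(),
            "cpu_count": os.cpu_count(),
        }

    def diff_snapshots(a, b):
        # Report every dimension where the two platforms disagree.
        return {key: (a.get(key), b.get(key))
                for key in sorted(set(a) | set(b))
                if a.get(key) != b.get(key)}

    local = config_snapshot()
    other = dict(local, kernel="4.15.0-20-generic")  # pretend remote data
    print(diff_snapshots(local, other))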
A
But we've identified that in this draft. And this picture, that's impossible to see, is a flow diagram that basically talks about the trial procedure over here, which feeds into the test, and feeds into the sets and the methods and so forth. So that's basically what's going on with this draft. That's a really quick look, but what I'm encouraging people in our community to do is to read this through. And also, let's not take up any overlapping work with this effort at the moment.
A
A
So the last item on the agenda here is... let me get rid of this one. Or, it's not the agenda; where's our agenda? Oh, way back here. Okay, the last item on the agenda is any other business. Any other business? Anybody thinking about doing some work in this world that they would like to tell us about for a few moments?
A
Thank you, all right. Well, I thank all our remote participants while they can still hear us. I thank you all for participating today, and I'm declaring this session closed, which means now we can talk amongst ourselves about the things that we were afraid to come to the microphone to talk about. Thanks, everybody; we'll see you in Montreal and on the list. Very good. Oh well, let me put the times in real quick, which is real easy to do. Thank you.
A
I always like to, thanks, I always like to... let's see, it says this was 9:32, and it's now, okay, 11:56. There we go, and it's 57 now. Thank you, Warren. The red button, the red button does not cause anything to explode. I used it when we had the pac-man show up in the queue over here: that was Sudhin asking to make a comment, and I had to press the button to allow him onto the microphone.