From YouTube: IETF100-BMWG-20171116-1550
Description
BMWG meeting session at IETF100
2017/11/16 1550
https://datatracker.ietf.org/meeting/100/proceedings/
A: So, we're a few minutes late starting. I had to install an HDMI adapter and a new driver and reset my whole system, so everything I queued up to show you I'm now going to have to open during the session. Isn't that fun? So, I'm Al Morton, co-chair of the Benchmarking Methodology Working Group, and this is Sarah.
A: Welcome to our session at IETF 100. We were overlooked in the mentioning last night, but this group is as old, charter-wise, as the DHC (Dynamic Host Configuration) working group. I think they've met one more time than BMWG, but they were both chartered at the same time. So we have that honor, and that's why I didn't want to miss this one for sure.
A: One other participant, Mister Ding, I think it is; it's a little hard to see, for me at least, so we'll be watching for participants to queue up. In any case, this is the agenda. We'll quickly do working group status and talk about our charter and milestones. We've got, actually, two drafts on SDN controller performance; they're in working group last call, finishing up. And we have one draft that we've pseudo-adopted on EVPN and PBB-EVPN benchmarking.
A: Then, after looking at these new proposals and reviewing the old ones briefly, we'll look at the rechartering text and consider what we might want to modify additionally, and then we may look at a few milestones, if we've got the wherewithal to do that. And I actually think Warren, our AD, will join us today. Sarah, you said he had another meeting, but I think that other meeting was cancelled; I think it was DNSSEC, yeah, so even so.
A: Seeing none, that's good, sir! Please grab the blue sheet there and sign in. You'll see I'm recruiting blue sheet signers, because we're late on Thursday afternoon, and that usually affects attendance. The only thing that would be worse would be to be after lunch tomorrow, I think. In any case, I'm sorry, good question: Sarah asks how many newcomers we have in the room. Please raise your hands.
A: Wow, five, very good. So, what I always say when newcomers join us, and you're very welcome to join us: this is a very easy group to join. If you have a testing background, you'll fit right in here. And we have a supplemental web page, which Sarah was nice enough to restore to the interwebs, which in a few paragraphs tells you how to get involved here very quickly. That's on our datatracker charter page.
C: You guys will see at the end we're talking about proposals and the rechartering discussion. What that really means is that we're looking at taking on new work. You guys are joining at a perfect time, because if there's something you're interested in and you wanted to bring it in, now is a great time. I realize you're new, and maybe on a Thursday you're four days into your first IETF.
A: Newcomers or not, join us. Please grab the blue sheet and pass it back there. Perfect, thank you so much. Okay, so, other than this, any bashing of the agenda? Seeing none, let's move on. All right, so here's our quick working group status. Folks, I'll bet this thing is supposed to be turned on.
A: They're supposed to be monitoring audio to pick this up; I mean, we could do that too, but I can't. So here's the quick working group status. As I said, working group last call is just about to end for the SDN controller drafts, and we're tracking many proposals. Here's a list of some of the proposals, which I just talked about in the agenda, but we've got more status.
A: Some really good news here: look at the list of four RFCs that we've completed in the last interim period. I mean, the reason we're rechartering is fairly obvious; we've actually been really productive in the last couple of years or so. And believe me, this is a group that works long and hard on its RFCs, so to push out four like this is pretty amazing. I can't remember this ever happening, not in a long time anyway.
A: So, discussion on the agenda: we're going to be talking about the charter, we mentioned that. Oh, and here's the link to the supplementary BMWG page; I knew it was in here someplace. This is on, you know, encrypted.net, and Sarah was kind enough to provide this after one of the cable providers discontinued my residential web pages.
A: When our drafts are reviewed by other areas, it's a good chance to mention our scope and charter. Here, we do all of our testing in an isolated test environment. The packets that we use, the frames that we use, all have addresses in an address space that's been assigned to the benchmarking methodology work, and what that means is that if you see these packets on the live network, you can discard them. We don't want our packets to get out onto the live network, and we prefer the testing to be isolated.
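For context, the block set aside for this purpose is 198.18.0.0/15 (RFC 2544 Appendix C, catalogued in RFC 6890). A minimal sketch of the kind of live-network filter Al describes, in Python; the function name is just illustrative:

```python
import ipaddress

# 198.18.0.0/15 is the block assigned for network device benchmarking
# (RFC 2544 Appendix C / RFC 6890); it should never appear on a
# production network, so stray test packets can simply be dropped.
BENCHMARK_NET = ipaddress.ip_network("198.18.0.0/15")

def is_benchmark_packet(src: str, dst: str) -> bool:
    """True if either address falls inside the benchmarking block."""
    return (ipaddress.ip_address(src) in BENCHMARK_NET
            or ipaddress.ip_address(dst) in BENCHMARK_NET)

print(is_benchmark_packet("198.19.255.1", "10.0.0.1"))  # True: discard it
print(is_benchmark_packet("192.0.2.1", "10.0.0.1"))     # False: normal traffic
```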
A: So that's one way to try to ensure that kind of isolation: we have our own address space and so forth. But the main point I'm making here is that we want the security folks to understand this. If they pick our draft up cold and they haven't read our charter, they may look at it and scream: what do you mean, look at all this traffic you're sending into the network, that's got to be some kind of problem.
A: So we've adopted, in some way, shape or form, this EVPN benchmarking proposal; we've got that all in green there, and you can see the kinds of things that we evaluate: whether there's a proposal written up for it; whether it's in the scope of the charter (we make the call on that ourselves and then get feedback from our AD); whether there's a draft that supports the proposal; whether we've seen significant support at meetings and on the list; and whether there are any dependencies.
A: Everything here we have covered, and we've had the adoption call on the list, and I believe that was successful. So the last thing we need there is a version of it that actually takes on the working group file name, but that's something we'll be working toward, okay. There are other proposals here that I'll just briefly mention, like this virtual benchmarking as a service.
A: On service chaining, we did see something in July at the last meeting. And for virtualized network benchmarking, which sort of went on hiatus for a while and then came back, we've seen a new draft of that in July, but it continues to have the issues that have been raised on the list and at meetings; they really haven't solved the problems just by keeping the draft up to date, so that's kind of a liability there. I'll mention that we've got the new proposal from Shawn Woo.
C: Okay, so as Al mentioned, the SDN controller draft is in, hopefully, its last working group last call; comments welcome. I saw feedback from you, I think yesterday, where you caught a snafu on the editing side, a cut-and-paste error, and then an additional comment that I need to work through with my co-author, so we'll do that and revise and resubmit.
C: Alrighty, then we move on to the EVPN one, yeah. So the author there is Sudhin Jacob; he's been coming to the IETF meetings, he's based out of India and couldn't make it to this meeting this time. I don't see him logged into the Meetecho either. He brought the draft in last meeting, and there was a fair amount of, I would say, spirited discussion in the room about the technical and the editorial content of the draft. We do move too quickly, I would say.
C: We almost always ask that we have discussions on the list, but because there's been a huge outreach, I think, with the IETF India community, and to make sure that folks are comfortable, I did discuss with Sudhin offline and pointed him back to the minutes from IETF 99. Some of the content was technical; for example, with his test bed it wasn't very clear what he was testing at all, and I think he was verbally trying to explain it.
C: I will help him with the English, or just the editing; sometimes doing your first draft can be daunting, so we are here to help. We've all been in this position. Well, I don't know if you ever were; I feel like you might have come in knowing how to do it all, but I certainly did not know how to do it all when I first joined, and it was a huge help. So we're here to help, especially when you're new and doing it the first time; we realize it can be a challenge.
C: So from here, I'm expecting, and I will ping Sudhin, if you want to assign me that action after IETF 100, to make sure that he understands that we've adopted the draft, that we do see the value, and that he shouldn't be discouraged by the amount of feedback that he got, because it was a fair amount of feedback, but I think the draft will be much better for it. And then once that comes in, I'll ask him to post it explicitly on the BMWG list, if you're interested.
A: Great, thank you. Alright, one further point on this, Sarah, and that is, I seem to remember that Sudhin was looking for additional editor help; I think you've volunteered to do that. If there's anyone else who has EVPN expertise, we should also consider maybe another author as well, to provide the technical background that might help out here. Yeah.
A: Yeah, we'll work this at a pace that everybody is comfortable with. Very good, okay. So it's fairly easy to find our drafts; this is the tools page, by the way, which has been around a lot longer than the datatracker, and I'm kind of more comfortable using this, but there is, of course, the datatracker version of this, and you can go there. Alright, so let's see, where are we in the agenda? All right, we're racing along here.
F: Right, thank you. So, the network services layer abstract model is really a reference model; it provides a common skeleton of how we would define and document the benchmarking methodology based on the YANG data structure. I have read quite a few of our past RFCs and drafts from BMWG, and the first question is: do we need another model, or do we need another methodology in general, since we already have many existing standards out there?
F: So let's take a step back and consider our traditional methodology for benchmarking. As I think I mentioned before, we are typically looking at black-box-level testing, and we're looking for the external characteristics of a device or a system. There are a few challenges, as the networking technologies have evolved over the years. For example, we're looking at black-box network testing instead of testing focused on a single device or a set of devices; in many cases, what the user will be presented with is a private virtual network.
F: What YANG models have typically been doing is trying to define almost every aspect of a technology or protocol, so that the controller can use a vendor-agnostic way to configure a device using a set of very detailed commands. Here, the abstract model is trying to use a similar set of the same syntax while only retaining the high level, which is what's relevant from a service-outcome perspective. That will leave a lot of details out and only focus on the external characteristics.
F: Another kind of hot buzzword is intent-based networking; I'm not sure how many of you are already familiar with intent-based networking. Traditionally, a user interacts with a device via a set of configuration, and the YANG model is just a vendor-agnostic way to do it. But we have evolved to using the SDN controller, which will only require the user to provide a very small subset, just the necessary intention, based on a lot of machine learning and telemetry data.
F: Another challenge is scale and dimension. Single-dimension benchmarking is often the primary focus of testing, and typically it works great when it stays in a single dimension, but once we add the additional dimensions, as with the multi-service edge in many different cases, that's where we start to see a problem.
F: So we want the ability to abstract the multi-dimensional network into a simpler profile model that we can really test out in the lab. And finally, the convergence of services has really blurred the boundaries between the different layers today. If we want to send Layer 3 traffic, we do not necessarily require a Layer 3 interconnect or a Layer 3-based product; it comes in many different forms, for example EVPN with Type 5 routes.
F: So with this service model, we're trying to separate the underlying technology from the service profile, which is basically a set of the customer requirements from a business perspective. This is why we are trying to propose a simplified YANG model to model network services for benchmarking purposes.
F: So, a little bit of a busy slide here; I should have put a box in the middle. There are four key components in this model. Some are old, existing ones that we have been using for decades; some of the top components are relatively new. Let's start from the bottom, looking at the middle column. The bottom is the inventory. The inventory refers to a set of nodes, hardware and software resources, and a bunch of connections, wired or wireless; together they form a physical topology.
F: That is the underlying hardware layer that we are dealing with. Moving up a bit, I separated the logical topology from the physical topology, the reason being that on the same set of physical topology, in a lot of benchmark testing, we can easily create tunnels to form a completely different view of the topology, and you have protocol adjacencies where the devices are far apart from a physical perspective but are really next to each other logically.
F: The service provisioning, which is the dark blue bars, is really the static configuration of a test bed, if you're looking at the infrastructure. This focuses on IGP, BGP, the MPLS transport, and potentially the bridging; those define a set of communication protocols between the networking devices and provide the baseline connectivity. And finally, on the top on the left side is the service layer; this is the set of services ultimately received, from the user perspective.
F: But what's new here is using a set of YANG-based data models to describe or document those key elements and to provide the baseline support when we try to zoom in on a particular service benchmark. So, the next boxes, the ones in dark green: that's the service profile. A lot of times, in a YANG model we typically define the configuration, which is static; the service profile refers more to a set of the dynamic elements of the network.
F: What's new in addition is that two more subcategories were introduced. One is that we have to have a certain set of SLAs, describing the resiliency of a network using a set of timers: how quickly will the network respond to a failure scenario? And finally, we need a set of events; we'll need to define common events like node or link failures, control protocol churn, and some mobility.
F: The mobility could be a station moving from one location to another, or a VM motion, where we quickly move a virtual machine from one data center to another. So those are the four categories that we are looking at to define the service profile. This profile is really owned by a customer, whereas the service provisioning is owned by the provider, an ISP or a cloud provider. And finally, on the top, is the resource utilization; the resources describe how much is used.
F: Basically, it's a health indicator of the network: given a particular provisioning, which is a set of static configuration, and a particular set of load, which is described in the service profile, how busy is the network? Does it have enough capacity to handle additional load? Those kinds of things.
F: Here's the set of design principles that we are trying to apply in this service model. One is that we're trying to leverage the existing models; I took a lot from the tools and efforts already in the datatracker. There are tons of efforts, ongoing or past: many YANG-related RFCs, and more than a hundred Internet-Drafts being actively worked on. So we did not want to reinvent the wheel.
F: We want to take the existing work that has been done in the various protocol areas, but we want to make sure we only provide a sufficient set of keywords for this abstract model, not an exhaustive one. We will leave a lot of attributes out, for example the IP addressing and the like; those are necessary for real production network provisioning, but not needed here. We are trying to look at the aggregated view from a network-wide, high level, without unnecessarily zooming down to the device level.
F: An additional set of rules we had to introduce is the distribution model. We're not trying to mirror exactly how the network is operating; instead, we're trying to look for an abstract distribution model that uses very few parameters to model an existing set of routing entries, or a MAC database, or even a traffic profile. The service model also needs to provide a view from different perspectives: the customer, the provider, and the manufacturer each have a different focus on how they need to benchmark.
F: For example, the customer is only looking at the network capability, not necessarily caring about the underlying technology that provides the service profile. Meaning: I just want to move the traffic from place A to place B. They don't necessarily care about which vendor's equipment or what technology is used, EVPN or VPLS; they probably don't care whether it's MVPN or just vanilla IP multicast.
F: Those are just choices from the provider perspective; they don't necessarily matter to the customer. So the service model is really focusing on the external characteristics, which come from the set of service profiles. And finally, last but not least, we want to make sure those service models can be correlated.
F: Again, there's a lot of content here, and I'm not sure if you can see it clearly from the back. This is not a complete profile; it's a subset of the model, and it's broken into four different categories. On the top, the first one is the service definition; here is an example of Layer 2 services. In the middle are the traffic profiles.
F: The traffic profiles you can really consider as the service profile: what load the customer is going to put on, what kind of load will be offered to the network and handled by the underlying service. On the right side is the resource, because when we do benchmarking, we have to look at not only the final throughput but also the health indication of the network; and again, this table means it's not pure black-box testing.
F: We will have to look at the performance indication; we're looking at the KPIs of the devices. And on the bottom is the resiliency. The resiliency really defines a set of timers defining how quickly the network or the device will recover function, for some of the common events: for example, a failover in a multi-homing scenario, some node or link failures, or ISSU, the in-service software upgrade, and how much time the device is allowed to recover without breaking the control sessions.
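The four profile categories Shawn walks through (service definition, traffic profile, resource, resiliency) can be sketched as a plain data skeleton. This is a hypothetical illustration, not a fragment of any published YANG module; every field name here is invented:

```python
# Hypothetical skeleton of the four profile categories described above.
# All names are invented for illustration; this is not a published module.
service_profile = {
    "service-definition": {      # e.g. a Layer 2 service
        "type": "l2vpn",
        "sites": 100,
    },
    "traffic-profile": {         # the load the customer offers the network
        "frame-sizes": [64, 128, 512, 1518],
        "offered-load-percent": 100,
    },
    "resource": {                # health indicators, filled from measurement
        "cpu-utilization-percent": None,
        "memory-footprint-mb": None,
    },
    "resiliency": {              # timers and events
        "events": ["link-failure", "node-failure", "issu"],
        "max-recovery-ms": 50,
    },
}

# A tester fills in concrete values and records them, which is what
# makes a run repeatable (see the repeatability exchange later on).
print(sorted(service_profile))
```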
F: So, with a small set of data, we expect the user or the tester to take the high-level parameters and generate the configuration, depending on the underlying device type. What we always care about is getting a set of validations from both the resource side, basically a set of KPIs, and the performance side, in terms of the forwarding or the scale of the control plane it can handle, as well as making sure they have demonstrated enough resiliency and meet those thresholds for the various types of events.
F: So there's a lot of work underway to make this really useful and to make it more complete, so that it can be used to profile various types of networks. I think the decision-making is really hard when we try to distill the large number of existing YANG data models into a very small subset, keeping only what's necessary. That's the hardest part.
A: A question for Shawn, from Al. It looks like most of the things we're looking at here, the elements of the YANG model so to speak, many of them are provisioning or configuration values. For example, we've got the traffic profile here, which would both be input to the device under test and also input to the traffic generator, to use these specific frame sizes.
A: Good. And it may be worthwhile, just as quick feedback that occurs to me here: we've got these, what we call sort of secondary or auxiliary metrics, like the CPU utilization and the memory footprint and so forth.
A: Those are good to know, but we always set them aside from the benchmarks that we try to measure. The benchmarks, of course, like throughput, are the primary things that we're after; CPU utilization at the measured throughput is a good thing to know, and probably the kind of information that we'll particularly want to supplement our benchmarks with, so that we've got a better chance of improving how our benchmarks are used in network engineering in the future. Yeah.
F: Absolutely. I think there are different levels of importance in the resources section, and also this is just showing one snapshot of the steady state. What would probably be interesting is the CPU utilization in the event of those failure scenarios, and how the network will respond, because in, say, intent-based networking, that utilization will matter; it will dynamically change the configuration, back to the service provisioning, meaning it will dynamically move the components of a certain function onto different elements within the network.
F: So here are a few use cases that I can think of for how we can use this, if we spend time developing this abstract model. First of all, to have a formal way to document the benchmarking methodology and the results in a more interchangeable data format, once a library of service models for network deployment scenarios has been developed or captured.
F: So here's how I can see it being used. First of all is a mapping between the real production network and the abstract model: we can have a set of tools to discover the existing network utilization and the existing network baseline, and convert that configuration, probably a giant set of data, into a simplified abstract model.
F: Once we have the model, then we can feed it back into the test lab for benchmarking, for the design choices, and to do the what-if scenarios. Finally, we have the test results fed back into the model, to document and store the data for further analysis, and the results from those analyses can be further fed back into the network to improve the design, to optimize the resource assignment, those kinds of things.
F: I know there are a lot of data sheets out there; typically they follow the benchmarking RFCs, and we already have a set of those. But I think, as we move on, we can define that element information in the YANG data model and provide a more interchangeable format, so that a user can use the same set of syntax to generate the report they like.
A: Thank you, Shawn. That was a great and very interesting presentation. It was our first exposure to YANG models here in the Benchmarking Methodology Working Group, and I think now we can probably say that the NETCONF-ization of the IETF is complete. That's the term that people have thrown around for the way everybody in the IETF seems to be working on a YANG model somewhere.
G: Pedro Martinez, from [affiliation unclear]. My question is: how do you plan to deal with data in the actual benchmarking? You have some kind of formal model; you must have some kind of algorithm, and also some data to input into the protocol or whatever, and then some reference output to compare against. Where do you plan to find this data, and in that sense, how do you think it would be done with your model?
F: Yeah, thanks for the question, by the way. You're definitely right; I'll point out the difference between the model itself, the data to be captured, and how the data can be used and analyzed for further improvement of the network or the benchmarking. So, if I understand your question correctly...
F: So the question is where this data is coming from to be fed into the test lab, so that the test lab can do their tests, right? Yeah, okay. So this is trying to use a subset of the YANG model. Take, for example, Layer 3 VPN. The Layer 3 VPN model has so many arguments that it completely defines a Layer 3 VPN offering; what this model does is extract a subset of the key elements.
F: For example, we only tell the test lab: I need a hundred sites, say, with a total of a hundred thousand routes. We don't necessarily tell the test lab which router has how many routes, or which route has how many VPN instances. The data you provide is an aggregated view, so that a test generator will use this high-level data, which basically describes the customer's requirements for the network, and then distribute it onto the individual devices based on the definition.
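The aggregation idea can be sketched like this: given only the aggregate requirement ("100 sites, 100,000 routes in total"), a test tool picks its own distribution across sites. A uniform split is just one possible distribution model, and `distribute` is a made-up helper:

```python
# Toy illustration of expanding an aggregate requirement ("100 sites,
# 100,000 routes in total") into per-site values for a test generator.
# A uniform split is just one possible distribution model.
def distribute(total_routes, sites):
    base, extra = divmod(total_routes, sites)
    # The first `extra` sites absorb the remainder so the total is exact.
    return [base + 1 if i < extra else base for i in range(sites)]

per_site = distribute(100_000, 100)
print(len(per_site), sum(per_site), per_site[0])  # 100 100000 1000
```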
F: If we go back one slide, that example can probably explain it better. Okay, so in this particular case, this model defines what key elements are needed to describe a network, a sufficient number of elements, and the actual value is not set by this data. Instead, it should be set by the test engineer, who says: hey, this is a target; I have a bigger router, I need to increase the number for the global MAC count.
C: If it doesn't define the actual value... I think one of the sort of core tenets that we have in BMWG, if I remember right, is repeatability. If it doesn't define the actual value, then if I run the same test ten times in a row, for example, how do I know that I've run the same test ten times in a row, or how do I know I've run it with the same input values each time, so that I can compare the results apples to apples? Do you see what I mean?
F: Well, yes, but I think, for the value: when we report a test, say this is what describes the profile; the profile will have a list of the elements and the values themselves. But our proposal here is just to define the skeleton of the profile. The tester who repeats the test will have used the same set of parameters, the same setup, the same numbers passed to the test generator, so out of the ten attempts...
F: ...they will get the same numbers. So I think the question is: does a profile contain the values? Yes, it does. Does our proposal define the values? It doesn't; our proposal only defines the key elements. Certainly we can provide the values, but the values will change over time, depending on the capacity the underlying network can offer. So I don't think I answered your question... maybe so.
C: The profile itself is in essence the methodology; you just happen to provide, in the example, sample test data that the testers filled in. But I think what you're saying is, it would be up to me to take the skeleton without the values and then, as the test case owner or the engineer, fill in the values and record exactly what I tested, and then, of course, I could repeat with those same values ten times in a row, for example. That's...
A: Good clarification, yeah. So, there's no one else at the microphone.
A: Well, I think we've got a decent idea of what you plan here, Shawn, and I think it will be... I've got your next-steps slide up.
A: So here we go. So, the draft was prepared and circulated on the list; I made a little announcement of it. And as a little background for this proposal: RFC 2544 specifies this measurement, the back-to-back frame benchmark, that's what it's called, but it requires a little more, really. "Back-to-back frames" is more a description of the methodology than of what's actually being measured.
A: It's defined in RFC 1242 as the longest burst of frames a device under test can process without loss, and they go on to explain that the tests of this parameter are intended to determine the extent of data buffering in the device. So in RFC 2544 today there is, in fact, a very concisely expressed objective, a very concise procedure, and also very concise reporting for this benchmark.
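The RFC 2544 procedure is essentially a search for the longest loss-free burst. A toy sketch of that search as a binary search, where `send_burst` is a stand-in for real test equipment and returns True when every frame of the burst is forwarded:

```python
# Toy sketch of the RFC 2544 back-to-back search: find the longest burst
# of back-to-back frames forwarded without loss. `send_burst` stands in
# for real test equipment; it returns True when all frames come back.
def longest_lossless_burst(send_burst, lo=1, hi=10_000_000):
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if send_burst(mid):
            best, lo = mid, mid + 1   # no loss: try a longer burst
        else:
            hi = mid - 1              # loss: back off
    return best

# A pretend DUT whose buffer overflows beyond 123,456 frames:
print(longest_lossless_burst(lambda n: n <= 123_456))  # 123456
```

Real runs average several such trials, since (as discussed below) results can vary between attempts.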
A: There are links here to references. The reason I included the links was because in the printout of the RFC, something went wrong with the formatting and the URLs went away, so it wasn't much help. It looked to me like they were in a place where they would be displayed, but apparently you've got to just append them in the title or something, and that makes them show up in the reference.
A: So I've got that straightened out here, at least on a temporary basis. And as I said, we circulated and discussed our results from the vSwitch performance benchmarking at some length at the last meeting.
A: I'll talk about that a little bit now, but you're welcome to take a look at the slides, which are at this first link, and actually you can see even more detail spelled out in this wiki page, where we were working on lots of traffic generator testing and comparisons. But let's look at the back-to-back frame testing in particular.
A: When I ran the tests... and as I said, this is a test where you've got the traffic generator sending traffic to a device under test, and it passes through the device under test, so it's either routed or forwarded or encapsulated or transcoded, or whatever else the device under test does to the traffic, it does it, and then it passes that traffic back out the other side for reception.
A: It had been noted in the past that the back-to-back frame benchmark had some problems with consistency, but we were able to test this over a series of months, and actually over two different hardware platforms, and we saw very consistent results between the two for fixed frame sizes. It was somewhat variable for a few of them, but we came up with an explanation for that. The sort of surprising results were for large frame sizes.
A: You saw the range of frame sizes that Shawn just showed us, all the way up to fairly high values: 64 bytes all the way up to, you know, approximately 1,500-byte frames. The burst length reported for large frame sizes was unexpectedly long; in other words, it implied on the order of 30 seconds of storage in the device under test. We knew that wasn't true, and so it required us to investigate that and the test equipment.
A
What we were using didn't report any kind of error when it said, in effect, "I just produced a burst of 30 packets and that's as far as I'm going to go; that's the longest burst this device could handle." It turns out that was erroneous data the test generator was reporting; I'll explain that a little bit later. The calculation of the extent of buffer time in the device under test helped explain the results with all frame sizes.
A
It turns out that for some frame sizes, like the highest frame sizes, the device under test was able to process them at the full back-to-back line rate. So, in fact, while the processing is bleeding off the buffer as fast as the frames arrive, you don't get the buffering that you're trying to measure in this test. Now, that's a problem if you don't report it that way.
A
Actually, if they're paired up with the back-to-back frame tests, they can also be used to reduce the number of frame sizes tested, because only when we have a frame size whose line rate is faster than the device under test can handle do we generate a queue length and eventually a tail drop or a queue overflow, where the burst of packets that was submitted causes some loss, and then we can conclude that the burst was too long.
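The condition just described can be sketched in a few lines. This is a rough Python illustration (the function names are mine, not anything from the draft): a back-to-back burst only builds a queue in the device under test when the maximum theoretical frame rate for that frame size exceeds the rate the device can actually forward.

```python
def max_theoretical_frame_rate(link_bps, frame_size_bytes):
    # On Ethernet, every frame on the wire carries 20 extra bytes:
    # 7-byte preamble, 1-byte start-of-frame delimiter, 12-byte gap.
    return link_bps / ((frame_size_bytes + 20) * 8)

def burst_builds_queue(link_bps, frame_size_bytes, dut_rate_fps):
    # A queue (and eventually a tail drop) only forms when frames
    # arrive faster than the device under test can forward them.
    return max_theoretical_frame_rate(link_bps, frame_size_bytes) > dut_rate_fps
```

For example, 64-byte frames on a 10 Gb/s link arrive at about 14.88 million frames per second, so a vSwitch forwarding 11 million frames per second builds a queue, while at 1518 bytes the same link offers only about 0.81 million frames per second, and a device keeping up at that rate never buffers.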
A
So on this graph, and this comes from the OPNFV testing, we ran tests at 64-byte frames with Open vSwitch, and we saw, I guess it was about 24 million frames per second. With VPP it was a little bit less in this particular test, but they were both less than the maximum theoretical throughput. So this is a frame size that we can test well, to see what the device under test is going to be doing; well, it's certainly doing some buffering.
A
It can't keep up; the device under test can't keep up with the maximum throughput at 64 bytes. At 128 it's a little bit ambiguous: it's actually very close to the maximum theoretical throughput, and we'll actually see that in the results. But then at 256 or 512 we get there in both cases, both for VPP and OVS.
A
The trials seek the longest burst length that can be sent with zero loss, measured as the same number of frames sent as received, and the test outcome every time is the burst length; we only count the ones where we see zero loss. So the tests are repeated N times, after you have sought and found the largest burst that you can send without loss.
A
So that's a lot more detail than is currently in RFC 2544, but I think that's the set of things that we're looking for now. So, for clarifications, I grabbed this right out of the draft: for each frame size, we're going to calculate the following summary statistics for the back-to-back frames over the N tests. The average is the benchmark, but we're also going to report the maximum burst length that we saw in the N trials.
A
The minimum and the standard deviation, and further, we're going to calculate the implied device under test buffer time and the corrected device under test buffer time, as follows. The implied buffer time is the average number of back-to-back frames divided by the theoretical frame rate. So what we've got here is a calculation of the time that is represented by the burst, and that time is...
A
...the time while the burst was running. So, in fact, you can think of it as a buffer: in the implied buffer time, it's the complete burst, and that buffer time is represented by all the packets sent in the burst. In the corrected buffer time, we use the maximum theoretical throughput to figure out how many packets were bled off the buffer before one packet overflowed, and this gives us an estimate of the actual buffering time inside the device.
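A rough sketch of that arithmetic follows (illustrative only; the names are mine, and the correction assumes the device drains the buffer at its measured throughput for the whole duration of a burst arriving at the maximum theoretical rate):

```python
def implied_buffer_time(avg_b2b_frames, max_theoretical_fps):
    # Time represented by the whole burst, as if the device under
    # test drained nothing while the burst arrived at line rate.
    return avg_b2b_frames / max_theoretical_fps

def corrected_buffer_time(avg_b2b_frames, max_theoretical_fps, measured_fps):
    # While the burst arrives, the device bleeds frames off at its
    # measured throughput, so only the excess occupies real buffer.
    implied = implied_buffer_time(avg_b2b_frames, max_theoretical_fps)
    return implied * (max_theoretical_fps - measured_fps) / max_theoretical_fps
```

For instance, an average burst of 26,000 frames against a maximum theoretical rate of roughly 14.88 million frames per second implies a couple of milliseconds, but with the device forwarding close to line rate the corrected time shrinks to tens of microseconds.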
A
So that's a good point, a problem; I'll probably want to boost that up so that it matches more the kind of millions of packets we've seen in typical testing. But so we've got the average at 26,000; we've got the min and the max and the standard deviation here, reported at 25 and 27 thousand and 20 packets; and then the corrected buffer time, which is, let's see here...
A
It looks like that's about... it's small, about 40 microseconds, so not very much. So we would typically report it in a table like this, and in addition we'd want to report the static and configuration parameters that are associated with the test. That's the number of repetitions, obviously, and the minimum step size that the tester uses to try to determine the burst length. In other words, it may not be...
A
It may not be incrementing the burst length by one packet; that might take a very long time. So that might be configured: in a test where we're trying to test millions of packets, we might have a step size of a hundred packets, let's say, something along those lines, and that would improve our ability to complete this test quickly. So a lot of this, other than the average...
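The search and reporting described above might look something like this sketch (hypothetical helper names; `send_burst(n)` stands in for a tester call that sends a burst of n frames and returns how many came back):

```python
import statistics

def longest_zero_loss_burst(send_burst, max_burst, step=100):
    # Walk the burst length upward in `step`-frame increments and keep
    # the longest burst for which every frame sent was also received.
    best = 0
    for burst in range(step, max_burst + 1, step):
        if send_burst(burst) == burst:   # zero loss at this length
            best = burst
        else:
            break                        # queue overflowed: burst too long
    return best

def back_to_back_benchmark(send_burst, max_burst, repetitions=50, step=100):
    # Repeat the search N times and report the summary statistics;
    # the average is the benchmark, the rest characterize variability.
    results = [longest_zero_loss_burst(send_burst, max_burst, step)
               for _ in range(repetitions)]
    return {"average": statistics.mean(results),
            "minimum": min(results),
            "maximum": max(results),
            "stdev": statistics.pstdev(results)}
```

A step of one frame would make each trial very slow, which is exactly why the step size belongs among the reported configuration parameters.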
A
So that'll give you a feeling for the... oh, I think the blue sheets are...
A
Thank you, Steve, and welcome to BMWG. So, next steps: as far as I can tell, no one's actually read and reviewed the draft yet, and that would be essential before we adopted it. I think, you know, I have to hold myself to the same standard that anyone else would as a participant bringing a proposal here. So once we get some readership and review and feedback, we could possibly consider... I guess I didn't ask the room: has anyone read the draft?
A
Well, I know, I know. So what we could do is agree to create a milestone for this work on the basis of this description today, that we think this is worthwhile. I've actually proposed one there, and then obviously, sometime in the future, after we've done some readership and review (but we'll hold off on asking that today), we'd ask the working group... There's this whole list of authors, not just me; actually my colleagues from OPNFV, the vSwitch performance project, may join me on this author list as well.
A
But I wrote this in like two days over the weekend, so I did get a chance to ask them whether they wanted to join it. But in any case, there's no guarantee, even if we take up a milestone, that this draft would be the one that satisfies it; other ideas are welcome. That's the way we play here. So, Sarah...
D
So when we do these kinds of tests, usually we are looking to test some particular component within the switch architecture, such as queuing, such as packet processing, such as buffering after processing or before processing. My takeaway from what you described is that, in a way, you try to check what the capacity is to buffer packets before loss events. Am I correct, or can you...?
A
Stay right there; you may have a follow-up question. What I anticipate... in my mind, and the way I described it, I modeled this as one buffer and one processing stage that bleeds off the buffer. The reality is that there are many more than one buffer, and there are many processing stages, but at least one of them is the bottleneck.
A
I've been thinking that the reason to have this correction factor, to actually try to assess the actual buffer time or space that's available, is that, well, one of the background things is that real traffic doesn't necessarily arrive smoothly or continuously; it arrives in bursts. And it may be valuable to know, when a device that's doing forwarding or routing pauses for a moment or two and the processing stops, how much real buffer is inside that we can count on to smooth out these bursts.
B
Please? Yeah, my name is Juan Gauri from PRI at tree, so I ask you... welcome.
C
B
So my question is about the previous slide. I mean, the proposal has so many metrics there, what, six or seven things; but is this currently a work area, right?
C
I would be very interested; in fact, I've been looking at potentially doing something down this path as well. So if you have that draft, as soon as you have that ready... With my chair hat on, I think we very much encourage that; it's very relevant to what we see in the industry, and how to benchmark that, I think, is a good thing. And then, with my chair hat off, just as a participant, I'm very excited to read that. So if you're going to take that on, thank you; we'll see you in London.
D
A
Very good, and be sure that you take a look at RFC 8204, which is where the Open Platform for NFV vSwitch project provided some considerations for doing exactly this work. I mean, that's the project that also contributed the test results that we just discussed, and so we've been, you know, we've been testing vSwitches, and now also the Cisco VPP, the virtual packet processor, there as well.
A
In fact, in general, RFC 8172 talks about considerations for benchmarking in this virtual space, and we expanded on that here for virtual switches in this one. And that's another RFC that was published really just before this interim period. So it's actually been five very recent ones that we've completed. Okay.
A
All right. So I've been asked to present this proposal on benchmarking next-generation firewalls by the proponents, who happen to be a couple of good guys who have worked with us here in the past, Carsten Rossenhoevel and Bala Balarajah, both from the European Advanced Networking Test Center, and they're currently working with a not-for-profit initiative called NetSecOPEN, formed to innovate network security test methodologies, and they've been looking at this for quite a while.
A
They want to strongly improve the applicability, reproducibility, and transparency of benchmarks for next-generation firewalls, but also intrusion detection and prevention systems and, oh, I guess it's unified threat management solutions. So it's very opportune that we have such an esteemed security-area colleague here with us as we're discussing this. Yes, yes, he has to wake up now. Yes.
A
Also, you know, the good timing with our charter, our rechartering efforts and so forth, and the fact that, you know, we have the attention of the industry here, having done this work before; RFC 3511 is our previous effort in this area. Well, they plan to submit their first draft sometime next week, and they hope to proceed quickly through November and December, and so, depending on how quickly we want to review this work, we may take up an interim meeting.
A
So we will investigate the possibility of an interim meeting; it would be a virtual meeting, to hear more from the NetSecOPEN folks. What they've got here is a table of contents, which mostly is composed of the scope. So it's focused on test methodology for network security device benchmarking, in terms of performance metrics; it describes the test methodology to obtain repeatable results independently, using different vendors' test equipment, by defining the full set of test configuration parameters. This document will allow users to reproduce network performance tests and compare measurements.
A
Benchmarking tests focus on a set of key performance indicators (we're all familiar with those) for devices such as firewalls, next-generation firewalls, and intrusion detection and prevention devices. You'll see here that they seem to go beyond the next-generation firewall scope, so that's something I would feed back; I mean, I think they should probably attack next-generation firewalls...
A
First, see how that goes, which is actually what they said in the message, and then worry about this other interesting stuff, like deep packet inspection devices, web application firewalls, and so forth. But I think, you know, we might consider adding a sentence to our charter to be sure that everyone knows that we're willing to take on devices of this ilk, if in fact the working group is... So let me ask the room, now that we've described this: is there...?
A
And when you look at the fact that RFC 3511 was actually one of the first ones I helped to finish up as a co-chair, and that's a long time ago, 2003, so yeah, a lot has been learned in the meantime. And if you remember, we had Mike Hamilton's content-aware work. Yes, yes! Well, he was the original proposer; that's why I always think of it...
A
...always think of him there. But if there's anything we learned from that work, we need to bring that forward as well; that's still relevant, of course, though it's been retired for a few years. And then the rest of this is fairly usual: the test setup, testbed calibration (that's a good thing, though), and reporting and benchmarking tests and so forth. And they want to define traffic mixes here, both the stateful and the stateless stuff.
C
It's one of the places where, at least for me, you know, RFC 2544 is great, but the average packet size on the internet has evolved from when the work started out, back in RFC 2544 times. So for them to at least take a stab at defining traffic mixes that make sense to them, so you can have an apples-to-apples comparison across vendors; isn't that what we all want? Yeah.
A
I've seen a couple of data sheets recently for devices in the next-generation firewall class, and they are basically testing whatever they want at the moment. So that's, I mean, that's the goal here: to get everybody testing the same thing, so that we can have the apples-to-apples comparison without the specsmanship. Yeah, that's our original goal. Okay, so any further questions about that?
A
Paul Emmerich from the Technical University of Munich talked about the MoonGen traffic generator, and also about some results that some of us have seen in the industry where you use a different traffic generator, and there are quite a few choices out there today: there are the hardware ones from Ixia and Spirent, and of course several software generators like MoonGen, and also TRex, which has its own platform but can be run on general-purpose hardware as well, and there are a few others; I won't try to mention them all.
A
But the idea that came to me as Paul was making his presentation, showing the stream generator characteristics, was that if we have ideal characteristics that we're trying to match, a measurement of the stream itself would be very valuable, and we could compare that with a tolerance template around, let's say, the inter-packet arrival times that have been generated. Compare that across, you know, a stream that's long enough to characterize a device under test, and then, for every generator that we're considering using in the various circumstances and so forth...
A
...take a look at how well it conforms to this template of tolerance, kind of as a calibration of the generators themselves. And I think that if we wanted to take up this work, we would need to add at least one sentence to our charter to say that we will also look into this calibration.
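A calibration check of that sort could be as simple as this sketch (illustrative; the template here is just a fixed tolerance around the nominal inter-packet gap, which is only one of several plausible template shapes):

```python
def template_conformance(gaps_seconds, nominal_gap, tolerance):
    # Fraction of measured inter-packet gaps that fall within
    # +/- tolerance of the nominal gap; 1.0 means full conformance.
    within = sum(1 for g in gaps_seconds if abs(g - nominal_gap) <= tolerance)
    return within / len(gaps_seconds)
```

A generator asked to emit one frame per microsecond could then be scored with, say, `template_conformance(measured_gaps, nominal_gap=1e-6, tolerance=50e-9)`, and the same score compared across hardware and software generators.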
E
Yeah, Paul Emmerich, TU Munich. So yes, obviously I've had experience with that since I gave the talk earlier today, and it's really difficult to measure. So, this is like: everything is moving to software packet generators now, and we are seeing that they are not accurate. But then, to validate them: validating a software packet generator with another software tool doesn't really work, because you get the same precision problems in your measurement device as with your generation device. So you then really need hardware to precisely calibrate it. Then...
E
The next question is: what is an important, or what's the right, metric, or what do I tolerate? What is even acceptable? Maybe some small bursts are acceptable; maybe they're even what we want. So I don't have all the answers; I just did some work there, and it was very, very challenging. And also, yeah...
C
Could I... then, I agree, I think it's hard. I do think sometimes when we take on a draft, we take it in and it doesn't necessarily have to publish. So if we get to the end of the exercise, maybe we decide this is too difficult to do, but at least we tried. For me, I think it's valid; I wouldn't mind. I had a class today and couldn't see your presentation, and I'm wondering if I could potentially convince you...
A
We can take a look at those, but yes, absolutely, I think if we can pursue this, then that would be great; I'd be very interested in following up on this as well. Also, yeah, great. And I think the minute anyone in the test generator world gets wind of us thinking about doing this, we'll suddenly have lots of collaboration.
E
Results here, that's really nice. So, when I did these tests another time, we used an FPGA, but that was, like... well, it's great hardware, but it can be very cumbersome to work with if you need to change something. But at least we built it ourselves, so we know it works. And...
A
If we have a... actually, there's no reason to wait all the way for London; you've got the slides, and if we have an interim meeting, this could be one of the topics that we take up. Sure, but you could still potentially get a trip to London out of this; that'd be good. So, okay, well, I'm glad I brought that one up, and it was well timed with Paul's earlier presentation.
A
I'm just going to... oh yeah, I've got it right here. All right, so I wanted to mention that we've got this charter that I shared on the list for discussion. Sarah and I worked on this a bit before it was republished, and this paragraph here: the scope of BMWG has been extended to develop methods for virtual network functions and their unique supporting infrastructure, such as SDN controllers and vSwitches. So where we had this as kind of a bullet item...
A
...before, for a specific area of work, we've actually completed the considerations draft, which was the first homework item there, and we're just about to complete the SDN controller stuff, and then go on and do some other things. So we've got all sorts of things mentioned here: platform capacity and performance characteristics; virtual routers, firewalls, signaling and control gateways, and other forms of gateways are included; benchmarks will foster comparison between physical and virtual network functions, and also cover unique features of network function virtualization systems, like migration or lifecycle operations, and so forth.
A
Okay, so we'll flesh out the wording of those sentences and circulate it on the list, and then, I guess, after we've had some time for review of it and some of the other documents, we'll propose milestones related to EVPN and whatever else we decide to take up out of the set of new proposals. We've heard about three brand-new proposals...
A
...today. We've got some other ones, which we've shown in the matrix; they're lurking around, and hopefully the proponents will be more active. And then we should have a good set of milestones that we can put together with our charter text, and go to the IESG and say: please let us continue as one of the longest-running working groups in the IETF. It shouldn't be too much of a problem. All right.
A
So if you have questions in general, or ideas about benchmarking, now's the time to come to the microphone and talk about it. Okay, looks like a pretty satisfied group. All right, folks, well, thank you all for your attendance and your participation, in particular those who did... I think we were very glad to have that here today, and there will be even more of us in future meetings.
A
Yeah, it worked; it worked very well, and we're very thankful to Meetecho for that. So thank you, Meetecho guys. And so there we are; I think we're done, folks. So thank you again, and we'll see you, most likely, in a virtual interim meeting, but also for sure in London. Thanks again.