From YouTube: IETF104-BMWG-20190327-1120
Description
BMWG meeting session at IETF104
2019/03/27 1120
https://datatracker.ietf.org/meeting/104/proceedings/
So let's begin by doing the usual stuff: we're going to circulate the blue sheets. Our area director is right here in front, signing the blue sheet right now: Warren Kumari. If you have an interest, or something you'd like to see done differently, Warren is also a person you can talk to, because we work for him in effect, and we're happy to take on new actions. So, let's see here, I guess it's a... no, it's a...

Yeah, all right. So, the Note Well: if you've been to an IETF meeting before, you've seen this. It basically means that everything you say and do in this room is a contribution to the IETF, and also, you know, messages to the mailing list and all sorts of things. But the most important thing is something I've added to the top of the Note Well: we work as individuals and we try to be nice to each other. This needs to be said more often; that's what the working group chairs have agreed on.
B
So we're saying it now at the beginning of every meeting, those of us who picked up on this. And let's also note the Note Well, thank you. So, on to the agenda. First off, we need some volunteer note takers, and the reason I ask for that now is that I've just managed to post the entire agenda into our Etherpad. There are some updates to the agenda, some items that we're going to flip around a little bit, and...

B
That's, you know, that's the kind of help we need. I mean, I've been working till late last night, posting slides from everybody and reading people's drafts, and you know, it's not a two-person operation here; we need the community to work with us. So thank you, all right. So let's... has anybody got Jabber? We've had problems with this in the past; my machine is blocked, I think your machine is blocked, or somebody's. I mean, we've had a lot of trouble with this.
B
All right, so we'll use Meetecho for the Jabber, and that means, if you're a remote participant today, and it looks like we have three or four who are not the organizers of Meetecho, then please use the Jabber equivalent on that. Thank you, Warren, great suggestion. All right, the blue sheets are going around. We've done the IPR sort of note.

B
Well, if you have IPR associated with your Internet-Draft, please disclose that: disclose it frequently, disclose it often, and in a timely fashion. So, make sure the blue sheets don't get lost, and sort of keep them in the back where people will walk in and see them and hopefully sign; we need to get as many names on the blue sheets as possible. Thank you.
B
So, a quick working group status, that's what we'll cover first. We'll look at our charter and milestones, and then we'll hear from Carsten and his team on the benchmarking methodology for network security device performance; nice, big draft there. We have six continuing proposals: one from me on back-to-back frames; the methodology for VNF benchmarking automation, Manuel is here, and, well, yes, nice to meet you, Manuel; and then San Juan and Jacob are going to present on...

B
That's a really quick one for me; in fact, it's got zero slides. And then the multiple loss ratio search and the probabilistic loss ratio search from the team of Vratko and Maciek. Welcome, guys. So then we have a whole bunch of new proposals. EVPN multicast is a remote presentation, from Mr. Young-Ye Son; is that close to the correct pronunciation? Okay. And then, on containerized infrastructure and NFV service benchmarking, I have a comment on that that I haven't fully typed up.

B
Benchmarking methodology for EVPN VPWS, that's another draft; it's been around, actually no presentation this time. So those are additional drafts you can take a look at. Any bashing of the agenda needed? I think we've actually already done one this morning, rearranging a couple of talks, so, seeing no requests for the microphone, let us continue. So here's the status. I hope, by looking at a full page of agenda items...
B
There are fourteen of them, that you'll agree with me that proposals keep coming, and they're in all these different areas, and I'm starting to see some interesting interactions between the proposals. So I think that's one of the things we might look for today: some synergies, some areas where author teams could work together and produce a larger, better draft. That's one of the things that I would like to suggest.

B
Well, I think we're doing fairly well on all of these. So it's up to the chairs to go in and do a reassessment job on each one of these. I mean, the bottom line is, we've only adopted drafts for two of these, I think: the methodology for next-gen firewalls and the EVPN benchmarking. So, you know, we've got to move up and, you know, pick up the work that would help us to satisfy these.

B
So, no new RFCs. We're still working on a charter that's, I think, a little less than a year old now. We have a supplementary BMWG page. Who's new in BMWG, attending for the first time? Please raise your hand. Great, it's like five, six people. That's good! So you'll find this is a very easy group to join, especially if you spend any time in the lab doing testing.
B
If you read some of our fundamental drafts, or RFCs like RFC 2544 or RFC 2889, I mean, these are the real pillars of our work that we're leaning on still today, so I suggest that you get involved in the group that way. And if you have any questions about, you know, things that you might get started on, drafts to review in your area of expertise, please see either Sarah or me after the meeting. We would be glad to help you out. So, welcome.
B
So here's our work proposal summary, and I haven't added all the new proposals here. In fact, there are two that are in green, which we've actually adopted, so they're not proposals. There's some other stuff in here: the SFC one, that's kind of expired now, that's gone; the network service abstract model, that's gone. But we have other proposals that might take the place of that and of the things that we looked at in the milestones here. So, fairly good activity on back-to-back frame testing.

B
Eventually the security area will review our drafts, and the security area doesn't review our charter when they review our drafts, and sometimes the security area reviewer flips out when they see that we're doing all this stuff, sending traffic across the network and congesting the hell out of network devices. It's a real shame that they don't read the charter. But then, if we put this in our security considerations section, it immediately alleviates all the issues: oh wow, this is just in the lab, I get it now. All right, all right.
B
We still get some comments from them occasionally, which is fine, and we're happy to deal with that. But this dispels a lot of problems, so I strongly suggest that folks use this. But it's up to you, I think. That's it, that's it! So that's the chairs' slide; any questions, additional comments? Sarah! Okay, thank you. It's good to hear that from someone. All right, so on to the next item. Let's see, all right, so on my version of the agenda, which is somewhere here, yeah, I'm...
B
Really sorry: when I pasted this into the Etherpad, all my text came out pink. So, those of you who looked at that: no, nothing wrong with it, but it ends up being a low-contrast thing for me, yeah. So, as part of our status, we said we were going to cover the status of the benchmarking methodology for EVPN and PBB-EVPN.
B
So that's not good. That means we can't close the working group last call; in fact, I think it means we have to have another one. So that's what we'll do: we'll have another working group last call on this, and I'll ask Sarah to start that when you get access to your email, please. And then we should cross-post, I think, to the BESS working group, where they work on EVPN, so that we have the benefit of their input; that's always good.
B
It's always good for our group to cross-post to a group that has effective expertise in the area as well. I mean, we're all testers; we can't expect to be experts on everything, like next-generation firewalls, for example. So on that topic, which is coming up right now, I've actually requested an early review from the security directorate, and I've asked them to find somebody who's a real firewall expert to provide us some additional comments there. So that's the beginning of our interactions with the IETF on work like this.

B
Thank you, sorry about that, Brian. All right, so that covers, yeah, thanks, that basically covers the status, the charter and the milestones. So now we're going to hear from Carsten Rossenhövel; I'm sorry, Carsten, I couldn't spell your whole name out on the agenda slide there, so I ended up truncating everybody's name so that they didn't go across two lines. All right, here we go. So now, wait a minute.
G
I think we're ready to go, okay, so thanks, yeah. The main author, Balarajah, is also in the meeting, attending remotely; he wasn't able to come in person, unfortunately. And Brian Monkman from the NetSecOPEN group is also available remotely, and Tim from IOL, who has also contributed heavily, is present here. So any questions, I think, I hope we should get covered, yeah. So this draft has been presented before, I think at IETF 103 and 102; not sure about 101. So we can move to the first slides.
G
We've continued to refine this. Unfortunately, many of our discussions are not usually seen on the BMWG mailing list, because we have a separate group, it's called NetSecOPEN, and we have discussions and weekly calls; actually we had two weekly calls until a short time ago, and we've basically had a lot of internal drafts. So before we upload anything to the IETF official document repository, we basically have internal reviews. We used to have a separate set of test cases for transactions per second; we figured that in the throughput test we can merge them. So now we go down from 11 main test areas to 9, because we merged transactions per second with the throughput test, so that's actually making things more efficient. We added more system under test features, we clarified more stuff, and, in general, our draft grew, with the attempt to make things as precisely defined as possible.

G
So the main goal here is to make sure that anybody who applies this document to a firewall test will yield the same results; so, basically, to iron out any uncertainties, any blank spots of things not being defined. We also improved the test procedures: we actually found, when working with multiple labs and multiple tests with vendors, that some of the text was not precise enough.
G
So we improved the test procedures, both to make them more reproducible and also to make them more precise on how exactly to set things up. To this end, we also defined more TCP parameters: we had a long debate about the window size, and there were also some discussions on the mailing list about delayed ACK and congestion windows and so on, so I think we have now defined the precise set.
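(As a rough sketch of the kind of client/server TCP stack parameters being pinned down here for reproducibility; the field names and values below are illustrative assumptions for this aside, not the draft's normative set:)

    # Illustrative only: the sort of TCP stack settings a reproducible
    # firewall benchmark has to fix up front so two labs get comparable
    # numbers. Values are placeholders, not taken from the draft.
    tcp_profile = {
        "initial_receive_window_bytes": 65535,  # advertised window at start
        "window_scaling_enabled": True,
        "delayed_ack_enabled": False,           # or a fixed delayed-ACK timer
        "congestion_control": "cubic",          # must be stated explicitly
        "max_segment_size_bytes": 1460,
        "sack_enabled": True,
    }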
G
So please look at the document and see if you agree with that. And this exact parameterization actually enables test automation: there are two commercial test tool vendors in the process of automating all of these tests right now; one of them has reached 90%, and the other one has said they should be ready by the end of Q2. So it seems like that's another good proof that the standard is actually implementable and is precise enough.

G
So, next slide, please. Just for those who might remember RFC 3511, the test methodology for firewall performance benchmarking: that's a couple of years old now, and we've never really discussed whether we actually want to formally supersede it; I think that's not my goal, but just to put things in comparison. Some things like traffic requirements were not defined back then, and here we have a precise requirement about the object sizes and the traffic mix.

G
That's very important, because if vendors went to very small object sizes for some tests, or very large ones for other tests, they could optimize the results, and it's important to have this comparable in an apples-to-apples way. We also define the cipher suites. That's actually very important as HTTPS becomes much more widely used, and these cipher suites are defined for individual test cases. We had a big debate about cipher suites because, obviously, you can imagine, from the IETF perspective...
G
...we want to have the most advanced cipher suites; but from the vendor perspective, we want to have the easiest ones; and then, from the reality-check perspective with enterprises, they want to have something that's already robust, not just the latest and greatest, but actually used in the field. So, next slide, please.

G
Regarding the rule sets: we defined ACLs, basically rule sets for firewall rules, in a specific way. We tried to come up with different definitions for different-sized firewalls, so again I ask you to review this. There is a table in the document which says there are XS, extra-small, types of firewalls; there are small, medium and large firewalls. They are characterized by their maximum throughput, and of course, the larger the firewall is, the more advanced it is, normally.

G
So we cannot expect the same CPU power, the same feature set, from a very small firewall that's usually used at a very small branch office or at the CPE side or whatever. So that's why we define different rule sets; but of course that's arbitrary, and it depends on, you know, manufacturers. So if you have any comments there, that's welcome.
H
Jacob Rapp. We're looking at the wording of the rule set as well as the throughput recommendations, and is there a better way to classify it than large, medium, small, extra small? Because I think over time those definitions will change; like throughput: should we measure throughput, or measure how many rules? But to put it into a category is kind of, like, it might go away tomorrow, or I may have a very different definition of how many rules are large or small, yeah.
C
Sarah Banks. I'll add on to what Jacob is saying; I agree. I think specifying it as extra small, small, medium, large is fine, but it's loose, right? So there are, in my opinion, two approaches. One is to say, look, for each one of those we think they are roughly this many rules and this much throughput, because I think everybody's definition might vary as to what small, medium and large is. The other alternative is to say, and I...

The guidelines are not a bad idea, but the second thing that I like to see sometimes is: look, if you're going to test, make sure, if you're going to put in your number, that each time you're running the test you noted how many rules and what the throughput was, and that you repeated those, so that you have apples-to-apples comparisons when you're done, for each iteration. And I think those two things set you free, because, as testers, what else? It makes sense, yeah.

I'd take that a little further: I'd say small, medium, large and custom, and that way, when somebody's reporting their stuff, you can ask them, hey, did you use the small, medium, large, or did you go custom? And then it's crystal clear: if you're trying to game it, well, maybe you want custom, but it's clear; versus if you use small, medium, large, presumably now we're going to have the same rules, and it's a decent apples-to-apples comparison of different vendors, because you use the exact same number of rule sets and throughput, yeah.
G
So, next one; it's actually animated, same slide. Okay, the last thing is TCP stacks. In RFC 3511 it wasn't foreseen that details of the TCP stack should be defined; nowadays it's pretty obvious these things need to be defined: as I said, TCP window size and so on, congestion algorithm, maximum segment size, and all of these things that can be used to twist the results or to make them...
B
As a participant, Al Morton has a comment, and that is that TCP's dynamics are affected by the round-trip delay, even with CUBIC, which tries to be independent of it. And, let's see, for one thing, I don't see in this list a specification of the congestion control algorithm. I think it's in the draft, though; so, if it's not, it should be.
G
Okay, yeah, good point. I think, yeah, I'm not sure whether this would be another test or just another parameter to be applied to all tests, because ultimately I think this document is probably going to be used in the lab and by vendors to define datasheet numbers, and if we add delays that are arbitrary for a use case scenario... I'm not sure. Tim?

Okay, so next slide, please. So, the last aspects of the comparison with RFC 3511: we have multiple test validation criteria, so we don't really define pass or fail, but we define criteria to measure expected results against, and we also make sure that the same system under test features are switched on or off in the test. That's actually quite important, because the features of modern firewalls are increasing.
H
Can we, is there a way to reframe that a bit, of not making a recommendation, of just benchmarking them? So, instead of saying these are mandatory or these are optional, and making a recommendation over which features should be there, just saying: here's a list of features and here's a test to go after them. Because I don't know if we want to be in the business of making a recommendation of what vendors should implement in terms of features.

I don't have an answer to that, because it's up to interpretation, right? I mean, it may be all of them, but you put some of them as optional as well, right, where you say some are mandatory, some are optional; but then maybe you're saying that all are mandatory. So I would just put them on the list saying: hey, here's the list of universal features and here's how to go test them; not make a recommendation of what's mandatory or optional.
G
Yeah, well, obviously, from the lab perspective we want to maintain requirements as stringent as possible. So I see there is the usual conflict of goals, and in the end, you know, if you switch off everything, you could test a router with only layer three enabled and apply this methodology selectively; it's wonderfully, blazingly fast, because it doesn't do anything, yeah, but the reporting wouldn't show anything. But the problem is, typically, if you look at a lot of datasheets...
C
A compromise, then, might perhaps be to have two cases: the first, where we say turn everything on and test; and the second is: define what the features were that were turned on when you test. Because the other thing is, let's fast-forward to the point where this goes into the queue and we publish the RFC. What happens if, I don't know, TLS 1.7 comes out, and it's not an option here in the RFC? Yes, you could go back and update it, but another way to cover...
C
...that is to say: well, the RFC still covers this, because now you have to say TLS 1.7 was a feature and we enabled it or we disabled it. So your second case is: you tell me all of the features your firewall has, and then tell me the ones that were turned on or off, and that way I can make an intelligent decision. But the first case still covers your "okay, we don't want everybody to turn everything off and have it be all: this was RFC blah-blah-blah tested, and look."
G
I think, and from my experience as, you know, a lab writing marketing test reports: when vendors commission us with tests that are beyond the state of the art, they're very proud, and they ask us to, you know, put it in bold and large fonts that they actually went above and beyond. But it's more a problem with those that want to stay at the bottom, so I'm very much concerned about the lower limit.
C
It's a suggestion; you know how you want to phrase the first case, where you somehow define what the features are. But the second case, I think, gives us the out, to your point as well, which at least tells me what your features are, or which ones were the ones that you had turned on and off, so that I can, if I go...
H
No, I agree. I think I'm more thinking about where we may handle one of these cases in a different way, because I think that comes back to a couple of other things I wanted to mention too. It's like: are we trying, are you trying to define what a next-gen firewall is in this document, or are you trying to define the methodology to test what a next-gen firewall is? Well...
H
Yeah, so that's what I wanted to, maybe I'll take it on the list as well. I was definitely interested, so I read the draft, or quite a bit of it. You know, how does this change as the firewall isn't one centralized thing and it's spread out on the hypervisor, not just within, like, what VMware does, but in the public cloud, as, like, Azure or Google get better at doing IDS/IPS and all this stuff at the host level, and that actually covers most of this list at that level of... how?

I also think that there are probably some more definitions we can put in this draft to make it also exhaustive for that. It's like, I think "stateful" is often misused, and there's a bunch of other terminology that is actually defined in RFC 2647 that could be, yeah, I did as well, yeah, part of that mechanic.
G
I think, of course, these comments would be very welcome, because this is stuff that we discussed and defined in the NetSecOPEN group more than a year ago, at the very beginning, and since then we've mostly worked on the individual test methods. So maybe some of it benefits from a good review and expansion. Thanks, thanks. So, any more questions on this slide? Okay, so then we actually went into proof-of-concept testing, and Al wrote in the agenda that we're going to have actual test results.
G
I probably have to disappoint you: I don't come up with individual numbers with commercial vendor names at this point. But we ran a pretty extensive POC testing program with two goals: basically, make sure that the test procedures are actually producing correct and expected results accurately; and the second goal, to make sure that this is all comparable. And this is actually quite important: most of the security benchmarking tests that exist in the industry are proprietary, single-lab...

...two things, actually all of them. So comparison is not a problem, because it's only one lab and one tool vendor who governs the program. But in our program, which we want to create based on this document in NetSecOPEN, we want to have multiple labs, we want to have multiple tools, and we want to have, of course, many vendors. And that requires that the methodology is precise enough that it always creates the same results, independent of which lab runs the test and which tool they use.

So that was the second goal, to make sure that is possible, and actually that yields quite a number of interesting challenges. So we started with this in October; it was EANTC and IOL running these tests, there were Spirent and Ixia involved, and four firewall vendors, which I will not name. Some of them also did tests on their own, and we have initial results that we are discussing under NDA in the NetSecOPEN group; so you're welcome to join, but I don't want to pitch it.
G
So if you go to the next slide: what I was able to provide here is an analysis of the results of the tests that we did at EANTC with one commercial firewall vendor. We went through the throughput tests according to the draft with an automated test tool, and the results were actually 40% higher than the vendor had published in the datasheet. And that's quite interesting, because this is one of the vendors who, as I said earlier, want to go above and beyond.

They want to be an industry leader, so they switch on everything and then they report a number; but of course that puts the number fairly low in comparison to others. And since some of the stuff, like proprietary stuff that they used to switch on to create their own datasheet number, was not switched on as per the draft standard, our numbers were 40% higher than the vendor datasheet.

So the next topic was the session capacity, and the session capacity is less of an issue because it's a fairly static number; the vendor datasheet didn't provide any detailed parameters, but the results were identical. And the last topic was connections per second, and in this case the vendor actually used very small HTTP transactions with one-byte content, which are unrealistic; we debated this in the group for a long time. There is no real use case with one-byte-sized HTTP transactions.
G
Although vendors really love it, and a lot of them fought for it heavily. And they also had the classification of the applications suppressed; so they basically said, for connections per second, we want to be as fast as possible. So that explains maybe a little bit, Jacob, why I am a little resistant to a lot of switching off of things: because if you switch off things, then you get very large numbers. So in our POC the results were only half of the vendor datasheet numbers.
B
Yeah, so we have two jobs to do, and that is to raise the visibility of this and then make sure that this new RFC, when we eventually get there, has some adoption; and it sounds like NetSecOPEN can help with that. So that's great, yeah.
G
I hope so. So, further findings: there were some false positives. We actually tested CVEs, we test the vulnerability attacks, and these are actually run under load, and that's also new. Any other public tests that have been published so far are testing vulnerabilities, like denial of service attacks or whatever, in a functional scenario: basically, the labs typically set up a performance test and they create numbers, and then they remove the performance test, they use an idle firewall and then they attack it.

And of course the firewall system has nothing to do and has all its resources to analyze the attack. In our recipe, we're actually putting background traffic in parallel to these attacks, so that's more challenging. And we also use the NIST database of vulnerabilities, with certain parameters that are precisely defined in the document, to create a selection of, what was it, a couple of hundred potential CVEs, and not all of them always work for all vendors equally well, let's put it that way.
G
So we needed to run a lot of manual tests for troubleshooting, and that's another reason why we automated things. With any new methodology, of course, there's a lot of ramp-up; it's first about understanding: you know, each vendor is used to doing things in a certain way, and now they need to do things in a different way, so it's also about finding new problems with a new methodology. So that's why automation is critical. And we also had some latency issues, and especially delay variation questions here.

So I agree with your point about latency and delay variation, Al. We just need to find out what's acceptable and what's reasonable for each use case. So that's basically all regarding the POC tests. The next steps will be: we will continue to review this draft, which we consider already pretty stable; we'll add more security effectiveness testing details; we will focus on traffic profiles.
G
Currently we have one and a half traffic profiles: we have one which is fairly detailed, which represents an enterprise perimeter test; but of course we also want to focus on service provider and mobile operator firewalls; a little bit of that is mentioned in the document, but not too much. And we also need to continue preparing an open certification program, because we figured there are always two different levels of doing things right. It's the same as with the good old RFC 2544.

The document has a lot of things, and then in the industry there are established practices that say, we use it in this way. And we probably need to have two stages, because we cannot guarantee exact reproducibility by the document itself; we also need a group that reviews and approves these results, and that's this certification program, from EANTC's perspective. We also want to elaborate open source implementations, and the problem is, or maybe it's not a problem...
G
...the fact is that for any domain-specific layer 7 testing, we have to use commercial test tools at this point. Which is fine; we're getting great support from them, and there are actually not only these two, there are three more that are in the queue of joining. But still, it would be useful to do some testing with open source test tools, and these configurations are quite complex and, for us at least, overwhelming.

So we ask for support and help from some groups, for example, for our use of TRex or other open source testers. So if there are any open source groups here that are interested in participating in the POC testing, that would be very much welcome, and we would certainly add from a testing perspective and probably also, from our end, like to work with them. So, any questions?
M
Hi, Tim Chown. I think all this work is really good. My perspective on this would be coming from a national research network operator, Jisc, and we work with a number of universities, trying to help them move large volumes of research data around. Well, I think what I see with this draft as it stands, because of all the background traffic and processing... and that's a fairly well-known, modern firewall that's suffering in that way. I mean, one view is, you mentioned an attack on the firewall, and in one way, in a view, these large flows are kind of an attack on the capability of the firewall; it's probably also bringing down its capacity to do the business processing. So that's a little bit of fluffy wording. What I would like to know is whether there's scope in this specification to allow the traffic mix...
M
I've looked in the draft, and you mention certain types of mix; whether the traffic mix could include small numbers of very-high-throughput flows, and part of the performance evaluation is the impact on those flows while the other things you've already defined are happening, and the impact of the larger flows may be on the business traffic as well. Yeah, yeah.
G
Absolutely. I think we briefly discussed elephant flows, if I remember correctly; yeah, Tim is nodding, so we did discuss them. And the good thing is that this document is modular, so we don't need to change the methodology; we only need to change the profile, or add one. So we could say we have an enterprise perimeter profile, that's what we've been working on; then we could have a mobile operator profile, let's say, whatever, you know, a firewall profile with a lot of terminals sending small amounts of traffic; and then we could have a, let's say, academic area profile, and maybe someone from your group of universities could help us contribute it. That would be very much appreciated.
M
In practice, at the moment, universities tend to engineer their networks so the research traffic doesn't go through the business firewall. They apply security policy to it in a, sorry, say, leaner way: the sort of Science DMZ type of approach you may have come across. But it would be much more interesting to try and encourage the performance of these firewalls, whatever their internal architectures are, to not be hampered by these types of larger flows and processing.
M
The sort of aggregate degradation I was talking about was for sub-ten-millisecond RTTs between two sites not far apart in the UK, right. Obviously, when you're sharing research traffic with the States or something, you're going seventy, a hundred milliseconds, and then the impact obviously will be greater. How you might simulate that, there's an interesting question; but I think, if you're trying to get practical results for a university that wants to buy a firewall and know its research traffic is going to perform through it...
N
But it's interesting work, thanks very much for driving it. And on the open source benchmarking tool point, you called out TRex, so I would be very much interested to collaborate, to see to what degree TRex can apply here. We're using TRex extensively in our project, and it's got stateful capabilities, with APIs now enabled, so I'll take you up on following up on that. Excellent. And a second question: what I haven't seen, from what you presented and also from the draft...
G
Internally, at EANTC, we've run this methodology with three virtualized firewalls, and it works as well. I mean, there shouldn't really be a difference from the black-box perspective, because from the application layer the goals are the same. You know, if a customer wants to have a firewall function for a certain application use case scenario, they shouldn't really see any difference whether it's an appliance or a virtualized solution; the traffic streams are the same, the expected performance is the same. So maybe, as Jacob has said, you know, if you really split...
N
Excellent, so you actually guessed my second point, which is, we're spending quite some time looking at distributed network functions; these may be in the cloud space, with container networking functions, and, thank you, the question asked earlier was towards the degree the methodology may change, because the way that the composite device is built is different. And I know that you want to also capture that in the draft, where the actual functions that you are testing are split across multiple entities, VMs, of the cloud, of the DUTs.
G
Right, so yeah, any contribution is welcome, either just here via the mailing list, or if you want to be more in-depth involved, you know, in NetSecOPEN. There are currently quite a number of firewall vendors participating, but none yet that are just focused on cloud firewalls. But we do consider this in scope. Okay, thank you. So...
C
There are two comments, actually both from Bala. The first is that, in the future, NetSecOPEN will also create multiple traffic profiles, and Brian agrees. The second is, for virtual tests, reporting may be different, and we need to specify the number of vCPUs, the amount of memory, etc., etc. Ah yes, thank...
N
...you. That was actually the thing that I was looking for: the resource utilization, which is one of the critical things in the NFV space. You measure performance, but, you know, there's a difference whether there are two cores used or five cores, and other resources. So we will have to capture that.
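(As a rough sketch of the kind of reporting being asked for here; the field names and values are editorial assumptions for illustration, not from the draft or from NetSecOPEN:)

    # Illustrative only: a virtual-DUT benchmark record that states the
    # resource allocation next to the measured results, so two runs can
    # be compared on equal footing.
    virtual_dut_report = {
        "dut": {"image": "example-vfw-1.2", "vcpus": 4, "memory_gb": 8,
                "vnic_driver": "virtio", "hypervisor": "KVM"},
        "enabled_features": ["acl", "ips", "tls_inspection"],
        "results": {"throughput_gbps": 3.2,
                    "connections_per_second": 41000,
                    "concurrent_sessions": 2000000},
    }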
B
The process goes beyond that, which is IETF last call, where, you know, any number of people and the directorate reviewers can throw in their comments as well; we're trying to get an early security review to benefit from that; and then finally the IESG gets to review it. So, I mean, these are multiple steps that hopefully we can move along.

C
Bringing in the BCP on resource utilization in virtual environments is awesome. And I want to add: Brian is asking me to mention that NetSecOPEN bylaws mandate that members agree to abide by RFC 8179 guidelines; so, I said, we will see if they actually do. But back to the next... Does anybody know what that one is? Yeah, not off the top.
B
Right, well, what is RFC 8179? We can actually look it up here. Okay, thanks; no, thank you very much, Carsten. Oh, it's the IPR one. Yes, yes, yes, okay, good. Thank you, thank you, excellent. So the next topic is updates on the back-to-back frame benchmarking draft. I'll say, sitting here as chairman, that we had a working group adoption call on this draft back in December, and we haven't...
B
There are actually more advanced technologies that are available in the RFCs that Jacob and Lucien wrote together, and we've actually had some pretty good comments on the list recently, and at the Bangkok meeting there was a great discussion of that. But we still have the need for the simplified, single-port kind of benchmarking with back-to-back frames, and it turns out that this is still highly applicable in the virtualized world. So that's where this draft...

You know, a brief picture of the simple model of the packet header processing: basically, we've got a generator, you know, the stream goes into an ingress port; it's producing a back-to-back stream of frames, and a buffer captures that, header processing tries to forward it out the egress, and eventually the receiver receives that.
B
So the general test procedure is that we send in this long stream of back-to-back frames, and if the combination of the buffer and the header processing is able to transmit that burst effectively, then we haven't actually characterized the size of the buffer yet. So what we want to do in this process is overflow the buffer, but not by very much, and that way we'll be able to tell how much buffering takes place in the device.
B
But what was overlooked in the past is the fact that there is actual header processing going on, so some of the frames have actually egressed the device under test, and we need to correct for that. So that's effectively what we're doing: we need to know the max theoretical throughput in order to do that, and we need to account for the header processing; you know, the frames that are processed are currently not in the buffer. So those are all the things we've said. So that was a good clarification.
B
I think it was in the references, but I'm happy to include it here. So then, on the next slide: in version 04, based on Mr. Ito's feedback, we were able to clarify that we are applying a correction factor here; so we always have this correction factor equation, but now we've basically said, you know...
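(A minimal sketch of the intuition described here, that frames which already egressed are not sitting in the buffer; this is an editorial illustration of the idea, not the exact correction-factor equation from the draft:)

    # Illustrative only: estimate how many frames a DUT buffered during a
    # back-to-back burst, correcting for the frames it forwarded (header
    # processed and egressed) while the burst was still arriving.
    def estimated_buffer_frames(burst_frames, offered_rate_fps, measured_throughput_fps):
        """offered_rate_fps is the back-to-back (max theoretical) frame rate;
        measured_throughput_fps is the DUT's measured forwarding rate."""
        burst_duration_s = burst_frames / offered_rate_fps
        frames_forwarded_during_burst = measured_throughput_fps * burst_duration_s
        return burst_frames - frames_forwarded_during_burst

    # Example: a 10,000-frame burst at line rate, with the DUT forwarding at
    # 80% of line rate during the burst, implies roughly 2,000 frames were
    # actually held in the buffer.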
B
The buffers involved absorb that interrupt and the suspension of forwarding, however brief, or they don't, and you lose packets due to that transient interrupt. So this is something we still need to understand pretty intensely in our world, and now we've got a better test, at least I think we've got a better test here, to do it. I think Mr. Ito agrees. So that's the update, the complete update, and that's my complete talk, and now we need to talk about working group adoption.
C
So I'll send out... I'm happy with what has come in. I think it's sort of my fault; I should have called adoption on this before we walked into the room here today, anyway. But in order to go through to working group last call, I think making sure that folks in the room are reading this is a good thing.
C
It's a good complement to the tests that we already have, and frankly it's something that, as a tester, at least I've done a lot in the past, just trying to figure out what the buffer is and how I do damage to it or not, and so this potential RFC would be giving me a repeatable methodology to do that. So, if you find that useful, please read the draft and give your comments on the list, but I do plan to send out the note to BMWG for a call for adoption on this. Well...
B
It's one that, if you volunteer for, you won't be over-committed; but thank you for those volunteers, much appreciated, okay. And let me go back to my chairman hat for a moment, simply to say that this is how we get work done here: when you have a proposal and you want other people to read your draft, read other people's drafts. That's the easiest way to make friends and influence future reviewers. Maciek.
N
Quick comment: Maciek, Cisco, FD.io. So, I scanned the draft, I haven't read it, I'll read it properly, and the goals look extremely useful and typical. My only concern is that in the NFV space, I think the only hesitation I have is that abstracting the construct of the software data plane with just a single buffer and a single packet processor may not, you know, in reality, may not be actually correct, because of all the various tricks and mechanics that are happening. But I don't think it's a showstopper for the methodology.
B
...sort of test where you can determine that the simple model falls apart, and then that would simply be something we write into the scope: you know, test for this; if you have these conditions, then, you know, this isn't the test for you. It's probably more like the test that Jacob and Lucien came up with, which is sort of a multi-port test.
B
No, I think the scope is good at the moment, and I think that's where we should, that, for the cases where this simple buffer model applies, that's the case that we should concentrate on in this draft. If people can bring examples from the virtualized environment that are different from what Jacob and Lucien wrote about, but also don't apply to the simple model, then we should have another draft for that. So we...
P
Thank you very much, yeah. Hello, everybody. My name is Manuel Peuster, from Paderborn University. I'm going to present updates on this draft called "Methodology for VNF Benchmarking Automation". I think it was previously presented at IETF 101 or 102 by the first author, and basically I'm going to show what changed since then, as well as what we are currently doing in this area. So, maybe next slide.
P
So, first of all, why? If we consider the NFV world: typically, yeah, our network functions are realized as software, which we can deploy on general-purpose hardware; we can run them in the cloud, on edge hardware, whatever. The point here is that network function virtualization allows people to automate almost everything: it allows automating deployment of the network functions, it allows automating the scaling of the network functions, it allows automating healing of network functions, and so forth. So we were thinking, okay, when you automate all these other tasks for your NFV deployment...

...why not also automate the benchmarking tasks of your VNF as such? Which means, in our terms, that we really want to have an end-to-end automated process, or methodology, to end-to-end automate VNF benchmarking. So someone gives you a description of how a benchmarking experiment should look, whatever that description format may be, and there might be open source tools, or in general tools or platforms, following our methodology, which then take this description and do the benchmarking experiment in a fully automated manner.
P
So you have, for example, some NFV infrastructure in your lab, you get the description of a benchmark, you get the VNF, and then you put everything together and run this automated procedure to get your results. Next slide. So, regarding our draft and the updates: we restructured it to improve the readability, and it's now available in version 03. First of all, we now have a released end-to-end definition of how we think an automated VNF benchmarking method should look. The important point to emphasize here is that we talk only about how to automate benchmarking.
P
What we use to describe such benchmarks is something we call VNF benchmarking descriptors. This basically comes from the fact that in the NFV world almost everything is described by descriptors: the guys have descriptors describing how a VNF looks, they have descriptors describing how a network service looks. So we basically aligned with this terminology and said: okay, let's have a descriptor defining how our benchmark for a VNF looks. And we also, in the update, point to two proposed open-source implementations.
P
The first is called Gym, which is basically based on an academic paper, I think it was published in 2017; and the second open-source implementation which allows you to do this automation is called tng-bench, which was presented, I think for the first time, at the IEEE NFV-SDN conference, also in 2017. Next slide, please. So, yeah, in addition to the other updates I just presented, we are also working on some additional stuff we want to bring into the draft.
P
The same applies to the stuff our benchmarking produces, which we usually call VNF performance profiles, or VNF-PPs. This is a preliminary name; we don't have a better one right now. And we also think that, for those, a model should apply and we should have kind of a standardized way for how such outcomes of this automated process have to look; but this is still under development.

In addition to this, as I said, it's a generic framework for automation. This means we basically plug in traffic generators, monitoring systems, and so forth, and we think that for these plug-in mechanisms we also should provide standardized interfaces, so it's clear how the automation framework as such talks to a traffic generator, your probing units, whatever you use there. This is also something we are working on right now. And finally, we are performing some test...
P
...experiments with our open source implementations. Here are some example results; they are available on GitHub, there are links in the slide, so you should be able to access them. What they are basically showing, it's not about the details, is that you write down a benchmarking descriptor saying: okay, take this VNF, in this case it was an intrusion detection system, and please test it for me with different configurations: for example, test it with one virtual CPU core, four virtual CPU cores, eight virtual CPU cores, different memory assignments, and so on and so forth.
P
And then these automation tools do all of these runs for you, one after the other, do the measurements, collect the data, and then you have basically this thing we call a profile, for different resource assignments of your VNFs. And this is in particular needed for the NFV space, and missing; and maybe the next slide also mentions this, which is one of our main goals here, and also an issue right now.
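(A minimal sketch of the workflow being described; the descriptor fields and the run_benchmark stub below are editorial assumptions for illustration, not the draft's VNF-BD schema or either open-source tool's API:)

    # Illustrative only: a benchmarking descriptor with a resource sweep,
    # and the loop an automation framework would run to build a profile.
    descriptor = {
        "target_vnf": "example-ids",           # hypothetical VNF image name
        "traffic_profile": "enterprise_mix",   # assumed profile name
        "trial_duration_s": 60,
        "configurations": [
            {"vcpus": 1, "memory_gb": 2},
            {"vcpus": 4, "memory_gb": 4},
            {"vcpus": 8, "memory_gb": 8},
        ],
    }

    def run_benchmark(vnf, config, traffic_profile, duration_s):
        """Stub: a real implementation would deploy the VNF with `config`,
        replay the traffic profile, and return measured metrics."""
        return {"throughput_gbps": None, "latency_ms": None}

    # The resulting list is roughly what the talk calls a performance
    # profile: one measurement record per resource configuration.
    profile = [
        {"config": cfg,
         "metrics": run_benchmark(descriptor["target_vnf"], cfg,
                                  descriptor["traffic_profile"],
                                  descriptor["trial_duration_s"])}
        for cfg in descriptor["configurations"]
    ]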
P
As you already mentioned, we saw that there are some other drafts which might be directly related to our work, especially this YANG data model for test management. I think this would exactly fit the need for our interface description, to let the automation framework talk to traffic generators; I think this is exactly what they are doing. So we think we should contact them; I will contact them and talk to them, so we can, you know, maybe be completely aligned, which would be a good thing, I think, yeah.
G
Do you think it's already pretty mature? I'm asking because, personally, I don't really understand, from a lab perspective, how to use it. It's very abstract and it has a lot of things that are, yeah, for me, okay, it's kind of a guide for how to do testing in general: okay, if you want to do testing, you need to have your prerequisites aligned, you know, if you're...
P
He grabs this description and runs a test from it, without knowing the details of how to run a test and without the need to do the setup manually and configure things; going, though, down to this assumption that really everything can be done automatically, without human interaction, at that point.
G
So the second question is: from our experience, all of the VNF testing is very domain-specific, so very early on, when we define, let's say, YANG models or whatever for testing, we get into domain-specific things. But this document doesn't really cover domain-specific aspects at all; I am not sure whether this is a successful approach. Yeah.
P
We just say: okay, this is the standardized framework, and then you can use your benchmark, maybe relying on some other RFC, and let it be executed by this automation framework. And we don't think that at this point, at this high level of automating it, you need this domain knowledge, for example, which traffic profiles you want to send to a firewall, because this is kind of abstracted away. It's kind of a layer above.
G
My last question is regarding the open source: open source projects are described in the document, and I tried to download things from Gym, but I only found three YAML documents. It says there is open source, it's a framework in which you can do something, but I didn't find any line of code. Yeah, I think...
P
It's installable and it's actually used by people, and we are getting more and more feedback about it, especially from the NFV research community, and all these H2020 European projects are collaborating with us when it comes to this, yeah, testing and benchmarking stuff. And yeah, maybe you should try this tool, and I'm pretty sure that you can.
H
All right, I'll talk all leaned down here, I'm used to it. All right, we can go to the first, next slide here, right, and get right into it. So this is, I think, the fourth iteration of this draft we're working on. We've been trying to narrow the scope down to network virtualization platforms, specifically related to the NVO3 working group, to kind of narrow the focus and get the definitions down.

So we need to take into consideration how flow optimization is done as we walk through this. So, a quick review of NVE co-located versus split, as we talked about before; I won't spend too much time on the slide, but the top diagram is kind of defined in RFC 8014 and the bottom one is kind of defined in RFC 8394, so we try to use the same terminology within our drafts for what is split versus co-located as we go through.
H
So we also took it one step further, because we noticed there's actually a split co-located and a split not-co-located, because a split NVE could actually be co-located as a virtual machine or some other component that lives in the hypervisor, lives on the same physical SUT, or it could be split as in it's somewhere else. So we kind of sub-categorized it in this recent draft to take that into account.

After that, we looked at traffic flow optimizations. So, a lot of, as new technologies come and keep coming, as in, like, moving everything to a smartNIC, for example, or coprocessors, or dedicated cores you're putting in on this; this could happen, this is for, like, the NFV world too, right. There are a lot of things we can do now; we're just trying to capture the set, the universe, of what these other things are that are there. So the hypervisor may not be living within the actual tenant, on the actual processors on the servers, anymore.
H
It could be living on a smartNIC, for example, and there are definitely different considerations to be taken into account there. And then the slow path versus fast path, for example: I may actually have different paths depending on whether the flow is long-lived or short-lived; I may have all short-lived flows go through one path, and then, as soon as it becomes a long flow, I might program it to take the fast path, and there are definitely different processing considerations to be taken for that.
H
The state changes I had put as kind of a work in progress, because we're still kind of working this out and we want to make it more generic, not just VM creation. The NVO3 working group really took a look at the VM area, and this really applies a lot more to the split-NVE scenarios, and this actually applies to containers as well, if you have a container network interface where you're trying to get information about what's happening but you're not part of that system. So how do those creation events happen?

Change events, migration events? What if this thing just dies and decides to come all back up, all at the same time? How can we look at some of those scenarios within it as well? So we started that in the control plane scale considerations as we go, and this is just a continuation of the VM events slide.
H
Keep going. So I wanted to go over some test results, so that kind of ends what we did, but I wanted to show why this really matters, and some of the tests that we did. And this is a fairly simple, easy test to run; it doesn't even require any fancy software, it's just using iperf with several different threads over a given time. So this test was run with four VMs and four threads each, so about sixteen threads, as we go through this.
H
So if we look at the results for this, right, I mean, of course, all flows: it should hopefully be no surprise, if you've ever used any of these offloads before, that the offloads definitely matter when you're talking about the virtual world, because we're not necessarily looking at every packet all the time; we're looking at big 64,000-byte chunks versus packet by packet, because we're not really doing any network function virtualization on it, we're just the end application.
B
And I'll mention, I mean, this is a good time to mention it: we've reserved some time after the session today, our BMWG leftover time, to go into detailed test results. So if it's possible to kind of summarize quickly here, then we'll pick that up in the session afterwards, I think, is where we're headed next. Okay, yeah.
H
So the reason for the 40-gig NIC here is to be able to go external, so the differentiation kind of evens out a little bit more, but it's still fairly significant. And then just different platforms: platform one is hypervisor A, platform two is hypervisor B, different types of hypervisors, so that you can have the exact same things, all loaded the exact same way, and depending on how the hypervisor is implemented, it can impact performance. So with that, those are the updates that I had for this presentation.
B
Good, all right. Well, I think I'll likely read this again, take it in, and take some steps to try to get our friends in NVO3 to look at it as well. Excellent work, thank you, thanks. Yes... all right, so I've got a really short presentation here, on item number seven: it's the benchmarking methodology for EVPN multihoming restoration and mass withdrawal.
B
I produced this draft for the last meeting, and I got some feedback off-list; I put that into a version 01 of the draft. But everybody who volunteered to read it at that meeting did not, and so really this is just a statement of, I mean, it has two authors from AT&T, myself and Jim Uttaro: these are the things that we think we ought to be benchmarking for EVPN, because these are the features we think we should be...

...where we're going to deploy, at least we think we're going to deploy these, so, exactly, yeah. So, you know, the rest of the EVPN stuff is nice, but this is what's important to us, and we were hoping to get some feedback and some more interest on this. As a, I mean, it's obvious, I'm a service provider, so a potential customer.
B
It's a completely different technology; I mean, this is the reason that Jim had sort of an extensive exchange with Sudhin back at the Chicago meeting, that's two years ago now. So we wrote it up, and here it is, and we'd like to have folks contribute, if EVPN is important. And if it's not an important technology, then, you know, we make another decision.
G
I'm not a good reader of documents and contributor on the mailing lists, I admit, but I think it's important and I think it's good to test it. We see a lot of problems with these multihoming scenarios. I would probably have some ideas how to improve the methodology, but we can take that offline. I'm...
C
Having said that, I'd also ask you to read the other EVPN drafts, because I think there's potentially a conversation we need to have as a working group about the two of these together, and particularly, as somebody who's doing the testing, I think your feedback would be... I think everybody's feedback is valuable, but I think in this case it would be extremely valuable. If I could convince you, Carsten, bribe you with a beer, you let me know, but it would be very helpful if you could read both. I'll send you this; well, actually, there are three floating around here.
B
B
B
N
So this is a proposal for a rationalization, an improved binary search. What we found, with the experience of running hundreds to thousands of automated tests a day, is that the time it takes to run the RFC 2544 recommended binary search, which has been adopted by a lot of testing vendors, is a lot. So MLRsearch, the multiple loss ratio search, aims to address that challenge.
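For context, here is a minimal sketch of the kind of RFC 2544 style binary search being compared against, assuming a hypothetical measure(load, duration) helper that runs one trial at a given offered load and returns the observed loss ratio; every trial runs for the full duration, which is what makes the classic approach slow at scale.

```python
def rfc2544_binary_search(measure, min_rate, max_rate,
                          duration=60.0, resolution_pps=1000.0):
    """Classic throughput search: highest rate with zero observed loss.

    Each trial runs for the full `duration`, so the total search time
    grows with the number of bisection steps times that duration.
    """
    lower, upper = min_rate, max_rate
    while (upper - lower) > resolution_pps:
        mid = (lower + upper) / 2.0
        loss_ratio = measure(mid, duration)   # one full-length trial
        if loss_ratio == 0.0:
            lower = mid    # passed: throughput is at least mid
        else:
            upper = mid    # failed: throughput is below mid
    return lower           # highest load known to pass with zero loss
```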
N
N
Given those two loss ratios, and one can actually define more, the corresponding throughput rates will be found. Next slide. So these slides are really an abbreviated version of how the algorithm is defined, and I will skip through most of the detail. But basically you define the final trial duration, and we have a default currently set to 30 seconds.
N
But you know, the draft is likely to recommend values that are compatible with RFC 2544, so 60 seconds, and the other important thing is the final relative width, which sets the accuracy. So I'll stick to those. The way we actually save time is we run an initial phase which discovers what the system is capable of; we call it the maximum receive rate, and we're using that as the input to drive the search.
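A minimal sketch of the multi-phase structure described above, not the actual MLRsearch algorithm, again assuming a hypothetical measure(load, duration) helper: a short trial estimates the maximum receive rate, and progressively longer trials narrow the interval until the final trial duration and final relative width are reached.

```python
def mlrsearch_like(measure, max_rate, loss_ratio_goal=0.0,
                   final_duration=30.0, final_relative_width=0.005):
    """Illustrative multi-phase search for a single loss-ratio goal."""
    # Phase 0: one short trial at maximum rate; the receive rate seen
    # there (the "MRR" idea) gives a realistic starting upper bound.
    mrr = max_rate * (1.0 - measure(max_rate, 1.0))
    lower, upper = 0.0, max(mrr, 1.0)
    duration = 1.0
    while True:
        # Interval halving at the current (possibly short) duration.
        while (upper - lower) > 1.0 and \
              (upper - lower) / upper > final_relative_width:
            mid = (lower + upper) / 2.0
            if measure(mid, duration) <= loss_ratio_goal:
                lower = mid   # goal met: rate is at least mid
            else:
                upper = mid   # goal missed: rate is below mid
        if duration >= final_duration:
            return lower      # estimate valid at the final duration
        # Longer trials next; re-widen a little, since short-trial
        # results do not necessarily hold at the longer duration.
        duration = min(duration * 5.0, final_duration)
        lower = lower * 0.95
        upper = min(upper * 1.05, max_rate)
```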
N
N
N
It's not really readable, but next I can actually show you the same stuff in a table. Okay, this shows basically the improvement we gained from MLRsearch versus the binary search. The top table shows the times taken for different tests at different durations, compared to the standalone binary search, and specifically just looking at the gain for a single NDR search.
N
C
N
So I'll repeat maybe just the last summary: the gain from using MLRsearch versus the binary search is in the range of 30 to 60 percent, it looks like. Yes, so we're looking for comments, and, assuming this is in scope for the group, with enough comments we would like to ask for adoption.
B
A co-author, and this gentleman here. Very good. Okay, I've actually investigated the search algorithm, so I'll put my hand up as having done this as well. I'm particularly interested to see the results today. I think this is absolutely within our scope. I mean, I think that this...
B
N
B
Let's see, we're looking for reviewers, so if we have any volunteers for review: Sarah, and you've already reviewed it, Carsten, okay, so Sarah and Carsten, and please, if you've read it, put your comments on the list. Mr. Lee, Mr. Lee's already read it; it's really easy for him to put out the comments. Very good. So, Vratko.
L
L
B
L
L
So one of the things is that the word throughput has a really strong definition in the RFCs, so this draft needs another definition. And the main point is the last item here: we want to make the most reliable conclusions from the data that was measured by this algorithm. Next slide, please. Another thing is that the algorithm is probabilistic, so it doesn't cut the search interval into definite halves or pieces; it is trying to do something smarter in order to support systems that are not deterministic enough.
L
The algorithm still needs quite a few assumptions about the system under test, so that the reported values make sense under those assumptions. And yeah, the main thing is that Bayesian inference is used, so there is a scientific background behind the final value. Next slide, please. And yeah, maybe just skip this slide, because I plan to spend some more time when showing graphs; the thing is that some features of this algorithm are easy to describe by showing graphs and hard to define by putting text in the draft.
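To illustrate the Bayesian part in the simplest possible terms, here is a small sketch, assuming trial results given as (offered_load, loss_ratio) pairs and a deliberately simplified one-parameter model with a made-up pseudo-likelihood; the real PLRsearch uses two-parameter fitting functions and proper numerical integration.

```python
import math

def posterior_critical_rate(trials, rate_grid, spread=1e5, noise=0.05):
    """Grid-based posterior over a critical rate parameter c.

    `trials` is a list of (offered_load, observed_loss_ratio) pairs.
    The assumed model: expected loss ratio is a logistic function of
    (load - c), i.e. near zero below c and growing above it.
    """
    def expected_loss(load, c):
        return 1.0 / (1.0 + math.exp(-(load - c) / spread))

    log_post = []
    for c in rate_grid:
        lp = 0.0  # flat prior over the grid
        for load, loss in trials:
            p = expected_loss(load, c)
            # Gaussian pseudo-likelihood around the model value; a real
            # implementation would use a count-based likelihood instead.
            lp += -((loss - p) ** 2) / (2.0 * noise ** 2)
        log_post.append(lp)
    m = max(log_post)
    weights = [math.exp(lp - m) for lp in log_post]
    total = sum(weights)
    return [w / total for w in weights]

# The point estimate is then e.g. the posterior mean:
# sum(c * w for c, w in zip(rate_grid, posterior)).
```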
L
So yeah, another thing is that PLRsearch aims to be a class of algorithms, and we only have one prototype implementation, so these graphs are specific to this one prototype implementation. This implementation uses two so-called fitting functions. Here is a graph: on the left we have absolute values, so you don't see the differences around zero, and on the right we have a logarithm on the y-axis.
B
L
Those should correspond to each other, so if you see a difference there, it is just an artifact of the software that was plotting the graph; the numbers are the same. Each fitting function can have two parameters. One roughly corresponds to this value, specifically here 1 million, so it can scale, and the other parameter corresponds to how sharp the edge between zero and this is.
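A sketch of what a two-parameter fitting function of that shape can look like, assuming a simple logistic form; the prototype's actual fitting functions differ in detail, but they share the idea of one parameter locating the edge (the scale, e.g. around one million packets per second) and one controlling how sharp the transition is.

```python
import math

def fitted_loss_ratio(offered_load, scale, sharpness):
    """Two-parameter fitting-function shape for loss ratio vs load.

    `scale` locates the edge (e.g. around 1e6 packets per second) and
    `sharpness` controls how abrupt the transition from near-zero loss
    to high loss is. The exponent is clamped to stay numerically safe.
    """
    x = max(-60.0, min(60.0, sharpness * (offered_load - scale)))
    return 1.0 / (1.0 + math.exp(-x))

# On a linear y-axis the values near zero are indistinguishable;
# plotting the logarithm of the fitted loss ratio makes the low-loss
# region visible, which is why the slides show both views.
```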
D
L
The previous graphs were just plots of the functions whose formulas are in the graph; this is some data obtained by testing the algorithm on a real system. So here it is VPP, the Vector Packet Processor, one specific test case, and you see that it looks like those two colors are converging. The other two colors are more about an upper bound, you can read it that way, and it looks like the algorithm really converges, but actually, when I looked at the data, it is not that the algorithm was wrong earlier.
L
It was just that the system under test was getting different results, different amounts of lost packets, so this graph actually shows that the system under test changes behavior during the test and the algorithm is following it. So it's a very nice graph. Now here is a worse graph. You can see the colors are different; the description is missing, it is from an earlier implementation. And now you see the two colors do not look like they will ever meet. So this is like a bad case, and this is the reason why we are using two fitting functions.
L
Basically, it's not exact, but the two colors correspond to predictions from the two fitting functions, and if you can see that the values are not converging, you can tell that there is something wrong going on in the system, or maybe some of the assumptions were not satisfied. Next slide, please. And here is a very bad example.
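A tiny sketch of that diagnostic, assuming the recent predictions from the two fitting functions are available as two lists of numbers; if their relative gap stops shrinking, the result should be treated with suspicion rather than reported as a critical load.

```python
def predictions_converging(series_a, series_b, window=10, tolerance=0.05):
    """True if the two fitting functions' recent predictions agree.

    Compares the mean relative gap over the last `window` samples with
    `tolerance`; a persistent gap suggests the system under test
    violates the model assumptions (or is not properly isolated).
    """
    recent = list(zip(series_a[-window:], series_b[-window:]))
    gaps = [abs(a - b) / max(a, b) for a, b in recent if max(a, b) > 0]
    return bool(gaps) and sum(gaps) / len(gaps) <= tolerance
```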
L
This is actually when we found out that our system under test was not totally isolated: there was another process that was affecting it and continually decreasing the performance, and you can see that both lines go down, but one thing is that the two lines cross; the colors are still such that green is up and blue down.
D
L
L
G
D
L
The difference is that this is the result of a run time of 10 hours; the previous graph was just half an hour, and you can see once again the system under test is not behaving regularly. You can see big jumps down. That means that the system under test usually behaves well, but one measurement encounters a very high loss count, so that's why the algorithm changes the prediction all the way down.
D
L
You can see that if you keep it running long enough, it will eventually converge to some values that, hopefully, will still be relevant for the end user. So this is an example that this long run can be used as a kind of soak test, which both discovers and validates the critical load, and we have also sent some questions to the email list asking how people use soak tests, so that we know if this is a good implementation for soak tests or whether we need to change it.
L
D
L
L
B
Well, very interesting presentation. A lot to understand here; there's lots to unpack about what you're doing, which I think we can only begin to appreciate after reading the draft, and so this is a good time to ask for volunteers. We can't ask the co-authors, and they've been contributing like crazy to the review volunteerism today. Who's willing to read the draft?
B
B
B
O
F
Thanks, I'm relieved you moved this slot for me, yeah, thanks. Okay, so this is more of a follow-up to Sudhin Jacob's draft on EVPN benchmarking. This one is particularly for multicast: what parameters does one want to benchmark for multicast performance? So this slide describes the EVPN scenario that is defined, just as an introduction before the benchmarking slides: we have all-active multihoming with Ethernet segments, control-plane MAC learning, and so on.
F
So this IGMP snooping paradigm has been around for a long time. It's used to constrain multicast traffic to those interfaces where there is listener interest. Now we have EVPN, and if you want optimized multicast in the EVPN family, we deploy IGMP snooping in it. So there are some challenges and there are some opportunities in how we deploy this optimized multicast in EVPN, and there are some BGP signaling routes that have been introduced for this.
F
F
So this is a typical EVPN fabric. We have two spines, which is a classical Clos model, and we have several leaves, on the order of hundreds. Some leaves are multihomed and some aren't; some leaf devices have receivers behind them and some don't. So when multicast traffic comes in from the source behind the spine, this traffic gets flooded towards all the leaf devices. With snooping, or with the optimization, the traffic from the spine gets only to those leaves with actual listener interest.
F
So the listener interest is conveyed by the host sitting behind the CE at the bottom by sending an IGMP report. The leaf device gets this IGMP report, translates it into a BGP route type 6, and sends it to the spines, so that the spines can keep track of the listener interest and forward traffic to the listener selectively.
F
There's one more challenge here: when the leaf devices are multihomed, the IGMP joins have to be synced across the multihomed devices, so that when traffic comes from the core, both leaf devices have state and the DF, the EVPN designated forwarder, actually gets to forward the traffic to the listener. So overall we have:
F
The type 8 is another route type, which we actually use to indicate that there's a leave, that a host is no longer interested in that group, and you want to sync it across the multihoming peer devices. So overall, we have now described how multicast snooping signaling works on the access interfaces as well as in the EVPN core. We want to measure different parameters for this: the learning rate, the convergence, and so on. That's what comes after.
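To make that signalling flow concrete, here is a small sketch of the bookkeeping a leaf might do, with deliberately simplified and hypothetical route objects (a type-6 selective multicast route towards the spines and a join-sync route towards the multihoming peer); the field names are illustrative, not the actual EVPN route encodings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SmetRoute:          # simplified stand-in for EVPN route type 6
    evi: int
    group: str

@dataclass(frozen=True)
class JoinSyncRoute:      # simplified multihoming join-sync route
    ethernet_segment: str
    evi: int
    group: str

class LeafSnoopingState:
    """Tracks IGMP listener interest learned on access ports."""

    def __init__(self, ethernet_segment, evi):
        self.es = ethernet_segment
        self.evi = evi
        self.groups = set()

    def igmp_report(self, group):
        """An IGMP membership report arrived on an access port."""
        if group in self.groups:
            return []                 # interest already known, nothing new
        self.groups.add(group)
        # Advertise interest into the core and sync it to the MH peer.
        return [SmetRoute(self.evi, group),
                JoinSyncRoute(self.es, self.evi, group)]
```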
F
F
Okay, so typically we have IGMP reports that can come in on the order of several thousands. Several reports can come in in one shot, or maybe they come in over a period of time, so these leaf devices have to learn the IGMP reports that come in on the access side. This is a characteristic called the IGMP join learning rate, and it has been a classical measurement present for a lot of L2 switches. So this is the first parameter.
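A minimal sketch of how a tester could derive that join learning rate, assuming hypothetical send_igmp_reports(groups, rate) and groups_forwarding(groups) helpers that respectively offer membership reports at a given rate and return the set of groups the device is actually forwarding traffic for.

```python
import time

def igmp_join_learning_rate(groups, candidate_rates,
                            send_igmp_reports, groups_forwarding,
                            settle_seconds=10.0):
    """Highest report rate (reports/second) at which all groups are learned.

    Tries `candidate_rates` from slowest to fastest and returns the last
    rate at which every offered group ends up being forwarded; returns
    None if even the slowest rate fails.
    """
    best = None
    for rate in sorted(candidate_rates):
        send_igmp_reports(groups, rate)   # hypothetical tester helper
        time.sleep(settle_seconds)        # allow the device to program state
        if groups_forwarding(groups) == set(groups):
            best = rate                   # all joins learned at this rate
        else:
            break                         # first failing rate, stop here
    return best
```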
F
Sometimes the listeners do not send out reports on time, or instead of sending a leave for that group they just stop sending reports, and so the join times out on the switch. So how soon the leaf device times out the join is another measured parameter. And when a host actually sends a leave to express disinterest in the group, the leaf device has to send out a last-member, all-solicit kind of packet, to see whether there are any other reports from any other hosts.
F
This is called an LMQ, or last-member query. So the leaf device has to perform those queries and then, if there are no listeners left, it has to clear state. So how soon the leaf device can learn the leave is a measurement parameter, and also how soon the leaf device can clear state so that the traffic stops getting forwarded to the hosts. In multicast applications, typically, yeah, a couple more minutes, yeah, just a couple of slides more. So.
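Similarly, a sketch of measuring how quickly state is cleared after a leave, assuming hypothetical send_igmp_leave(group) and is_traffic_forwarded(group) helpers; the measured time naturally includes the last-member query exchange the device performs before removing the group.

```python
import time

def igmp_leave_latency(group, send_igmp_leave, is_traffic_forwarded,
                       timeout=60.0, poll_interval=0.01):
    """Seconds from sending an IGMP leave until traffic stops flowing.

    Returns None if traffic is still forwarded after `timeout`, for
    example because another listener answered the last-member query.
    """
    send_igmp_leave(group)
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if not is_traffic_forwarded(group):
            return time.monotonic() - start
        time.sleep(poll_interval)
    return None
```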
B
B
Is anybody here working on EVPN multicast? I see no one. Alright, so I mean, you guys are going to have to find people in the other working groups that are interested in this topic and are willing to apply their expertise here in BMWG. I mean, I think that we've got a whole group of EVPN drafts now; you know, they cover a lot of space, but we simply, you know, we said we...
B
C
Before you comment, I just want to add: you guys, at least one of your authors, already have an EVPN draft, and as a tester coming in, if all of these were to get published as four RFCs, I'd be slightly annoyed, because I'd have to go read all four to figure out what I want to do. Whereas I think if they flowed in one document, or somebody needs to help me understand, as a participant,
C
why aren't they in one document, so that it's clear? Because right now, looking at this, I'm familiar with the other draft, and it feels like this could be an add-on to the other draft in a separate section. There's that, plus the clarity I think you'll gain from the diagrams that are already outlined in the other draft, whereas here I'm not able to make heads or tails of what you were just describing on your diagram, unfortunately, yeah.
B
F
F
So this benchmarking is related to the earlier EVPN-related benchmarking, but typically the applications are different, right? The measurements are also quite different, and that's one of the reasons, I think, that Sudhin chose to write another separate draft. So I will probably take the feedback to him and see if he wants to merge it into a single draft. That said, would you help me to take this to the appropriate working group? I thought this was the right place for it, yeah.
B
A
B
K
So, I am KJ, from a university in Korea, and I will talk about our benchmarking tests in containerized infrastructure. Next slide, please. So yeah, everyone knows what a container and a pod are, and what the difference is between that and the VM-based infrastructure, so I'll move on to the next slide, yeah. They have already defined a different network setup architecture compared with the VM-based infrastructure.
K
K
There are networking plugin models, such as CNI or CNM, that they use, and these have different characteristics, for example the number of bridges they use, or something like that. So we want to test depending on those differences, and then we test the throughput and the latency between the two VNFs. Okay, next slide, please. In the VM case, there are already two RFCs about VNF benchmarking: RFC 8172 and RFC 8204.
K
K
Then, as a natural extension, we considered this for containers, and we tested the traffic between the two VNFs. Next slide, please, yeah. So this is our test environment, with two servers and the various different deployment models on the servers, and then we test the throughput between the two VNFs. Next, please.
K
This is our test environment specification, and then, yeah, I'll skip that. For the testing scenarios, we use three cases based on our deployment model: one means the bare-metal pod, and the other means the pod running in a VM. For each case, we consider the local case, where the two VNFs are in the same host, and the remote case, where the two VNFs are in two different hosts, and then we measure the traffic throughput and the latency.
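A small sketch of how those test cases could be enumerated, with illustrative (not the draft's) names for the deployment models and placements; each generated entry would then be executed for both throughput and latency.

```python
from itertools import product

DEPLOYMENT_MODELS = ["baremetal-pod", "vm-pod", "mixed"]  # illustrative names
PLACEMENTS = ["local", "remote"]   # same host vs. two different hosts
METRICS = ["throughput", "latency"]

def test_matrix():
    """Yield each (deployment, placement, metric) combination to execute."""
    for deployment, placement, metric in product(
            DEPLOYMENT_MODELS, PLACEMENTS, METRICS):
        yield {"deployment": deployment,
               "placement": placement,
               "metric": metric}

# Example: list(test_matrix()) gives 3 * 2 * 2 = 12 benchmark runs.
```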
K
K
There are some differences, such as the number of bridges they use, or, when we consider MACVLAN and IPVLAN, how they make the child MAC address and IP address from the parent's physical NIC. So we figure out their differences, do the testing, and then compare the results for throughput.
B
O
K
Yeah, so from our experience, we also want to raise the issue of considerations for benchmarking in the containerized infrastructure. Containerized infrastructure has a different isolation method, so that may have an impact on, for example, the VNF lifecycle measurements or even the network performance testing, and containerized infrastructure has a variety of deployment options, so based on those options...
K
B
O
B
N
N
N
We benchmark real network function data planes running on a single server, but not in isolation: so not a single network function, but multiple NFs, including the cases where they constitute a bigger service, whether it's a service chain or other topologies, and specifically trying to capture the impact of the noisy neighbor and the usage of shared resources. So that's the proposal in essence, and this shows an abstraction of NFV services, so the assumption is:
N
The underlying assumption is that NFV services are built of multiple network functions, and they can be run either isolated in VMs, as VNFs as we call them in the draft, or in containers, as CNFs, and there are other packaging options they apply to, and they access the external world through a shared host data plane. Next slide.
N
So how do we abstract the service? Well, we look at how the network functions are connected, so the topology, and how they are configured, so the configuration, and the combination of configuration and topology constitutes everything that lands on the server, and then the way that packets are forwarded is what the benchmarking looks at. And we've actually done some of this and applied the methodology to tests.
N
We run it in two open source projects: the FD.io CSIT project that I lead, and also in the CNF lab initiative within the Cloud Native Computing Foundation, run by Duncan, and we try to, you know, work with the open source teams to see how this methodology applies. So we define VNF service chains, CNF service chains, and CNF service pipelines, as shown on the slide and described in the draft. Next slide. And once you configure or design these NF services, you end up with a core usage.
N
So we have composed a number of different service density matrices, showing the NF count view and also the processor usage, our core usage, and we also present the results with similar metrics, where the row indices represent the number of NF service instances on the server, and the columns represent the size of the service, the number of NFs. Next slide.
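A sketch of how such a density matrix can be assembled, assuming hypothetical per-NF and per-vSwitch core costs; rows are the number of service instances on the server, columns the number of NFs per service, and each cell holds the resulting core usage (the measured results additionally record packet rates per cell).

```python
def service_density_matrix(max_instances, max_nfs_per_service,
                           cores_per_nf=2, vswitch_cores=4,
                           cores_available=36):
    """Core usage per (service instances, NFs per service) combination.

    Cells that would exceed the available cores are marked None, which
    is why not all combinations can be measured on a single socket.
    """
    matrix = {}
    for instances in range(1, max_instances + 1):        # row index
        for nfs in range(1, max_nfs_per_service + 1):    # column index
            cores = vswitch_cores + instances * nfs * cores_per_nf
            matrix[(instances, nfs)] = cores if cores <= cores_available else None
    return matrix

# Example: service_density_matrix(4, 4)[(2, 2)] -> 4 + 2*2*2 = 12 cores.
```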
N
B
N
We can't measure all combinations due to the shortage of resources; we are clearly bound by the number of cores per socket, but as those are increasing, we are expecting to improve the coverage. Next slide. But hopefully you're getting the view of how we are systematically approaching the problem, and here the results are represented with very much the same metrics, so the number of service instances and the number of NFs, and the numbers are maybe a bit small to read.
N
But the idea is to really compare, for a specific service allocation, a per-core allocation for the vSwitch and for the NFs, how they compare across the service density matrix. So we are increasing the number of services as we go down, and very quickly, it's probably better to show it on a graph, but the containerized approach is fairly much lighter weight, so it shows up better.
N
N
The results are more in the overflow session, next slide, yeah, yeah, okay. So firstly, this was our initial version 00 of the draft; we're looking for feedback, and this is our first attempt to standardize the way that one can evaluate NF software data plane performance at the system level. And yeah, here we go.
G
Yeah, just a very quick comment. The two documents, like Jacob's network virtualization platform benchmarking document and yours, look very different, but they seem to cover the same area from two different sides. I'm not sure, maybe it's radical, but would it be possible to align the documents, not to integrate them, but to say, like, okay...
G
N
B
B
D
N
B
B
N
So can I ask one more question? Yeah, sure. So, I understand that evaluating performance at the compute node level, when the system is actually loaded, is something that is of interest. Yes. And I have trawled through a number of drafts and I couldn't really find anybody else doing this sort of work in the IETF, and definitely not in BMWG. The...
B
The closest thing to it is some things we talked about in TST 009, where we have like multiple meshed VNFs; I think that's probably the closest, but this goes to the network service density, which I like a lot, having looked at this personally; it's a good taxonomy for looking at these different implementations. Okay, thank you. Thank you very much, Maciek. So now it's like grab-and-go lunchtime: find a place to grab your lunch, and then, or first, go grab
B
your lunch, then come back to Congress Hall three, where we will have our overtime session; I forgot to get that on the slides, but that's okay, and there we go, all right. So thanks for your attendance, everybody. Where are the blue sheets, in the back? Okay, thank you, and a mighty thank you to our note takers stepping in today, Maciek and Warren.