From YouTube: IETF105-BMWG-20190722-1000
Description
BMWG meeting session at IETF105
2019/07/22 1000
https://datatracker.ietf.org/meeting/105/proceedings/
B
Hey — bingo. So all that red stuff — when you get to the Etherpad, all that red stuff is going to be the agenda, and you get this — I don't think you can actually delete stuff. Can we — oh, we can: if you own it, you can delete it. Okay. So I'll ask somebody to screen-scrape the Etherpad occasionally and put it in a text file, because things go wrong with Etherpad, and that's an unfortunate reality of our business. Alrighty.
B
So Sarah's going to cover Jabber today, I guess. We've got a collective of a few people who are taking meeting notes and contributing to the Etherpad — that would be great. You know, type in a few; mostly the action items that come up, I mean, that's the kind of thing that's interesting, and also the questions that are raised.
B
If you guys are testers, if you've worked in the lab, if you have an interest in what other people are testing and some of the technologies that are on our agenda today — then this is the place for you, and you'll find this a very easy working group to join. You can read some drafts and provide some comments. You can read some of our fundamental RFCs, and you'll learn very quickly what it's like to prepare an RFC in this area, starting out with an Internet-Draft and so forth. We have, you know, on our webpage —
B
We also have a supplementary page that kind of gives a little guidance along these lines — so, welcome. All right, let me get the clicker out. So here's the Note Well. We've got to go through this a little bit, because this is exactly the first meeting of the week; usually I'd sort of hand-wave and say, "have you seen this already?", but we can't today. So: we work as individuals, and we try to be nice to each other — that's our personal Note Well to you. Beyond that —
B
Basically, everything you say at the microphone, every mail you send to our list, and sort of every other kind of public conversation here is considered a contribution to the IETF. You have to be aware of whether that contribution on your part is covered by IPR; you need to disclose that, you know, in a prompt fashion. And that's basically what we're asking of folks today: if you've got something you're presenting for the first time, or another time, and it's covered by IPR, please let everybody know. Thank you.
B
So here's our agenda, and you can see right from the top it's really full. We've got — we'll talk about the benchmarking for EVPN and PBB-EVPN status, then go to our charter and milestones and the general working group status all together. Then we've got the — actually, we don't have a presentation for number three; I don't think Carsten has joined us, but I can quickly give a status there.
B
And
then
we
have
one
other
working
group
documents:
the
updates
to
the
back
to
back
frame,
benchmark
o
P
and
a
V
and
V
s,
birth
testing.
So
that's
that's!
A
topic.
I
have
to
cover,
as
a
participant
I'll,
try
to
remind
you
that
I'm
changing
hats
when
I
that
and
we
had
to
actually
have
quite
a
good
number
of
good
comments
on
the
list.
So
that's
that
should
be
a
good
discussion.
So
then
we
have
the
continuing
proposals.
B
So folks can read that, and by the time Vladimir does present, he will be better informed — he's trying to, you know, sort of understand how the working group works, and I think that's a great way to go. Also, in general: when you come here and make a proposal, join the group, read somebody else's draft, help them along, and they'll help you along — so, you know, it's a give-and-take kind of thing here. Much appreciated. Any bashes to the agenda?
B
I see no requests for the floor; nobody hit the mic. So, as I said, we've got the Jabber and the IPR covered. The blue sheets are going around — everybody, please sign the blue sheets. Where is the blue sheet at the moment? Right up front here. So let's keep that circulating and make sure everybody's signed it. How many names have we got on there right now, sir? Twelve. All right, there's a lot more than twelve people in the room, so let's get the rest of the folks signed up on that.
B
Has everybody found the Jabber — no, sorry, the Etherpad page, where you can type in some notes for us? Help us out there; that would be great. Okay, so here's the status: in the interim we've adopted the back-to-back frame benchmarking. It's an update for a particular section in one of our fundamental RFCs, RFC 2544, and this is where we try to determine the size of a buffer that the device under test has — and that turns out to be pretty important.
B
As is — and then we'll — you're allowed to update; you're allowed to do some updates. When we put this in, people went, "oh really — August 2018?" You know, wait — well, we thought it was aggressive, and I think one of the problems is that one of the authors has left his position. But in any case, there's good work that was done there, and I think we can get that done.
B
So
keep
looking
for
the
blue
sheets,
folks,
who
are
arriving
a
few
minutes
in,
and
so
here's
where
we
stand,
no
new
FCS,
no
charter
updates.
Oh
you
got
a
son.
This
is
a
supplementary
page
or
folk
for
new
folks
joining
the
group.
That's
where
you
go
to
find
out.
You
know
all
the
details
of
things
that
I
just
said
very
quickly.
C
For the new folks coming in — especially on top of which we have an awful lot of virtualization work going on — if there's information you guys want to share, tools, implementations you've seen of things that we're adding, please send me the links, and I'll make sure that we're updating the page, so that stuff is reflected in a central repository. Because sometimes, I find, you actually send out emails periodically reminding us of it, and it might be good to list that on the page. That's —
B
Right,
that's
right!
In
fact,
we
Vladimir's
and
I
had
a
conversation
about
a
truck
to
test
traffic
generator
product
like
during
the
hackfest
briefly,
so
that
interesting
stuff,
yeah.
Okay,
thanks
for
the
augment
there,
sorry
alright!
So
when
you
write
an
internet
draft
here,
we
are
laboratory
only
that's
our
Charter.
So
we
put
this
nice
paragraph
into
the
some
parts
in
the
intro
some
parts
in
the
security
considerations
section
and
when
the
security
people
read
it,
they
go.
B
As chair — sorry, back to the chair hat — I wanted to quickly mention that we had some comments from an early Security Directorate review on the Next-Generation Firewall draft. The authors have heard them and are sort of thinking about accommodating them, and I think that was a good cross-area post. It took a while to get the comments, but Kathleen Moriarty did us a favor there and completed the review. So we thank you, Kathleen — well, we'll thank you when we see you again in person. But no other updates there.
B
What we're asking for the Next-Generation Firewall draft: people, please read this. We're going to put this in last call soon, I think, and we'd like to see some more reviews on the list. It was generated by a large group of people, much larger than the author list. So that's the message: Next-Generation Firewall — we're trying to finish that up. Please read it and get ready for last call. Thank you. Okay!
B
Okay,
so
you
can
see
that
I'm
kind
of
doing
the
very
economical
slide
generation.
Now
do
this
snip
thing
on
the
off
the
did.
I.
Do
this
snip
thing
off
the
diff
and-
and
it's
got
all
the
information
everybody
wants.
You
know
for
a
slide
title,
so
this
is
the.
This
is
our
updates
on
back
to
back
frame
benchmarking.
It's
a
been
a
fundamental
thing
in
2544
and
we've
got
two
sets
of
comments
on
the
list.
B
Maciek
and
bharat
KO
ma
CX
comments
were
reflected
in
the
zero
zero
version,
but
Morocco's
comments
came
right
after
I
pushed
that
working
group,
zero,
zero
version,
so
I
didn't
actually
get
to
those
until
last
night,
and
so
we'll
talk
about
those
for
a
moment.
Maciek
asked
for
a
couple
of
things.
You
know
moving
some
text
around,
that's
great.
It's
done.
B
The
goal
is
measuring
ingress
buffer
size
in
front
of
the
header
processor.
You
know,
there's
a
picture
here
that
I'll
show
you
in
a
moment
as
per
the
definition,
so
this
works.
If
there's
no
other
buffering,
add
a
paragraph
dictating
the
setup
and
I
think
we've
got
that
kind
of
got
that
cover,
but
we
expanded
on
it
a
bit.
So
here's
one
of
the
changes
likewise
in
this
is
in
the
scope.
B
Likewise,
the
the
methods
of
RFC
2083
39,
which
basically
focused
on
the
egress
buffer
evaluation,
should
be
used
for
test
cases
where
the
egress
port
buffer
is
the
known
point
of
overload.
We're
we're
actually
talking
about
some
modifications
for
that,
but
that's
a
good
thing
and
then
somehow
in
the
working
group,
a
0-0
draft,
this
text
was
all
deleted
and
it
actually
gives
it
actually
gives
our
model.
B
Though
the
model
is
that
you
know,
we've
got
a
traffic
generator
and
ingress
ingress
to
the
DUT
there's
a
buffer
in
here
and
and
then
the
header
processing
system,
and-
and
so
if
the
header
processing
system
can't
keep
up
with
what
you're
sending
in
this
buffer
grows.
We
try
to
do
that
with
a
back
to
back
frame
burst
the
maximum
burst
we
can
send
in
without
loss.
That
gives
us
an
implied
size
of
this
buffer.
So
it's
like
a
just
a
very
simple
model
of
the
DUT.
B
Then
we
aggress
to
the
receiver
and
see
if
we've
got
zero
loss.
That's
basically
how
this
test
works,
so
I
actually
added
all
this.
In
version,
5
of
the
document
and
I
figured
out
last
night
late.
What
what
happened?
I
think
I
must
have
grabbed,
oh
for
when
I
did
the
zero
zero
update
and
that's
how
this
got
lost.
So
now
it's
back
in
for
what
it
will
be
in.
Oh
one,
okay,.
B
Okay,
all
right
and
Maciek
had
had
a
long,
a
final
list
of
comments
which
listed
areas
of
noise
to
the
measurement
which
I
basically
captured
here,
pretty
much
word-for-word.
So
we
should,
you
know,
evaluate
these
identify
them
remove
them
if
possible,
I
mean
this
is
the
kind
of
thing
that
could
I
mean
obviously
we're
using
loss
in
this
measurement
to
find
out
whether
the
buffer
size
has
been
reached.
Resources
have
been
exceeded,
so
if
we've
got
other
sources
of
loss,
we
want
to
chase
them
away
or
understand
them
and
test
around
them.
B
So
that's
basically
what
the
is
all
about.
Okay
and
then
we're
mentioning
the
importance
of
the
Etsy
nfe
tester
zero.
Nine
specification,
which
specifies
a
search,
algorithm,
called
binary,
search
with
loss,
verification,
that's
one
of
the
ways
that
we
can
mitigate
these.
These
cases,
where
transient
effects
in
a
you
know,
virtualized
environment,
affect
the
the
device
under
test
and
basically
it
allows
the
binary
search
to
search
twice
at
every
position
or
multiple
of
times,
but
there's
a
there's
lots
of
factors
there
in
terms
in
terms
of
the
trial,
duration
and
so
forth.
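[Editor's note: the verification idea described above can be sketched as follows. This is an illustration of the general technique, not the TST009 text; the names `trial`, `repeats`, and `precision` are assumptions.]

```python
def search_with_loss_verification(trial, low, high, repeats=2, precision=1.0):
    """Binary search over offered load (frames/sec) that re-runs lossy
    trials, so a single transient loss event does not end the search.

    trial(rate) runs one trial at `rate` and returns the frames lost.
    A rate fails only if every one of `repeats` trials at that rate
    shows loss; otherwise the loss is treated as transient noise.
    """
    while high - low > precision:
        mid = (low + high) / 2.0
        if all(trial(mid) > 0 for _ in range(repeats)):
            high = mid   # loss confirmed: the rate is too high
        else:
            low = mid    # no loss, or loss not reproduced: rate passes
    return low
```

For a DUT that deterministically drops frames above 500 fps, the search converges to that boundary regardless of occasional spurious loss on a single trial.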
B
To
try
to
avoid
the
transients
that
you
get
that
we
used
to
be
able
to
work
around.
We
used
to
be
able
to
say,
okay,
all
the
address
learning
is
done
and
all
the
routing
updates
are
done,
and
now
the
device
is
stable.
Yeah
in
the
virtualized
world
that
we
can't
chase
away
all
the
interrupts,
they're
necessary
for
the
health
of
the
device.
So
we've
got
that
in
there
there's
an
explicit
step
in
the
in
the
measurement
and
I
mean
that's
our
corrected
buffer
time,
which
is
really
the
the
main
contribution
of
this
draft.
B
When
we
send
a
burst
of
packets,
the
header
processing
has
handled
some
of
the
burst
while
we're
sending
them
in,
and
so
the
the
size
of
the
burst
that
we
can
send
through
loss
free
is,
is
basically
the
sum
of
the
the
size
of
the
buffer,
aided
by
the
number
of
packets
that
header
processing
has
been
able
to
pull
out
and
that's
determined
by
the
max
throughput,
RFC
2544
throughput
of
the
device.
So
now
in
that
recognition
we
thought
you
know.
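[Editor's note: the arithmetic described here can be sketched as below. The variable names are assumptions, not the draft's notation: the loss-free burst equals the buffer size plus the frames the DUT drained at its throughput rate while the burst arrived.]

```python
def implied_buffer_frames(b2b_burst, offered_rate, throughput):
    """Estimate the ingress buffer size (in frames) implied by the
    longest loss-free back-to-back burst.

    b2b_burst    -- longest burst forwarded with zero loss, in frames
    offered_rate -- rate at which the burst is sent, frames/sec
    throughput   -- RFC 2544 throughput of the DUT, frames/sec
    """
    burst_duration = b2b_burst / offered_rate  # seconds the burst spends arriving
    drained = throughput * burst_duration      # frames processed during the burst
    return b2b_burst - drained
```

So a 1000-frame loss-free burst offered at 1000 fps to a DUT with 600 fps throughput implies a buffer of about 400 frames.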
B
One
of
the
one
of
the
points
here
he
says
is
that
deployers
wishing
to
predict
the
time
for
the
buffer
to
fill
using
a
real
actual
rate
will
be
different
from
the
back
to
back
rate
that
we're
pumping.
In
so
he's
saying,
we
could
add
this
calculation
as
well
it.
It
will
basically
come
up
to
a
longer
buffer
time,
because
this
is
you're
going
to
be
your
actual
traffic.
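[Editor's note: the suggested deployer-side calculation could look like the sketch below — a hedged illustration with assumed names, with the observation that the buffer only fills while arrivals exceed the drain rate.]

```python
def time_to_fill(buffer_frames, arrival_rate, throughput):
    """Predict how long the ingress buffer takes to fill when traffic
    arrives at an actual deployment rate rather than the back-to-back
    test rate. Returns infinity when the buffer never fills."""
    if arrival_rate <= throughput:
        return float("inf")  # drain keeps up: no queue build-up
    return buffer_frames / (arrival_rate - throughput)
```

A slower actual arrival rate gives a longer fill time than the back-to-back rate does, which is the point being made above.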
B
A
question
in
my
mind,
though,
is
a
tester
going
to
know
this
at
the
time
that
they're
running
this
benchmark
test
and
I'm
kind
of
thinking,
I'm
kind
of
thinking.
This
might
be
an
appendix
topic,
it's
a
maybe
for
me,
but
that,
but
basically
we're
reporting
the
corrected
time.
That's
that's
that's
supposed
to
be
the
most
useful
thing,
and
so
the
question
of
the
working
group
is:
do
we
add
this
calculation
any
comments
there?
B
Let's capture that this would not be a benchmark, and that it's a piece of information that maybe we can tack on at the end. Let's see if there's any more support for it — and if not, we thank Vratko for the suggestion, but maybe we just don't put it in, or no more than an appendix. Good. Thank you, sir. Oh yes, Maciek — that sounds —
B
I'm kind of partial to the original ones, but — you know, now I'm done speaking as the author — and as a person who's working from kind of the original version of the specification: maybe, you know, maybe we can incorporate those terms in the word definitions of these variables. So —
B
The maximum of the frame rate at maximum offered load: so you put the maximum offered load in on the wired interfaces, and whatever comes out — that's this frame rate. That's what he was suggesting you might want to characterize there. But he also says he's not sure how much attention such other quantities should get in the draft, as throughput has the advantage of avoiding some frame sizes in the testing. So, yeah — I think we need some contributions.
B
It's related to the Section 4 prerequisites. So: there are sources of packet loss that are unrelated to consistent evaluation of the buffer size; they should be identified or mitigated. This material was just added — that was something Vratko was questioning. And: do we have a separate document discussing the difference between testing the device under test and the system under test? Vratko thinks we should have such a document.
C
As the person who will probably take this through working group last call, my ask to the working group is this: 2544, in my mind, is a big deal, and if we're going to update it to account for this, I think that having our eyeballs on it as a working group, and our collective feedback, is really important on these things — because it matters in physical, and I think it could matter even more in virtual.
C
So
I
want
to
make
sure
if
we're
gonna
go
ahead
and
update
2544
that
we
not
just
a
couple
of
us,
but
a
good
majority
of
us
have
and
give
out
lots
of
input
either.
Yes,
we
agree,
or
no,
we
don't
and
here's
why?
But
my
asked
to
you
guys,
especially
if
we're
gonna,
try
to
working
group
last
call
in
Singapore,
is
that
we
take
a
look
at
that.
So
please
read
this
draft.
C
Even
if
the
only
thing
you
focus
on
specifically
is.
Are
these
specific
things?
I
know
al
wants
us
to
focus
on
everything
we
should.
Oh,
that's
why
I
put
it
up
here,
but
if
the
only
thing
you
have
time
or
in
your
your
day
is
to
go
ahead
and
address
these,
these
are
these
could
be
contentious.
So
getting
everybody's
eyeballs
would
be
I,
think
helpful
to
the
community
as
a
whole.
Yeah.
B
Thank you — thank you, sir. So yeah: specifically, this is an update to Section 26.4 of RFC 2544. If you haven't read 2544, read it at least that far, and then jump into this draft, and you'll see some of the reasons why this is a good update. It's generally a much larger expansion of the procedure that's there. So — good. Thank you for your attention.
H
So, these are descriptions of what has changed from the previous draft — mostly, as it says, minor fixes. Most of this presentation is the same as we had at the previous meeting, so let's go to the next slide. So here is the motivation. The main point of PLRsearch is that it is a probabilistic search — a probabilistic algorithm — so it is suitable for cases when the device under test or system under test is not consistent enough. A typical example is that you perform one trial measurement —
H
— that's next, like this. So, the overview. The one important thing about PLRsearch is that it is not suitable for finding throughput as defined in RFC 2544, because that definition says that at the throughput rate you should see no loss — and this is very hard to prove when your system is not deterministic enough. So, to make things easier for the algorithm, we instead defined a so-called critical load, based on the expected average loss ratio over multiple measurements.
H
So,
typically
you
choose
this
parameter
to
be
nonzero,
but
very
small.
Let's
say
ten
to
the
minus
seven,
which
means
out
of
ten
million
packages
that
you
sent.
You
accept
only
one
on
average
be
missing.
So
when
you
give
this
go
to
the
algorithm,
it
is
easier
for
the
algorithm
to
give
both
upper
bounds
and
lower
bounds
so
that
you
can
pinpoint
to
this
critical
load
more
and
achieve
better
result.
Next
slide,
please.
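[Editor's note: the critical-load criterion described here — comparing the average loss ratio over many trials against a small nonzero target such as 1e-7 — can be sketched as below. This is only an illustration of the acceptance criterion, not the draft's actual Bayesian estimator; the names are assumptions.]

```python
TARGET_PLR = 1e-7  # accept, on average, one lost packet per ten million sent

def average_loss_ratio(trials):
    """trials: list of (packets_sent, packets_lost) tuples."""
    sent = sum(s for s, _ in trials)
    lost = sum(l for _, l in trials)
    return lost / sent

def below_critical_load(trials, target=TARGET_PLR):
    """True when the observed average loss ratio stays under the target,
    i.e. the offered load used for these trials is below the critical load."""
    return average_loss_ratio(trials) < target
```

Note that a single lossy trial does not disqualify a load; only the long-run average against the target matters, which is what makes the criterion workable on non-deterministic systems.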
H
So
there
are
some
differences,
and
one
typical
think
is
that
there
are
multiple
probabilistic.
Let's
call
them
models
that
can
model
your
device
and
usually
they
range
from
very
simple
to
more
realistic
but
very
hard
to
implement.
So
our
search
is
tending
to,
let's
call
it
simple
models,
but
still
not
broken
enough,
so
that
the
results
are
reliable
and
we
know
that
today,
neural
networks
and
machine
learning
and
methods
like
that
can
give
you
very
good
information
about
your
system,
but
they
are
not
deterministic
enough.
H
That
means
that
if
you
get
a
slightly
different
data,
the
resulting
suggestion
can
be
different
and
other
people
trying
to
get
the
same
results
will
get
different
results.
So
it
is
a
useful
matter
not
stable
enough.
So
that's
why
we
are
sticking
to
to
prove
on
statistical
methods
next
slide.
Please,
and
this
slide
is
new
compared
to
the
presentation.
H
The
last
meeting.
There
are
some
interesting
observations.
One
observation
is
that
sometimes
the
system
under
test
shows
behavior.
That
depends
on
time.
For
example,
the
throughput
is
getting
lower
and
lower
because
something
is
leaking,
or
things
like
that.
We
are
arguing
that
beyond
our
search,
results
are
better,
even
in
this
case,
compared
to
standard
binary
search
but
yeah.
Of
course,
this
is
a
not
thing
to
be
written
in
the
draft.
This
is
just
a
side
comments
that
we
have
done
some
measurements
on
it
and
we
are
satisfied
where
we
see
the
results
and
yeah.
H
Next slide, please. Yeah — these are the graphs of the fitting functions. There are two approximations that the algorithm uses to make decisions. The thing is that the algorithm chooses different offered loads, and most of the time the transient offered load is not the final critical load that it is searching for. So the algorithm has to make some reasoning about what it means when it measured too high or too low —
H
How
does
it
translate
so
these
questions
are
used
for
that
you
will
see
graphs
of
them
for
a
particular
set
of
parameters
that
are
there
to
show
that
in
absolute
values,
the
blue
one
looks
that
it
is
smaller,
but
when
you
look
at
the
logarithm
academic
values,
you
see
that
for
very
small
offer
loads.
Actually,
the
orange
one
gives
smaller
predictions
next
slide.
Please-
and
this
is
the
example
of
what
the
results
look
like.
H
This is just to show you how the algorithm changes its evaluation over time. And when you look at the y-axis — if you see small numbers, this is very precise, so the upper bound and lower bound are very close together — and this is how we realized that our system under test does not have exactly constant performance, but is very slightly coming down. So this is to show that PLRsearch, even though it is probabilistic, can give you a precise enough output if the system behaves well enough. Next slide —
H
Please-
and
this
is
a
let's
say
average
case,
the
gray
dots
show
the
improved
selection
of
offered
load.
This
is
the
case
when
most
of
the
results
give
you
zero
loss,
but
sometimes
you
give
nonzero
and
quite
a
bit
close,
and
you
can
see
that
the
algorithm
is
trying
to
measure
at
the
values
that
can
give
it
with
some
probability,
nonzero
loss,
even
if
the
actual
estimates
are
way
below
it,
because
we
find
out
that
this
way
of
choosing
overloads
can
give
you
better
convergence.
H
So
that
the
upper
bound
and
lower
bound
is
far
apart
than
it
was
previously,
but
it
is
what
the
algorithm
should
give
you.
It's
just
a
consequence
of
very
surprising
data
point
appearing,
so
this
is
the
this
is
how
most
of
the
test
we
are
performing.
Look
like
next
slide.
Please-
and
this
is
the
example
where
the
algorithm
is
doing
basically
the
same
thing,
but
the
upper
and
lower
bounds
are
not
converging
together.
Very
well.
H
Okay, so here are the links. There is the FD.io open-source project, and we are a subproject called CSIT, and we are using this algorithm to show the behavior — but it is still, let's call it, experimental. We are not running those tests for every scenario we have; we are still using the deterministic tests for most of them. But when we are more confident about this PLRsearch, I believe we can switch.
H
Next slide, please. Okay, so this is the important thing. Aside from our part — implementing small improvements — we are, let's say, stuck from the adoption point of view, because we didn't get, let's say, any reviews. And we know that the syntax and terminology in parts of our draft are not very good; maybe that's the reason people do not want to review it. But anyway, to move forward to receive adoption, we would welcome any ideas.
B
So — just the chairs have looked at it. And one point, Vratko, is: we had a special session on the data analysis last time; we gave each author a chance to, you know, present their data in detail, and asked a lot of questions. There was good feedback from Carsten and me and other people in the room. I'm wondering if you were able to take action on any of that feedback.
B
Thank you very much. All right — so, with getting one reviewer here, I think — and of course I'll take a look at it too. I think some of my comments on the MLRsearch draft are probably applicable to your draft as well, so please try to take advantage of those in the next revision. And I think — well, I think we'll close it there. Thank you very much. Yes.
B
Okay,
so
we,
the
chairs,
realized
we
jumped
right
over
an
important
topic
which
was
item
1a
on
our
on
our
agenda.
It's
the
benchmarking
methodology
for
evpn
and
PBB
evpn.
This
is
just
a
working
group
last
called
determination.
That's
what
it
sort
of
says
on
our
agenda
and
that's
what
we're
going
to
do
right
now,
since
Sarah
kind
of
bound
a
draft
that
I'm
working
on
in
with
this
topic.
It
ended
up
now
that
that
she's
got
to
make
the
call.
So
this
is
why
we
have
co-chairs
somebody
can
be
working.
C
— took this item very seriously, and I'm happy to report that the authors have had that discussion, and that they mutually reached the consensus that the works, while related, can happen in parallel to one another — particularly where one draft is about to go through what I'm going to call consensus for working group last call on the list. So that's the formal decision. I really appreciate that you guys sat down and spent the time, during your day jobs, to do this — particularly with the time zone changes for you guys as well.
C
So
thank
you
for
doing
that,
I
think
as
a
working
group.
That
makes
sense
if
that
doesn't
make
sense
to
anybody
in
the
room.
Please
let
me
know,
but
basically
they're
covering
two
different
pieces
where,
as
a
tester,
if
I
was
going
to
test,
this
I
would
ultimately
use
both
arcs
these
or
drafts
when
they're.
In
the
meantime,
it's
just
ones
ahead
of
the
other
and
we
don't
see
a
reason
to
slow
one
down
for
the
other.
C
B
Thank you both — thank you both for sitting in — and Jim Uttaro, who did a lot of the email discussion and sort of agreed to set up the time; we did this at 7:00 a.m., but it was a really good discussion. Thanks again. All right, so our next topic is the Multiple Loss Ratio search — MLRsearch — and that's going to be presented by Maciek, I'm sure. And so, Maciek — we heard you very clearly before, so let's make sure we can still do that.
G
Sorry — I got thrown from the program, from the room, because of a reconnect. I'm presenting from Paris, so if at any point my audio is not coming through — from my end everything seems fine — but if it's choppy, just let me know and I will scroll down and disable the video. And so this is about the MLRsearch — the Multiple Loss Ratio search — from Vratko and myself, who are the authors. Okay.
G
And
so
we
presented
the
draft
on
the
last
meeting
in
Prague,
and
here
are
the
main
changes
from
version
0
1
to
2
0
to
the
I.
Think
the
main
ones
related
to
the
ml
are
search
applicability
and
usability
and
I'm
going
to
cover
some
of
these
on
the
flowing
slides
and
we
also
greatly
updated
the
terminology
section
just
in
case.
G
Our
search
is
it's
about
discovering
performance
at
like
a
throughput
rates
in
a
single
search,
but
instead
of
looking
and
searching
for
a
single
ratio,
a
single
rate
it,
its
aim
is
to
find
multiple
rates
and-
and
they
are
distinguished
with
you-
know-
the
the
configured
packets
loss
ratio
and
we
were
using
it
apparently
to
today
from
the
implementation
perspective
and
go.
The
code
that
is
running
in
energy
or
system
is
also
available
as
a
as
a
pipe
I
library
to
find
two
rates.
G
The
the
number
of
grades,
which
is
a
zero
packet
loss
as
per
RFC.
Two
five,
four,
four
and
and
the
other
weight
is
a
partial
drop
right
with,
where
Pilar
that
the
packet
loss
ratio
is
higher
than
zero.
So
it's
a
non
zero
and
and
we're
using
it
really
for
nav
benchmarking
to
see
how
close
the
two
are
to
to
each
other
and
for
you
know,
a
below,
add
your
blank
box
and
s
UT
and
UT
evaluation
purposes
if
they
are
close
to
each
other.
G
So
why?
Why
is
this
useful
and
applicable?
Well,
it
reduces
the
amount
of
searches
you
do
I
or
one
does,
and
the
aim
is
also
to
to
reduce
the
overall
execution
time
required
to
find
those
rates,
and
we
then
I
propose
another
way
to
reduce
the
execution
time
further
by
using
shorter
trial
durations
to
start
with,
for
intermediate
steps
and
and
using
the
finally
specified
final
duration
and
just
for
the
final
four,
the
final
measurements
and
from
that
perspective,
I,
would
believe
the
canal.
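[Editor's note: the two time-saving ideas described — one search pass producing both rates, and short intermediate trials with only the final measurement at full duration — can be sketched as below. This is an editor's simplified illustration with assumed names and plain bisection, not the CSIT implementation.]

```python
def mlr_search(trial, targets=(0.0, 0.005), low=0.0, high=1.0,
               short_dur=1.0, final_dur=30.0, precision=0.01):
    """One search pass returning a conforming rate per target loss ratio.

    trial(rate, duration) -> observed loss ratio at that offered rate.
    Intermediate bisection steps use the short trial duration; only the
    confirming measurement per target uses the full final duration.
    Target 0.0 plays the role of the NDR, a nonzero target the PDR.
    """
    results = {}
    for plr in sorted(targets, reverse=True):    # find the PDR first
        lo, hi = low, high
        while hi - lo > precision:
            mid = (lo + hi) / 2.0
            if trial(mid, short_dur) <= plr:
                lo = mid                         # conforming: search higher
            else:
                hi = mid                         # too lossy: search lower
        # confirm the candidate at the specified final trial duration
        results[plr] = lo if trial(lo, final_dur) <= plr else None
        high = lo or high                        # the NDR cannot exceed the PDR
    return results
```

Reusing the PDR result as the upper bound for the NDR search is one way a single pass saves work compared to two independent searches.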
G
So,
in
terms
of
sample
implementations,
this
is
the
same
slide.
We
showed
last
time,
that's
where
the
the
code
lives.
A
current
code
lives
in
the
Linux
Foundation
networking
FDA
insisted
project.
There
is
also
a
pipeline
package
as
as
listed,
and
it
is
being
tried
also
by
the
ENFP
bench,
the
open
V,
also
in
its
foundation,
networking
open
a
project
or
sub
project
and
for
for
exactly
the
first
purpose.
Finding
the
Indians
and
PBR
writes
a
Finals
length.
B
Several hands — okay. So: any questions on the updates, things like that?
B
One
one
thing
I
wanted
to
talk
about:
Maciek
is
the
this
Garrett,
where
you're,
where
you're
tracking
the
the
changes,
I
guess
I
kind
of
I
kind
of
forgot
about
that
from
your
last
presentation,
so
I
sent
comments
to
the
list,
but
I
think
we've
got
to
keep
you
know.
Every
working
group
is
trying
to
strike
a
balance
between
the
you
know,
the
developer
tools
and
the
mailing
lists
of
a
volt,
we're
kind
of
stuck
old.
B
So
for
now,
but
but
look
that's
I
mean
that's
something
that
we
can
sort
of
begin
to
mix
here
a
bit.
Maybe
we
can
I
mean
I
did
first
off
I
didn't
get
an
email
from
Garret
to
say
that
you
know
that
you
would
address
any
comments
there
or
anything
like
that.
I'm,
not
sure
whether
I've
got
to
do
something
to
get
my
my
credentials
in
the
cset.
Maybe
maybe
this
is
part
of
the
cset
Garrett.
G
— my time — so no, I don't think it's required; it's just the link. So indeed we did not see your comments. I sent updates, with the Gerrit link, to the list — I think after the Prague IETF — and also once we got Al's comments, before we published the updated version. But I think —
J
— they talk about proper ones in the continuous-integration infrastructure. So we restructured: so many factors in the additional-considerations part and the testing — but we think we need to provide more detail in this draft. So we added new chapters: one is the container networking classification, and resource considerations for the containerized infrastructure. So, among these chapters, we categorized the container networking technologies, and then we also tried to describe how resource utilization differs between VMs —
J
We
also
classified
the
device
pasture
and
the
p-switch
model,
and
then
we
also
mapped
current
in
there
talking
a
containerized
networking
technology
to
this
this
classification.
So
we
add
the
ten
reference
with
a
death
model,
so
yeah
and
then
in
the
resource
consideration.
Part
of
we
just
risked
up
to
three
things
for
for
the
continuance
truck
sure.
So
so,
when
you
think
about
that
this
factor
we
just
we
just
think
about
how
we
want
what
vector
should
attack
to
affect
the
containerized
infrastructure.
J
Actually, in a VM we can use hugepages, but basically we have to give the resource as one gigabyte for the hugepages; in the container infrastructure we can adjust it more granularly. So we thought that maybe it will affect the container networking performance. And NUMA — we also considered NUMA in the resource considerations.
J
Actually
duma
is
a
non-deterministic
and
the
when,
when
he,
when
installation
in
insulate
of
the
syrian
app
the
the
there
may
be
resources
non,
theta,
mystic
and
the
doom
are
so
known
agonistic.
So
maybe
we
cannot
figure
out
the
exact
exact,
the
pokemons
relationship
between
duma
and
the
networking,
but
we
we
will
figure
out
and
then
we
describe
about
iterator
and
RX
TX,
multi,
cue
or
so
actually
in
the
when
we
test
the
networking
pokemons
in
the
game
in
prob,
RX
TX
multi
cure.
J
So
we
consider
the
but
in
the
container
now
maybe
is
not
supported,
but
maybe
it
also
one
of
the
factor
to
affect
the
the
container
at
talking
performance
and
in
the
benchmarking
scenario,
for
the
container
infrastructure.
So
basically,
according
to
the
GTS
design
document,
they
their
to
scenario,
the
one
is
the
container
to
container
and
another
one
is
pot
pot.
But
in
this
raft
we
are,
we
requested
the
more
to
scenario.
J
One
is
the
BMP
another
one
is
the
BMP,
because
when
we
implant,
when
we
implemented
the
container
infrastructure
in
the
real
environment,
we
were
so
maybe
we
can
deploy
the
pod
on
the
VM
so
so
considered
that
scenario.
So
in
this
trip
in
the
current
draft,
we
added
a
two
additional
test
scenario
and
LV
also
wrote
the
figure
and
yeah
and
then
the
last
time
when
he
had
a
meeting
in
the
last
meeting.
So
many
people
also
comment
about
our
draft
and
then
yeah.
The
one
of
the
comment
is,
we
should
consider
the
operational
consideration.
J
So
maybe
we
didn't,
we
couldn't
add
the
concrete
information
about
the
operational
consideration,
but
when
you
think
about
the
real
service
in
the
in
the
container
as
infrastructure,
maybe
just
we
think
about
the
only
one
vnf
way
which
is
based
on
this
I
mean
made
by
this
container,
is
not
enough.
So,
like
the
district,
the
substance
is
wrapped.
We
are
so
considered
the
how
we,
how
we
measure
the
performance
in
the
container
is
infrastructure.
So
we
also
considered
that
part
two
and
then
the
next
step.
J
Maybe
we
will
update
about
our
draft
or
game
because
we
have
some
typo,
so
we
will
update
soon
and
then
any
feedback
welcome,
and
then
we
are
so
actually.
We
also
try
to
add
the
more
currently
new
technology
in
added
to
into
the
district,
and
then
we
also
think
about
the
hackathon
in
the
ITF,
because.
J
Because right now we just classify the networking performance problems and the networking models, and then think about the factors, but we also want to prove the concept, and we just want to discuss that with the BMWG members. So we don't have a specific plan about it, but if you have interest in the hackathon, please ask me and ask the other members. Thank you.
B
D
B
I was just going to refer you to the testing we did, not in the container world but in the hypervisor/VM world, as part of OPNFV. Yes, the VSPERF project, and all of that work has been adopted in the latest version of the ETSI NFV TST 009 document, which I saw in your references, yeah.
D
B
No, there's a new version of that coming out in June. So if you look way in the back, I think it's section 12.4, you'll see some procedures for testing in that area, and then you'll see some sample results on a wiki page, and additional discussion in the appendices about that kind of testing. So I encourage you to take a look at that, and maybe reference some of it and examine how that works in the container world. I'd really love to see that as well. Okay.
B
G
But this draft is approaching the problem from more of a technology-use perspective, whereas in the other work we're looking at it in more of an abstracted, sort of, way, as I will present a bit later. But I think the two pieces of work are related, and they are addressing a very important problem space. That's all, thanks.
J
F
F
So, the draft was mainly updated because we needed to have, initially, some clear considerations regarding the benchmarking procedures that are defined in section 4.2. Basically, they define the generic manner of, and considerations for, benchmarking procedures in an automated way, which we plan to address in the draft. We also saw that we needed more comparison factors in our VNF benchmarking descriptor definition; we saw, based on our experiments, that it was not yet fully functional.
F
F
So the issues we are trying to address, mainly, in the draft were actually to refine the terminology to focus only on what is being addressed in the draft, so we removed mostly the generic NFV terminology from the draft. We also defined that the generic benchmarking procedures would reflect the overall methodology that we were addressing in the main section of the draft, especially when we are running the tests with our open source reference implementations.
F
We had issues ourselves, among the members working on the draft, in that we saw divergences in the probers and listeners, I mean, in how you're executing the parameters of the tests. So we wanted to have better definitions of these test parameters, so we can have better comparison tests across implementations. Next slide, please. So, based on that, the major technical changes that we made were that we filtered down to only the important concepts in the terminology; we define them in section 4.2.
F
The generic phases that we think address a generic benchmarking process are the procedures: the deployment of the scenario, the configuration of the whole benchmarking deployment scenario, the execution of the tests themselves, and how the tests are reported. And the main important update in this draft was the refinement of the VNF benchmarking descriptor structure in section 6.1, where we define all the fields and definitions of what is inside each one of these sections in the VNF-BD definition. The header is the information about the target and how the experiments are defined.
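To make the descriptor structure just described a bit more concrete, here is an illustrative sketch as a plain Python dictionary. The field names (header, target, experiments, probers/listeners under procedures) loosely follow the structure discussed in the talk; they are assumptions for illustration, not the draft's actual YANG schema.

```python
# Hypothetical VNF-BD instance: field names are illustrative only and may
# differ from the VNF benchmarking descriptor model defined in the draft.
vnf_bd = {
    "header": {"id": "vnf-bd-001", "name": "gateway-throughput", "version": "0.1"},
    "target": {"vnf": "example-vnf", "version": "1.0"},   # VNF under test
    "experiments": {"trials": 3, "tests": 2, "method": "throughput"},
    "procedures": {
        "probers": [   # traffic/stimulus generators
            {"id": "prober-1", "tool": "example-pktgen",
             "parameters": {"rate_pps": 100_000, "duration_s": 60}},
        ],
        "listeners": [  # measurement collectors
            {"id": "listener-1", "tool": "example-cpu-mon",
             "parameters": {"interval_s": 1}},
        ],
    },
}

def required_fields_present(bd: dict) -> bool:
    """Minimal structural check before handing the descriptor to automation."""
    return all(k in bd for k in ("header", "target", "experiments", "procedures"))

print(required_fields_present(vnf_bd))  # True
```

A machine-readable descriptor like this is what lets the deploy/configure/execute/report phases be driven automatically and the results compared across reference implementations.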
F
F
Here we are thinking about referring to the actual existing YANG references; we're still trying to see whether that is feasible or not for this VNF-BD model, or whether to make it as generic as possible. And we also define the procedures, which mainly define how the VNF under test is benchmarked, and the whole set of tools and the parameters for these tools to operate and generate the test results.
F
F
F
So in this case, we are mostly concerned about a generic representation, but one useful enough for an orchestration solution to have it as a source of data for analytics platforms. We are also, in parallel with the writing of the draft, working on the comparison tests with our open source reference implementations.
F
The next version should fully demonstrate the importance of the draft in the whole definition of the VNF benchmarking descriptor and the VNF performance profile: how they can be compared with each other in different instances, and especially across different reference implementations. So this is all open source as well, and in this case we are also planning to showcase the validity and utility of the YANG models that we are developing for the VNF benchmarking descriptor and the performance profile.
F
F
B
Very good, thanks Raphael. So, any comments on the topics that Raphael covered very efficiently there? There's obviously a lot of work behind this, and I was interested in trying to get a side meeting going where we could see a demonstration of everything we have seen contributed by Raphael and his team, but that doesn't work out since you're remote, Raphael. So.
D
B
Yeah, the side meetings are all sort of not covered, but that's okay. So we'll try to get that done at a future meeting, or some other opportunity where we can see the demo, maybe an interim meeting, something like that. Okay, so, any volunteers to review the draft and/or look into the code repo? Yes, one in the back there. Okay, please come up to the mic and give us your name.
B
Thank you. Okay, so we've got a volunteer for the review. Anything else? I'm glad you recognized the overlap with some of the other efforts, Raphael, especially the YANG model for the tester control that Vladimir has proposed. He's got some text on that, and that's something to look into too, as a kind of cross review, for sure. I see.
D
B
B
B
G
G
So here we also received comments, which we addressed, and the major updates are related to adding specific NFV service data-plane benchmarking metrics, which I will cover on one of the next slides. And also, on the request from NFVbench, we added the references, and also a placeholder for the results, as he is applying the methodology to the NFVbench tests.
G
D
G
They are run in the presence of other applications, and, you know, we're trying to address the noisy neighbor problem. So in the context of running many network applications, with many of them being arranged in some sort of virtual topology, you know, service chains or service topologies, the presence of those topologies also further complicates the problem. The idea is to come up with a fairly universal, generic way to benchmark those NFV services. Next slide.
G
G
But there are also two other indirect factors. One is the virtualization technologies used to create those virtual topologies, like, you know, virtio or memif, whether in container space or in a VM, and then also the way that the workload application is connected to the physical network, which we are calling here the host data-plane networking. So that's the solution proposed in the draft, and it's pretty much the same slide as I presented last time.
G
So a very quick recap: it is to separate the three aspects. One is the NF data-plane and the service packet processing part, so the network application layer itself; then the shared virtualization infrastructure; and then the shared host infrastructure. And from the resource allocation perspective, the focus is addressing the noisy neighbor problem. For starters, the approach is, you know, allocating the resources as per current best practice, so that's CPU pinning and NUMA affinity; as the editor I should add that bullet point, it's covered in the draft, and going forward.
G
So the way we would define the service, we abstract it via a topology, which is how the network application functions are connected or interconnected, whether through the host data plane or directly; a configuration, so, how they are configured; and the final one is, you know, the actual packet flow, the packet path through the whole service definition, or service chain.
G
And so those are the defined service abstractions so far, and the next slide shows, you know, the defined scenarios, which basically apply the first abstraction, the topology, to the scenario. So in the draft we defined three scenarios. One of them is based on network applications running in VMs, so we refer to them as VNFs, with a VNF service chain through the virtual switch, as shown. The other two are based on containers, which we refer to as CNFs, container network functions.
G
So this is the network application in a container, and we have a snake-forwarding service chain, which is chained through the virtual switch, and the last one we refer to as a CNF service pipeline, with the pipeline forwarding over direct horizontal interfaces. Next slide. In terms of the benchmarking metrics, this is the content we added in version 01.
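The difference between the chain and pipeline wirings above can be captured in a toy model. This is our own illustrative sketch, not the draft's formal definition: it simply counts how many times a packet crosses the virtual switch for each wiring, which is the main structural reason the two scenarios benchmark differently.

```python
def vswitch_crossings(num_nfs: int, mode: str) -> int:
    """Count virtual-switch crossings for one packet traversing `num_nfs`
    functions, under the two wiring models discussed:
    - "chain": every hop (edge in, each inter-NF hop, edge out) goes
      through the virtual switch
    - "pipeline": NFs are wired directly together (memif-style horizontal
      interfaces), so only the two edge hops touch the host data plane
    Illustrative model only.
    """
    if num_nfs < 1:
        raise ValueError("need at least one network function")
    if mode == "chain":
        return num_nfs + 1      # edge in + (num_nfs - 1) inter-NF + edge out
    if mode == "pipeline":
        return 2                # edge in and edge out only
    raise ValueError(f"unknown mode: {mode}")

print(vswitch_crossings(4, "chain"), vswitch_crossings(4, "pipeline"))  # 5 2
```

So as the service grows, chain-mode load on the shared virtual switch grows linearly, while pipeline mode keeps it constant; that shared-resource pressure is exactly the noisy-neighbor dimension the methodology tries to expose.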
G
We want to measure packet throughput in packets per second, and then calculate the bandwidth throughput based on, you know, the packet size and such, in bits per second. The applicable types of throughput rates that we have listed include NDR, so zero packet loss; PDR, so partial packet loss; and MRR, which stands for maximum receive rate. And we added the definitions of what each of those exactly is in the draft.
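The packets-per-second to bits-per-second calculation mentioned above is simple but worth pinning down, since it determines whether a reported rate is line rate. A minimal sketch (the NDR/PDR/MRR search procedures themselves are in the draft; only the arithmetic is shown here, and the function name is ours): the 20-byte layer-1 overhead is the standard Ethernet preamble, SFD, and inter-frame gap.

```python
ETH_L1_OVERHEAD_BYTES = 20  # preamble (7) + SFD (1) + inter-frame gap (12)

def pps_to_bps(pps: float, frame_size_bytes: int,
               include_l1_overhead: bool = True) -> float:
    """Convert a measured packet rate to bandwidth in bits per second.
    With L1 overhead included, this is the on-the-wire (line-rate) bandwidth."""
    size = frame_size_bytes + (ETH_L1_OVERHEAD_BYTES if include_l1_overhead else 0)
    return pps * size * 8

# Sanity check: 64-byte frames at ~14.88 Mpps saturate 10GbE on the wire.
print(pps_to_bps(14_880_952, 64))  # ~1e10 bits per second
```

Reporting both pps and wire bps, as the draft does, avoids the common confusion where payload-only bps appears to fall short of line rate for small frames.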
G
G
So these are the tests that we run in the FD.io CSIT system, and there is also a CNF testbed project within the CNCF where they run similar tests, and there is a reference later on to that project. So this is basically running multiples of either VNF service chains, CNF service chains, or CNF service pipelines, and you've got the versions of the software used here; in this case it's VPP 19.04, which is what we run in the FD.io system. Next slide.
G
G
G
So, similar to previous comments that I made on this work, I welcome more reviews from BMWG. We have received comments from Alec from the OPNFV NFVbench project, but that's pretty much it. We had no discussion at the Prague IETF, but we'll welcome more comments in terms of the applicability of this work, and specific steps to make this a BMWG draft, the way the current things are.
B
Thank you, Maciek. So there are a lot of obvious reviewers here, the folks that mentioned some overlap and that have observed some overlap with this work, so I hope some of those folks will volunteer to be reviewers here. One technical comment before we do that, though: I noticed your data-plane metrics for, you know, latency and so forth. I think there are already good metrics in TST 009 that cover most of that space. You know, they have slightly different names, but we've got good metric definitions.
B
There, and I would hate to see that sort of get renamed in this work. You know, we can do the cross-referencing: look to BMWG terminology first, and then the ETSI NFV terminology next, before we go inventing something new. And I know the stuff has been used in practice for a long time, but let's try to reuse the standard definitions where we can, of course, yeah.
G
B
B
L
Thank you, yeah, good morning everyone. This benchmarking draft is based on the parent, you know, the EVPN RFC, RFC 7432, and the recent draft adopted by the IETF BESS working group, EVPN IGMP/MLD proxy, which is widely deployed in data center and MPLS service provider networks. So we are defining certain parameters to benchmark it, you know, because currently there is no way we can benchmark these services, which are already implemented. So we are defining certain parameters to measure this EVPN IGMP proxy.
L
L
So these are the parameters we are defining to benchmark this service. We are defining the IGMP join latency: how fast the DUT will join, that is, once the IGMP membership report comes to the DUT, it sends the type-6 routes and is able to get the traffic. And then for the IGMP leave there are two types: either the host sends an explicit leave, or there is a timeout if the host is not responding. So, how fast, you know, it is going to leave.
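The join-latency measurement described above boils down to two timestamps: when the membership report is sent toward the DUT, and when the first multicast frame arrives back on the receiver port. A hedged sketch of that idea; `send_report` and `wait_first_frame` are hypothetical stand-ins for a real tester's API, not any actual tool.

```python
import time

def measure_join_latency(send_report, wait_first_frame,
                         timeout_s: float = 5.0):
    """Return IGMP join latency in seconds, or None if no multicast traffic
    arrived within the timeout. Callables are tester-API placeholders:
    - send_report(): emit the IGMP membership report toward the DUT
    - wait_first_frame(timeout_s): block until the first multicast frame
      is seen on the listening port; return True on arrival, False on timeout
    """
    t_report = time.monotonic()
    send_report()
    if wait_first_frame(timeout_s):
        return time.monotonic() - t_report
    return None

# Usage with stub callables simulating an instantly-converging DUT:
latency = measure_join_latency(lambda: None, lambda t: True)
print(latency is not None and latency < 1.0)  # True
```

Leave latency is the mirror image: timestamp the leave (or the last unrefreshed report, for the timeout case) against the last multicast frame observed.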
L
You know, it has to stop the traffic, the unnecessary flooding to the segment. So those are the two, you know: clearing the state and the leave latency, for both the explicit IGMP leave and the timeout. That is for single homing. In multihoming, it is different from the single-homing case: in the EVPN IGMP proxy draft, where you have active-active scenarios, there are a couple of routes defined, the join sync as well as the leave sync.
L
L
So we have a scale test wherein we emulate a lot of hosts in the network and send a lot of IGMP membership reports, and we see how fast the box is going to handle this kind of surge in the traffic, as well as how it starts forwarding it, during scaled convergence testing. Then there is the traditional high availability test, run for a period of 24 hours, where we check that we are not getting any cores or any memory leaks: the soak and the high availability.
L
Sorry, high availability is where there is a routing engine failure, so, how fast it's going to take over the load. And we are defining the soak test, which is run over a period of time, 24 hours; the expectation is that the system should keep performing: there should not be any memory leak, there should not be any core generated, and the system should work as expected. So these are the parameters we are defining for benchmarking these EVPN IGMP proxy services. So, any questions on this?
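The soak-test pass criterion above (no memory leak over 24 hours) is usually automated by sampling the process's resident memory periodically and flagging a sustained upward trend. A sketch of that check, with an illustrative threshold of our choosing; a real harness would also verify that no core files appeared.

```python
def leak_suspected(rss_samples_mb, max_growth_mb_per_hour=1.0,
                   interval_s=60.0):
    """Least-squares slope over RSS samples taken every `interval_s` seconds;
    flag a leak if memory grows faster than the allowed rate."""
    n = len(rss_samples_mb)
    if n < 2:
        return False
    xs = [i * interval_s for i in range(n)]
    mean_x = sum(xs) / n
    mean_y = sum(rss_samples_mb) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, rss_samples_mb))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope_mb_per_s = num / den
    return slope_mb_per_s * 3600 > max_growth_mb_per_hour

flat = [100.0, 100.1, 99.9, 100.0] * 10          # noisy but stable
rising = [100.0 + 0.5 * i for i in range(40)]    # +0.5 MB every minute
print(leak_suspected(flat), leak_suspected(rising))  # False True
```

Fitting a slope rather than comparing first and last samples keeps normal allocation noise from producing false failures over a long run.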
L
So this is just an expansion of the parameters we already mentioned, like, you know, the learning rate, because that is very important: different boxes' performance is different, and in a data center or in a service provider network you have a lot of multicast channels.
L
I mean, services which are running. So the learning rate is one of the important parameters to define for a box: how fast it is going to, you know, learn the membership report, send the route, join the tree, and send the traffic to the segment. The leave is also important, because that is one of the factors wherein it should not flood the.
L
L
I mean, for the IGMP messages, once a host is not refreshing them, how fast is it going to time out and stop the multicast traffic to the segment? The system, I mean, even in the timeout case, should not keep sending the traffic to the segment; that would be a waste of bandwidth. And the leave latency is wherein, when it gets an IGMP report with leave messages, it should stop, even at a very large rate.
L
It should stop immediately, but box performance varies: on different boxes there will be, you know, sometimes a processing lag, in that it won't process it immediately; there is a time period where it again keeps sending to that local segment. So we measure that: how fast the box behaves once it gets the leave messages.
L
So this is a typical scenario with the routes, because EVPN has new types of routes defined in the draft: types six, seven, and eight. The type-6 route is basically the IGMP SMET route, and seven and eight are the join sync and the leave sync in multihoming, because multihoming is very important: if one of the non-DF routers gets the join messages, then it has to inform the DF, and it has to send the type-6 routes. So how?
L
Okay, yeah, this is an expansion of what I've already mentioned: we are measuring the failures. The failure rate is important, so as to find out how fast the system is going to mitigate these failures. So we are doing that with either the designated forwarder down, I mean, doing a local link failure, core failures, then the routing engine, suppose the routing engine process itself fails, and sometimes the DF, the designated forwarder, itself fails.
L
So these are all the types of, you know, failures we do on the system, and we see how fast it is recovering and mitigating the failures. And the scale test is where we send a large number: we first determine the number of VLANs we will be using and the number of groups, and based on a determined scale of N, we see how the system is going to behave. There are a couple of scale convergence tests.
L
First is, you know, the normal scenario: how fast it is going to, you know, set up the traffic; and then, during any failure, how fast it is going to recover in a scaled environment. Every box, depending on the CPU and the memory, behaves differently. So this is where we define a parameter and measure the system performance.
Yep, so this is EVPN VPWS. This is defined in RFC 8214, which was recently converted into an RFC; it was running as a draft until recently, I think around six or seven months back it moved to RFC. Now, traditional VPWS services have been there for a pretty long time in service provider networks.
The problem with VPWS services is that they are point-to-point, and you don't have multihoming features in VPWS. So for EVPN VPWS services, one of the advantages is that we can have active-active forwarding to the same endpoint: even though it is a point-to-point service, you can have more than one router, you know, load balancing the traffic to the same CE. And you can work in two modes, single-active as well as active-active; either you can work as active/backup.
L
You don't need to lose a link, so you can work as, you know, primary/backup, or, if it is active-active, you can utilize both the links. So this is our test scenario, wherein the DUT is being used as one of the multihoming PEs and is connected to a CE, and we have a router tester which is connected to the CE as well as to the single-homed PE.
L
So these are the parameters to measure these services. These services are widely deployed, so how are you going to rate the system? We are defining certain parameters based on, you know, our exhaustive testing, and we found out, okay, hey, these are the list of parameters, based on this particular device under test or this particular, you know, router; and depending on the memory and depending on the CPU, the performance will be different. So these are the parameters we defined.
L
L
L
L
B
My suggestion: why don't you circulate this draft on, I guess the BESS mailing list is best, and try to get some EVPN expertise to review it there, and have them place their comments on our list if they're willing, or at least exchange some commentary there. Has anybody here read either of these two drafts? Two people? Okay, that's good. So.
B
C
We talked a lot about having this reviewed outside of this working group, so putting it on the working group list is a good start. If you don't get the traction that you're looking for, make sure you're signing up, and potentially ask the chairs if you can get airtime at the next meeting, so you can drum up that support, but we're going to need other folks to read this, I assume.
L
I'll, you know, circulate it on the BESS list. I am also part of the BESS working group, so I am subscribed to that, because I'm also an active participant there, and definitely I'll talk to them. I mean, perhaps the first step, as you said, is that I will first send it to the mailing list, and then, depending on the traction, if the support is not there, I'll check whether I can get some airtime, like, you know. Yeah.
B
M
So, okay, so yeah, the idea of this draft is basically to start digging into the implications of 5G for the transport networks. So, yeah, basically the motivation is that now several operators across the world, in the Americas, Asia, and Europe, are starting to deploy 5G access. So 5G is for sure impacting multiple technology areas: the radio for sure, the mobile packet core, but also the transport network.
M
So the idea, the motivation of this draft, is to start pushing on having directions and guidelines for benchmarking the transport networks, in such a way that it could assist us, the operators, in the deployment of 5G and in the support of the services that are expected to be offered to the customers. There in the graph you can see the different kinds of deployments.
M
This is more or less well known, so I will pass to the next slide. So the goal of the draft will be basically to overview the available solutions, for sure the available solutions in the benchmarking working group and also outside, so starting by reviewing what is available there, and to identify the gaps that could require further work. The idea is also to provide directions for future work and future technologies.
M
Finally, to provide guidelines on 5G transport networking, and also, yeah, basically defining the next steps. So the areas of the intended analysis that we have identified so far, and probably this could be somehow refined in next versions, are basically these. First the data plane: in the data plane we could identify two different kinds of impacts, on capabilities or on encapsulations; an example of the former would be TSN, for instance, IEEE TSN.
M
Basically, there is different handling of Ethernet frames, and that depends on the hardware. And regarding encapsulations, a clear case could be segment routing over IPv6, SRv6. Another impacted area of analysis could be the control plane, and here we have a clear example with RFC 8456 for SDN. Probably this could be enough, maybe not, so it's something to analyze.
M
Regarding the management plane, an example of a potential topic could be the network slice lifecycle, focused on the transport capabilities, basically, together with the architecture. Another topic that is being developed in the IETF could be deterministic networking, so all the architectures around DetNet. So the proposed next step is, basically, for today, to collect interest from the working group.
M
Whether this is interesting for the working group to progress in this direction or not; and if so, basically to study and find the requirements and characteristics that 5G imposes in each of the areas of interest that we have seen before. I am preparing some progress for the next IETF meeting, to see if it is of interest for all of us, and then to start defining the next steps and issuing a new version of the draft. Basically, so, thank you very much. Thanks.
N
This is Will from Huawei. I briefly read this draft and I think, first of all, 5G service is promising, as promoted here. However, 5G service is a kind of large topic. I see that you mentioned all these, you know, kinds of use cases and several categories of benchmarks, and I think that it will take a lot of effort to fulfill this work. So I would suggest that we limit the scope to, for example, and I don't know whether you want to, say, 5G slicing or 5G data services, something like that.
B
M
What I might mention, basically, would probably be to point out, I mean, what I expect: this is not to cover everything in just one single document, so we would have a number of different documents. This one probably could provide an overview, with the development of specific areas in other specific documents. So maybe this could be a way of proceeding.
C
So, a comment from Sarah as a participant: we're not typically the experts; not every single person in BMWG is an expert on wireless networks, cellular networks, right? So the first question that I had when I saw this was, well, why can't you characterize these same things the way you did for a 4G network? And I'm not proposing you answer that now; I realize that that's absolutely a little bit of a longer answer, but it would be nice if you would consider it. I can ask that on the list and you can answer on the list.
C
E
E
C
B
B
All right, so, Sudhin, thanks very much for your comments. We had a detailed reply on the list, and this is a summary of the changes in the -02 version that follows. That's for the last slide: Al Morton, participant, okay. So these are some of the things that we've added as clarifications; it basically comes down to.
B
You know, a recognition that we have a lot of material that we've drawn from other benchmarking RFCs into this draft, and, you know, we skipped over it in the early versions; now we're trying to make it very clear. So, you know, we're going to be very careful about the connectivity between the CEs and the PE, stuff like that; the diagrams need to show that pretty well, flows and five-tuples and so forth.
B
This is going to be full-mesh testing in some way, and, you know, the tester requirements from RFC 2889, you're going to see those all over the place here, so that's part of the updates. This is figure 2, which we have clarified to be sure that, you know, we're showing how the CE on the right-hand side, CE2, is connected to the test device. We kind of ran out of ASCII space there and had to move the whole thing over. No big deal, right? Okay. So then here's an example of, sort of, the unicast flows, and the frame formats must be specified; all this stuff is specified in 2889 already, but now we've got explicit pointers to everything, so it should be much easier to move between the two documents and not repeat everything here. So I think that's what we've accomplished, basically, out of adding this new section: 5.3 is a detailed procedure, which is a completely new section. And so then, let's discuss and draw out the solution.
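The full-mesh flow setup mentioned above can be enumerated mechanically: every tester port sends to every other port, giving n*(n-1) unidirectional flows. A small sketch of that enumeration; the port names are placeholder CE-facing tester interfaces, not the draft's figure labels.

```python
from itertools import permutations

def full_mesh_flows(ports):
    """All ordered (src, dst) port pairs for an RFC 2889 style full mesh,
    i.e. n*(n-1) unidirectional flows for n tester ports."""
    return list(permutations(ports, 2))

flows = full_mesh_flows(["CE1", "CE2", "CE3"])  # placeholder port names
print(len(flows))  # 6
```

Generating the flow list this way also makes it easy to attach a distinct five-tuple to each (src, dst) pair, as the clarified text requires.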
B
B
We can do that, but then we lose a simple, symmetrical test case; that was my response. And so we're also kind of looking at maybe a new setup, like, figure 2 should include a CE which is single-homed to PE1, with ESI 0. So going back to that picture, oh, there we go: so we'd have another CE up here, which is single-homed.
B
I think that's one of the pictures that Jim drew, so that's, I mean, one of the things we need to sort out and try to accomplish here while we're all sitting around together. You know, there were other interpretations of that; we wanted to be sure that that was really what you wanted, and what Jim had proposed as well. So let's, the three of us, sit down and make some progress on your comments this week.