From YouTube: IETF103-BMWG-20181108-0900
Description
BMWG meeting session at IETF103
2018/11/08 0900
https://datatracker.ietf.org/meeting/103/proceedings/
A
B
Right, I think we'll get going, because we've got a fairly full agenda and we've also got somebody supposedly joining us remotely who didn't want to stay up all night. So we'll get going here. Good morning, everybody. I'm Al Morton, one of the co-chairs of the Benchmarking Methodology Working Group — that's the session you're in this morning. My co-chair Sarah Banks is unable to join us, and I don't see her on remote access yet, but I see our second presenter has joined us via Meetecho.
B
So that's going to work out great, timing-wise. So, if you're new to BMWG and you'd like to join the working group — especially the mailing list — there's a link right here in the first slide, and you can go there and join it quickly. Who's new to BMWG? Raise your hands. Oh, four or five people — that's great!
B
Read some drafts, make comments on the mailing list, and you'll be one of us before you know it. And when you read a draft, you'll find that the fundamental references are usually pointed out for you right in there — RFC 2544 and RFC 2889, those are some of the key ones. Those are the ones we're actually looking at improving in the future; you'll see some of that work today. So welcome to the group, those of you who haven't attended before, and those of you who have, welcome back. All right, so let's get going here.
B
This thing — I can go to full screen; actually, here we go, and I can use the clicker as well. So here's the Note Well. One of our rules is basically that any contribution you make at the microphone — anything you say at the microphone is a contribution — is subject to our IPR disclosure rules. If you have IPR on something that's being presented today, please disclose it. If you have any questions or you want more details, you can talk to me, or you can read any one of the — yes, that's six — BCPs there.
B
So there's plenty of information about our IPR disclosure process. So here's our agenda. The blue sheets are going around — please sign in, especially today, because we have, seemingly, a smallish group starting up. We're not going to do Jabber today; I've offered people other ways to join us, and we've had some trouble with that as well.
B
Here — we'll keep this out in the audience, so you guys can hand it around. In a little while: a nice item on interaction with the ETSI NFV test working group, in terms of a liaison over the last few months, and I'll just quickly mention that they've reached publication on their standard. And then we're going to have a Benchmarking Methodology for Network Security Device Performance presentation from Samaresh, who's joined us — he's going to be making this presentation remotely. Thank you for doing that, Samaresh.
B
So that's what we'll cover. Then we're going to look at a couple of continuing proposals: the updates to the back-to-back frame benchmark. One of our colleagues, Yoshiaki Itou, has been doing some work in this area — he's very prolific in his lab testing — and has shared a few slides with us. He's unable to join us, but I'll share the slides and get your feedback on that. And we have a couple of —
C
B
Okay, okay, that's good — that's good, because we're coming up to you; you have it summarized, very good. So let's see here. Yes, so then we get to the new proposals, which are actually going to be presented by Ole Trøan, who's here in front. Maciek and Vratko are the authors, but neither of them could make it at this hour in their European time zone, so Ole is here to stand in — much appreciated.
B
The last topic is a liaison from ITU-T Study Group 12, which was actually sent to the IPPM working group, but there's overlap in the work that's described there and our work here, so I've decided to spend a few minutes on that — I think it would be useful for us, assuming we have time. And if there's also time, we may have other topics brought up; maybe you have some new proposals, and so forth.
B
D
B
So, thanks for standing in for Warren — I'm sure you appreciate it. Okay, so we've got to the point where folks can go to the microphone and bash the agenda, or simply sit quietly, and the chairman will say the agenda's approved and we'll move on. Thank you — let's just do that. So we're heading into the working group status. Now, we've completed two RFCs — the SDN controller drafts on terminology and methodology — and we've got a whole list of proposals that keep coming; you're going to hear about most of them today.
B
So this is sort of our quick status: new RFCs and lots of proposals. I mentioned that we've also got this liaison relationship going. Although we looked at this draft through a couple of meetings and we didn't prepare any comments — that's okay — now it's been published, and I'm actually using it as a reference in one of my drafts, so I encourage people to take a look at it.
B
There's a link to the document basically in the agenda — the text version of the agenda online — and so I encourage you to look at that. It's essentially RFC 2544 — our throughput and latency and some other benchmarks — tuned up for the new world of network function virtualization. We've learned a lot of things that we had to improve to make that work more reliable, especially about search algorithms, and there we have really documented some new things based on testing and a lot of collaboration between the FD.io CSIT —
B
— you know, the systems integration testing at the FD.io project — that's the VPP, the Vector Packet Processor — and also the OPNFV VSPERF and NFVbench projects. So there's a lot of collaboration here between ETSI NFV and those groups. We have collaboration going on with OPNFV, and soon FD.io CSIT as well. So there's a nice community developing here of folks testing, folks writing specifications, and the feedback loop where we're improving everything based on what we all find.
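The search-algorithm improvements mentioned here refine the classic RFC 2544 throughput search. As a rough illustration only — not the MLRsearch-style algorithm the CSIT and ETSI work actually specifies — a plain binary search for the highest zero-loss offered load looks like this; the `trial` callback is an assumption standing in for whatever traffic generator runs one trial:

```python
def zero_loss_throughput(trial, line_rate_fps, resolution_fps=1000):
    """Binary-search the highest offered load (frames/s) with zero loss.

    `trial(rate)` runs one fixed-duration trial and returns frames lost.
    This is the simple RFC 2544-style search; MLRsearch and friends
    improve on it with shorter trials and multiple loss ratios.
    """
    lo, hi = 0.0, float(line_rate_fps)
    best = 0.0
    while hi - lo > resolution_fps:
        mid = (lo + hi) / 2
        if trial(mid) == 0:   # no loss: the DUT kept up, try higher
            best, lo = mid, mid
        else:                 # loss observed: back off
            hi = mid
    return best
```

With a simulated DUT that drops frames above 7.4 Mfps, the search converges to that rate within the chosen resolution.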
B
So here's our current milestones. We've got one in red, but we've got — is this the one for next-gen firewalls? It's currently in working group adoption, so I encourage you to take a look at that. Actually, I think the adoption call ends today. We've had some good feedback on it on the list, and we'll be looking for more feedback here today in the meeting.
B
Everything else is in, well, precarious shape, because we haven't adopted any of the other ones — except for the EVPN benchmarking draft; that's the third one in the list, and it is adopted. We'll hear about that today, and maybe we can move that along as well. Let's see how the group feels about them.
B
So congratulations to the authors — it's a long list of authors — but we're now effectively done with those and they're published, so very good work. We're done with our charter update, and we have a supplementary web page; you're welcome to take a look at that — there's some advice there on how to join the working group quickly. Our work proposal summary matrix is actually sort of going beyond the bounds of one slide.
B
I haven't even tried to update it with the new proposals that arrived in this context, and what that means is we're going to look hard at the ones that are here and say, you know: is this really still alive? I'm kind of doubting that SFC is still alive. Back-to-back frames — is that there? Next-gen firewalls — that's there. The VNF proposal — we haven't heard from that in a while; we'll see. So I mean there's going to be a little cleaning house on these proposals and the tracking, and I think that's worthwhile to do.
B
...IETF last call, that this is a working group that has a laboratory-only scope. So the kinds of things that we do — that absolutely saturate, you know, 10 GigE or 40 GigE links — are not going to have operational implications or security implications, and once people read those paragraphs in our Security Considerations section, they usually figure out that that's the case. Otherwise, a lot of reviewers pick up our drafts and have no idea what our charter is. So it's been helpful — very helpful.
B
H
D
H
Thank you, everybody — I hope you're all enjoying it; I'd actually much rather be there. So I'm here to present the NetSecOPEN draft standard, present the current status as it stands, and do a little bit on what we're doing at NetSecOPEN. My name is Samaresh; I'm a senior product manager working with Palo Alto Networks, I'm one of the contributors to the NetSecOPEN forum, and I'll be happy to present the progress and take any questions if you have any.
H
All right. So currently the draft status: it's currently a draft standard, and we are on version five — sorry, currently on version four. We have done some extensive reviews — sections one to four have been reviewed — and we will be presenting an update after IETF 103.
H
Obviously, the whole project is aimed at next-generation firewall performance tests. So what we want to make sure is that, first, the security inspection functions on the DUT are turned on. Unlike what was sometimes done earlier — where the testing started with no security inspection turned on — the objective over here is to make sure that the security inspection is turned on on the DUT. Sorry for the typos, but yeah.
H
Next slide, yeah. So the second step is: we have actually done a curation of the CVE list, which is basically the vulnerabilities relevant to the device, and the criterion that we used is basically high severity — that is, CVSS score 7 to 10 — and we've taken the CVE lists from 2010 to 2018. That's really what we consider relevant; anything earlier than that is probably not relevant now. So with all of those, we have a list of about 1200 CVEs that were considered, and we are also using a couple of tools.
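The selection criterion just described — CVSS base score 7 to 10, published 2010 through 2018 — can be expressed as a simple filter. A minimal sketch, assuming each CVE record is a small dict (the field names are illustrative, not the actual NetSecOPEN tooling):

```python
def select_cves(cves, min_score=7.0, years=range(2010, 2019)):
    """Keep only high-severity CVEs from the relevant year range.

    Each entry is a dict like {"id": "CVE-2017-0144", "cvss": 8.1}.
    The publication year is parsed from the CVE identifier itself
    (CVE-<year>-<number>).
    """
    selected = []
    for cve in cves:
        year = int(cve["id"].split("-")[1])
        if cve["cvss"] >= min_score and year in years:
            selected.append(cve["id"])
    return selected
```

Running this over the full vulnerability feed with these thresholds is what yields a candidate list on the order of the ~1200 CVEs mentioned above.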
H
We have 430 CVEs that we are using for proof-of-concept testing right now. Out of those, we have actually split it into two: 400 CVEs are what we call the public list, and that is shared with all the vendors who are participating right now. So we are basically giving it away — it's like an open book: okay, these are all the CVEs that we'll be testing. We want to make sure that all those CVEs are detected by the security device.
H
The thirty of them are kind of a secret list that is kept aside, not shared with the vendors. The idea is that the vendors should not be cheating in the test. So with the public list we give all the details; with the thirty, we want to make sure that the vendors are not trying to work around it by having coverage only for the public list — you know, trying to, let's call it, cheat the system with only those.
H
It's like this: once we run the test for security and make sure that all of those CVEs are detected, the main tests start. And in terms of the testing right now, we are looking at all these tests that you see on the screen. So basically, this is a bunch of throughput tests, and session scale and capacity tests — TCP connections per second and HTTP transactions per second, and things like that. A whole bunch of tests, as you can see on the screen.
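The connection- and transaction-rate benchmarks in that list reduce to counting completed events over a steady-state window. A hedged sketch of just the arithmetic (the draft itself defines the exact ramp-up and sustain phases around this):

```python
def rate_per_second(count_start, count_end, t_start, t_end):
    """Average event rate from two counter snapshots in steady state.

    Works the same whether the counter is TCP connections established
    (connections/s) or HTTP transactions completed (transactions/s).
    """
    if t_end <= t_start:
        raise ValueError("measurement window must have positive length")
    return (count_end - count_start) / (t_end - t_start)
```

For example, 300,000 transactions completed over a 30-second steady-state window is 10,000 transactions per second.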
H
All right, so those are all the tests that will be run. And what are we actually trying to achieve with this? The first thing is: this is proof-of-concept testing, which means the test-tool vendors have actually created certain profiles for the NetSecOPEN drafts to be able to use them, and we want to make sure the tools are actually able to do that — which means the tools are kind of ready for the testing.
H
We also want to make sure that logistically everything is set up — the test tools, passwords — so everybody is able to run through the test, and make sure that practically there are no issues in running the test. So in this whole exercise there are a few players. First of all, there are three labs actually participating — EANTC and UNH-IOL among them — and the three labs are mainly using two different test tools, mainly Ixia and Spirent.
H
We want to make sure that we review them and make some adjustments if necessary, and then we should have an updated draft around early December — we will submit the updated draft to the benchmarking working group in early December, once it's ready. In terms of the results: since this is just sort of proof-of-concept testing — it's not like a public test — we do not want to actually publish these results.
H
It's kind of confidential, and that's why we're not publishing the results as such — it was requested by the test participants. And then there are certain differences between the test tools that have been identified — we have Ixia and Spirent here, and they work a little bit differently. So we have looked at all of those, there are certain differences that were found, and we're looking at making sure that the overall objective is still met, regardless of the differences.
H
I
C
B
Anybody else? Any comments on the draft? It's a pretty big draft, actually... Okay, well, I'll be considering this topic with my co-chair, of course. Sarah hasn't joined us yet, but hopefully she'll join us during the session, and then we'll consider the results of the call for adoption and let the working group know. Samaresh, a question I had, since no one's rushing to the microphone here: the proof-of-concept testing —
B
I
Sorry, your question was very hard for me — I couldn't quite hear it... All right, yeah. So yes, we will be reviewing all the lists, and based on any issues found we will update the draft, and we will provide you with the changes that were made between the previous draft and the new draft — and, to your request, the anonymized feedback.
B
H
Before that, the previous comments: so, the topology expansion — there were comments from the last IETF — then test-case details, details of the environment, and inclusion of EVPN VPWS benchmarking. Then we had a discussion with different test teams, and because we cannot compare EVPN and VPWS — it's not an apples-to-apples comparison — we floated another draft, because it was a need coming from the community.
H
C
H
That's local as well as remote, because in EVPN, you know, the MACs which are learned locally will be advertised via BGP, so they're different. For the learning, we have MAC learning in the local and the remote, then MAC flush, then MAC aging, then high availability, then ARP scaling, then scale — scale with convergence: how fast the convergence is — and, you know, avoiding the flood in the network, because the flood is very dangerous; it's going to choke the bandwidth.
H
So these are all the — and then the convergence: it's a common test, the convergence test. And the high availability — it's just a failover test: the ideal case is zero packet loss, but there will be some packet loss when you do the routing-engine failover. Then the soak test, which will be run over a period of time — either 24 to 48 hours; normally it will run for 48 hours — so that no crashes, nothing, should be seen. Yeah — questions?
K
H
This is, you know, as a service: it will be testing both the control plane and the data plane. The data plane is, as mentioned, the RFC 2889-style data-plane learning rate, as well as the control plane — the BGP advertisements; that is where the MAC learning through the remote comes in. So MAC learning is the first parameter, which, you know, checks first the local learning as well as the remote learning — both will be covered — because BGP takes the serialization delay as it advertises the Type 2 routes to the DUT.
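MAC learning rate as a benchmark, in the RFC 2889 spirit the speaker alludes to, is essentially addresses successfully installed divided by the time taken to learn them. A minimal sketch of that calculation, assuming flooded frames indicate source MACs that were not yet learned (a simplification of the full 2889 procedure):

```python
def mac_learning_rate(offered_macs, flooded_frames, learn_time_s):
    """Learning rate in addresses/second, RFC 2889-style (simplified).

    A frame that is flooded instead of forwarded only on the learned
    port means its source MAC was not yet installed, so those are
    excluded from the learned count.
    """
    learned = offered_macs - flooded_frames
    if learned < 0 or learn_time_s <= 0:
        raise ValueError("inconsistent measurement")
    return learned / learn_time_s
```

So 10,000 offered MACs with 200 flood events over a 2-second learning phase gives 4,900 addresses per second; for EVPN the same arithmetic applies to remote MACs arriving as BGP Type 2 routes.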
H
K
H
B
C
B
J
B
But if there are any additional comments folks would like to make now, we can do that. Otherwise, you know, I'd invite some of the new folks who are interested in this technology to read the draft and provide comments during working group last call. In BMWG, sometimes we have several working group last calls to kind of stir up the comments — that's happened in the past.
H
K
H
Type 5 is not — the Type 5 route, because it's a separate draft; the Type 5 route by itself is not part of the EVPN RFC, so that is not added. But MAC plus IP — that is the ARP — and MAC plus IPv6 for IPv6: that is covered, because it's one of the parameters, and the scaling of that is covered. So, at least from my perspective — Type 5, you know, was asked about earlier as well.
H
Also, there was a big discussion, because for Type 5, with others also — one of the authors of the RFC, 7432, Jim, also said: why do you add it? Because it's a separate one; the prefixes go into a separate draft, so why is it needed in this? So that is the reason: initially we added it, I think in version six or so, and after the pushback from one of the authors of the draft team, that's why we pulled it out.
H
K
H
— route, because it's still in the draft phase, we could not tie it in. Let me see to that — because we already cover it generically. As I said, I agree with you: we already included it, then we excluded it after comments from the author, along with Ali. So I will work to reach a middle ground on that, I promise you. But Type 6 — no, because that's a multicast one. This is purely on, you know, only RFC 7432, because that is the scope that's defined. So that's why we could not go beyond those parameters.
H
K
H
The EVPN benchmarking — it is currently defined for MPLS, but the same parameters you can apply for IP, because the EVPN doesn't change; only the underlay changes, like VXLAN, and now I think GENEVE is also coming up. So we defined the parameters there irrespective of the underlay. We tested in our lab with MPLS; we can have VXLAN also — the same thing works fine, so we have tested that too. But now GENEVE is also coming, so that's in the draft phase.
H
I checked with the authors; they're telling me it is going to come as an RFC. So it's like, you know, a container — the overlay is like a container, and you keep the payload. So what is the payload, and how are we going to define that payload? That is what we are defining here. So irrespective of the container, you can have, you know, EVPN MPLS as a container, or you can use as a transport overlay, say, VXLAN — or tomorrow GENEVE is coming. So I think — I agree.
K
I
K
H
Both the deployments — that's why somewhere we have to box it in, because of this RFC; if you were going to cover every underlay, it would be very big. So the Type 5 — we will try to squeeze a notion of it into this; we will see, because there was a lot of discussion, I think in Chicago or so. So this was a discussion which was being asked, and then Jim — as I said earlier, he was one of the co-authors of the RFC, 7432 —
H
— so he said that; that's why we pulled it. So we made it generic, as ARP and ND scaling. So this I will consider here — I take it positively, the stated issues — and Type 6: this we will float as a new draft, because currently it is beyond the scope of 7432. You are aware, right, in the BESS world it is still going as drafts, so that we will consider as a new one. Let me observe this. Thanks — thank you so much, yeah, thanks.
H
Types 6 and 7 also — I'm not fully across the multicast ones, because they have defined a new draft for the multicast. So in that draft we can add EVPN route types: in the normal 7432, literally four routes are there — one, two, three and four — then they added Type 5 in a separate one for prefixes, and these for the multicast. Okay.
B
H
I'd love to get their deployment input, because I always follow the deployments and build on their input — because that's how, you know, you have an apples-to-apples comparison. So that's how you kind of, you know, box in certain things, and then you plug in your DUT and test it and get: hey, this one is giving you this much, and this one is giving that much. So as a provider or as an implementer, you will get the benefit.
H
B
H
Thank you. So this is the next draft, on EVPN VPWS. Because at the last IETF — the last IETF, in Montreal — one of the co-authors of 7432, Jim, asked, as one of the comments which I mentioned, to incorporate this into that draft. Then I checked — I had concurrence with the community as well as the chair — and we said no, because it's not an apples-to-apples comparison. So this is the reason —
H
— we floated a new draft on this, to benchmark EVPN VPWS. So EVPN VPWS is currently a new RFC — it's RFC 8214 — and we can point to a lot of benefits in this particular RFC, because VPWS has been in the metro cloud and in the service provider space for a long time.
H
One of the basic drawbacks of VPWS is that there is no active-active forwarding: one link will be active forwarding and another will be hot standby, so as an implementer or as a customer you cannot use both links for forwarding. So that is one drawback — you cannot have, you know —
H
— you know, both links utilized. These are all the kinds of problems which were faced. Then once EVPN came, it was kind of a mothership, and a lot of other things started gluing into it — it starts growing and growing and growing — so now almost all of the providers have adopted these EVPN technologies, and most of the conventional deployments which were using the legacy systems are now coming into the EVPN world. So BESS is working on a lot of things on this.
H
So — a lot of the deployments are asking for it — I request the community, if you are interested in this technology, to keep a tab on it; the Type 5 work is exclusively in the BESS working group. So there, as I said, this is one of the load-balancing capabilities: you can utilize it from the remote PE also — you can send the traffic to, you know, the same customer site based on the multihoming features — and there are a lot of benefits which give advantages to the customer, and it will reduce the opex.
H
So otherwise you'd be wasting one link unnecessarily, and you're wasting your dollars. So to avoid that, this is one of the capabilities: you use both the links, so you get the benefit of it — that is for VPWS traffic. It's a relatively short, 16-page RFC — it's a pretty good one. So we defined certain parameters. Since it is an E-Line service, it doesn't have the learning capability: whatever comes in, take it through the pipe and send it out. So we cannot have the learning — that was one of the challenges.
H
We cannot have learning — MAC learning or anything — in the service provider edge router: whatever comes in, just take it and put it into the pipe; it's like a pipe! So how do we — that was a challenge when we were testing in the lab — how do we benchmark this? How do we performance-monitor this particular service between the different boxes — I mean, say, vendor X and vendor Y? So these are the certain parameters: when we tested, we jotted them down, and we said okay, let's generalize certain parameters from this.
H
This is one of them: the local link failure. So the link failure — in a multihoming scenario, how fast it will switch from the one PE to another. It depends on, you know, the routers — router to router — so for the timing we are taking the average: they repeat the test and get an average on it. Then the core link failure: how fast the remote PE will switch —
H
— you know, from the earlier primary to the new primary, and how fast it is coming up, to avoid the packet loss. Then the link flap: the link goes up and down, and how fast the DUT converges. Because these are all tested based on single-active — in active-active there is no DF election, so both will be forwarding — but in single-active you have the primary-and-backup concept. So this test, if you go through the draft, is purely —
H
— this test is only done on single-active scenarios. So with the link flap, what happens is: when you flap the link, the primary goes down; then when it goes back up it becomes the primary again — the re-election takes place and the primary kicks in. So there will be a, you know, gap — whether you have preference-based DF election or the default — and there will be some packet loss. So how fast it is converging — that is what we are measuring.
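With a constant offered load, the failover gap described here is usually reported as loss-derived convergence time: frames lost divided by the offered rate. A minimal sketch of that standard benchmarking arithmetic:

```python
def loss_derived_convergence_ms(frames_lost, offered_rate_fps):
    """Convergence time in milliseconds inferred from frame loss,
    assuming a constant offered load throughout the failover event."""
    if offered_rate_fps <= 0:
        raise ValueError("offered rate must be positive")
    return frames_lost / offered_rate_fps * 1000.0
```

For example, 50,000 frames lost at an offered load of 1 Mfps implies a 50 ms convergence gap; averaging over repeated trials, as the speaker describes, smooths out run-to-run variation.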
H
Then, normally in VPWS, one thing is adding the services: in a service provider you add the services — say 15 customers, or 150 customers — with automated testing, writing scripts, adding them and deactivating them. If they didn't pay the bills, you deactivate; once they pay, you activate. This is automated testing: you deactivate the customers and activate them again, and the expectation is that the services should come up, the packets should flow, and the existing services should not be affected. And then the scale convergence.
H
Then high availability — it's the common test in BMWG: you fail over the active routing engine, which will be running non-stop forwarding, so you flap it or you reboot the primary routing engine and the other routing engine takes over. So the ideal case is zero, because ideal is ideal, and practically you will see a few packet drops. So that is — then the soak is running:
H
the full-blown system with traffic, at multi-dimensional scale, running over a period of 24 to 48 hours, and the expectation is no crashes, no memory leaks. So we have automated tools to check that: they will poll frequently — every hour — for the status, and give us the result at the end of the day. So thanks, Al and Sarah, for the support. So, next steps: please comment on it.
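The hourly health polling during the soak can be sketched as a loop comparing snapshots against a baseline. This is a sketch only: the `get_health` callback is an assumption standing in for whatever the real automation queries (process state, memory counters), and the injectable `sleep` just makes the loop testable without waiting:

```python
import time

def soak_watch(get_health, hours=48, poll_interval_s=3600, sleep=time.sleep):
    """Poll DUT health once per interval for the soak duration and
    collect anomalies (crashes, memory growth) for the end-of-run report.

    `get_health()` must return a dict like
    {"crashed": False, "mem_bytes": 123456} (hypothetical fields).
    """
    baseline = get_health()
    anomalies = []
    for tick in range(hours * 3600 // poll_interval_s):
        sleep(poll_interval_s)
        now = get_health()
        if now["crashed"]:
            anomalies.append((tick, "crash"))
        elif now["mem_bytes"] > baseline["mem_bytes"] * 1.5:
            # steady growth past 1.5x baseline suggests a leak
            anomalies.append((tick, "possible memory leak"))
    return anomalies
```

An empty anomaly list at the end of the 24-to-48-hour run is the pass criterion the speaker describes: no crashes, no memory leaks.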
H
Currently, because it's a test setup, we have only one peer; with one peer we have tested it — it's a test setup, right. So with one peer I have, you know, two multihoming PEs acting as the same Ethernet segment, and we have one single-homed PE, which serves as the remote single-homed PE, and one route reflector.
H
K
C
G
H
So in the current scenario — because, you know, if you have the router tester, like Ixia or Spirent, it should support that many peers. For EVPN VXLAN, I think Ixia supports it: we can automatically synthesize the number of peers. But in MPLS, when you generalize, right, we need a physical box — so that's why we're boxing the topology like this. For EVPN, I think the latest Ixia IxNetwork version has the synthesizer where you can have a number of peers, and they support the VXLAN payload.
H
IRB — it is there; it's part of the draft, and the number of IRBs — that's in the scale, the scale convergence; it is part of it. The VNI — we are not exclusive there; as I said, right, it works on both EVPN MPLS and, you know, VXLAN. So IRB scaling is there — so you might want to consider both — because with the IRB, as I said, right, we can take the next hop, where either it will be the label provided by the overlay —
H
The overlay — overlay versus underlay... I didn't get you on that at first, because the IRBs come in the overlay. So yeah, as I said, the overlay IRBs are considered, so the underlay will be independent — as I said earlier, it is independent of any encapsulation.
H
VNI — or you can have, you know, EVPN MPLS — so that is independent. So IRB scaling is there, and scale convergence is a parameter. — It's there; you can read it, but it doesn't mention it clearly. — Thank you; Type 5 I will take into consideration in this. We cannot have a number of peers in this test — currently I'm handicapped in that. Why? Because with the Ixia or the Spirent test center, right, you have to emulate it.
H
That kind of emulation is not available to us — I can have a BGP peering, I can... so that is a challenge I have. So I have to, you know — this is all set up in there; this is a lab setup, so you define: this is my topology. And the overlay, as I said, right — sorry, the overlay — is what we are looking into; the underlay we make a fixed payload, and these are certain parameters we have done on the underlay and the overlay.
H
We are scaling it, and we are defining the parameters to do that. Now, I will definitely work on it in my lab and I'll get back to you. The Type 6 — Types 6 and 7, as I said, right now, because I never tested them — it's a relatively new thing — definitely I'll get back to you, you know, coming as a separate draft for that. Let me go through it. As I said, I have to confess I didn't test it; it's really a new one.
H
I will look into the Type 5 for this, because since it's a need — it's a deployment — I'll try to squeeze it in. I mean, I'm not, you know — I'll work towards it, because I have a commitment; I will work towards it. Let me see how best I can squeeze it in that way. So that's it about this draft. — So I'll ask you a question: this is all tested based on traffic, right? — With traffic, you know: northbound, southbound and bidirectional traffic. — Good.
H
B
The next one is from your chair, acting as a participant, and my colleague Jim Uttaro from AT&T Labs. I economized on my time here and did not prepare many slides — here's why: we've only got two things that we're trying to benchmark here in this multihomed EVPN scenario, and I'm going to flip ahead to what they are. We're going to do some throughput and other tests on the Ethernet segment, which is this part of this figure right here — and this is obviously, you know, another test for EVPN — but then our next plan...
B
I really welcome sharp readers — folks with EVPN experience — to take a look at this and tell me where I messed up, because I'm sure I did, and we'd be happy to fix this thing up. But I think that's — I mean, these are a little different than the kinds of things you've done, Sudhin. So we might progress this as an individual draft.
B
H
C
H
All the remote routers in EVPN VXLAN... So what are we going — what are the parameters you are going to measure? Because when you get the withdrawal, then the other one — the other PE — will take over as the active forwarder, the designated forwarder. So what are the parameters in this you are defining? I will —
H
What would you like to say? I'm putty in your hands. — Okay, yeah: what the actual deployment is like. When a link flap happens, there's the mass withdrawal; then it has to clean up the old entries, and the forwarding next hop should point immediately — how fast is it pointing to the next router — and reduce the flood. The flood is a dangerous thing in the production network, so the longer the flood —
H
— you know, the cleanup: because the one PE is down — the mass withdrawal told us one PE is down — what kicks in? The mass withdrawal happened, so the forwarding table is cleaned up, it immediately switches to the new one and starts relearning. So that is the delta: you have some, you know, packet drops coming in during the relearning, and at the end, that is what will cut down the flood. Okay.
H
B
B
Yeah, I confess that we even used the extra day on draft submissions, because that was the only day Jim and I could get together. So we've stressed the end of the envelope to get this thing put together for you. But thanks for your comment — very good, great. All right, I'm going to make a quick note of that here.
K
B
Okay, okay, yeah — in fact, I'm fairly sure that I didn't do the throughput test in a 50/50 kind of split, where it could all be restored in the next step of the procedure. In other words, when — let's say in this picture — PE1 goes down, and the traffic is split across the ESI, yeah.
G
G
B
K
B
Right, all right, on to the next. So now we're into the continuing proposals — we did all of the EVPN stuff together, and two of those drafts were brand new, just to get everything EVPN done at once. So now we're looking at the updates for the back-to-back frame benchmark. This is the case where we're updating the RFC 2544 back-to-back frame benchmark.
B
B
When, on a single-port-to-single-port relationship, the device under test can't transfer all the traffic — in other words, there's a limitation for small frames, typically 64 octets or 128 octets — those are basically cases where you can't get single-port-to-single-port connectivity at full rate. Now, in physical devices today this is not an issue, but it's still an issue in the virtualized networking device world. So that's a good reason to update this.
B
So, like I said, it's the 2544 update: we're testing the extent of data buffering in the device. The original procedure in 2544 was very concise — by that I mean really short and sweet — and we're doing a lot more with it here now. We ran some tests in 2017 under the OPNFV VSPERF project, and those indicated some areas where we needed to refine the calculations.
B
So, besides calculating the number of back-to-back frames that we can launch into a device without loss — that's an indicator of the buffering that's available, and we're going to measure that in terms of all these different statistics — as I was explaining to Barak during breakfast yesterday, some of the frames flow out of the device while you're sending them in, because it's actually processing live. So what we're doing is correcting the calculations based on the Throughput, which has been measured as a prerequisite for this test.
B
So we have this Implied Buffer Time, which is the average number of back-to-back frames in the burst divided by the maximum theoretical frame rate. And then we have a Corrected Buffer Time, where we take away the frames that have been delivered during that time: the Implied Buffer Time reduced by the ratio of the measured Throughput to the maximum theoretical frame rate. That factor should remove the frames that have passed through the device while it's under test.
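As a rough sketch of the correction just described — assuming the correction factor is simply one minus the throughput-to-line-rate ratio; check the draft itself for the normative formulas — the calculation looks like this:

```python
# Illustrative sketch of the buffer-time correction discussed above.
# Names and the exact form of the correction factor are my assumptions,
# not the draft's normative text.

def implied_buffer_time(avg_b2b_frames: float, max_theor_fps: float) -> float:
    """Average back-to-back burst length divided by the max theoretical frame rate."""
    return avg_b2b_frames / max_theor_fps

def corrected_buffer_time(avg_b2b_frames: float, max_theor_fps: float,
                          measured_throughput_fps: float) -> float:
    """Remove the frames that drained out of the DUT while the burst was arriving."""
    ibt = implied_buffer_time(avg_b2b_frames, max_theor_fps)
    return ibt * (1.0 - measured_throughput_fps / max_theor_fps)

# Example: 14,880,952 fps is the max theoretical rate for 64-octet frames
# at 10GigE. Suppose the burst averaged 100,000 frames and the measured
# Throughput was half of line rate:
ibt = implied_buffer_time(100_000, 14_880_952)
cbt = corrected_buffer_time(100_000, 14_880_952, 7_440_476)
```

With half the offered frames draining out during the burst, the corrected time is half the implied time.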
B
So in version 3 we resolved all the open points about search algorithms. We're doing that on the basis of the ETSI spec that I showed you before. There's a new search algorithm there called Binary Search with Loss Verification that'll be very helpful in our virtualized environment — I encourage you to take a look at that, and also at the results we presented on this during IETF 102. The buffers are key to absorbing the transient interrupts that caused us problems when we're benchmarking our devices.
B
If the buffers are large and forwarding is suspended, the buffers absorb that and you have a loss-free environment, but you have more latency in your device too. So you've got a trade-off between these two things, and you can't just throw it all into the buffer — that would be a problem. So devices are going to be designed with some limited buffer.
B
That's going to determine how much transient — the extent of the transient interrupt duration — you can stand, and this specification is how we're going to characterize that effective buffer size. So these two things are very well married in terms of what we've learned about the transient benchmarking world.
B
We didn't see any transients in this case on the left here, PHY-to-PHY, but any time we included virtual machines and the logical ports through the vSwitch, we saw lots of transients. And then our Binary Search with Loss Verification really paid off: it reduced the variation tremendously — sometimes about a hundred percent of the variation was removed — and in fact we got much closer to the true resource limitations by testing that way.
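To make the idea concrete, here is a toy sketch of Binary Search with Loss Verification — my paraphrase of the concept, not the ETSI GS NFV-TST 009 pseudocode. The key point is that a loss observation is re-verified before the search steps down, so a one-off transient does not permanently drag the result below the true limit:

```python
def bslv(trial, lo, hi, resolution=0.1, verify_trials=2):
    """Binary search for the highest loss-free offered load.

    `trial(rate)` runs one trial and returns the observed loss count.
    When a trial shows loss, it is re-run `verify_trials` times; only a
    repeatable loss moves the upper bound down, which makes the search
    robust to occasional transient interruptions.
    """
    best = lo
    while hi - lo > resolution:
        mid = (lo + hi) / 2.0
        lossy = trial(mid) > 0
        if lossy:
            # Loss Verification: don't trust a single lossy trial.
            lossy = all(trial(mid) > 0 for _ in range(verify_trials))
        if lossy:
            hi = mid          # verified, persistent loss: search lower
        else:
            best, lo = mid, mid  # loss-free (or transient): search higher
    return best
```

For example, against a simulated DUT whose true limit is 50% of line rate but which suffers one spurious loss event, the verification step discards the transient and the search still converges on 50%.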
B
And that's incredibly important for determining the total cost of ownership. We've got two kinds of problems now — resource limitations, and transients that come along occasionally — and we want to characterize these things separately; we want to attack the tuning of these two problems separately. So we'll do that with long-term testing for the transients. Like I said, we observed the transients in any scenario with a VM. So what did we do next? We ran some tests with SR-IOV — this is Intel's bypass technology — so now we've got no vSwitch in the path.
B
B
Again we saw very few transients — in fact, trying this four or five times, this is the only case where we ran into one. Let's see if I can do this with the pointer here. Is this going to work? I thought I saw something there for a moment — there it is, yeah. So we have all these iterations, and they're numbered there, and we start out at a hundred percent of line rate, which is 10GigE in both of these cases, and you'll notice — I mentioned this last time —
B
actually, the test device can't quite send at 10GigE, so we get these extended durations here. But at 50 it's good, because we're doing a 10-second duration test and that's what we get pretty much every time. I notice at 75 it's not quite right either. But in any case, look at this: we tested at 50% of the max rate — basically five gigabits per second with 64-octet frames — and one time we saw loss, another time we didn't. All the other times—
B
We saw no loss at 50. So this little transient that came along here could have fouled up our results in this test — that's the value of the Binary Search with Loss Verification. Okay, so then we've got a lot more work to do, and it turns out that the throughput limit is way down here at the bottom: it's 50.391 percent of the rate, and that's where we get a solid zero.
B
So the main point here is that across all these trials we hardly saw any transients. So what's going on? We saw all these transients back here with the VMs, and my original conjecture, as I said, was that the VM was the one attracting the transients. Well, it turns out it's not — oops, sorry about that — actually, there's the answer right there. So the conjecture was wrong: it's not the VM that's attracting these transients, it's something else.
B
B
So this is the most fun I've had in a couple of months, anyway. The next steps: we've looked at this draft three or four times now, we've done some testing with it, and it seems to be valuable. I'm asking for working group adoption of this draft. Who's read the draft?
B
Apparently nobody in the room. Okay, so we're going to have to ask this on the list, if we ask it at all, and I will also ask: please read and send your comments to the list. I'll discuss with Sarah the possibility of starting an adoption call on this. But we have had comments in the past — in fact, from the gentleman who prepared the slides that I'm going to present for him next; he was unable to join us today, Mr.
B
B
Okay, so Mr. Yoshiki Ito prepared a set of slides for us, and we looked at most of these during our last meeting, so I'm going to cover some of them very quickly. This is the case in RFC 8239, which was designed for data center switches and other data center devices. So now we're in the modern world where, if you send in a hundred percent rate on a single port here, there's no problem with packet size or anything like that.
B
K
B
Okay, well, the authors of RFC 8239 imagined the case where, if you don't have the luxury of failing to support port-to-port connectivity, then you have to go with this oversubscription approach. This is the case they're trying to handle, so I—
M
G
B
Ah, that's a good point. I think there are a lot of good benchmarks in RFC 8239, but I think what we're headed for is a sort of revision of it, and I encourage everybody to write down their ideas about how we can better understand the test configurations and so forth. Barak in particular, you're bringing a lot of expertise to the group today — I really appreciate it, thank you. But let's cover this quickly.
B
B
Suddenly you get some loss, and the number of buffered frames before that happened was twenty-four, so that relates to the latency. He tried this in another case with a hundred percent rate, and you can see that again it's doing very well — it's basically saying 24 is the answer again, 24 frames, kind of consistent. But then he tries it with three streams and now it's twelve and twelve and twelve. So that's a sort of interesting case that we ought to be looking at a little differently here.
G
B
He looked at each one of these streams separately, so this seems to be the linear case — you're right, you're right. All right, so now we've come to the new slides that Mr. Ito has shared with us. He's now looking at this possibility: we've got this buffer, and all the frames are either going into the buffer or somehow getting out of port four. And he says, since the test frames also flow to the receive port during transmission,
B
it is not known whether all the test frames were accumulated in the buffer or not. So this is actually the kind of correction factor I was trying to put in for the single port-to-port case. So what does he say? He says in phase one we're going to push a lot of frames into the DUT, and they're going to be stored in the buffer. And why is that true? Because we've got pause on — pause means nothing's coming out, so we're filling the buffer — and then in phase two of the test, Mr.
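A back-of-the-envelope sketch of the two-phase idea just described — my own illustration, not Mr. Ito's slides: with flow-control pause asserted, nothing exits the DUT, so every frame accepted without loss must be sitting in the buffer; releasing pause then lets it drain at the DUT's rate.

```python
# Illustrative accounting for the pause-based buffer test.
# Function names and numbers are hypothetical.

def buffered_frames(sent: int, dropped: int) -> int:
    """Phase 1: with pause asserted nothing exits the DUT, so every
    accepted (non-dropped) frame must be held in the buffer."""
    return sent - dropped

def buffer_drain_time_s(frames: int, drain_rate_fps: float) -> float:
    """Phase 2: release pause and let the buffer drain at the DUT's
    measured rate; the drain time relates the frame count to latency."""
    return frames / drain_rate_fps

# Illustrative numbers: 1000 frames offered, 976 dropped once the buffer filled,
# so 24 frames were buffered -- matching the "24 frames" result above.
held = buffered_frames(1000, 976)
```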
B
K
G
B
We might get some wrong results — well, I think we're sort of in an experimental mode here, and Barak gives me a thumbs-up on experimental mode, which would have been lost in the recording; thank you very much. So we're going to learn things here that we haven't anticipated — maybe more complexities than are expressed in this diagram. So here's the test, and—
B
B
N
K
B
And between these two — the dual hundred percent and the one percent oversubscription — we got two different answers: we got 24 and 48. I guess with this 1% there must be some buffer in this path that wasn't really getting filled by the 1% stream, and—
C
K
K
F
B
Right, well — all three of these; there are three types of tests here that are all in RFC 8239. So with more experience we may be able to pare these down, dropping the ones that make no sense. In any case, thanks very much for that feedback. Anybody else have comments on this work? Although we haven't met Mr. Ito in person, hopefully he'll be able to join us remotely at a future meeting — very kind of him to do this work. Hey, thanks, everybody!
B
So — it is now your chance to present some drafts here. Our next presenter is representing Vratko Polak and Maciek Konstantynowicz on two brand-new drafts that showed up. Actually, they didn't quite show up in the group: you'll notice these are brand-new drafts with two authors, but they forgot to include "bmwg" someplace in the file name. So that's feedback number one — I'd like to see them republish with "bmwg" in the name, so they show up with the rest of our stuff.
C
B
C
So it's about search algorithms — finding the rate for testing. One is for deterministic systems: the Multiple Loss Ratio search (MLRsearch) for packet throughput, a new search algorithm, and it finds the maximum receive rate quite quickly. It's being used in the FD.io CSIT testing project for VPP — that's all open source, you can go and look at that — and I think there's a reasonably good description of the algorithm in the draft. Unfortunately, we don't have that deterministic system
C
in these new software-based routers — it all runs in software, and you can sort of make it reasonably deterministic, but as soon as you add VMs and such it becomes quite hard, and even modern CPUs are really not deterministic anymore. So then they have this Probabilistic Loss Ratio search (PLRsearch).
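To make the multi-threshold idea concrete, here is a toy sketch — my reading of the concept, not the actual MLRsearch algorithm from the draft: the search tracks a separate bound for each target loss ratio, reusing the bound already found for a stricter target as the starting floor for a looser one, so one run yields, for example, both the zero-loss and the 0.5%-loss rates.

```python
def mlrsearch(trial, lo, hi, loss_ratios=(0.0, 0.005), resolution=0.1):
    """Toy multi-threshold search (illustrative only).

    `trial(rate)` returns the observed loss ratio at that offered load.
    Returns the highest conforming rate found for each target loss ratio.
    """
    results = {}
    for target in sorted(loss_ratios):
        a, b, best = lo, hi, lo
        while b - a > resolution:
            mid = (a + b) / 2.0
            if trial(mid) <= target:
                best, a = mid, mid   # conforming: push the lower bound up
            else:
                b = mid              # too much loss: come back down
        results[target] = best
        lo = best  # a looser loss target can only move the rate upward
    return results
```

Against a model DUT that is loss-free up to 50% of line rate and loses linearly above it, this finds the zero-loss rate at 50 and a slightly higher rate for the 0.5% target.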
C
So we've certainly seen this in all the NFV testing that we do — there is some variability there that's hard to account for — and so we modify these tests. I think most of these drafts are proposing updates to RFC 2544, and I would expect — you know, this is just revision zero-zero, but at some point I'm sure they would like these to be adopted in this working group. And then — Al, I don't know if you had plans for revising 2544, but, well—
B
By the way, that's a good question and I'll be glad to address it. What we've been doing is updating parts of it as we learn new things that need to be improved. So, for example, the long-term loss ratio testing here — that's kind of a thing that isn't actually part of 2544 right now. I think the longest duration testing we talk about there is in the verification stage, about sixty seconds, something like that.
B
B
Now, we talked about a lot of different search algorithms in RFC 2544 — linear and binary search — and I mentioned that ETSI has now published a specification, GS NFV-TST 009, with a new search algorithm that's designed to be robust to these transients and the non-deterministic problems that we're talking about measuring here in the long term. And to some extent the other draft, the MLRsearch algorithm, is trying to uncover multiple thresholds within a single search, with different sets of packet loss ratios.
C
C
B
As we've done in the binary search. So there's lots of good interaction here, and I think that, ultimately, we may continue to update 2544 on a piecewise basis and then, at some point, declare a flag day and update the whole thing — replace it with a full bis. So I think that maybe—
B
B
We have no idea what the answer should be, and that's where a Binary Search with Loss Verification — just searching for a single threshold — is one where you spend lots of time dedicated to that. And then there's other testing where you may be searching for a non-zero loss threshold. This covers that, but it also works differently in terms of getting feedback from the test results, which is more than just "was there loss or not?"
B
The key thing that they're pulling out here is that, very often, the maximum frame rate revealed on the receive side should be fed into your search algorithm, which is exactly what they've done here. When you look at the results, it's pretty obvious that that's what you should do. In fact, if you go back and look at one of the slides I showed before, it's painfully obvious there.
B
So, unfortunately for a lot of people, the iterations — the trials of the search algorithm — are kind of hidden inside a turnkey operation, and you get the results at the end. Most people are happy with that, but when you're tweaking the search algorithms you want to look at the whole thing, and that's when you begin to understand how they're
B
behaving in our world, with transients and sometimes a lack of determinism. So — thanks very much for presenting, and Maciek and Vratko, thanks for bringing this work to us. I encourage everybody to — has anybody read the drafts? Right, that was really aspirational. Oh, somebody has their hand up — thank you. Well, they're short; the authors intend to expand them, and they intend to show up in person at the next session — not the next session, the next IETF.
B
B
This liaison was written from ITU-T Study Group 12 to IETF IPPM and many other standards bodies. The main thing I wanted to raise to everyone here is that these folks are all trying to measure Internet access performance, and because many standards bodies are involved, it's an attempt to harmonize the methods of measurement that will be used in the future — something we don't normally do here.
B
If you open this liaison statement — you have a link to it right there — what we have is a sort of description of the progress of ITU-T Study Group 12, Question 17, on packet performance, and their work in conjunction with the ETSI Speech and multimedia Transmission Quality committee, ETSI STQ. What they're basically doing is entering into an evaluation plan where different methods of measurement for Internet access speed, latency, and other performance characteristics are all going to be compared objectively in the laboratory.
B
So our method is to use a calibration system establishing a ground truth, and this has relevance in our work — we've never actually done this here, as far as I know. In other words, we're searching around for the capacity of switches and routers and so forth, but we've never put our measurement systems up against a device under test where we think we know what the capacity is.
So
now,
there's
relevance
to
us
to
not
just
in
that
category
of
calibration,
but
in
in
the
in
this
new
world,
where
I've
drawn
I've
drawn
a
nice
picture
here
of
a
house
and
a
phone
and
a
very
big
blue
access
capacity,
pipe
and
you'll
notice
that
it's
four
digits
and
and
the
megabits
so
we're
talking
about
something
in
the
gigabit
per
second
range.
And
this
is
what's
happening
in
our
world
today,
Comcast
about
two
weeks
ago,
Comcast,
a
big
cable,
TV
and
Internet
provider.
In
the
u.s.
B
announced that they have gigabit access service available to 60 million homes. When I first got involved with voice-over-cable-television work, all of the cable TV suppliers in the United States together offered television service to 60 million homes. Now we're talking about one multiple-service operator and gigabit Internet access to the same number of homes, roughly 25 years later. That's a huge change. Now, what does that mean for the characterization world? Well, with the benchmarking techniques we've been using here, we've been measuring rates like that easily.
B
But we've been doing it in the lab, where we have very controlled circumstances, and fortunately we now have new, robust search algorithms coming on the scene. We've been driven to that because some of the devices under test are less reliable for us — the general-purpose computing platforms where we have virtualized network functions. They have interrupts that show up every once in a while, and those interrupts can affect our searches.
But are they going to be able to measure a gigabit per second accurately? Right now they're using three or four connections, something like that. Are they going to keep adding to the number of connections and still have reliable results? I kind of think the answer's no, and our laboratory characterization is going to show that. So what we're basically doing in the laboratory is taking this picture with the house and the phone — so both fixed and mobile, that's our scope — and we're saying we're going to measure just this access pipe.
B
So this measurement system that I've depicted there, as the host with the storage — we're idealizing that at first, replacing it with this laboratory setup. We have the PHY-to-PHY physical ports and the vSwitch, and our test device is connected to that, isolated from the device under test. But for the first time we're introducing a queue discipline — typically a token bucket filter — and it operates on egress of the device.
B
So I'm showing that there, operating with the physical ports. Now we can set calibrated levels of capacity that the token bucket will allow, and we blindly launch our search algorithms and ask: do you find the right answer? If we launch three, four, five, six, seven, eight, nine TCP connections, do they find the right answer? And if we get other suggested methods of measurement, like the IPPM Model-Based Metrics, does that find the right answer?
B
So we're basically contriving a bit of a bake-off here with very fixed circumstances, and we're going to find out, in this phase one of the evaluation, who gets the right answer — who measures the calibrated path accurately. We've already started to do this. We started with our benchmarking with UDP packets; we did 64- and 1518-octet packet sizes.
B
Briefly, the results showed that we were able to find the capacity limits doing that testing with the Binary Search with Loss Verification; the best performance — closest to the right answer, the calibrated value — was using the full-MTU packets of 1518 in this laboratory setup. I still haven't got iperf working — who's familiar with iperf? Quite a few.
B
It's capable of launching multiple TCP connections, but there are other versions of this floating around — there's iperf3 now, which supposedly works better, also netperf, and some of the tools that have been used in Internet measurement are probably out there too. So there are lots of possibilities for things that can make these measurements, including code which implements most of the Model-Based Metrics RFC — not all of it, but a good portion of it.
B
So my plan is to get as many of these software-based traffic generation and measurement tools as possible implemented in this blue box here, and in this phase one we'll basically have this run-off and compare the results. What I expect to see is that some of them will not measure the full capacity — some will be very confused by this — and that's an important thing to quantify, because at the end of the day people are going to be measuring access capacity.
B
Regulators are going to be interested in that, because they're going to want us to prove that if we're selling a gigabit-per-second service, we're actually providing a gigabit-per-second service. But note that the scope of that is really just this access pipe, and that makes a lot of sense: the user traffic is going to go either to an edge device, as shown here, or maybe somewhere close by in a CDN or a video server.
B
But when you've got a video download — a 4K video download is 25 megabits per second, not a gigabit. So beyond the access — after you've proven that the access pipe can handle a gigabit per second — now you've got something else to test in the distribution: from the end of the access pipe to the video server, is 25 megabits per second available there? It's not a gigabit, and that's very friendly for testing your distribution network — you don't want to be running a gigabit around to 20 or 30 test sites in a full mesh.
L
B
B
I've actually got a slide that helps. Okay, so you're a great straight man for this — stay up at the mic, because you may have questions. All right, so remember I said that at the green dots there, that's where we're introducing the token bucket filters. We're going to set that token bucket to 100 megabits per second, and we're also going to add four milliseconds of latency.
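As an illustration of the kind of calibrated bottleneck described here, the following is a plain token-bucket model of my own; the actual lab would use a Linux queue discipline such as tbf (possibly with netem adding the 4 ms of delay), and the numbers below are only illustrative:

```python
# Toy token-bucket shaper: tokens accrue at the calibrated rate and each
# transmitted frame spends tokens equal to its size in bits.

def tokens_after(elapsed_s: float, rate_bps: float, burst_bits: float,
                 tokens: float) -> float:
    """Refill: tokens accrue at `rate_bps`, capped at the burst size."""
    return min(burst_bits, tokens + rate_bps * elapsed_s)

def shape(frame_bits: float, tokens: float):
    """Transmit a frame only if enough tokens are available.
    Returns (transmitted?, remaining tokens)."""
    if tokens >= frame_bits:
        return True, tokens - frame_bits
    return False, tokens

# Calibrate at 100 Mbit/s with a one-frame (1518-byte = 12,144-bit) burst.
RATE = 100e6
FRAME = 12_144.0
tokens = FRAME
sent = 0
# Offer frames back-to-back at 200 Mbit/s for 1 simulated second:
t, dt = 0.0, FRAME / 200e6  # inter-frame time at the offered rate
while t < 1.0:
    tokens = tokens_after(dt, RATE, FRAME, tokens)
    ok, tokens = shape(FRAME, tokens)
    sent += ok
    t += dt
# The shaper passes roughly half the offered frames: ~100 Mbit/s delivered.
delivered_bps = sent * FRAME
```

A search algorithm pointed at this path should report a capacity near the calibrated 100 Mbit/s, which is exactly the ground-truth comparison the bake-off relies on.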
B
We need to measure the latency at the same time we're doing capacity testing. A lot of the access measurement systems today don't do that. What do they do? They measure TCP flooded throughput — wasting tons of packets just trying to push the maximum through — and then they turn all that stuff off and measure latency with ping.
B
So they don't ever see the full buffers that were just completely occupied while measuring the maximum throughput, and you're not going to see the true latency that matters here. And by the way, these future services with gigabit capacity also have strong demands on latency — that's what 5G is all about: extremely high bandwidths and extremely low latencies to support applications like virtual reality. Virtual reality works pretty well—
B
B
B
So you really need a pair of things to end your search, and the distance between those two offered-load levels is the tolerance: you have to get within the allowed tolerance between zero loss and loss above it. That's the ending criterion for the binary search. Now, both of these values are well within that tolerance.
B
So far, Doug has volunteered, and my colleague Ignacio from Argentina has also volunteered his team to work on the liaison reply. We're also asking people to join the effort: to contribute methods of measurement, to perform calibration in labs other than the one I'm currently using — now part of the OPNFV VSPERF project — and to make access technologies available for phase 2 of the testing. In other words, if you have a fiber passive optical network that you'd be willing to let us test in the laboratory, that would be cool.
B
If you have a connected scenario for your mobile devices, that would be cool too. So, looking ahead to phase 2, there's more work to do — we want to see how these things work in the real access environment. I'll pause there for comments — and if you're going to say something, pop up at the mic.
H
Yes — so you said "calibration", right, for the DUT? But calibration, normally, we do for the testing and measuring equipment. Like, say, one kilo: how do you define one kilo? There is a standard thing, an SI unit — based on that, this is called one kilo. Then, based on that, you calibrate the testing equipment. So in testing, normally the calibration is done for all the testing and measuring equipment.
H
For the DUT, normally we don't do it. You don't do it on the DUT — you use the router tester, the Ixia or the Spirent or whatever right tools are available, to measure what the delta is, based on the parameters we're testing. So I didn't follow: why do a calibration? — So, I mean, the typical—
B
But you want to know, for example, that you're filling the 10-gigabit-per-second pipe when you think you are. And as I pointed out before with those test durations, the software traffic generator that we're using isn't quite measuring up to that. So that's kind of a test on the calibration itself, right there.
C
B
I mean, it may be too strong to say that in the benchmarking world we haven't done calibrations, because we have; but what we're doing now is calibrating against a range of targets. So I think that's a good clarification — thank you for mentioning the kinds of calibration that are going on today. Okay.
H
And one more question: like you said — the pipe, right? With Gigabit Ethernet Internet, normally the provider will give Gigabit Ethernet to the first-hop router, but the bandwidth is shared, so we have a contention ratio like 1:20 or 1:200, because from the first router onward all the users will be sharing a common pipe. So you won't be getting a full Gigabit Ethernet, you know—
H
If you are going to the Google server, you won't be getting that, so they always mention the "last mile": okay, you have that pipe, but after that it's, as you said, like 25 megabits to the CDN network or so. Testing that will be complicated, right? Because you cannot simulate all these things in the lab — we have a confined area, or we have to use emulators to do that.
B
Well, there are some laboratories that have that capability. They're not in this current phase-one scenario where we're doing this calibration, but labs that have this do exist. And even in the production environment, where there's other traffic going on, we're going to need to demonstrate that at least some minimum rate is achieved. Very often the specifications have some variation in them — they say "up to a gigabit", and maybe the minimum is given as some lesser value.
H
B
There are ways to test this where you're truly searching for the available bandwidth, and if it's dependent on other users — as the mobile environment typically is — then you're going to be searching over a wider range. But the first thing you should try is the rate that you've been guaranteed, the rate that's part of your service contract. That may be a very short test, where you ask: is this rate available or not?
B
H
There are certainly ways to do this, yeah — but it takes a longer time, and it requires the complexity of testing to have this much capacity available. And obviously mobile has a lot of dynamism, so I agree, it's pretty difficult — that's what I think. Okay, yeah — thank you, thank you. I'll—
F
From Deutsche Telekom: this is useful for internal testing, I think, and UDP will certainly give a better result than TCP. But when it comes to the regulator, they want us to show that the customer actually gets the bandwidth we advertise, including all the TCP effects and stuff, and this is a problem. I would—
B
I would challenge that a little bit, because the BEREC regulations — you're DT, right — the BEREC regulations specifically talk about IP capacity. Now, their testing tools are using TCP, yeah, but the regulation — which, incidentally, your colleague and I have talked about at some length — is on the right target: the capacity of the IP payloads.
B
Now they're going to have to do some kind of correction factor with the systems they've just chosen, which do use TCP. And another factor I didn't mention here is that obviously the IETF is working on the QUIC protocol, and that's a thing that's changed over the last five years. Five years ago we could basically trust that almost any Internet access was the bottleneck; now that picture has changed with this introduction.
B
B
K
Per what we were discussing at breakfast, I'd consider maybe extending the work that has been done on this RFC that you presented before. One important factor of the buffering and performance behavior of switches today is how fairly they behave across multiple ports and flows, etc. I think we see a lot of interest about that in the industry, so I think this would be interesting to—
E
B
K
K
K
B
D
K
O
G
G
G
O
B
Sure — in fact, maybe Ignacio can suggest some mailing lists where Barak might make his proposal beyond the BMWG, and possibly bring in some folks who aren't currently working with us. So—
O
C
O
L
L
O
L
B
I would suggest — I mean, what you're doing right now is basically indicating support for the work, okay. What I'm going to ask you to do, though, is read the Internet-Draft and maybe provide some comments on the list. So we got support from two people face-to-face — and three, three, okay — so now I've got you, and Barak, and Mike has also indicated support. Finally—
B
That's good. Well — actually, Sabine's doing all the EVPN sorts of things; this is the next-generation-firewall style, yeah.
B
B
Okay, all right, very good. Folks, we're right at our two-hour ending time, so thank you for your attendance. If anybody hasn't signed the blue sheet, please sign it — and the guys in the back, please locate it. Did you sign it, sir? Yeah? Okay, very good. All right, thanks for your attention today — I really appreciate it.